Compare commits


30 Commits
v0.1 ... v0.2

Author SHA1 Message Date
Daniel J Walsh
ac2aad6343 Update the version number we identify as to 0.2.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #199
Approved by: nalind
2017-07-18 20:55:06 +00:00
Dan Walsh
c8a887f512 Add support for -- ending options parsing to buildah run
If you pass an option meant for the command being run by buildah run, buildah's own option parser rejects it and the command fails. The proper syntax is to end option parsing with --:

buildah run $ctr -- ls -l /

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #197
Approved by: nalind
2017-07-18 19:19:45 +00:00
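The `--` terminator is the standard end-of-options convention, and Go's flag package (which urfave/cli builds on) implements it directly. A minimal sketch of the parsing behavior; the `argsAfter` helper is illustrative, not buildah API:

```go
package main

import (
	"flag"
	"fmt"
)

// argsAfter parses argv with a throwaway FlagSet. The "--" token ends
// option parsing, so everything after it is returned as positional
// arguments even when it looks like a flag.
func argsAfter(argv []string) []string {
	fs := flag.NewFlagSet("run", flag.ContinueOnError)
	fs.Bool("v", false, "example buildah-side flag")
	if err := fs.Parse(argv); err != nil {
		panic(err)
	}
	return fs.Args()
}

func main() {
	// Without the "--", a leading "-l" would be reported as an
	// unknown flag; with it, "-l" passes through untouched.
	fmt.Println(argsAfter([]string{"--", "-l", "/"}))
}
```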
Dan Walsh
f4a5511e83 Only print heading once when executing buildah images
The current buildah images command prints the heading twice; this bug was
introduced when the --json flag was added.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #195
Approved by: rhatdan
2017-07-17 20:37:08 +00:00
Daniel J Walsh
a6f7d725a0 Add/Copy need to support glob syntax
This patch allows users to do
buildah add $ctr * /dest

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #194
Approved by: nalind
2017-07-17 20:11:48 +00:00
Daniel J Walsh
dd98523b8d Add flag to remove containers on commit
I think this would be good practice to eliminate wasted disk space.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #189
Approved by: rhatdan
2017-07-17 19:07:21 +00:00
Daniel J Walsh
98ca81073e Improve buildah push man page and help information
This better documents the options available to the user.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #186
Approved by: rhatdan
2017-07-13 19:58:04 +00:00
Tomas Tomecek
5f80a1033b add a way to disable PTY allocation
Fixes #179

Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #181
Approved by: nalind
2017-07-13 14:18:08 +00:00
Tomas Tomecek
70518e7093 clarify --runtime-flag of run command
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Fixes #178

Closes: #182
Approved by: rhatdan
2017-07-10 19:12:48 +00:00
Tomas Tomecek
0c70609031 gitignore build artifacts
* manpages
* buildah binary
* imgtype binary

Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #183
Approved by: rhatdan
2017-07-10 19:02:13 +00:00
Daniel J Walsh
b1c6243f8a Need \n for Printing untagged message
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #176
Approved by: rhatdan
2017-06-29 13:32:47 +00:00
Nalin Dahyabhai
12a3abf6fa Update to match newer storage and image-spec APIs
Update to adjust to new types and method signatures in just-updated
vendored code.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
f46ed32a11 Build imgtype independently
Just build imgtype once, and reuse the flags we use for the main binary.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
be2d536f52 Bump containers/storage and containers/image
Bump containers/storage and containers/image, and pin them to their
current versions.  This requires that we update image-spec to rc6 and
add github.com/ostreedev/ostree-go, which adds build-time requirements
on glib2-devel and ostree-devel.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
72253654d5 imgtype: don't log at Fatal level
Logging at Fatal calls os.Exit(), which keeps us from shutting down
storage properly, which prevents test cleanup from succeeding.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
Nalin Dahyabhai
a2bd274d11 imgtype: add debugging, reexec, an optimization
In the imgtype test helper, add a -debug flag, correctly handle things
on the off chance that we need to call a reexec handler, and read the
manifest using the Manifest() method of an image that we're already
opening, rather than creating a source image just so that we can call
its GetManifest() method.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
Nalin Dahyabhai
416301306a Make it possible to run tests with non-vfs drivers
Make the tests use the storage driver named in $STORAGE_DRIVER, if one's
set, instead of hard-coding the default of "vfs".

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
Daniel J Walsh
a49a32f55f Add buildah export support
Exports the contents of a container as a tarball.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #170
Approved by: rhatdan
2017-06-28 20:06:42 +00:00
Daniel J Walsh
7af6ab2351 Add the missing buildah version from the readme and the buildah man page
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #173
Approved by: rhatdan
2017-06-28 15:59:35 +00:00
Ryan Cole
6d85cd3f7d update 'buildah images' and 'buildah rmi' commands
add more flags to `buildah images` and `buildah rmi`, and write tests

Signed-off-by: Ryan Cole <rcyoalne@gmail.com>

Closes: #155
Approved by: rhatdan
2017-06-28 15:36:19 +00:00
Tomas Tomecek
d9a77b38fc readme: link to actual manpages
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #165
Approved by: rhatdan
2017-06-27 16:10:40 +00:00
Brent Baude
e50fee5738 cmd/buildah/containers.go: Add JSON output option
Consumers of the buildah output will need structured text like
the JSON format.  This commit adds a --json option to
buildah containers.

Example output:
```
[
    {
        "ID": "8911b523771cb2e0a26ab9bb324fb5be4e992764fdd5ead86a936aa6de964d9a",
        "Builder": true,
        "ImageId": "26db5ad6e82d85265d1609e6bffc04331537fdceb9740d36f576e7ee4e8d1be3",
        "ImageName": "docker.io/library/alpine:latest",
        "ContainerName": "alpine-working-container"
    }
]

```

Signed-off-by: Brent Baude <bbaude@redhat.com>

Closes: #164
Approved by: rhatdan
2017-06-27 16:01:07 +00:00
umohnani8
63ca9028bc Add 'buildah version' command
Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #157
Approved by: rhatdan
2017-06-27 15:50:36 +00:00
Brent Baude
62845372ad cmd/buildah/images.go: Add JSON output option
The Atomic CLI will eventually need to be able to consume
structured output (in something like JSON).  This commit
adds a -j option to trigger JSON output of images.

Example output:
```
[
    {
        "id": "aa66247d48aedfa3e9b74e4a41d2c9e5d2529122c8f0d43417012028a66f4f3b",
        "names": [
            "docker.io/library/busybox:latest"
        ]
    },
    {
        "id": "26db5ad6e82d85265d1609e6bffc04331537fdceb9740d36f576e7ee4e8d1be3",
        "names": [
            "docker.io/library/alpine:latest"
        ]
    }
]
```

Signed-off-by: Brent Baude <bbaude@redhat.com>

Closes: #161
Approved by: rhatdan
2017-06-26 16:05:58 +00:00
Nalin Dahyabhai
5458250462 Update the example commit target to skip transport
Update the target name that we use when committing an image in the
example script to not mention the local storage transport, since the
default is to use it anyway.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #163
Approved by: rhatdan
2017-06-26 15:56:18 +00:00
Nalin Dahyabhai
8efeb7f4ac Handle "run" without an explicit command correctly
When "run" isn't explicitly given a command, mix the command and
entrypoint options and configured values together correctly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #160
Approved by: rhatdan
2017-06-26 13:21:53 +00:00
Nalin Dahyabhai
303a8df35d Ensure volume points get created, and with perms
Ensure that volume points are created, if they don't exist, when they're
defined in a Dockerfile (#151), and that if we create them, we create
them with 0755 permissions (#152).

When processing RUN instructions or the run command, if we're not
mounting something in a volume's location, create a copy of the volume's
initial contents under the container directory and bind mount that.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #154
Approved by: rhatdan
2017-06-24 10:37:13 +00:00
Nalin Dahyabhai
21b1a9349d Add a -a/--all option to "buildah containers"
Add a --all option to "buildah containers" that causes it to go through
the full list of containers, providing information about the ones that
aren't buildah containers in addition to the ones that are.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #148
Approved by: rhatdan
2017-06-22 21:01:33 +00:00
TomSweeneyRedHat
8b99eae5e8 Add runc to required packages in README.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #153
Approved by: rhatdan
2017-06-20 21:04:51 +00:00
Nalin Dahyabhai
cd6b5870e2 Make shallowCopy() not use a temporary image
Modify shallowCopy() to not use a temporary image.  Assume that the big
data items that we formerly added to the temporary image are small
enough that we can just hang on to them.

Write everything to the destination reference instead of a temporary
image, read it all back using the low level APIs, delete the image, and
then recreate it using the new layer and the saved items and names.

This lets us lift the requirement that we shallowCopy only to images
with names, so that build-using-dockerfile will work without them again.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #150
Approved by: rhatdan
2017-06-16 15:34:53 +00:00
Nalin Dahyabhai
02f5235773 Add a quick note about state version values
Add a note that the version we record in our state file isn't
necessarily the package version, but is meant to change when we make
incompatible changes to the contents of the state file.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #147
Approved by: rhatdan
2017-06-14 17:47:07 +00:00
139 changed files with 5024 additions and 499 deletions

.gitignore vendored Normal file

@@ -0,0 +1,3 @@
docs/buildah*.1
/buildah
/imgtype


@@ -14,10 +14,12 @@ dnf install -y \
device-mapper-devel \
findutils \
git \
glib2-devel \
golang \
gpgme-devel \
libassuan-devel \
make \
ostree-devel \
which
# Red Hat CI adds a merge commit, for testing, which fails the


@@ -8,7 +8,7 @@ sudo: required
before_install:
- sudo add-apt-repository -y ppa:duggan/bats
- sudo apt-get -qq update
- sudo apt-get -qq install bats btrfs-tools git libdevmapper-dev libgpgme11-dev
- sudo apt-get -qq install bats btrfs-tools git libdevmapper-dev libglib2.0-dev libgpgme11-dev
script:
- make install.tools all validate
- make install.tools all validate TAGS=containers_image_ostree_stub
- cd tests; sudo PATH="$PATH" ./test_runner.sh


@@ -4,10 +4,18 @@ BINDIR := $(PREFIX)/bin
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
BUILDFLAGS := -tags "$(AUTOTAGS) $(TAGS)"
all: buildah docs
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_INFO := $(shell date +%s)
LDFLAGS := -ldflags '-X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO}'
all: buildah imgtype docs
buildah: *.go imagebuildah/*.go cmd/buildah/*.go docker/*.go util/*.go
go build -o buildah $(BUILDFLAGS) ./cmd/buildah
go build $(LDFLAGS) -o buildah $(BUILDFLAGS) ./cmd/buildah
imgtype: *.go docker/*.go util/*.go tests/imgtype.go
go build $(LDFLAGS) -o imgtype $(BUILDFLAGS) ./tests/imgtype.go
.PHONY: clean
clean:


@@ -21,13 +21,16 @@ Prior to installing buildah, install the following packages on your linux distro
* make
* golang (Requires version 1.8.1 or higher.)
* bats
* btrfs-progs-devel
* device-mapper-devel
* gpgme-devel
* libassuan-devel
* git
* btrfs-progs-devel
* bzip2
* device-mapper-devel
* git
* go-md2man
* gpgme-devel
* glib2-devel
* libassuan-devel
* ostree-devel
* runc
* skopeo-containers
In Fedora, you can use this command:
@@ -39,23 +42,26 @@ In Fedora, you can use this command:
bats \
btrfs-progs-devel \
device-mapper-devel \
glib2-devel \
gpgme-devel \
libassuan-devel \
ostree-devel \
git \
bzip2 \
go-md2man \
runc \
skopeo-containers
```
Then to install buildah follow the steps in this example:
Then to install buildah follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
make
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
make
make install
buildah --help
```
@@ -65,24 +71,26 @@ encounters a `RUN` instruction, so you'll also need to build and install a compa
[runc](https://github.com/opencontainers/runc) for buildah to call for those cases.
## Commands
| Command | Description |
| --------------------- | --------------------------------------------------- |
| buildah-add(1) | Add the contents of a file, URL, or a directory to the container. |
| buildah-bud(1) | Build an image using instructions from Dockerfiles. |
| buildah-commit(1) | Create an image from a working container. |
| buildah-config(1) | Update image configuration settings. |
| buildah-containers(1) | List the working containers and their base images. |
| buildah-copy(1) | Copies the contents of a file, URL, or directory into a container's working directory. |
| buildah-from(1) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| buildah-images(1) | List images in local storage. |
| buildah-inspect(1) | Inspects the configuration of a container or image. |
| buildah-mount(1) | Mount the working container's root filesystem. |
| buildah-push(1) | Copies an image from local storage. |
| buildah-rm(1) | Removes one or more working containers. |
| buildah-rmi(1) | Removes one or more images. |
| buildah-run(1) | Run a command inside of the container. |
| buildah-tag(1) | Add an additional name to a local image. |
| buildah-umount(1) | Unmount a working container's root file system. |
| Command | Description |
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [buildah-add(1)](/docs/buildah-add.md) | Add the contents of a file, URL, or a directory to the container. |
| [buildah-bud(1)](/docs/buildah-bud.md) | Build an image using instructions from Dockerfiles. |
| [buildah-commit(1)](/docs/buildah-commit.md) | Create an image from a working container. |
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-export(1)](/docs/buildah-export.md) | Export the contents of a container's filesystem as a tar archive |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |
| [buildah-mount(1)](/docs/buildah-mount.md) | Mount the working container's root filesystem. |
| [buildah-push(1)](/docs/buildah-push.md) | Copies an image from local storage. |
| [buildah-rm(1)](/docs/buildah-rm.md) | Removes one or more working containers. |
| [buildah-rmi(1)](/docs/buildah-rmi.md) | Removes one or more images. |
| [buildah-run(1)](/docs/buildah-run.md) | Run a command inside of the container. |
| [buildah-tag(1)](/docs/buildah-tag.md) | Add an additional name to a local image. |
| [buildah-umount(1)](/docs/buildah-umount.md) | Unmount a working container's root file system. |
| [buildah-version(1)](/docs/buildah-version.md) | Display the Buildah Version Information |
**Future goals include:**
* more CI tests

add.go

@@ -8,6 +8,7 @@ import (
"path"
"path/filepath"
"strings"
"syscall"
"time"
"github.com/Sirupsen/logrus"
@@ -120,44 +121,54 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
}
continue
}
srcfi, err := os.Stat(src)
glob, err := filepath.Glob(src)
if err != nil {
return errors.Wrapf(err, "error reading %q", src)
return errors.Wrapf(err, "invalid glob %q", src)
}
if srcfi.IsDir() {
// The source is a directory, so copy the contents of
// the source directory into the target directory. Try
// to create it first, so that if there's a problem,
// we'll discover why that won't work.
d := dest
if err := os.MkdirAll(d, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", d)
}
logrus.Debugf("copying %q to %q", src+string(os.PathSeparator)+"*", d+string(os.PathSeparator)+"*")
if err := chrootarchive.CopyWithTar(src, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", src, d)
}
continue
if len(glob) == 0 {
return errors.Wrapf(syscall.ENOENT, "no files found matching %q", src)
}
if !extract || !archive.IsArchivePath(src) {
// This source is a file, and either it's not an
// archive, or we don't care whether or not it's an
// archive.
d := dest
if destfi != nil && destfi.IsDir() {
d = filepath.Join(dest, filepath.Base(src))
for _, gsrc := range glob {
srcfi, err := os.Stat(gsrc)
if err != nil {
return errors.Wrapf(err, "error reading %q", gsrc)
}
// Copy the file, preserving attributes.
logrus.Debugf("copying %q to %q", src, d)
if err := chrootarchive.CopyFileWithTar(src, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", src, d)
if srcfi.IsDir() {
// The source is a directory, so copy the contents of
// the source directory into the target directory. Try
// to create it first, so that if there's a problem,
// we'll discover why that won't work.
d := dest
if err := os.MkdirAll(d, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", d)
}
logrus.Debugf("copying %q to %q", gsrc+string(os.PathSeparator)+"*", d+string(os.PathSeparator)+"*")
if err := chrootarchive.CopyWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
continue
}
if !extract || !archive.IsArchivePath(gsrc) {
// This source is a file, and either it's not an
// archive, or we don't care whether or not it's an
// archive.
d := dest
if destfi != nil && destfi.IsDir() {
d = filepath.Join(dest, filepath.Base(gsrc))
}
// Copy the file, preserving attributes.
logrus.Debugf("copying %q to %q", gsrc, d)
if err := chrootarchive.CopyFileWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
continue
}
// We're extracting an archive into the destination directory.
logrus.Debugf("extracting contents of %q into %q", gsrc, dest)
if err := chrootarchive.UntarPath(gsrc, dest); err != nil {
return errors.Wrapf(err, "error extracting %q into %q", gsrc, dest)
}
continue
}
// We're extracting an archive into the destination directory.
logrus.Debugf("extracting contents of %q into %q", src, dest)
if err := chrootarchive.UntarPath(src, dest); err != nil {
return errors.Wrapf(err, "error extracting %q into %q", src, dest)
}
}
return nil


@@ -19,9 +19,17 @@ const (
// identify working containers.
Package = "buildah"
// Version for the Package
Version = "0.1"
Version = "0.2"
// The value we use to identify what type of information, currently a
// serialized Builder structure, we are using as per-container state.
// This should only be changed when we make incompatible changes to
// that data structure, as it's used to distinguish containers which
// are "ours" from ones that aren't.
containerType = Package + " 0.0.1"
stateFile = Package + ".json"
// The file in the per-container directory which we use to store our
// per-container state. If it isn't there, then the container isn't
// one of our build containers.
stateFile = Package + ".json"
)
const (


@@ -36,6 +36,10 @@ var (
Name: "quiet, q",
Usage: "don't output progress information when writing images",
},
cli.BoolFlag{
Name: "rm",
Usage: "remove the container and its content after committing it to an image. Default leaves the container and its content in place.",
},
}
commitDescription = "Writes a new image using the container's read-write layer and, if it is based\n on an image, the layers of that image"
commitCommand = cli.Command{
@@ -128,5 +132,8 @@ func commitCmd(c *cli.Context) error {
return errors.Wrapf(err, "error committing container %q to %q", builder.Container, image)
}
if c.Bool("rm") {
return builder.Delete()
}
return nil
}


@@ -1,15 +1,28 @@
package main
import (
"encoding/json"
"os"
"strings"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
type imageMetadata struct {
Tag string `json:"tag"`
CreatedTime time.Time `json:"created-time"`
ID string `json:"id"`
Blobs []types.BlobInfo `json:"blob-list"`
Layers map[string][]string `json:"layers"`
SignatureSizes []string `json:"signature-sizes"`
}
var needToShutdownStore = false
func getStore(c *cli.Context) (storage.Store, error) {
@@ -71,3 +84,31 @@ func openImage(store storage.Store, name string) (builder *buildah.Builder, err
}
return builder, nil
}
func parseMetadata(image storage.Image) (imageMetadata, error) {
var im imageMetadata
dec := json.NewDecoder(strings.NewReader(image.Metadata))
if err := dec.Decode(&im); err != nil {
return imageMetadata{}, err
}
return im, nil
}
func getSize(image storage.Image, store storage.Store) (int64, error) {
is.Transport.SetStore(store)
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return -1, err
}
img, err := storeRef.NewImage(nil)
if err != nil {
return -1, err
}
imgSize, err := img.Size()
if err != nil {
return -1, err
}
return imgSize, nil
}


@@ -0,0 +1,99 @@
package main
import (
"os/user"
"testing"
"flag"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/urfave/cli"
)
func TestGetStore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
set := flag.NewFlagSet("test", 0)
globalSet := flag.NewFlagSet("test", 0)
globalSet.String("root", "", "path to the root directory in which data, including images, is stored")
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "imagesCommand"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
_, err := getStore(c)
if err != nil {
t.Error(err)
}
}
func TestParseMetadata(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
} else if len(images) == 0 {
t.Fatalf("no images with metadata to parse")
}
_, err = parseMetadata(images[0])
if err != nil {
t.Error(err)
}
}
func TestGetSize(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
_, err = getSize(images[0], store)
if err != nil {
t.Error(err)
}
}
func failTestIfNotRoot(t *testing.T) {
u, err := user.Current()
if err != nil {
t.Log("Could not determine user. Running without root may cause tests to fail")
} else if u.Uid != "0" {
t.Fatal("tests will fail unless run as root")
}
}
func pullTestImage(imageName string) error {
set := flag.NewFlagSet("test", 0)
set.Bool("pull", true, "pull the image if not present")
globalSet := flag.NewFlagSet("globaltest", 0)
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "imagesCommand"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
c.Set("pull", "true")
c.Args = append(c.Args, imageName)
return fromCommand(c)
}


@@ -1,6 +1,7 @@
package main
import (
"encoding/json"
"fmt"
"github.com/pkg/errors"
@@ -8,6 +9,14 @@ import (
"github.com/urfave/cli"
)
type jsonContainer struct {
ID string `json:"id"`
Builder bool `json:"builder"`
ImageID string `json:"imageid"`
ImageName string `json:"imagename"`
ContainerName string `json:"containername"`
}
var (
containersFlags = []cli.Flag{
cli.BoolFlag{
@@ -22,6 +31,14 @@ var (
Name: "notruncate",
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "all, a",
Usage: "also list non-buildah containers",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
}
containersDescription = "Lists containers which appear to be " + buildah.Package + " working containers, their\n names and IDs, and the names and IDs of the images from which they were\n initialized"
containersCommand = cli.Command{
@@ -52,31 +69,91 @@ func containersCmd(c *cli.Context) error {
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
}
all := false
if c.IsSet("all") {
all = c.Bool("all")
}
jsonOut := false
JSONContainers := []jsonContainer{}
if c.IsSet("json") {
jsonOut = c.Bool("json")
}
list := func(n int, containerID, imageID, image, container string, isBuilder bool) {
if jsonOut {
JSONContainers = append(JSONContainers, jsonContainer{ID: containerID, Builder: isBuilder, ImageID: imageID, ImageName: image, ContainerName: container})
return
}
if n == 0 && !noheading && !quiet {
if truncate {
fmt.Printf("%-12s %-8s %-12s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
}
}
if quiet {
fmt.Printf("%s\n", containerID)
} else {
isBuilderValue := ""
if isBuilder {
isBuilderValue = " *"
}
if truncate {
fmt.Printf("%-12.12s %-8s %-12.12s %-32s %s\n", containerID, isBuilderValue, imageID, image, container)
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", containerID, isBuilderValue, imageID, image, container)
}
}
}
seenImages := make(map[string]string)
imageNameForID := func(id string) string {
if id == "" {
return buildah.BaseImageFakeName
}
imageName, ok := seenImages[id]
if ok {
return imageName
}
img, err2 := store.Image(id)
if err2 == nil && len(img.Names) > 0 {
seenImages[id] = img.Names[0]
}
return seenImages[id]
}
builders, err := openBuilders(store)
if err != nil {
return errors.Wrapf(err, "error reading build containers")
}
if len(builders) > 0 && !noheading && !quiet {
if truncate {
fmt.Printf("%-12s %-12s %-10s %s\n", "CONTAINER ID", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-64s %-10s %s\n", "CONTAINER ID", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
if !all {
for i, builder := range builders {
image := imageNameForID(builder.FromImageID)
list(i, builder.ContainerID, builder.FromImageID, image, builder.Container, true)
}
} else {
builderMap := make(map[string]struct{})
for _, builder := range builders {
builderMap[builder.ContainerID] = struct{}{}
}
containers, err2 := store.Containers()
if err2 != nil {
return errors.Wrapf(err2, "error reading list of all containers")
}
for i, container := range containers {
name := ""
if len(container.Names) > 0 {
name = container.Names[0]
}
_, ours := builderMap[container.ID]
list(i, container.ID, container.ImageID, imageNameForID(container.ImageID), name, ours)
}
}
for _, builder := range builders {
if builder.FromImage == "" {
builder.FromImage = buildah.BaseImageFakeName
}
if quiet {
fmt.Printf("%s\n", builder.ContainerID)
} else {
if truncate {
fmt.Printf("%-12.12s %-12.12s %-10s %s\n", builder.ContainerID, builder.FromImageID, builder.FromImage, builder.Container)
} else {
fmt.Printf("%-64s %-64s %-10s %s\n", builder.ContainerID, builder.FromImageID, builder.FromImage, builder.Container)
}
if jsonOut {
data, err := json.MarshalIndent(JSONContainers, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
}
return nil

cmd/buildah/export.go Normal file

@@ -0,0 +1,87 @@
package main
import (
"fmt"
"io"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
var (
exportFlags = []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "write to a file, instead of STDOUT",
},
}
exportCommand = cli.Command{
Name: "export",
Usage: "Export container's filesystem contents as a tar archive",
Description: `This command exports the full or shortened container ID or container name to
STDOUT and should be redirected to a tar file.
`,
Flags: exportFlags,
Action: exportCmd,
ArgsUsage: "CONTAINER",
}
)
func exportCmd(c *cli.Context) error {
var builder *buildah.Builder
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container name must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
name := args[0]
store, err := getStore(c)
if err != nil {
return err
}
builder, err = openBuilder(store, name)
if err != nil {
return errors.Wrapf(err, "error reading build container %q", name)
}
mountPoint, err := builder.Mount("")
if err != nil {
return errors.Wrapf(err, "error mounting %q container %q", name, builder.Container)
}
defer func() {
if err := builder.Unmount(); err != nil {
fmt.Printf("Failed to umount %q: %v\n", builder.Container, err)
}
}()
input, err := archive.Tar(mountPoint, 0)
if err != nil {
return errors.Wrapf(err, "error reading directory %q", name)
}
outFile := os.Stdout
if c.IsSet("output") {
outfile := c.String("output")
outFile, err = os.Create(outfile)
if err != nil {
return errors.Wrapf(err, "error creating file %q", outfile)
}
defer outFile.Close()
}
if logrus.IsTerminal(outFile) {
return errors.Errorf("Refusing to save to a terminal. Use the -o flag or redirect.")
}
_, err = io.Copy(outFile, input)
return err
}


@@ -2,11 +2,42 @@ package main
import (
"fmt"
"os"
"strings"
"text/template"
"time"
"encoding/json"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
type jsonImage struct {
ID string `json:"id"`
Names []string `json:"names"`
}
type imageOutputParams struct {
ID string
Name string
Digest string
CreatedAt string
Size string
}
type filterParams struct {
dangling string
label string
beforeImage string // Images are sorted by date, so we can just output until we see the image
sinceImage string // Images are sorted by date, so we can just output until we don't see the image
beforeDate time.Time
sinceDate time.Time
referencePattern string
}
var (
imagesFlags = []cli.Flag{
cli.BoolFlag{
@@ -18,10 +49,27 @@ var (
Usage: "do not print column headings",
},
cli.BoolFlag{
Name: "notruncate",
Name: "no-trunc, notruncate",
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "digests",
Usage: "show digests",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print images using a Go template. will override --quiet",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided (default [])",
},
}
imagesDescription = "Lists locally stored images."
imagesCommand = cli.Command{
Name: "images",
@@ -39,6 +87,11 @@ func imagesCmd(c *cli.Context) error {
return err
}
images, err := store.Images()
if err != nil {
return errors.Wrapf(err, "error reading images")
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
@@ -48,38 +101,306 @@ func imagesCmd(c *cli.Context) error {
noheading = c.Bool("noheading")
}
truncate := true
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
if c.IsSet("no-trunc") {
truncate = !c.Bool("no-trunc")
}
images, err := store.Images()
if err != nil {
return errors.Wrapf(err, "error reading images")
digests := false
if c.IsSet("digests") {
digests = c.Bool("digests")
}
formatString := ""
hasTemplate := false
if c.IsSet("format") {
formatString = c.String("format")
hasTemplate = true
}
name := ""
if len(c.Args()) == 1 {
name = c.Args().Get(0)
} else if len(c.Args()) > 1 {
return errors.New("'buildah images' requires at most 1 argument")
}
if c.IsSet("json") {
JSONImages := []jsonImage{}
for _, image := range images {
JSONImages = append(JSONImages, jsonImage{ID: image.ID, Names: image.Names})
}
data, err2 := json.MarshalIndent(JSONImages, "", " ")
if err2 != nil {
return err2
}
fmt.Printf("%s\n", data)
return nil
}
var params *filterParams
if c.IsSet("filter") {
params, err = parseFilter(images, c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
} else {
params = nil
}
if len(images) > 0 && !noheading && !quiet && !hasTemplate {
outputHeader(truncate, digests)
}
return outputImages(images, formatString, store, params, name, hasTemplate, truncate, digests, quiet)
}
func parseFilter(images []storage.Image, filter string) (*filterParams, error) {
params := new(filterParams)
filterStrings := strings.Split(filter, ",")
for _, param := range filterStrings {
pair := strings.SplitN(param, "=", 2)
switch strings.TrimSpace(pair[0]) {
case "dangling":
if pair[1] == "true" || pair[1] == "false" {
params.dangling = pair[1]
} else {
return nil, fmt.Errorf("invalid filter: '%s=[%s]'", pair[0], pair[1])
}
case "label":
params.label = pair[1]
case "before":
beforeDate, err := setFilterDate(images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[0])
}
params.beforeDate = beforeDate
case "since":
sinceDate, err := setFilterDate(images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[0])
}
params.sinceDate = sinceDate
case "reference":
params.referencePattern = pair[1]
default:
return nil, fmt.Errorf("invalid filter: '%s'", pair[0])
}
}
return params, nil
}
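The filter syntax above is a comma-separated list of `key=value` pairs. A minimal, self-contained sketch of that parsing step (the name `parseKeyValueFilter` is illustrative, not part of the buildah source) might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// parseKeyValueFilter splits a comma-separated "key=value" filter string,
// mirroring the SplitN-based parsing in parseFilter above. Note that
// SplitN(param, "=", 2) keeps any further '=' in the value, so
// "label=a=b" parses to key "label", value "a=b".
func parseKeyValueFilter(filter string) (map[string]string, error) {
	params := map[string]string{}
	for _, param := range strings.Split(filter, ",") {
		pair := strings.SplitN(param, "=", 2)
		if len(pair) != 2 {
			return nil, fmt.Errorf("invalid filter: '%s'", param)
		}
		params[strings.TrimSpace(pair[0])] = pair[1]
	}
	return params, nil
}

func main() {
	params, err := parseKeyValueFilter("dangling=true,label=a=b,reference=abcdef")
	if err != nil {
		panic(err)
	}
	fmt.Println(params["dangling"], params["label"], params["reference"])
}
```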
func setFilterDate(images []storage.Image, imgName string) (time.Time, error) {
for _, image := range images {
for _, name := range image.Names {
if matchesReference(name, imgName) {
// Set the date to this image
im, err := parseMetadata(image)
if err != nil {
return time.Time{}, errors.Wrapf(err, "could not get creation date for image %q", imgName)
}
date := im.CreatedTime
return date, nil
}
}
}
return time.Time{}, fmt.Errorf("could not locate image %q", imgName)
}
func outputHeader(truncate, digests bool) {
if truncate {
fmt.Printf("%-20s %-56s ", "IMAGE ID", "IMAGE NAME")
} else {
fmt.Printf("%-64s %-56s ", "IMAGE ID", "IMAGE NAME")
}
if digests {
fmt.Printf("%-64s ", "DIGEST")
}
fmt.Printf("%-22s %s\n", "CREATED AT", "SIZE")
}
func outputImages(images []storage.Image, format string, store storage.Store, filters *filterParams, argName string, hasTemplate, truncate, digests, quiet bool) error {
for _, image := range images {
imageMetadata, err := parseMetadata(image)
if err != nil {
fmt.Println(err)
}
createdTime := imageMetadata.CreatedTime.Format("Jan 2, 2006 15:04")
digest := ""
if len(imageMetadata.Blobs) > 0 {
digest = string(imageMetadata.Blobs[0].Digest)
}
size, _ := getSize(image, store)
names := []string{""}
if len(image.Names) > 0 {
names = image.Names
} else {
// images without names should be printed with "<none>" as the image name
names = append(names, "<none>")
}
for _, name := range names {
if !matchesFilter(image, store, name, filters) || !matchesReference(name, argName) {
continue
}
if quiet {
fmt.Printf("%-64s\n", image.ID)
// We only want to print each id once
break
}
params := imageOutputParams{
ID: image.ID,
Name: name,
Digest: digest,
CreatedAt: createdTime,
Size: formattedSize(size),
}
if hasTemplate {
err = outputUsingTemplate(format, params)
if err != nil {
return err
}
continue
}
outputUsingFormatString(truncate, digests, params)
}
}
return nil
}
func matchesFilter(image storage.Image, store storage.Store, name string, params *filterParams) bool {
if params == nil {
return true
}
if params.dangling != "" && !matchesDangling(name, params.dangling) {
return false
} else if params.label != "" && !matchesLabel(image, store, params.label) {
return false
} else if params.beforeImage != "" && !matchesBeforeImage(image, name, params) {
return false
} else if params.sinceImage != "" && !matchesSinceImage(image, name, params) {
return false
} else if params.referencePattern != "" && !matchesReference(name, params.referencePattern) {
return false
}
return true
}
func matchesDangling(name string, dangling string) bool {
if dangling == "false" && name != "<none>" {
return true
} else if dangling == "true" && name == "<none>" {
return true
}
return false
}
func matchesLabel(image storage.Image, store storage.Store, label string) bool {
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return false
}
img, err := storeRef.NewImage(nil)
if err != nil {
return false
}
info, err := img.Inspect()
if err != nil {
return false
}
pair := strings.SplitN(label, "=", 2)
for key, value := range info.Labels {
if key == pair[0] {
if len(pair) == 2 {
if value == pair[1] {
return true
}
} else {
// filter gave only a key; the label being present is enough
return true
}
}
}
return false
}
// Returns true if the image was created before the filter image. Returns
// false otherwise
func matchesBeforeImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.Before(params.beforeDate) {
return true
}
return false
}
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesSinceImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.After(params.sinceDate) {
return true
}
return false
}
func matchesID(id, argID string) bool {
return strings.HasPrefix(id, argID)
}
func matchesReference(name, argName string) bool {
if argName == "" {
return true
}
splitName := strings.Split(name, ":")
// If the arg contains a tag, we handle it differently than if it does not
if strings.Contains(argName, ":") {
splitArg := strings.Split(argName, ":")
return strings.HasSuffix(splitName[0], splitArg[0]) && (splitName[1] == splitArg[1])
}
return strings.HasSuffix(splitName[0], argName)
}
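The reference matcher works by suffix-matching the repository part and, when the argument carries a tag, comparing tags exactly. A standalone sketch of the same idea (the name `matchesRef` is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesRef reports whether a stored image name matches a user-supplied
// reference, with or without a tag. The repository part is matched by
// suffix, so "pause" matches "docker.io/kubernetes/pause:latest".
func matchesRef(name, argName string) bool {
	if argName == "" {
		return true
	}
	splitName := strings.SplitN(name, ":", 2)
	if strings.Contains(argName, ":") {
		splitArg := strings.SplitN(argName, ":", 2)
		// Arg has a tag: repository must suffix-match and tags must be equal.
		return strings.HasSuffix(splitName[0], splitArg[0]) &&
			len(splitName) == 2 && splitName[1] == splitArg[1]
	}
	return strings.HasSuffix(splitName[0], argName)
}

func main() {
	fmt.Println(matchesRef("docker.io/kubernetes/pause:latest", "pause"))        // true
	fmt.Println(matchesRef("docker.io/kubernetes/pause:latest", "redis:latest")) // false
}
```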
func formattedSize(size int64) string {
suffixes := [5]string{"B", "KB", "MB", "GB", "TB"}
count := 0
formattedSize := float64(size)
for formattedSize >= 1024 && count < 4 {
formattedSize /= 1024
count++
}
return fmt.Sprintf("%.4g %s", formattedSize, suffixes[count])
}
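The size formatter repeatedly divides by 1024 until the value fits the current unit or the largest suffix is reached, then prints with four significant digits. A self-contained sketch of that loop (`humanSize` is an illustrative name):

```go
package main

import "fmt"

// humanSize mirrors the formattedSize loop above: divide by 1024 until the
// value drops below 1024 or the "TB" suffix is reached, then format with
// up to four significant digits.
func humanSize(size int64) string {
	suffixes := [5]string{"B", "KB", "MB", "GB", "TB"}
	count := 0
	formatted := float64(size)
	for formatted >= 1024 && count < 4 {
		formatted /= 1024
		count++
	}
	return fmt.Sprintf("%.4g %s", formatted, suffixes[count])
}

func main() {
	fmt.Println(humanSize(0))         // 0 B
	fmt.Println(humanSize(1024))      // 1 KB
	fmt.Println(humanSize(97 * 1024)) // 97 KB
}
```

Because the loop stops at `count < 4`, anything at or above 1024 TB stays in terabytes, which is why the test file expects "1024 TB" for 1024^5 bytes.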
func outputUsingTemplate(format string, params imageOutputParams) error {
tmpl, err := template.New("image").Parse(format)
if err != nil {
return errors.Wrapf(err, "Template parsing error")
}
err = tmpl.Execute(os.Stdout, params)
if err != nil {
return err
}
fmt.Println()
return nil
}
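The `--format` path is a plain two-step use of `text/template`: parse the user-supplied format, then execute it against the row struct. A minimal sketch that renders into a buffer instead of stdout (`row` and `render` are illustrative stand-ins for `imageOutputParams` and `outputUsingTemplate`):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// row stands in for imageOutputParams; field names are what the
// template references via {{.ID}}, {{.Name}}, etc.
type row struct {
	ID   string
	Name string
}

// render parses a user-supplied Go template and executes it against a row,
// returning the rendered text (or a parse/execute error).
func render(format string, r row) (string, error) {
	tmpl, err := template.New("image").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, r); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := render("{{.ID}} {{.Name}}", row{ID: "0123456789ab", Name: "test/image:latest"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```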
func outputUsingFormatString(truncate, digests bool, params imageOutputParams) {
if truncate {
fmt.Printf("%-20.12s %-56s", params.ID, params.Name)
} else {
fmt.Printf("%-64s %-56s", params.ID, params.Name)
}
if digests {
fmt.Printf(" %-64s", params.Digest)
}
fmt.Printf(" %-22s %s\n", params.CreatedAt, params.Size)
}

cmd/buildah/images_test.go Normal file

@@ -0,0 +1,629 @@
package main
import (
"bytes"
"fmt"
"io"
"os"
"strings"
"testing"
"time"
is "github.com/containers/image/storage"
"github.com/containers/storage"
)
func TestTemplateOutputBlankTemplate(t *testing.T) {
params := imageOutputParams{
ID: "0123456789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
err := outputUsingTemplate("", params)
if err != nil {
t.Error(err)
}
}
func TestTemplateOutputValidTemplate(t *testing.T) {
params := imageOutputParams{
ID: "0123456789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
templateString := "{{.ID}}"
output, err := captureOutputWithError(func() error {
return outputUsingTemplate(templateString, params)
})
if err != nil {
t.Error(err)
} else if strings.TrimSpace(output) != strings.TrimSpace(params.ID) {
t.Errorf("Error with template output:\nExpected: %s\nReceived: %s\n", params.ID, output)
}
}
func TestFormatStringOutput(t *testing.T) {
params := imageOutputParams{
ID: "012345789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
output := captureOutput(func() {
outputUsingFormatString(true, true, params)
})
expectedOutput := fmt.Sprintf("%-12.12s %-40s %-64s %-22s %s\n", params.ID, params.Name, params.Digest, params.CreatedAt, params.Size)
if output != expectedOutput {
t.Errorf("Error outputting using format string:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
}
func TestSizeFormatting(t *testing.T) {
size := formattedSize(0)
if size != "0 B" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "0 B", size)
}
size = formattedSize(1024)
if size != "1 KB" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "1 KB", size)
}
size = formattedSize(1024 * 1024 * 1024 * 1024 * 1024)
if size != "1024 TB" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "1024 TB", size)
}
}
func TestOutputHeader(t *testing.T) {
output := captureOutput(func() {
outputHeader(true, false)
})
expectedOutput := fmt.Sprintf("%-12s %-40s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
output = captureOutput(func() {
outputHeader(true, true)
})
expectedOutput = fmt.Sprintf("%-12s %-40s %-64s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "DIGEST", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
output = captureOutput(func() {
outputHeader(false, false)
})
expectedOutput = fmt.Sprintf("%-64s %-40s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
}
func TestMatchWithTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "pause:latest")
if !isMatch {
t.Error("expected match, got not match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/pause:latest")
if !isMatch {
t.Error("expected match, got no match")
}
}
func TestNoMatchesReferenceWithTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "redis:latest")
if isMatch {
t.Error("expected no match, got match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/redis:latest")
if isMatch {
t.Error("expected no match, got match")
}
}
func TestMatchesReferenceWithoutTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "pause")
if !isMatch {
t.Error("expected match, got not match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/pause")
if !isMatch {
t.Error("expected match, got no match")
}
}
func TestNoMatchesReferenceWithoutTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "redis")
if isMatch {
t.Error("expected no match, got match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/redis")
if isMatch {
t.Error("expected no match, got match")
}
}
func TestOutputImagesQuietTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, true, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("quiet/truncated output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("quiet/truncated output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesQuietNotTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and non-truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, false, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("quiet/non-truncated output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("quiet/non-truncated output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesFormatString(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests output with format template
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "{{.ID}}", store, nil, "", true, true, false, false)
})
expectedOutput := fmt.Sprintf("%s", images[0].ID)
if err != nil {
t.Error("format string output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("format string output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesFormatTemplate(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and non-truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, false, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("format template output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("format template output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesArgNoMatch(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests output with an arg name that does not match. Args ending in ":" cannot match
// because all images in the repository must have a tag, and here the tag is an
// empty string
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "foo:", false, true, false, false)
})
expectedOutput := fmt.Sprintf("")
if err != nil {
t.Error("arg no match output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Error("arg no match output should be empty")
}
}
func TestOutputMultipleImages(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:2], "", store, nil, "", false, true, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n%-64s\n", images[0].ID, images[1].ID)
if err != nil {
t.Error("multi-image output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("multi-image output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestParseFilterAllParams(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=true,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
params, err := parseFilter(images, label)
if err != nil {
t.Fatalf("error parsing filter")
}
expectedParams := &filterParams{dangling: "true", label: "a=b", beforeImage: "busybox:latest", sinceImage: "busybox:latest", referencePattern: "abcdef"}
if *params != *expectedParams {
t.Errorf("filter did not return expected result\n\tExpected: %v\n\tReceived: %v", expectedParams, params)
}
}
func TestParseFilterInvalidDangling(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=NO,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
_, err = parseFilter(images, label)
if err == nil || err.Error() != "invalid filter: 'dangling=[NO]'" {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidBefore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=:,since=busybox:latest,reference=abcdef"
_, err = parseFilter(images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidSince(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=busybox:latest,since=:,reference=abcdef"
_, err = parseFilter(images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidFilter(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "foo=bar"
_, err = parseFilter(images, label)
if err == nil || err.Error() != "invalid filter: 'foo'" {
t.Fatalf("expected error parsing filter")
}
}
func TestMatchesDanglingTrue(t *testing.T) {
if !matchesDangling("<none>", "true") {
t.Error("matchesDangling() should return true with dangling=true and name=<none>")
}
if !matchesDangling("hello", "false") {
t.Error("matchesDangling() should return true with dangling=false and name='hello'")
}
}
func TestMatchesDanglingFalse(t *testing.T) {
if matchesDangling("hello", "true") {
t.Error("matchesDangling() should return false with dangling=true and name=hello")
}
if matchesDangling("<none>", "false") {
t.Error("matchesDangling() should return false with dangling=false and name=<none>")
}
}
func TestMatchesLabelTrue(t *testing.T) {
//TODO: How do I implement this?
}
func TestMatchesLabelFalse(t *testing.T) {
// TODO: How do I implement this?
}
func TestMatchesBeforeImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
params := new(filterParams)
params.beforeDate = time.Now()
params.beforeImage = "foo:bar"
if !matchesBeforeImage(images[0], ":", params) {
t.Error("should have matched beforeImage")
}
}
func TestMatchesBeforeImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
params := new(filterParams)
params.beforeDate = time.Time{}
params.beforeImage = "foo:bar"
// Should return false because the image was created after the zero beforeDate
if matchesBeforeImage(images[0], ":", params) {
t.Error("should not have matched beforeImage")
}
}
func TestMatchesSinceImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
params := new(filterParams)
params.sinceDate = time.Time{}
params.sinceImage = "foo:bar"
if !matchesSinceImage(images[0], ":", params) {
t.Error("should have matched SinceImage")
}
}
func TestMatchesSinceImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
params := new(filterParams)
params.sinceDate = time.Now()
params.sinceImage = "foo:bar"
// Should return false because the image was created before sinceDate
if matchesSinceImage(images[0], ":", params) {
t.Error("should not have matched sinceImage")
}
if matchesSinceImage(images[0], "foo:bar", params) {
t.Error("image should have been filtered out")
}
}
func captureOutputWithError(f func() error) (string, error) {
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
err := f()
w.Close()
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
return buf.String(), err
}
// Captures output so that it can be compared to expected values
func captureOutput(f func()) string {
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
f()
w.Close()
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
return buf.String()
}


@@ -78,6 +78,7 @@ func main() {
configCommand,
containersCommand,
copyCommand,
exportCommand,
fromCommand,
imagesCommand,
inspectCommand,
@@ -88,6 +89,7 @@ func main() {
runCommand,
tagCommand,
umountCommand,
versionCommand,
}
err := app.Run(os.Args)
if err != nil {


@@ -1,8 +1,11 @@
package main
import (
"fmt"
"os"
"strings"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
@@ -25,14 +28,24 @@ var (
Usage: "don't output progress information when pushing images",
},
}
pushDescription = "Pushes an image to a specified location."
pushCommand = cli.Command{
pushDescription = fmt.Sprintf(`
Pushes an image to a specified location.
The Image "DESTINATION" uses a "transport":"details" format.
Supported transports:
%s
See buildah-push(1) section "DESTINATION" for the expected format
`, strings.Join(transports.ListNames(), ", "))
pushCommand = cli.Command{
Name: "push",
Usage: "Push an image to a specified location",
Usage: "Push an image to a specified destination",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE [TRANSPORT:]IMAGE",
ArgsUsage: "IMAGE DESTINATION",
}
)


@@ -2,29 +2,41 @@ package main
import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/image/storage"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/urfave/cli"
)
var (
rmiDescription = "Removes one or more locally stored images."
rmiCommand = cli.Command{
rmiDescription = "removes one or more locally stored images."
rmiFlags = []cli.Flag{
cli.BoolFlag{
Name: "force, f",
Usage: "force removal of the image",
},
}
rmiCommand = cli.Command{
Name: "rmi",
Usage: "Removes one or more images from local storage",
Usage: "removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
Flags: rmiFlags,
}
)
func rmiCmd(c *cli.Context) error {
force := false
if c.IsSet("force") {
force = c.Bool("force")
}
args := c.Args()
if len(args) == 0 {
return errors.Errorf("image name or ID must be specified")
@@ -35,75 +47,192 @@ func rmiCmd(c *cli.Context) error {
return err
}
for _, id := range args {
image, err := getImage(id, store)
if err != nil {
return errors.Wrapf(err, "could not get image %q", id)
}
if image != nil {
ctrIDs, err := runningContainers(image, store)
if err != nil {
return errors.Wrapf(err, "error getting running containers for image %q", id)
}
if len(ctrIDs) > 0 && len(image.Names) <= 1 {
if force {
removeContainers(ctrIDs, store)
} else {
for _, ctrID := range ctrIDs {
return fmt.Errorf("could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
}
}
}
// If the user supplied an ID, we cannot delete the image if it is referred to by multiple tags
if matchesID(image.ID, id) {
if len(image.Names) > 1 && !force {
return fmt.Errorf("unable to delete %s (must force) - image is referred to in multiple tags", image.ID)
}
// If it is forced, we have to untag the image so that it can be deleted
image.Names = image.Names[:0]
} else {
name, err2 := untagImage(id, image, store)
if err2 != nil {
return err2
}
fmt.Printf("untagged: %s\n", name)
}
if len(image.Names) > 0 {
continue
}
id, err := removeImage(image, store)
if err != nil {
return err
}
fmt.Printf("%s\n", id)
}
}
return nil
}
func getImage(id string, store storage.Store) (*storage.Image, error) {
var ref types.ImageReference
ref, err := properImageRef(id)
if err != nil {
logrus.Debug(err)
}
if ref == nil {
if ref, err = storageImageRef(store, id); err != nil {
logrus.Debug(err)
}
}
if ref == nil {
if ref, err = storageImageID(store, id); err != nil {
logrus.Debug(err)
}
}
if ref != nil {
image, err2 := is.Transport.GetStoreImage(store, ref)
if err2 != nil {
return nil, err2
}
return image, nil
}
return nil, err
}
func untagImage(imgArg string, image *storage.Image, store storage.Store) (string, error) {
// Remove name from image.Names and set the new name in the ImageStore
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrap(err, "could not untag image")
}
newNames := []string{}
removedName := ""
for _, name := range image.Names {
if matchesReference(name, imgArg) {
removedName = name
continue
}
newNames = append(newNames, name)
}
if err = imgStore.SetNames(image.ID, newNames); err != nil {
return "", errors.Wrap(err, "could not untag image")
}
err = imgStore.Save()
return removedName, err
}
func removeImage(image *storage.Image, store storage.Store) (string, error) {
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrapf(err, "could not open image store")
}
err = imgStore.Delete(image.ID)
if err != nil {
return "", errors.Wrapf(err, "could not remove image")
}
err = imgStore.Save()
if err != nil {
return "", errors.Wrapf(err, "could not save image store")
}
return image.ID, nil
}
// Returns a list of running containers associated with the given ImageReference
func runningContainers(image *storage.Image, store storage.Store) ([]string, error) {
ctrIDs := []string{}
ctrStore, err := store.ContainerStore()
if err != nil {
return nil, err
}
containers, err := ctrStore.Containers()
if err != nil {
return nil, err
}
for _, ctr := range containers {
if ctr.ImageID == image.ID {
ctrIDs = append(ctrIDs, ctr.ID)
}
}
return ctrIDs, nil
}
func removeContainers(ctrIDs []string, store storage.Store) error {
ctrStore, err := store.ContainerStore()
if err != nil {
return err
}
for _, ctrID := range ctrIDs {
if err = ctrStore.Delete(ctrID); err != nil {
return errors.Wrapf(err, "could not remove container %q", ctrID)
}
}
return nil
}
// If it looks like a proper image reference, parse it and check if it
// corresponds to an image that actually exists.
func properImageRef(id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = alltransports.ParseImageName(id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of image reference %q: %v", transports.ImageName(ref), err2)
}
return nil, fmt.Errorf("error parsing %q as an image reference: %v", id, err)
}
// If it looks like an image reference that's relative to our storage, parse
// it and check if it corresponds to an image that actually exists.
func storageImageRef(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err2)
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", id, err)
}
// If it might be an ID that's relative to our storage, parse it and check if it
// corresponds to an image that actually exists. This _should_ be redundant,
// since we already tried deleting the image using the ID directly above, but it
// can't hurt either.
func storageImageID(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, "@"+id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err2)
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", "@"+id, err)
}

cmd/buildah/rmi_test.go Normal file

@@ -0,0 +1,145 @@
package main
import (
"strings"
"testing"
is "github.com/containers/image/storage"
"github.com/containers/storage"
)
func TestProperImageRefTrue(t *testing.T) {
// Pull an image so we know we have it
err := pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove")
}
// This should match a url path
imgRef, err := properImageRef("docker://busybox:latest")
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestProperImageRefFalse(t *testing.T) {
// Pull an image so we know we have it
err := pullTestImage("busybox:latest")
if err != nil {
t.Fatal("could not pull image to remove")
}
// This should fail to parse as an image reference
imgRef, _ := properImageRef("docker://:")
if imgRef != nil {
t.Error("should not have found an Image Reference")
}
}
func TestStorageImageRefTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
imgRef, err := storageImageRef(store, "busybox")
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestStorageImageRefFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
imgRef, _ := storageImageRef(store, "")
if imgRef != nil {
t.Error("should not have found an Image Reference")
}
}
func TestStorageImageIDTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
// Get the ID of the image we just pulled
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
id, err := captureOutputWithError(func() error {
return outputImages(images, "", store, nil, "busybox:latest", false, false, false, true)
})
if err != nil {
t.Fatalf("Error getting id of image: %v", err)
}
id = strings.TrimSpace(id)
imgRef, err := storageImageID(store, id)
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestStorageImageIDFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// An empty ID should not resolve to an image reference
id := ""
imgRef, _ := storageImageID(store, id)
if imgRef != nil {
t.Error("should not have returned Image Reference")
}
}


@@ -28,6 +28,10 @@ var (
Name: "volume, v",
Usage: "bind mount a host location into the container while running the command",
},
cli.BoolFlag{
Name: "tty",
Usage: "allocate a pseudo-TTY in the container",
},
}
runDescription = "Runs a specified command using the container's root filesystem as a root\n filesystem, using configuration settings inherited from the container's\n image or as specified using previous calls to the config command"
runCommand = cli.Command{
@@ -47,6 +51,9 @@ func runCmd(c *cli.Context) error {
}
name := args[0]
args = args.Tail()
if len(args) > 0 && args[0] == "--" {
args = args[1:]
}
runtime := ""
if c.IsSet("runtime") {
@@ -80,6 +87,15 @@ func runCmd(c *cli.Context) error {
Runtime: runtime,
Args: flags,
}
if c.IsSet("tty") {
if c.Bool("tty") {
options.Terminal = buildah.WithTerminal
} else {
options.Terminal = buildah.WithoutTerminal
}
}
for _, volumeSpec := range volumes {
volSpec := strings.Split(volumeSpec, ":")
if len(volSpec) >= 2 {

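The runCmd hunk above splits each `--volume` argument on `:` and requires at least a source and a destination. A minimal, self-contained sketch of that parsing (the function name and error text are illustrative, not buildah's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseVolume splits a --volume argument of the form
// source:destination[:flags]. This is an illustrative sketch of the
// splitting done in runCmd, not buildah's actual helper.
func parseVolume(spec string) (src, dest, flags string, err error) {
	parts := strings.Split(spec, ":")
	if len(parts) < 2 {
		return "", "", "", fmt.Errorf("invalid --volume spec %q: expected source:destination[:flags]", spec)
	}
	src, dest = parts[0], parts[1]
	if len(parts) > 2 {
		// Preserve anything after the second colon as the flags field.
		flags = strings.Join(parts[2:], ":")
	}
	return src, dest, flags, nil
}

func main() {
	src, dest, flags, err := parseVolume("/tmp/data:/data:ro")
	if err != nil {
		panic(err)
	}
	fmt.Printf("src=%s dest=%s flags=%s\n", src, dest, flags)
}
```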
cmd/buildah/version.go Normal file

@@ -0,0 +1,48 @@
package main
import (
"fmt"
"runtime"
"strconv"
"time"
ispecs "github.com/opencontainers/image-spec/specs-go"
rspecs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
//Overwritten at build time
var (
gitCommit string
buildInfo string
)
//Function to get and print info for version command
func versionCmd(c *cli.Context) error {
//converting unix time from string to int64
buildTime, err := strconv.ParseInt(buildInfo, 10, 64)
if err != nil {
return err
}
fmt.Println("Version: ", buildah.Version)
fmt.Println("Go Version: ", runtime.Version())
fmt.Println("Image Spec: ", ispecs.Version)
fmt.Println("Runtime Spec: ", rspecs.Version)
fmt.Println("Git Commit: ", gitCommit)
//Prints out the build time in readable format
fmt.Println("Built: ", time.Unix(buildTime, 0).Format(time.ANSIC))
fmt.Println("OS/Arch: ", runtime.GOOS+"/"+runtime.GOARCH)
return nil
}
//cli command to print out the version info of buildah
var versionCommand = cli.Command{
Name: "version",
Usage: "Display the Buildah Version Information",
Action: versionCmd,
}
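versionCmd parses the build-time-injected buildInfo string as Unix seconds and prints it in Go's ANSIC layout; gitCommit and buildInfo are typically populated at build time via something like `go build -ldflags "-X main.buildInfo=$(date +%s)"` (the exact Makefile invocation is an assumption). A runnable sketch of just the timestamp formatting, using UTC for a deterministic result where the command itself prints local time:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// formatBuildTime converts a build-time-injected Unix-epoch string
// (the same shape versionCmd expects in buildInfo) into the ANSIC
// layout that `buildah version` prints for the Built field.
func formatBuildTime(buildInfo string) (string, error) {
	sec, err := strconv.ParseInt(buildInfo, 10, 64)
	if err != nil {
		return "", err
	}
	return time.Unix(sec, 0).UTC().Format(time.ANSIC), nil
}

func main() {
	// "1500000000" is an illustrative timestamp, not a real build.
	out, err := formatBuildTime("1500000000")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```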

commit.go

@@ -13,7 +13,6 @@ import (
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/stringid"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/util"
@@ -82,38 +81,23 @@ type PushOptions struct {
// We assume that "dest" is a reference to a local image (specifically, a containers/image/storage.storageReference),
// and will fail if it isn't.
func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReference, systemContext *types.SystemContext) error {
var names []string
// Read the target image name.
if dest.DockerReference() == nil {
return errors.New("can't write to an unnamed image")
if dest.DockerReference() != nil {
names = []string{dest.DockerReference().String()}
}
names, err := util.ExpandTags([]string{dest.DockerReference().String()})
if err != nil {
return err
}
// Make a temporary image reference.
tmpName := stringid.GenerateRandomID() + "-tmp-" + Package + "-commit"
tmpRef, err := is.Transport.ParseStoreReference(b.store, tmpName)
if err != nil {
return err
}
defer func() {
if err2 := tmpRef.DeleteImage(systemContext); err2 != nil {
logrus.Debugf("error deleting temporary image %q: %v", tmpName, err2)
}
}()
// Open the source for reading and a temporary image for writing.
// Open the source for reading and the new image for writing.
srcImage, err := src.NewImage(systemContext)
if err != nil {
return errors.Wrapf(err, "error reading configuration to write to image %q", transports.ImageName(dest))
}
defer srcImage.Close()
tmpImage, err := tmpRef.NewImageDestination(systemContext)
destImage, err := dest.NewImageDestination(systemContext)
if err != nil {
return errors.Wrapf(err, "error opening temporary copy of image %q for writing", transports.ImageName(dest))
return errors.Wrapf(err, "error opening image %q for writing", transports.ImageName(dest))
}
defer tmpImage.Close()
// Write an empty filesystem layer, because the image layer requires at least one.
_, err = tmpImage.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Size: int64(len(gzippedEmptyLayer))})
_, err = destImage.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Size: int64(len(gzippedEmptyLayer))})
if err != nil {
return errors.Wrapf(err, "error writing dummy layer for image %q", transports.ImageName(dest))
}
@@ -126,12 +110,12 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
return errors.Errorf("error reading new configuration for image %q: it's empty", transports.ImageName(dest))
}
logrus.Debugf("read configuration blob %q", string(config))
// Write the configuration to the temporary image.
// Write the configuration to the new image.
configBlobInfo := types.BlobInfo{
Digest: digest.Canonical.FromBytes(config),
Size: int64(len(config)),
}
_, err = tmpImage.PutBlob(bytes.NewReader(config), configBlobInfo)
_, err = destImage.PutBlob(bytes.NewReader(config), configBlobInfo)
if err != nil && len(config) > 0 {
return errors.Wrapf(err, "error writing image configuration for temporary copy of %q", transports.ImageName(dest))
}
@@ -140,24 +124,42 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
if err != nil {
return errors.Wrapf(err, "error reading new manifest for image %q", transports.ImageName(dest))
}
// Write the manifest to the temporary image.
err = tmpImage.PutManifest(manifest)
// Write the manifest to the new image.
err = destImage.PutManifest(manifest)
if err != nil {
return errors.Wrapf(err, "error writing new manifest to temporary copy of image %q", transports.ImageName(dest))
return errors.Wrapf(err, "error writing new manifest to image %q", transports.ImageName(dest))
}
// Save the temporary image.
err = tmpImage.Commit()
// Save the new image.
err = destImage.Commit()
if err != nil {
return errors.Wrapf(err, "error committing new image %q", transports.ImageName(dest))
}
// Locate the temporary image in the lower-level API. Read its item names.
tmpImg, err := is.Transport.GetStoreImage(b.store, tmpRef)
err = destImage.Close()
if err != nil {
return errors.Wrapf(err, "error locating temporary image %q", transports.ImageName(dest))
return errors.Wrapf(err, "error closing new image %q", transports.ImageName(dest))
}
items, err := b.store.ListImageBigData(tmpImg.ID)
// Locate the new image in the lower-level API. Extract its items.
destImg, err := is.Transport.GetStoreImage(b.store, dest)
if err != nil {
return errors.Wrapf(err, "error reading list of named data for image %q", tmpImg.ID)
return errors.Wrapf(err, "error locating new image %q", transports.ImageName(dest))
}
items, err := b.store.ListImageBigData(destImg.ID)
if err != nil {
return errors.Wrapf(err, "error reading list of named data for image %q", destImg.ID)
}
bigdata := make(map[string][]byte)
for _, itemName := range items {
var data []byte
data, err = b.store.ImageBigData(destImg.ID, itemName)
if err != nil {
return errors.Wrapf(err, "error reading named data %q for image %q", itemName, destImg.ID)
}
bigdata[itemName] = data
}
// Delete the image so that we can recreate it.
_, err = b.store.DeleteImage(destImg.ID, true)
if err != nil {
return errors.Wrapf(err, "error deleting image %q for rewriting", destImg.ID)
}
// Look up the container's read-write layer.
container, err := b.store.Container(b.ContainerID)
@@ -174,9 +176,9 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
parentLayer = img.TopLayer
}
// Extract the read-write layer's contents.
layerDiff, err := b.store.Diff(parentLayer, container.LayerID)
layerDiff, err := b.store.Diff(parentLayer, container.LayerID, nil)
if err != nil {
return errors.Wrapf(err, "error reading layer from source image %q", transports.ImageName(src))
return errors.Wrapf(err, "error reading layer %q from source image %q", container.LayerID, transports.ImageName(src))
}
defer layerDiff.Close()
// Write a copy of the layer for the new image to reference.
@@ -184,8 +186,8 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
if err != nil {
return errors.Wrapf(err, "error creating new read-only layer from container %q", b.ContainerID)
}
// Create a low-level image record that uses the new layer.
image, err := b.store.CreateImage("", []string{}, layer.ID, "", nil)
// Create a low-level image record that uses the new layer, discarding the old metadata.
image, err := b.store.CreateImage(destImg.ID, []string{}, layer.ID, "{}", nil)
if err != nil {
err2 := b.store.DeleteLayer(layer.ID)
if err2 != nil {
@@ -193,7 +195,7 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
}
return errors.Wrapf(err, "error creating new low-level image %q", transports.ImageName(dest))
}
logrus.Debugf("created image ID %q", image.ID)
logrus.Debugf("(re-)created image ID %q using layer %q", image.ID, layer.ID)
defer func() {
if err != nil {
_, err2 := b.store.DeleteImage(image.ID, true)
@@ -202,30 +204,22 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
}
}
}()
// Copy the configuration and manifest, which are big data items, along with whatever else is there.
for _, item := range items {
var data []byte
data, err = b.store.ImageBigData(tmpImg.ID, item)
// Store the configuration and manifest, which are big data items, along with whatever else is there.
for itemName, data := range bigdata {
err = b.store.SetImageBigData(image.ID, itemName, data)
if err != nil {
return errors.Wrapf(err, "error copying data item %q", item)
return errors.Wrapf(err, "error saving data item %q", itemName)
}
err = b.store.SetImageBigData(image.ID, item, data)
logrus.Debugf("saved data item %q to %q", itemName, image.ID)
}
// Add the target name(s) to the new image.
if len(names) > 0 {
err = util.AddImageNames(b.store, image, names)
if err != nil {
return errors.Wrapf(err, "error copying data item %q", item)
return errors.Wrapf(err, "error assigning names %v to new image", names)
}
logrus.Debugf("copied data item %q to %q", item, image.ID)
logrus.Debugf("assigned names %v to image %q", names, image.ID)
}
// Set low-level metadata in the new image so that the image library will accept it as a real image.
err = b.store.SetMetadata(image.ID, "{}")
if err != nil {
return errors.Wrapf(err, "error assigning metadata to new image %q", transports.ImageName(dest))
}
// Move the target name(s) from the temporary image to the new image.
err = util.AddImageNames(b.store, image, names)
if err != nil {
return errors.Wrapf(err, "error assigning names %v to new image", names)
}
logrus.Debugf("assigned names %v to image %q", names, image.ID)
return nil
}


@@ -21,8 +21,9 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
if config == nil {
config = &dimage.ContainerConfig
}
dcreated := dimage.Created.UTC()
image := ociv1.Image{
Created: dimage.Created.UTC(),
Created: &dcreated,
Author: dimage.Author,
Architecture: dimage.Architecture,
OS: dimage.OS,
@@ -38,7 +39,7 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
},
RootFS: ociv1.RootFS{
Type: "",
DiffIDs: []string{},
DiffIDs: []digest.Digest{},
},
History: []ociv1.History{},
}
@@ -51,13 +52,12 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
}
if RootFS.Type == docker.TypeLayers {
image.RootFS.Type = docker.TypeLayers
for _, id := range RootFS.DiffIDs {
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, id.String())
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, RootFS.DiffIDs...)
}
for _, history := range dimage.History {
hcreated := history.Created.UTC()
ohistory := ociv1.History{
Created: history.Created.UTC(),
Created: &hcreated,
CreatedBy: history.CreatedBy,
Author: history.Author,
Comment: history.Comment,
@@ -98,13 +98,7 @@ func makeDockerV2S2Image(oimage *ociv1.Image) (docker.V2Image, error) {
}
if oimage.RootFS.Type == docker.TypeLayers {
image.RootFS.Type = docker.TypeLayers
for _, id := range oimage.RootFS.DiffIDs {
d, err := digest.Parse(id)
if err != nil {
return docker.V2Image{}, err
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, d)
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, oimage.RootFS.DiffIDs...)
}
for _, history := range oimage.History {
dhistory := docker.V2S2History{
@@ -224,8 +218,8 @@ func (b *Builder) fixupConfig() {
if b.Docker.Created.IsZero() {
b.Docker.Created = now
}
if b.OCIv1.Created.IsZero() {
b.OCIv1.Created = now
if b.OCIv1.Created == nil || b.OCIv1.Created.IsZero() {
b.OCIv1.Created = &now
}
if b.OS() == "" {
b.SetOS(runtime.GOOS)


@@ -290,6 +290,7 @@ return 1
-D
--quiet
-q
--rm
"
local options_with_args="
@@ -376,6 +377,7 @@ return 1
_buildah_run() {
local boolean_options="
--help
--tty
-h
"
@@ -529,11 +531,14 @@ return 1
local boolean_options="
--help
-h
--json
--quiet
-q
--noheading
-n
--notruncate
-a
--all
"
local options_with_args="
@@ -552,6 +557,7 @@ return 1
local boolean_options="
--help
-h
--json
--quiet
-q
--noheading
@@ -628,6 +634,28 @@ return 1
esac
}
_buildah_version() {
local boolean_options="
--help
-h
"
local options_with_args="
"
}
_buildah_export() {
local boolean_options="
--help
-h
"
local options_with_args="
-o
--output
"
}
_buildah() {
local previous_extglob_setting=$(shopt -p extglob)
shopt -s extglob
@@ -640,6 +668,7 @@ return 1
config
containers
copy
export
from
images
inspect
@@ -651,6 +680,7 @@ return 1
tag
umount
unmount
version
)
# These options are valid as global options for all client commands


@@ -21,11 +21,11 @@
# https://github.com/projectatomic/buildah
%global provider_prefix %{provider}.%{provider_tld}/%{project}/%{repo}
%global import_path %{provider_prefix}
%global commit a0a5333b94264d1fb1e072d63bcb98f9e2981b49
%global commit REPLACEWITHCOMMITID
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: buildah
Version: 0.1
Version: 0.2
Release: 1.git%{shortcommit}%{?dist}
Summary: A command line tool for creating OCI Images
License: ASL 2.0
@@ -41,6 +41,8 @@ BuildRequires: gpgme-devel
BuildRequires: device-mapper-devel
BuildRequires: btrfs-progs-devel
BuildRequires: libassuan-devel
BuildRequires: glib2-devel
BuildRequires: ostree-devel
Requires: runc >= 1.0.0-6
Requires: skopeo-containers
Provides: %{repo} = %{version}-%{release}
@@ -85,5 +87,24 @@ make DESTDIR=%{buildroot} PREFIX=%{_prefix} install install.completions
%{_datadir}/bash-completion/completions/*
%changelog
* Tue Jul 18 2017 Dan Walsh <dwalsh@redhat.com> 0.2.0-1
- buildah run: Add support for -- ending options parsing
- buildah Add/Copy support for glob syntax
- buildah commit: Add flag to remove containers on commit
- buildah push: Improve man page and help information
- buildah run: add a way to disable PTY allocation
- Buildah docs: clarify --runtime-flag of run command
- Update to match newer storage and image-spec APIs
- Update containers/storage and containers/image versions
- buildah export: add support
- buildah images: update commands
- buildah images: Add JSON output option
- buildah rmi: update commands
- buildah containers: Add JSON output option
- buildah version: add command
- buildah run: Handle run without an explicit command correctly
- Ensure volume points get created, and with perms
- buildah containers: Add a -a/--all option
* Fri Apr 14 2017 Dan Walsh <dwalsh@redhat.com> 0.0.1-1.git7a0a5333
- First package for Fedora


@@ -33,11 +33,15 @@ Control the format for the image manifest and configuration data. Recognized
formats include *oci* (OCI image-spec v1.0, the default) and *docker* (version
2, using schema format 2 for the manifest).
**--rm**
Remove the container and its content after committing it to an image.
By default, the container and its content are left in place.
## EXAMPLE
buildah commit containerID
buildah commit containerID newImageName
buildah commit --rm containerID newImageName
buildah commit --disable-compression --signature-policy '/etc/containers/policy.json' containerID


@@ -12,6 +12,10 @@ IDs, and the names and IDs of the images from which they were initialized.
## OPTIONS
**--json**
Output in JSON format.
**--noheading, -n**
Omit the table headings from the listing of containers.
@@ -24,6 +28,11 @@ Do not truncate IDs in output.
Displays only the container IDs.
**--all, -a**
List information about all containers, including those which were not created
by and are not being used by buildah.
## EXAMPLE
buildah containers
@@ -32,6 +41,8 @@ buildah containers --quiet
buildah containers -q --noheading --notruncate
buildah containers --json
## SEE ALSO
buildah(1)

docs/buildah-export.md Normal file

@@ -0,0 +1,40 @@
% BUILDAH(1) Buildah User Manuals
% Buildah Community
% JUNE 2017
# NAME
buildah-export - Export container's filesystem content as a tar archive
# SYNOPSIS
**buildah export**
[**--help**]
[**-o**|**--output**[=*""*]]
CONTAINER
# DESCRIPTION
**buildah export** exports the filesystem of a container (identified by its full or
shortened container ID, or by its name) to STDOUT; redirect the output to a tar file.
# OPTIONS
**--help**
Print usage statement
**-o**, **--output**=""
Write to a file, instead of STDOUT
# EXAMPLES
Export the contents of the container called angry_bell to a tar file
called angry_bell.tar:
# buildah export angry_bell > angry_bell.tar
# buildah export --output=angry_bell-latest.tar angry_bell
# ls -sh angry_bell.tar
321M angry_bell.tar
# ls -sh angry_bell-latest.tar
321M angry_bell-latest.tar
# See also
**buildah-import(1)** to create an empty filesystem image
and import the contents of the tarball into it, then optionally tag it.
# HISTORY
July 2017, Originally copied from docker project docker-export.1.md


@@ -11,22 +11,38 @@ Displays locally stored images, their names, and their IDs.
## OPTIONS
**--json**
Display the output in JSON format.
**--digests**
Show image digests
**--filter, -f=[]**
Filter output based on conditions provided (default [])
**--format="TEMPLATE"**
Pretty-print images using a Go template. Will override --quiet
**--noheading, -n**
Omit the table headings from the listing of images.
**--notruncate**
**--no-trunc**
Do not truncate output.
**--quiet, -q**
Lists only the image IDs.
## EXAMPLE
buildah images
buildah images --json
buildah images --quiet
buildah images -q --noheading --notruncate


@@ -10,6 +10,37 @@ buildah push - Push an image from local storage to elsewhere.
Pushes an image from local storage to a specified destination, decompressing
and recompressing layers as needed.
## imageID
Image stored in local container/storage
## DESTINATION
The DESTINATION is a location to store container images.
The Image "DESTINATION" uses a "transport":"details" format.
Multiple transports are supported:
**atomic:**_hostname_**/**_namespace_**/**_stream_**:**_tag_
An image served by an OpenShift(Atomic) Registry server. The current OpenShift project and OpenShift Registry instance are by default read from `$HOME/.kube/config`, which is set e.g. using `(oc login)`.
**dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
## OPTIONS
**--disable-compression, -D**
@@ -28,11 +59,25 @@ When writing the output image, suppress progress output.
## EXAMPLE
buildah push imageID dir:/path/to/image
This example extracts the imageID image to a local directory in docker format.
buildah push imageID oci-layout:/path/to/layout
`# buildah push imageID dir:/path/to/image`
buildah push imageID docker://registry/repository:tag
This example extracts the imageID image to a local directory in oci format.
`# buildah push imageID oci:/path/to/layout`
This example extracts the imageID image to a container registry named registry.example.com
`# buildah push imageID docker://registry.example.com/repository:tag`
This example extracts the imageID image and puts it into the local docker container store
`# buildah push imageID docker-daemon:image:tag`
This example extracts the imageID image and pushes it to an OpenShift(Atomic) registry
`# buildah push imageID atomic:registry.example.com/company/image:tag`
## SEE ALSO
buildah(1)


@@ -9,10 +9,18 @@ buildah rmi - Removes one or more images.
## DESCRIPTION
Removes one or more locally stored images.
## OPTIONS
**--force, -f**
Executing this command will stop all containers that are using the image and remove them from the system.
## EXAMPLE
buildah rmi imageID
buildah rmi --force imageID
buildah rmi imageID1 imageID2 imageID3
## SEE ALSO


@@ -1,3 +1,4 @@
## buildah-run "1" "March 2017" "buildah"
## NAME
@@ -14,23 +15,35 @@ the *buildah config* command.
## OPTIONS
**--tty**
By default a pseudo-TTY is allocated only when buildah's standard input is
attached to a pseudo-TTY. Setting the `--tty` option to `true` will cause a
pseudo-TTY to be allocated inside the container. Setting the `--tty` option to
`false` will prevent the pseudo-TTY from being allocated.
**--runtime** *path*
The *path* to an alternate OCI-compatible runtime.
**--runtime-flag** *flag*
Adds global flags for the container rutime.
Adds global flags for the container runtime. To list the supported flags, please
consult manpages of your selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`)
**--volume, -v** *source*:*destination*:*flags*
Bind mount a location from the host into the container for its lifetime.
NOTE: End parsing of options with the `--` option, so that you can pass other
options to the command inside of the container
## EXAMPLE
buildah run containerID 'ps -auxw'
buildah run containerID -- ps -auxw
buildah run containerID --runtime-flag --no-new-keyring 'ps -auxw'
buildah run containerID --runtime-flag --no-new-keyring -- ps -auxw
## SEE ALSO
buildah(1)

docs/buildah-version.md Normal file

@@ -0,0 +1,27 @@
## buildah-version "1" "June 2017" "buildah"
## NAME
buildah version - Display the Buildah Version Information.
## SYNOPSIS
**buildah version**
[**--help**|**-h**]
## DESCRIPTION
Shows the following information: Version, Go Version, Image Spec, Runtime Spec, Git Commit, Build Time, OS, and Architecture.
## OPTIONS
**--help, -h**
Print usage statement
## EXAMPLE
buildah version
buildah version --help
buildah version -h
## SEE ALSO
buildah(1)


@@ -58,6 +58,7 @@ Print the version
| buildah-config(1) | Update image configuration settings. |
| buildah-containers(1) | List the working containers and their base images. |
| buildah-copy(1) | Copies the contents of a file, URL, or directory into a container's working directory. |
| buildah-export(1) | Export the contents of a container's filesystem as a tar archive |
| buildah-from(1) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| buildah-images(1) | List images in local storage. |
| buildah-inspect(1) | Inspects the configuration of a container or image |
@@ -67,3 +68,5 @@ Print the version
| buildah-run(1) | Run a command inside of the container. |
| buildah-tag(1) | Add an additional name to a local image. |
| buildah-umount(1) | Unmount a working container's root file system. |
| buildah-version(1) | Display the Buildah version information. |


@@ -24,7 +24,7 @@ echo yay > $mountpoint1/file-in-root
read
: " Produce an image from the container "
read
buildah commit "$container1" containers-storage:${2:-first-new-image}
buildah commit "$container1" ${2:-first-new-image}
read
: " Verify that our new image is there "
read


@@ -172,7 +172,7 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
}
oimage.RootFS.Type = docker.TypeLayers
oimage.RootFS.DiffIDs = []string{}
oimage.RootFS.DiffIDs = []digest.Digest{}
dimage.RootFS = &docker.V2S2RootFS{}
dimage.RootFS.Type = docker.TypeLayers
dimage.RootFS.DiffIDs = []digest.Digest{}
@@ -215,12 +215,12 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
dmanifest.Layers = append(dmanifest.Layers, dlayerDescriptor)
// Add a note about the diffID, which should be uncompressed digest of the blob, but
// just use the layer ID here.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, fakeLayerDigest.String())
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, fakeLayerDigest)
dimage.RootFS.DiffIDs = append(dimage.RootFS.DiffIDs, fakeLayerDigest)
continue
}
// Start reading the layer.
rc, err := i.store.Diff("", layerID)
rc, err := i.store.Diff("", layerID, nil)
if err != nil {
return nil, errors.Wrapf(err, "error extracting layer %q", layerID)
}
@@ -283,14 +283,14 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
}
dmanifest.Layers = append(dmanifest.Layers, dlayerDescriptor)
// Add a note about the diffID, which is always an uncompressed value.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, srcHasher.Digest().String())
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, srcHasher.Digest())
dimage.RootFS.DiffIDs = append(dimage.RootFS.DiffIDs, srcHasher.Digest())
}
if i.addHistory {
// Build history notes in the image configurations.
onews := v1.History{
Created: i.created,
Created: &i.created,
CreatedBy: i.createdBy,
Author: oimage.Author,
EmptyLayer: false,


@@ -153,7 +153,15 @@ func (b *Executor) Preserve(path string) error {
logrus.Debugf("PRESERVE %q", path)
if b.volumes.Covers(path) {
// This path is already a subdirectory of a volume path that
// we're already preserving, so there's nothing new to be done.
// we're already preserving, so there's nothing new to be done
// except ensure that it exists.
archivedPath := filepath.Join(b.mountPoint, path)
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
if err := b.volumeCacheInvalidate(path); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q is preserved", archivedPath)
}
return nil
}
// Figure out where the cache for this volume would be stored.
@@ -166,9 +174,15 @@ func (b *Executor) Preserve(path string) error {
// Save info about the top level of the location that we'll be archiving.
archivedPath := filepath.Join(b.mountPoint, path)
st, err := os.Stat(archivedPath)
if os.IsNotExist(err) {
if err = os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
st, err = os.Stat(archivedPath)
}
if err != nil {
logrus.Debugf("error reading info about %q: %v", archivedPath, err)
return err
return errors.Wrapf(err, "error reading info about volume path %q", archivedPath)
}
b.volumeCacheInfo[path] = st
if !b.volumes.Add(path) {
@@ -241,6 +255,9 @@ func (b *Executor) volumeCacheSave() error {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "error checking for cache of %q in %q", archivedPath, cacheFile)
}
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
logrus.Debugf("caching contents of volume %q in %q", archivedPath, cacheFile)
cache, err := os.Create(cacheFile)
if err != nil {
@@ -273,7 +290,7 @@ func (b *Executor) volumeCacheRestore() error {
if err := os.RemoveAll(archivedPath); err != nil {
return errors.Wrapf(err, "error clearing volume path %q", archivedPath)
}
if err := os.MkdirAll(archivedPath, 0700); err != nil {
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error recreating volume path %q", archivedPath)
}
err = archive.Untar(cache, archivedPath, nil)

run.go

@@ -9,7 +9,9 @@ import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/runtime-tools/generate"
"github.com/pkg/errors"
@@ -64,7 +66,7 @@ type RunOptions struct {
Terminal int
}
func setupMounts(spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volumes []string) error {
func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volumes []string) error {
// The passed-in mounts matter the most to us.
mounts := make([]specs.Mount, len(optionMounts))
copy(mounts, optionMounts)
@@ -98,17 +100,37 @@ func setupMounts(spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volume
Options: []string{"rbind", "ro"},
})
}
// Add tmpfs filesystems at volume locations, unless we already have something there.
// Add temporary copies of the contents of volume locations at the
// volume locations, unless we already have something there.
for _, volume := range volumes {
if haveMount(volume) {
// Already mounting something there, no need for a tmpfs.
// Already mounting something there, no need to bother.
continue
}
// Mount a tmpfs there.
cdir, err := b.store.ContainerDirectory(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error determining work directory for container %q", b.ContainerID)
}
subdir := digest.Canonical.FromString(volume).Hex()
volumePath := filepath.Join(cdir, "buildah-volumes", subdir)
logrus.Debugf("using %q for volume at %q", volumePath, volume)
// If we need to, initialize the volume path's initial contents.
if _, err = os.Stat(volumePath); os.IsNotExist(err) {
if err = os.MkdirAll(volumePath, 0755); err != nil {
return errors.Wrapf(err, "error creating directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
srcPath := filepath.Join(mountPoint, volume)
if err = archive.CopyWithTar(srcPath, volumePath); err != nil {
return errors.Wrapf(err, "error populating directory %q for volume %q in container %q using contents of %q", volumePath, volume, b.ContainerID, srcPath)
}
}
// Add the bind mount.
mounts = append(mounts, specs.Mount{
Source: "tmpfs",
Source: volumePath,
Destination: volume,
Type: "tmpfs",
Type: "bind",
Options: []string{"bind"},
})
}
// Set the list in the spec.
@@ -145,14 +167,16 @@ func (b *Builder) Run(command []string, options RunOptions) error {
}
if len(command) > 0 {
g.SetProcessArgs(command)
} else if len(options.Cmd) != 0 {
g.SetProcessArgs(options.Cmd)
} else if len(b.Cmd()) != 0 {
g.SetProcessArgs(b.Cmd())
} else if len(options.Entrypoint) != 0 {
g.SetProcessArgs(options.Entrypoint)
} else if len(b.Entrypoint()) != 0 {
g.SetProcessArgs(b.Entrypoint())
} else {
cmd := b.Cmd()
if len(options.Cmd) > 0 {
cmd = options.Cmd
}
ep := b.Entrypoint()
if len(options.Entrypoint) > 0 {
ep = options.Entrypoint
}
g.SetProcessArgs(append(ep, cmd...))
}
if options.WorkingDir != "" {
g.SetProcessCwd(options.WorkingDir)
@@ -206,7 +230,7 @@ func (b *Builder) Run(command []string, options RunOptions) error {
}
bindFiles := []string{"/etc/hosts", "/etc/resolv.conf"}
err = setupMounts(spec, options.Mounts, bindFiles, b.Volumes())
err = b.setupMounts(mountPoint, spec, options.Mounts, bindFiles, b.Volumes())
if err != nil {
return errors.Wrapf(err, "error resolving mountpoints for container")
}

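The new argv fallback in Run() above — explicit command wins, option overrides beat the builder's configured values, and the final argv is entrypoint followed by cmd — can be sketched as a pure function. `processArgs` is a hypothetical name for illustration:

```go
package main

import "fmt"

// processArgs mirrors the fallback order in Run(): a command passed on
// the command line wins outright; otherwise option overrides replace
// the builder's configured Cmd/Entrypoint, and the result is the
// entrypoint followed by the cmd, matching ENTRYPOINT/CMD semantics.
func processArgs(command, optCmd, optEntrypoint, builderCmd, builderEntrypoint []string) []string {
	if len(command) > 0 {
		return command
	}
	cmd := builderCmd
	if len(optCmd) > 0 {
		cmd = optCmd
	}
	ep := builderEntrypoint
	if len(optEntrypoint) > 0 {
		ep = optEntrypoint
	}
	return append(append([]string{}, ep...), cmd...)
}

func main() {
	// With entrypoint "echo" and cmd "pwd" configured, the container
	// runs `echo pwd` — exactly what the run-cmd bats test checks.
	fmt.Println(processArgs(nil, nil, nil, []string{"pwd"}, []string{"echo"}))
}
```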

@@ -95,6 +95,12 @@ load helpers
cmp ${TESTDIR}/other-randomfile $yetanothernewroot/other-randomfile
buildah delete $yetanothernewcid
newcid=$(buildah from new-image)
buildah commit --rm --signature-policy ${TESTSDIR}/policy.json $newcid containers-storage:remove-container-image
run buildah mount $newcid
[ "$status" -ne 0 ]
buildah rmi remove-container-image
buildah rmi containers-storage:other-new-image
buildah rmi another-new-image
run buildah --debug=false images -q


@@ -81,6 +81,8 @@ load helpers
run test -s $root/vol/subvol/subvolfile
[ "$status" -ne 0 ]
test -s $root/vol/volfile
test -s $root/vol/Dockerfile
test -s $root/vol/Dockerfile2
run test -s $root/vol/anothervolfile
[ "$status" -ne 0 ]
buildah rm ${cid}
@@ -181,6 +183,38 @@ load helpers
buildah rm ${cid}
cid=$(buildah from ${target3}:latest)
buildah rm ${cid}
buildah rmi -f $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$output" = "" ]
}
@test "bud-volume-perms" {
# This Dockerfile needs us to be able to handle a working RUN instruction.
if ! which runc ; then
skip
fi
target=volume-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/volume-perms
cid=$(buildah from ${target})
root=$(buildah mount ${cid})
run test -s $root/vol/subvol/subvolfile
[ "$status" -ne 0 ]
run stat -c %f $root/vol/subvol
[ "$output" = 41ed ]
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$output" = "" ]
}
@test "bud-from-glob" {
target=alpine-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -f Dockerfile2.glob ${TESTSDIR}/bud/from-multiple-files
cid=$(buildah from ${target})
root=$(buildah mount ${cid})
cmp $root/Dockerfile1.alpine ${TESTSDIR}/bud/from-multiple-files/Dockerfile1.alpine
cmp $root/Dockerfile2.withfrom ${TESTSDIR}/bud/from-multiple-files/Dockerfile2.withfrom
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$output" = "" ]


@@ -0,0 +1,2 @@
FROM alpine
COPY Dockerfile* /


@@ -14,3 +14,9 @@ VOLUME /vol/subvol
RUN dd if=/dev/zero bs=512 count=1 of=/vol/anothervolfile
# Which means that in the image we're about to commit, /vol/anothervolfile
# shouldn't exist, either.
# ADD files which should persist.
ADD Dockerfile /vol/Dockerfile
RUN stat /vol/Dockerfile
ADD Dockerfile /vol/Dockerfile2
RUN stat /vol/Dockerfile2


@@ -0,0 +1,6 @@
FROM alpine
VOLUME /vol/subvol
# At this point, the directory should exist, with default permissions 0755, the
# contents below /vol/subvol should be frozen, and we shouldn't get an error
# from trying to write to it because it was created automatically.
RUN dd if=/dev/zero bs=512 count=1 of=/vol/subvol/subvolfile


@@ -21,6 +21,13 @@ load helpers
buildah copy $cid ${TESTDIR}/randomfile
buildah copy $cid ${TESTDIR}/other-randomfile ${TESTDIR}/third-randomfile ${TESTDIR}/randomfile /etc
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config --workingdir / $cid
buildah copy $cid "${TESTDIR}/*randomfile" /etc
(cd ${TESTDIR}; for i in *randomfile; do cmp $i ${root}/etc/$i; done)
buildah rm $cid
}
@test "copy-local-plain" {

tests/export.bats (new file)

@@ -0,0 +1,22 @@
#!/usr/bin/env bats
load helpers
@test "extract" {
touch ${TESTDIR}/reference-time-file
for source in scratch alpine; do
cid=$(buildah from --pull=true --signature-policy ${TESTSDIR}/policy.json ${source})
mnt=$(buildah mount $cid)
touch ${mnt}/export.file
tar -cf - --transform s,^./,,g -C ${mnt} . | tar tf - | grep -v "^./$" | sort > ${TESTDIR}/tar.output
buildah umount $cid
buildah export "$cid" > ${TESTDIR}/${source}.tar
buildah export -o ${TESTDIR}/${source}1.tar "$cid"
diff ${TESTDIR}/${source}.tar ${TESTDIR}/${source}1.tar
tar -tf ${TESTDIR}/${source}.tar | sort > ${TESTDIR}/export.output
diff ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/${source}1.tar ${TESTDIR}/${source}.tar
buildah rm "$cid"
done
}


@@ -3,7 +3,6 @@
load helpers
@test "write-formats" {
buildimgtype
cid=$(buildah from --pull=false --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid scratch-image-default
buildah commit --format dockerv2 --signature-policy ${TESTSDIR}/policy.json $cid scratch-image-docker
@@ -20,7 +19,6 @@ load helpers
}
@test "bud-formats" {
buildimgtype
buildah build-using-dockerfile --signature-policy ${TESTSDIR}/policy.json -t scratch-image-default -f bud/from-scratch/Dockerfile
buildah build-using-dockerfile --format dockerv2 --signature-policy ${TESTSDIR}/policy.json -t scratch-image-docker -f bud/from-scratch/Dockerfile
buildah build-using-dockerfile --format ociv1 --signature-policy ${TESTSDIR}/policy.json -t scratch-image-oci -f bud/from-scratch/Dockerfile


@@ -1,18 +1,15 @@
#!/bin/bash
BUILDAH_BINARY=${BUILDAH_BINARY:-$(dirname ${BASH_SOURCE})/../buildah}
IMGTYPE_BINARY=${IMGTYPE_BINARY:-$(dirname ${BASH_SOURCE})/../imgtype}
TESTSDIR=${TESTSDIR:-$(dirname ${BASH_SOURCE})}
STORAGE_DRIVER=${STORAGE_DRIVER:-vfs}
function setup() {
suffix=$(dd if=/dev/urandom bs=12 count=1 status=none | base64 | tr +/ _.)
TESTDIR=${BATS_TMPDIR}/tmp.${suffix}
rm -fr ${TESTDIR}
mkdir -p ${TESTDIR}/{root,runroot}
REPO=${TESTDIR}/root
}
function buildimgtype() {
go build -tags "$(${TESTSDIR}/../btrfs_tag.sh; ${TESTSDIR}/../libdm_tag.sh)" -o imgtype ${TESTSDIR}/imgtype.go
}
function starthttpd() {
@@ -44,9 +41,9 @@ function createrandom() {
}
function buildah() {
${BUILDAH_BINARY} --debug --root ${TESTDIR}/root --runroot ${TESTDIR}/runroot --storage-driver vfs "$@"
${BUILDAH_BINARY} --debug --root ${TESTDIR}/root --runroot ${TESTDIR}/runroot --storage-driver ${STORAGE_DRIVER} "$@"
}
function imgtype() {
./imgtype -root ${TESTDIR}/root -runroot ${TESTDIR}/runroot -storage-driver vfs "$@"
${IMGTYPE_BINARY} -root ${TESTDIR}/root -runroot ${TESTDIR}/runroot -storage-driver ${STORAGE_DRIVER} "$@"
}


@@ -4,6 +4,7 @@ import (
"encoding/json"
"flag"
"fmt"
"os"
"strings"
"github.com/Sirupsen/logrus"
@@ -16,10 +17,15 @@ import (
)
func main() {
if buildah.InitReexec() {
return
}
expectedManifestType := ""
expectedConfigType := ""
storeOptions := storage.DefaultStoreOptions
debug := flag.Bool("debug", false, "turn on debug logging")
root := flag.String("root", storeOptions.GraphRoot, "storage root directory")
runroot := flag.String("runroot", storeOptions.RunRoot, "storage runtime directory")
driver := flag.String("storage-driver", storeOptions.GraphDriverName, "storage driver")
@@ -29,6 +35,10 @@ func main() {
showm := flag.Bool("show-manifest", false, "output the manifest JSON")
showc := flag.Bool("show-config", false, "output the configuration JSON")
flag.Parse()
logrus.SetLevel(logrus.ErrorLevel)
if debug != nil && *debug {
logrus.SetLevel(logrus.DebugLevel)
}
switch *mtype {
case buildah.OCIv1ImageManifest:
expectedManifestType = *mtype
@@ -40,8 +50,9 @@ func main() {
expectedManifestType = ""
expectedConfigType = ""
default:
logrus.Fatalf("unknown -expected-manifest-type value, expected either %q or %q or %q",
logrus.Errorf("unknown -expected-manifest-type value, expected either %q or %q or %q",
buildah.OCIv1ImageManifest, buildah.Dockerv2ImageManifest, "*")
return
}
if root != nil {
storeOptions.GraphRoot = *root
@@ -65,10 +76,17 @@ func main() {
}
store, err := storage.GetStore(storeOptions)
if err != nil {
logrus.Fatalf("error opening storage: %v", err)
logrus.Errorf("error opening storage: %v", err)
return
}
defer store.Shutdown(false)
errors := false
defer func() {
store.Shutdown(false)
if errors {
os.Exit(1)
}
}()
for _, image := range args {
oImage := v1.Image{}
dImage := docker.V2Image{}
@@ -79,60 +97,74 @@ func main() {
ref, err := is.Transport.ParseStoreReference(store, image)
if err != nil {
logrus.Fatalf("error parsing reference %q: %v", image, err)
}
src, err := ref.NewImageSource(systemContext, []string{expectedManifestType})
if err != nil {
logrus.Fatalf("error opening source image %q: %v", image, err)
}
defer src.Close()
manifest, manifestType, err := src.GetManifest()
if err != nil {
logrus.Fatalf("error reading manifest from %q: %v", image, err)
logrus.Errorf("error parsing reference %q: %v", image, err)
errors = true
continue
}
img, err := ref.NewImage(systemContext)
if err != nil {
logrus.Fatalf("error opening image %q: %v", image, err)
logrus.Errorf("error opening image %q: %v", image, err)
errors = true
continue
}
defer img.Close()
config, err := img.ConfigBlob()
if err != nil {
logrus.Fatalf("error reading configuration from %q: %v", image, err)
logrus.Errorf("error reading configuration from %q: %v", image, err)
errors = true
continue
}
manifest, manifestType, err := img.Manifest()
if err != nil {
logrus.Errorf("error reading manifest from %q: %v", image, err)
errors = true
continue
}
switch expectedManifestType {
case buildah.OCIv1ImageManifest:
err = json.Unmarshal(manifest, &oManifest)
if err != nil {
logrus.Fatalf("error parsing manifest from %q: %v", image, err)
logrus.Errorf("error parsing manifest from %q: %v", image, err)
errors = true
continue
}
err = json.Unmarshal(config, &oImage)
if err != nil {
logrus.Fatalf("error parsing config from %q: %v", image, err)
logrus.Errorf("error parsing config from %q: %v", image, err)
errors = true
continue
}
manifestType = v1.MediaTypeImageManifest
configType = oManifest.Config.MediaType
case buildah.Dockerv2ImageManifest:
err = json.Unmarshal(manifest, &dManifest)
if err != nil {
logrus.Fatalf("error parsing manifest from %q: %v", image, err)
logrus.Errorf("error parsing manifest from %q: %v", image, err)
errors = true
continue
}
err = json.Unmarshal(config, &dImage)
if err != nil {
logrus.Fatalf("error parsing config from %q: %v", image, err)
logrus.Errorf("error parsing config from %q: %v", image, err)
errors = true
continue
}
manifestType = dManifest.MediaType
configType = dManifest.Config.MediaType
}
if expectedManifestType != "" && manifestType != expectedManifestType {
logrus.Fatalf("expected manifest type %q in %q, got %q", expectedManifestType, image, manifestType)
logrus.Errorf("expected manifest type %q in %q, got %q", expectedManifestType, image, manifestType)
errors = true
continue
}
if expectedConfigType != "" && configType != expectedConfigType {
logrus.Fatalf("expected config type %q in %q, got %q", expectedConfigType, image, configType)
logrus.Errorf("expected config type %q in %q, got %q", expectedConfigType, image, configType)
errors = true
continue
}
if showm != nil && *showm {
fmt.Println(string(manifest))


@@ -19,10 +19,56 @@ load helpers
buildah run $cid cp /tmp/randomfile /tmp/other-randomfile
test -s $root/tmp/other-randomfile
cmp ${TESTDIR}/randomfile $root/tmp/other-randomfile
run buildah run $cid echo -n test
[ $status != 0 ]
run buildah run $cid echo -- -n test
[ $status != 0 ]
run buildah run $cid -- echo -n -- test
[ "$output" = "-- test" ]
run buildah run $cid -- echo -- -n test --
[ "$output" = "-- -n -- test --" ]
run buildah run $cid -- echo -n "test"
[ "$output" = "test" ]
buildah unmount $cid
buildah rm $cid
}
@test "run-cmd" {
if ! which runc ; then
skip
fi
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
buildah config $cid --workingdir /tmp
buildah config $cid --entrypoint ""
buildah config $cid --cmd pwd
run buildah --debug=false run $cid
[ "$output" = /tmp ]
buildah config $cid --entrypoint echo
run buildah --debug=false run $cid
[ "$output" = pwd ]
buildah config $cid --cmd ""
run buildah --debug=false run $cid
[ "$output" = "" ]
buildah config $cid --entrypoint ""
run buildah --debug=false run $cid echo that-other-thing
[ "$output" = that-other-thing ]
buildah config $cid --cmd echo
run buildah --debug=false run $cid echo that-other-thing
[ "$output" = that-other-thing ]
buildah config $cid --entrypoint echo
run buildah --debug=false run $cid echo that-other-thing
[ "$output" = that-other-thing ]
buildah rm $cid
}
@test "run-user" {
if ! which runc ; then
skip

tests/version.bats (new file)

@@ -0,0 +1,11 @@
#!/usr/bin/env bats
load helpers
@test "buildah version test" {
run buildah version
echo "$output"
[ "$status" -eq 0 ]
}


@@ -1,8 +1,8 @@
github.com/BurntSushi/toml master
github.com/Nvveen/Gotty master
github.com/blang/semver master
github.com/containers/image master
github.com/containers/storage master
github.com/containers/image 23bddaa64cc6bf3f3077cda0dbf1cdd7007434df
github.com/containers/storage 105f7c77aef0c797429e41552743bf5b03b63263
github.com/docker/distribution master
github.com/docker/docker 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/docker/engine-api master
@@ -22,12 +22,13 @@ github.com/mistifyio/go-zfs master
github.com/moby/moby 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/mtrmac/gpgme master
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0-rc5
github.com/opencontainers/image-spec v1.0.0-rc6
github.com/opencontainers/runc master
github.com/opencontainers/runtime-spec v1.0.0-rc5
github.com/opencontainers/runtime-tools 8addcc695096a0fc61010af8766952546bba7cd0
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/openshift/imagebuilder master
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
github.com/pborman/uuid master
github.com/pkg/errors master
github.com/Sirupsen/logrus master


@@ -7,6 +7,7 @@ import (
"io"
"io/ioutil"
"reflect"
"runtime"
"strings"
"time"
@@ -157,6 +158,10 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
if err := checkImageDestinationForCurrentRuntimeOS(src, dest); err != nil {
return err
}
if src.IsMultiImage() {
return errors.Errorf("can not copy %s: manifest contains multiple images", transports.ImageName(srcRef))
}
@@ -277,6 +282,22 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
return nil
}
func checkImageDestinationForCurrentRuntimeOS(src types.Image, dest types.ImageDestination) error {
if dest.MustMatchRuntimeOS() {
c, err := src.OCIConfig()
if err != nil {
return errors.Wrapf(err, "Error parsing image configuration")
}
osErr := fmt.Errorf("image operating system %q cannot be used on %q", c.OS, runtime.GOOS)
if runtime.GOOS == "windows" && c.OS == "linux" {
return osErr
} else if runtime.GOOS != "windows" && c.OS == "windows" {
return osErr
}
}
return nil
}
// updateEmbeddedDockerReference handles the Docker reference embedded in Docker schema1 manifests.
func updateEmbeddedDockerReference(manifestUpdates *types.ManifestUpdateOptions, dest types.ImageDestination, src types.Image, canModifyManifest bool) error {
destRef := dest.Reference().DockerReference()


@@ -51,6 +51,11 @@ func (d *dirImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dirImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -78,6 +78,11 @@ func imageLoadGoroutine(ctx context.Context, c *client.Client, reader *io.PipeRe
defer resp.Body.Close()
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *daemonImageDestination) MustMatchRuntimeOS() bool {
return true
}
// Close removes resources associated with an initialized ImageDestination, if any.
func (d *daemonImageDestination) Close() error {
if !d.committed {


@@ -34,6 +34,8 @@ const (
dockerCfgFileName = "config.json"
dockerCfgObsolete = ".dockercfg"
systemPerHostCertDirPath = "/etc/docker/certs.d"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
tagsPath = "/v2/%s/tags/list"
@@ -129,12 +131,29 @@ func newTransport() *http.Transport {
return tr
}
func setupCertificates(dir string, tlsc *tls.Config) error {
if dir == "" {
return nil
// dockerCertDir returns a path to a directory to be consumed by setupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
return filepath.Join(hostCertDir, hostPort)
}
func setupCertificates(dir string, tlsc *tls.Config) error {
logrus.Debugf("Looking for TLS certificates and private keys in %s", dir)
fs, err := ioutil.ReadDir(dir)
if err != nil && !os.IsNotExist(err) {
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
@@ -146,7 +165,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
return errors.Wrap(err, "unable to get system cert pool")
}
tlsc.RootCAs = systemPool
logrus.Debugf("crt: %s", fullPath)
logrus.Debugf(" crt: %s", fullPath)
data, err := ioutil.ReadFile(fullPath)
if err != nil {
return err
@@ -156,7 +175,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".cert") {
certName := f.Name()
keyName := certName[:len(certName)-5] + ".key"
logrus.Debugf("cert: %s", fullPath)
logrus.Debugf(" cert: %s", fullPath)
if !hasFile(fs, keyName) {
return errors.Errorf("missing key %s for client certificate %s. Note that CA certificates should use the extension .crt", keyName, certName)
}
@@ -169,7 +188,7 @@ func setupCertificates(dir string, tlsc *tls.Config) error {
if strings.HasSuffix(f.Name(), ".key") {
keyName := f.Name()
certName := keyName[:len(keyName)-4] + ".cert"
logrus.Debugf("key: %s", fullPath)
logrus.Debugf(" key: %s", fullPath)
if !hasFile(fs, certName) {
return errors.Errorf("missing client certificate %s for key %s", certName, keyName)
}
@@ -199,18 +218,18 @@ func newDockerClient(ctx *types.SystemContext, ref dockerReference, write bool,
return nil, err
}
tr := newTransport()
if ctx != nil && (ctx.DockerCertPath != "" || ctx.DockerInsecureSkipTLSVerify) {
tlsc := &tls.Config{}
if err := setupCertificates(ctx.DockerCertPath, tlsc); err != nil {
return nil, err
}
tlsc.InsecureSkipVerify = ctx.DockerInsecureSkipTLSVerify
tr.TLSClientConfig = tlsc
tr.TLSClientConfig = serverDefault()
// It is undefined whether the host[:port] string for dockerHostname should be dockerHostname or dockerRegistry,
// because docker/docker does not read the certs.d subdirectory at all in that case. We use the user-visible
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, reference.Domain(ref.ref))
if err := setupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
if tr.TLSClientConfig == nil {
tr.TLSClientConfig = serverDefault()
if ctx != nil && ctx.DockerInsecureSkipTLSVerify {
tr.TLSClientConfig.InsecureSkipVerify = true
}
client := &http.Client{Transport: tr}

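The dockerCertDir lookup order introduced above — explicit cert path, per-host dir override, rootfs-relative certs.d, then the system default, with the host[:port] appended for per-host directories — can be exercised standalone. This sketch redeclares a minimal `SystemContext` with just the fields consulted; the real type lives in containers/image's types package:

```go
package main

import (
	"fmt"
	"path/filepath"
)

const systemPerHostCertDirPath = "/etc/docker/certs.d"

// SystemContext holds only the fields dockerCertDir consults.
type SystemContext struct {
	DockerCertPath               string
	DockerPerHostCertDirPath     string
	RootForImplicitAbsolutePaths string
}

// dockerCertDir mirrors the new lookup order: an explicit DockerCertPath
// wins outright; otherwise the per-host base directory is resolved and
// the registry's host[:port] is appended as a subdirectory.
func dockerCertDir(ctx *SystemContext, hostPort string) string {
	if ctx != nil && ctx.DockerCertPath != "" {
		return ctx.DockerCertPath
	}
	var hostCertDir string
	switch {
	case ctx != nil && ctx.DockerPerHostCertDirPath != "":
		hostCertDir = ctx.DockerPerHostCertDirPath
	case ctx != nil && ctx.RootForImplicitAbsolutePaths != "":
		hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
	default:
		hostCertDir = systemPerHostCertDirPath
	}
	return filepath.Join(hostCertDir, hostPort)
}

func main() {
	fmt.Println(dockerCertDir(nil, "registry.example.com:5000"))
}
```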

@@ -99,6 +99,11 @@ func (d *dockerImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *dockerImageDestination) MustMatchRuntimeOS() bool {
return false
}
// sizeCounter is an io.Writer which only counts the total size of its input.
type sizeCounter struct{ size int64 }


@@ -81,6 +81,11 @@ func (d *Destination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *Destination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -19,12 +19,12 @@ import (
type ociImageDestination struct {
ref ociReference
index imgspecv1.ImageIndex
index imgspecv1.Index
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref ociReference) types.ImageDestination {
index := imgspecv1.ImageIndex{
index := imgspecv1.Index{
Versioned: imgspec.Versioned{
SchemaVersion: 2,
},
@@ -66,6 +66,11 @@ func (d *ociImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ociImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -173,15 +178,13 @@ func (d *ociImageDestination) PutManifest(m []byte) error {
}
annotations := make(map[string]string)
annotations["org.opencontainers.ref.name"] = d.ref.tag
annotations["org.opencontainers.image.ref.name"] = d.ref.tag
desc.Annotations = annotations
d.index.Manifests = append(d.index.Manifests, imgspecv1.ManifestDescriptor{
Descriptor: desc,
Platform: imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
},
})
desc.Platform = &imgspecv1.Platform{
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
d.index.Manifests = append(d.index.Manifests, desc)
return nil
}


@@ -12,7 +12,7 @@ import (
type ociImageSource struct {
ref ociReference
descriptor imgspecv1.ManifestDescriptor
descriptor imgspecv1.Descriptor
}
// newImageSource returns an ImageSource for reading from an existing directory.


@@ -186,22 +186,22 @@ func (ref ociReference) NewImage(ctx *types.SystemContext) (types.Image, error)
return image.FromSource(src)
}
func (ref ociReference) getManifestDescriptor() (imgspecv1.ManifestDescriptor, error) {
func (ref ociReference) getManifestDescriptor() (imgspecv1.Descriptor, error) {
indexJSON, err := os.Open(ref.indexPath())
if err != nil {
return imgspecv1.ManifestDescriptor{}, err
return imgspecv1.Descriptor{}, err
}
defer indexJSON.Close()
index := imgspecv1.ImageIndex{}
index := imgspecv1.Index{}
if err := json.NewDecoder(indexJSON).Decode(&index); err != nil {
return imgspecv1.ManifestDescriptor{}, err
return imgspecv1.Descriptor{}, err
}
var d *imgspecv1.ManifestDescriptor
var d *imgspecv1.Descriptor
for _, md := range index.Manifests {
if md.MediaType != imgspecv1.MediaTypeImageManifest {
continue
}
refName, ok := md.Annotations["org.opencontainers.ref.name"]
refName, ok := md.Annotations["org.opencontainers.image.ref.name"]
if !ok {
continue
}
@@ -211,7 +211,7 @@ func (ref ociReference) getManifestDescriptor() (imgspecv1.ManifestDescriptor, e
}
}
if d == nil {
return imgspecv1.ManifestDescriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
return imgspecv1.Descriptor{}, fmt.Errorf("no descriptor found for reference %q", ref.tag)
}
return *d, nil
}


@@ -358,6 +358,11 @@ func (d *openshiftImageDestination) AcceptsForeignLayerURLs() bool {
return true
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *openshiftImageDestination) MustMatchRuntimeOS() bool {
return false
}
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.


@@ -11,12 +11,15 @@ import (
"path/filepath"
"strconv"
"strings"
"time"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/ostreedev/ostree-go/pkg/otbuiltin"
)
type blobToImport struct {
@@ -86,6 +89,11 @@ func (d *ostreeImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (d *ostreeImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (d *ostreeImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobInfo) (types.BlobInfo, error) {
tmpDir, err := ioutil.TempDir(d.tmpDirPath, "blob")
if err != nil {
@@ -153,7 +161,17 @@ func fixFiles(dir string, usermode bool) error {
return nil
}
func (d *ostreeImageDestination) importBlob(blob *blobToImport) error {
func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch string, root string, metadata []string) error {
opts := otbuiltin.NewCommitOptions()
opts.AddMetadataString = metadata
opts.Timestamp = time.Now()
// OCI layers have no parent OSTree commit
opts.Parent = "0000000000000000000000000000000000000000000000000000000000000000"
_, err := repo.Commit(root, branch, opts)
return err
}
func (d *ostreeImageDestination) importBlob(repo *otbuiltin.Repo, blob *blobToImport) error {
ostreeBranch := fmt.Sprintf("ociimage/%s", blob.Digest.Hex())
destinationPath := filepath.Join(d.tmpDirPath, blob.Digest.Hex(), "root")
if err := ensureDirectoryExists(destinationPath); err != nil {
@@ -181,11 +199,7 @@ func (d *ostreeImageDestination) importBlob(blob *blobToImport) error {
return err
}
}
return exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.size=%d", blob.Size),
"--branch", ostreeBranch,
fmt.Sprintf("--tree=dir=%s", destinationPath)).Run()
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size)})
}
func (d *ostreeImageDestination) importConfig(blob *blobToImport) error {
@@ -253,6 +267,16 @@ func (d *ostreeImageDestination) PutSignatures(signatures [][]byte) error {
}
func (d *ostreeImageDestination) Commit() error {
repo, err := otbuiltin.OpenRepo(d.ref.repo)
if err != nil {
return err
}
_, err = repo.PrepareTransaction()
if err != nil {
return err
}
for _, layer := range d.schema.LayersDescriptors {
hash := layer.Digest.Hex()
blob := d.blobs[hash]
@@ -261,7 +285,7 @@ func (d *ostreeImageDestination) Commit() error {
if blob == nil {
continue
}
err := d.importBlob(blob)
err := d.importBlob(repo, blob)
if err != nil {
return err
}
@@ -277,11 +301,11 @@ func (d *ostreeImageDestination) Commit() error {
}
manifestPath := filepath.Join(d.tmpDirPath, "manifest")
err := exec.Command("ostree", "commit",
"--repo", d.ref.repo,
fmt.Sprintf("--add-metadata-string=docker.manifest=%s", string(d.manifest)),
fmt.Sprintf("--branch=ociimage/%s", d.ref.branchName),
manifestPath).Run()
metadata := []string{fmt.Sprintf("docker.manifest=%s", string(d.manifest))}
err = d.ostreeCommit(repo, fmt.Sprintf("ociimage/%s", d.ref.branchName), manifestPath, metadata)
_, err = repo.CommitTransaction()
return err
}


@@ -416,8 +416,15 @@ func (s *storageImageDestination) Commit() error {
return nil
}
var manifestMIMETypes = []string{
// TODO(runcom): we'll add OCI as part of another PR here
manifest.DockerV2Schema2MediaType,
manifest.DockerV2Schema1SignedMediaType,
manifest.DockerV2Schema1MediaType,
}
func (s *storageImageDestination) SupportedManifestMIMETypes() []string {
return nil
return manifestMIMETypes
}
// PutManifest writes manifest to the destination.
@@ -442,6 +449,11 @@ func (s *storageImageDestination) AcceptsForeignLayerURLs() bool {
return false
}
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
func (s *storageImageDestination) MustMatchRuntimeOS() bool {
return true
}
func (s *storageImageDestination) PutSignatures(signatures [][]byte) error {
sizes := []int{}
sigblob := []byte{}
@@ -509,7 +521,7 @@ func diffLayer(store storage.Store, layerID string) (rc io.ReadCloser, n int64,
} else {
n = layerMeta.CompressedSize
}
diff, err := store.Diff("", layer.ID)
diff, err := store.Diff("", layer.ID, nil)
if err != nil {
return nil, -1, err
}


@@ -12,7 +12,7 @@ import (
_ "github.com/containers/image/docker/daemon"
_ "github.com/containers/image/oci/layout"
_ "github.com/containers/image/openshift"
_ "github.com/containers/image/ostree"
// The ostree transport is registered by ostree*.go
_ "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/types"


@@ -0,0 +1,8 @@
// +build !containers_image_ostree_stub
package alltransports
import (
// Register the ostree transport
_ "github.com/containers/image/ostree"
)


@@ -0,0 +1,9 @@
// +build containers_image_ostree_stub
package alltransports
import "github.com/containers/image/transports"
func init() {
transports.Register(transports.NewStubTransport("ostree"))
}

vendor/github.com/containers/image/transports/stub.go generated vendored Normal file

@@ -0,0 +1,36 @@
package transports
import (
"fmt"
"github.com/containers/image/types"
)
// stubTransport is an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
type stubTransport string
// NewStubTransport returns an implementation of types.ImageTransport which has a name, but rejects any references with “the transport $name: is not supported in this build”.
func NewStubTransport(name string) types.ImageTransport {
return stubTransport(name)
}
// Name returns the name of the transport, which must be unique among other transports.
func (s stubTransport) Name() string {
return string(s)
}
// ParseReference converts a string, which should not start with the ImageTransport.Name prefix, into an ImageReference.
func (s stubTransport) ParseReference(reference string) (types.ImageReference, error) {
return nil, fmt.Errorf(`The transport "%s:" is not supported in this build`, string(s))
}
// ValidatePolicyConfigurationScope checks that scope is a valid key for signature.PolicyTransportScopes
// (i.e. a valid PolicyConfigurationIdentity() or PolicyConfigurationNamespaces() return value).
// It is acceptable to allow an invalid value which will never be matched; it can "only" cause user confusion.
// scope passed to this function will not be "", that value is always allowed.
func (s stubTransport) ValidatePolicyConfigurationScope(scope string) error {
// Allowing any reference in here allows tools with some transports stubbed-out to still
// use signature verification policies which refer to these stubbed-out transports.
// See also the treatment of unknown transports in policyTransportScopesWithTransport.UnmarshalJSON .
return nil
}
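The stub-transport pattern above — a transport that keeps its name registered but rejects every reference — can be sketched self-contained, with the `ImageTransport` interface pared down to two methods for the example:

```go
package main

import "fmt"

// ImageTransport is a pared-down stand-in for types.ImageTransport,
// just enough to illustrate the stub pattern.
type ImageTransport interface {
	Name() string
	ParseReference(reference string) (interface{}, error)
}

// stubTransport has a name but rejects any reference it is asked to parse.
type stubTransport string

func (s stubTransport) Name() string { return string(s) }

func (s stubTransport) ParseReference(reference string) (interface{}, error) {
	return nil, fmt.Errorf(`The transport "%s:" is not supported in this build`, string(s))
}

func main() {
	t := ImageTransport(stubTransport("ostree"))
	_, err := t.ParseReference("repo@/ostree/repo")
	fmt.Println(t.Name(), "->", err)
}
```

Per the build-tagged files above, building with `-tags containers_image_ostree_stub` registers this stub in place of the real ostree transport, so signature policies naming the transport still parse even though references fail.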


@@ -2,6 +2,7 @@ package transports
import (
"fmt"
"sort"
"sync"
"github.com/containers/image/types"
@@ -69,3 +70,15 @@ func Register(t types.ImageTransport) {
func ImageName(ref types.ImageReference) string {
return ref.Transport().Name() + ":" + ref.StringWithinTransport()
}
// ListNames returns a list of transport names
func ListNames() []string {
kt.mu.Lock()
defer kt.mu.Unlock()
var names []string
for _, transport := range kt.transports {
names = append(names, transport.Name())
}
sort.Strings(names)
return names
}


@@ -148,11 +148,11 @@ type ImageDestination interface {
SupportsSignatures() error
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
ShouldCompressLayers() bool
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
// uploaded to the image destination, true otherwise.
AcceptsForeignLayerURLs() bool
// MustMatchRuntimeOS returns true iff the destination can store only images targeted for the current runtime OS. False otherwise.
MustMatchRuntimeOS() bool
// PutBlob writes contents of stream and returns data representing the result (with all data filled in).
// inputInfo.Digest can be optionally provided if known; it is not mandatory for the implementation to verify it.
// inputInfo.Size is the expected length of stream, if known.
@@ -307,7 +307,10 @@ type SystemContext struct {
// If not "", a directory containing a CA certificate (ending with ".crt"),
// a client certificate (ending with ".cert") and a client certificate key
// (ending with ".key") used when talking to a Docker Registry.
DockerCertPath string
DockerCertPath string
// If not "", overrides the systems default path for a directory containing host[:port] subdirectories with the same structure as DockerCertPath above.
// Ignored if DockerCertPath is non-empty.
DockerPerHostCertDirPath string
DockerInsecureSkipTLSVerify bool // Allow contacting docker registries over HTTP, or HTTPS with failed TLS verification. Note that this does not affect other TLS connections.
// if nil, the library tries to parse ~/.docker/config.json to retrieve credentials
DockerAuthConfig *DockerAuthConfig


@@ -50,6 +50,12 @@ type Container struct {
// that has been stored, if they're known.
BigDataSizes map[string]int64 `json:"big-data-sizes,omitempty"`
// Created is the datestamp for when this container was created. Older
// versions of the library did not track this information, so callers
// will likely want to use the IsZero() method to verify that a value
// is set before using it.
Created time.Time `json:"created,omitempty"`
Flags map[string]interface{} `json:"flags,omitempty"`
}
@@ -253,6 +259,7 @@ func (r *containerStore) Create(id string, names []string, image, layer, metadat
Metadata: metadata,
BigDataNames: []string{},
BigDataSizes: make(map[string]int64),
Created: time.Now().UTC(),
Flags: make(map[string]interface{}),
}
r.containers = append(r.containers, container)
@@ -309,10 +316,11 @@ func (r *containerStore) Delete(id string) error {
return ErrContainerUnknown
}
id = container.ID
newContainers := []*Container{}
for _, candidate := range r.containers {
if candidate.ID != id {
newContainers = append(newContainers, candidate)
toDeleteIndex := -1
for i, candidate := range r.containers {
if candidate.ID == id {
toDeleteIndex = i
break
}
}
delete(r.byid, id)
@@ -321,7 +329,14 @@ func (r *containerStore) Delete(id string) error {
for _, name := range container.Names {
delete(r.byname, name)
}
r.containers = newContainers
if toDeleteIndex != -1 {
// delete the container at toDeleteIndex
if toDeleteIndex == len(r.containers)-1 {
r.containers = r.containers[:len(r.containers)-1]
} else {
r.containers = append(r.containers[:toDeleteIndex], r.containers[toDeleteIndex+1:]...)
}
}
if err := r.Save(); err != nil {
return err
}


@@ -46,6 +46,12 @@ type Image struct {
// that has been stored, if they're known.
BigDataSizes map[string]int64 `json:"big-data-sizes,omitempty"`
// Created is the datestamp for when this image was created. Older
// versions of the library did not track this information, so callers
// will likely want to use the IsZero() method to verify that a value
// is set before using it.
Created time.Time `json:"created,omitempty"`
Flags map[string]interface{} `json:"flags,omitempty"`
}
@@ -80,7 +86,7 @@ type ImageStore interface {
// Create creates an image that has a specified ID (or a random one) and
// optional names, using the specified layer as its topmost (hopefully
// read-only) layer. That layer can be referenced by multiple images.
Create(id string, names []string, layer, metadata string) (*Image, error)
Create(id string, names []string, layer, metadata string, created time.Time) (*Image, error)
// SetNames replaces the list of names associated with an image with the
// supplied values.
@@ -254,7 +260,7 @@ func (r *imageStore) SetFlag(id string, flag string, value interface{}) error {
return r.Save()
}
func (r *imageStore) Create(id string, names []string, layer, metadata string) (image *Image, err error) {
func (r *imageStore) Create(id string, names []string, layer, metadata string, created time.Time) (image *Image, err error) {
if !r.IsReadWrite() {
return nil, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to create new images at %q", r.imagespath())
}
@@ -274,6 +280,9 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
return nil, ErrDuplicateName
}
}
if created.IsZero() {
created = time.Now().UTC()
}
if err == nil {
image = &Image{
ID: id,
@@ -282,6 +291,7 @@ func (r *imageStore) Create(id string, names []string, layer, metadata string) (
Metadata: metadata,
BigDataNames: []string{},
BigDataSizes: make(map[string]int64),
Created: created,
Flags: make(map[string]interface{}),
}
r.images = append(r.images, image)
@@ -346,10 +356,10 @@ func (r *imageStore) Delete(id string) error {
return ErrImageUnknown
}
id = image.ID
newImages := []*Image{}
for _, candidate := range r.images {
if candidate.ID != id {
newImages = append(newImages, candidate)
toDeleteIndex := -1
for i, candidate := range r.images {
if candidate.ID == id {
toDeleteIndex = i
}
}
delete(r.byid, id)
@@ -357,7 +367,14 @@ func (r *imageStore) Delete(id string) error {
for _, name := range image.Names {
delete(r.byname, name)
}
r.images = newImages
if toDeleteIndex != -1 {
// delete the image at toDeleteIndex
if toDeleteIndex == len(r.images)-1 {
r.images = r.images[:len(r.images)-1]
} else {
r.images = append(r.images[:toDeleteIndex], r.images[toDeleteIndex+1:]...)
}
}
if err := r.Save(); err != nil {
return err
}


@@ -15,6 +15,7 @@ import (
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/containers/storage/pkg/truncindex"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/vbatts/tar-split/tar/asm"
"github.com/vbatts/tar-split/tar/storage"
@@ -66,6 +67,38 @@ type Layer struct {
// mounted at the mount point.
MountCount int `json:"-"`
// Created is the datestamp for when this layer was created. Older
// versions of the library did not track this information, so callers
// will likely want to use the IsZero() method to verify that a value
// is set before using it.
Created time.Time `json:"created,omitempty"`
// CompressedDigest is the digest of the blob that was last passed to
// ApplyDiff() or Put(), as it was presented to us.
CompressedDigest digest.Digest `json:"compressed-diff-digest,omitempty"`
// CompressedSize is the length of the blob that was last passed to
// ApplyDiff() or Put(), as it was presented to us. If
// CompressedDigest is not set, this should be treated as if it were an
// uninitialized value.
CompressedSize int64 `json:"compressed-size,omitempty"`
// UncompressedDigest is the digest of the blob that was last passed to
// ApplyDiff() or Put(), after we decompressed it. Often referred to
// as a DiffID.
UncompressedDigest digest.Digest `json:"diff-digest,omitempty"`
// UncompressedSize is the length of the blob that was last passed to
// ApplyDiff() or Put(), after we decompressed it. If
// UncompressedDigest is not set, this should be treated as if it were
// an uninitialized value.
UncompressedSize int64 `json:"diff-size,omitempty"`
// CompressionType is the type of compression which we detected on the blob
// that was last passed to ApplyDiff() or Put().
CompressionType archive.Compression `json:"compression,omitempty"`
// Flags is arbitrary data about the layer.
Flags map[string]interface{} `json:"flags,omitempty"`
}
@@ -75,6 +108,12 @@ type layerMountPoint struct {
MountCount int `json:"count"`
}
// DiffOptions override the default behavior of Diff() methods.
type DiffOptions struct {
// Compression, if set overrides the default compressor when generating a diff.
Compression *archive.Compression
}
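`DiffOptions` uses a pointer field so that "no override" (a nil pointer) is distinguishable from "explicitly Uncompressed" (a pointer to the zero value). A minimal sketch of that selection logic, with local stand-ins for the archive compression constants:

```go
package main

import "fmt"

// Compression stands in for archive.Compression from containers/storage.
type Compression int

const (
	Uncompressed Compression = iota
	Gzip
)

// DiffOptions mirrors the shape added in the diff above.
type DiffOptions struct {
	// Compression, if non-nil, overrides the compression recorded on the layer.
	Compression *Compression
}

// pickCompression is a local helper showing how Diff() chooses a compressor:
// default to what was recorded when the diff was applied, unless overridden.
func pickCompression(recorded Compression, options *DiffOptions) Compression {
	compression := recorded
	if options != nil && options.Compression != nil {
		compression = *options.Compression
	}
	return compression
}

func main() {
	fmt.Println(pickCompression(Gzip, nil)) // 1 (Gzip: no override)
	uncompressed := Uncompressed
	fmt.Println(pickCompression(Gzip, &DiffOptions{Compression: &uncompressed})) // 0 (explicit override)
}
```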
// ROLayerStore wraps a graph driver, adding the ability to refer to layers by
// name, and keeping track of parent-child relationships, along with a list of
// all known layers.
@@ -101,17 +140,31 @@ type ROLayerStore interface {
// Diff produces a tarstream which can be applied to a layer with the contents
// of the first layer to produce a layer with the contents of the second layer.
// By default, the parent of the second layer is used as the first
// layer, so it need not be specified.
Diff(from, to string) (io.ReadCloser, error)
// layer, so it need not be specified. Options can be used to override
// default behavior, but are also not required.
Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
// DiffSize produces an estimate of the length of the tarstream which would be
// produced by Diff.
DiffSize(from, to string) (int64, error)
// Size produces a cached value for the uncompressed size of the layer,
// if one is known, or -1 if it is not known. If the layer can not be
// found, it returns an error.
Size(name string) (int64, error)
// Lookup attempts to translate a name to an ID. Most methods do this
// implicitly.
Lookup(name string) (string, error)
// LayersByCompressedDigest returns a slice of the layers with the
// specified compressed digest value recorded for them.
LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
// LayersByUncompressedDigest returns a slice of the layers with the
// specified uncompressed digest value recorded for them.
LayersByUncompressedDigest(d digest.Digest) ([]Layer, error)
// Layers returns a slice of the known layers.
Layers() ([]Layer, error)
}
@@ -164,15 +217,17 @@ type LayerStore interface {
}
type layerStore struct {
lockfile Locker
rundir string
driver drivers.Driver
layerdir string
layers []*Layer
idindex *truncindex.TruncIndex
byid map[string]*Layer
byname map[string]*Layer
bymount map[string]*Layer
lockfile Locker
rundir string
driver drivers.Driver
layerdir string
layers []*Layer
idindex *truncindex.TruncIndex
byid map[string]*Layer
byname map[string]*Layer
bymount map[string]*Layer
bycompressedsum map[digest.Digest][]string
byuncompressedsum map[digest.Digest][]string
}
func (r *layerStore) Layers() ([]Layer, error) {
@@ -203,7 +258,8 @@ func (r *layerStore) Load() error {
ids := make(map[string]*Layer)
names := make(map[string]*Layer)
mounts := make(map[string]*Layer)
parents := make(map[string][]*Layer)
compressedsums := make(map[digest.Digest][]string)
uncompressedsums := make(map[digest.Digest][]string)
if err = json.Unmarshal(data, &layers); len(data) == 0 || err == nil {
for n, layer := range layers {
ids[layer.ID] = layers[n]
@@ -215,10 +271,11 @@ func (r *layerStore) Load() error {
}
names[name] = layers[n]
}
if pslice, ok := parents[layer.Parent]; ok {
parents[layer.Parent] = append(pslice, layers[n])
} else {
parents[layer.Parent] = []*Layer{layers[n]}
if layer.CompressedDigest != "" {
compressedsums[layer.CompressedDigest] = append(compressedsums[layer.CompressedDigest], layer.ID)
}
if layer.UncompressedDigest != "" {
uncompressedsums[layer.UncompressedDigest] = append(uncompressedsums[layer.UncompressedDigest], layer.ID)
}
}
}
@@ -247,6 +304,8 @@ func (r *layerStore) Load() error {
r.byid = ids
r.byname = names
r.bymount = mounts
r.bycompressedsum = compressedsums
r.byuncompressedsum = uncompressedsums
err = nil
// Last step: if we're writable, try to remove anything that a previous
// user of this storage area marked for deletion but didn't manage to
@@ -369,6 +428,20 @@ func (r *layerStore) lookup(id string) (*Layer, bool) {
return nil, false
}
func (r *layerStore) Size(name string) (int64, error) {
layer, ok := r.lookup(name)
if !ok {
return -1, ErrLayerUnknown
}
// We use the presence of a non-empty digest as an indicator that the size value was intentionally set, and that
// a zero value is not just present because it was never set to anything else (which can happen if the layer was
// created by a version of this library that didn't keep track of digest and size information).
if layer.UncompressedDigest != "" {
return layer.UncompressedSize, nil
}
return -1, nil
}
func (r *layerStore) ClearFlag(id string, flag string) error {
if !r.IsReadWrite() {
return errors.Wrapf(ErrStoreIsReadOnly, "not allowed to clear flags on layers at %q", r.layerspath())
@@ -440,6 +513,7 @@ func (r *layerStore) Put(id, parent string, names []string, mountLabel string, o
Parent: parent,
Names: names,
MountLabel: mountLabel,
Created: time.Now().UTC(),
Flags: make(map[string]interface{}),
}
r.layers = append(r.layers, layer)
@@ -615,13 +689,21 @@ func (r *layerStore) Delete(id string) error {
if layer.MountPoint != "" {
delete(r.bymount, layer.MountPoint)
}
newLayers := []*Layer{}
for _, candidate := range r.layers {
if candidate.ID != id {
newLayers = append(newLayers, candidate)
toDeleteIndex := -1
for i, candidate := range r.layers {
if candidate.ID == id {
toDeleteIndex = i
break
}
}
if toDeleteIndex != -1 {
// delete the layer at toDeleteIndex
if toDeleteIndex == len(r.layers)-1 {
r.layers = r.layers[:len(r.layers)-1]
} else {
r.layers = append(r.layers[:toDeleteIndex], r.layers[toDeleteIndex+1:]...)
}
}
r.layers = newLayers
if err = r.Save(); err != nil {
return err
}
@@ -726,20 +808,20 @@ func (r *layerStore) newFileGetter(id string) (drivers.FileGetCloser, error) {
}, nil
}
func (r *layerStore) Diff(from, to string) (io.ReadCloser, error) {
func (r *layerStore) Diff(from, to string, options *DiffOptions) (io.ReadCloser, error) {
var metadata storage.Unpacker
from, to, toLayer, err := r.findParentAndLayer(from, to)
if err != nil {
return nil, ErrLayerUnknown
}
compression := archive.Uncompressed
if cflag, ok := toLayer.Flags[compressionFlag]; ok {
if ctype, ok := cflag.(float64); ok {
compression = archive.Compression(ctype)
} else if ctype, ok := cflag.(archive.Compression); ok {
compression = archive.Compression(ctype)
}
// Default to applying the type of compression that we noted was used
// for the layer diff when it was applied.
compression := toLayer.CompressionType
// If a particular compression type (or no compression) was selected,
// use that instead.
if options != nil && options.Compression != nil {
compression = *options.Compression
}
if from != toLayer.Parent {
diff, err := r.driver.Diff(to, from)
@@ -827,6 +909,10 @@ func (r *layerStore) DiffSize(from, to string) (size int64, err error) {
}
func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err error) {
if !r.IsReadWrite() {
return -1, errors.Wrapf(ErrStoreIsReadOnly, "not allowed to modify layer contents at %q", r.layerspath())
}
layer, ok := r.lookup(to)
if !ok {
return -1, ErrLayerUnknown
@@ -839,7 +925,9 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
}
compression := archive.DetectCompression(header[:n])
defragmented := io.MultiReader(bytes.NewBuffer(header[:n]), diff)
compressedDigest := digest.Canonical.Digester()
compressedCounter := ioutils.NewWriteCounter(compressedDigest.Hash())
defragmented := io.TeeReader(io.MultiReader(bytes.NewBuffer(header[:n]), diff), compressedCounter)
tsdata := bytes.Buffer{}
compressor, err := gzip.NewWriterLevel(&tsdata, gzip.BestSpeed)
@@ -847,15 +935,20 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
compressor = gzip.NewWriter(&tsdata)
}
metadata := storage.NewJSONPacker(compressor)
decompressed, err := archive.DecompressStream(defragmented)
uncompressed, err := archive.DecompressStream(defragmented)
if err != nil {
return -1, err
}
payload, err := asm.NewInputTarStream(decompressed, metadata, storage.NewDiscardFilePutter())
uncompressedDigest := digest.Canonical.Digester()
uncompressedCounter := ioutils.NewWriteCounter(uncompressedDigest.Hash())
payload, err := asm.NewInputTarStream(io.TeeReader(uncompressed, uncompressedCounter), metadata, storage.NewDiscardFilePutter())
if err != nil {
return -1, err
}
size, err = r.driver.ApplyDiff(layer.ID, layer.Parent, payload)
if err != nil {
return -1, err
}
compressor.Close()
if err == nil {
if err := os.MkdirAll(filepath.Dir(r.tspath(layer.ID)), 0700); err != nil {
@@ -866,15 +959,57 @@ func (r *layerStore) ApplyDiff(to string, diff archive.Reader) (size int64, err
}
}
if compression != archive.Uncompressed {
layer.Flags[compressionFlag] = compression
} else {
delete(layer.Flags, compressionFlag)
updateDigestMap := func(m *map[digest.Digest][]string, oldvalue, newvalue digest.Digest, id string) {
var newList []string
if oldvalue != "" {
for _, value := range (*m)[oldvalue] {
if value != id {
newList = append(newList, value)
}
}
if len(newList) > 0 {
(*m)[oldvalue] = newList
} else {
delete(*m, oldvalue)
}
}
if newvalue != "" {
(*m)[newvalue] = append((*m)[newvalue], id)
}
}
updateDigestMap(&r.bycompressedsum, layer.CompressedDigest, compressedDigest.Digest(), layer.ID)
layer.CompressedDigest = compressedDigest.Digest()
layer.CompressedSize = compressedCounter.Count
updateDigestMap(&r.byuncompressedsum, layer.UncompressedDigest, uncompressedDigest.Digest(), layer.ID)
layer.UncompressedDigest = uncompressedDigest.Digest()
layer.UncompressedSize = uncompressedCounter.Count
layer.CompressionType = compression
err = r.Save()
return size, err
}
func (r *layerStore) layersByDigestMap(m map[digest.Digest][]string, d digest.Digest) ([]Layer, error) {
var layers []Layer
for _, layerID := range m[d] {
layer, ok := r.lookup(layerID)
if !ok {
return nil, ErrLayerUnknown
}
layers = append(layers, *layer)
}
return layers, nil
}
func (r *layerStore) LayersByCompressedDigest(d digest.Digest) ([]Layer, error) {
return r.layersByDigestMap(r.bycompressedsum, d)
}
func (r *layerStore) LayersByUncompressedDigest(d digest.Digest) ([]Layer, error) {
return r.layersByDigestMap(r.byuncompressedsum, d)
}
func (r *layerStore) Lock() {
r.lockfile.Lock()
}


@@ -20,6 +20,7 @@ import (
"github.com/containers/storage/pkg/idtools"
"github.com/containers/storage/pkg/ioutils"
"github.com/containers/storage/pkg/stringid"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
@@ -298,8 +299,8 @@ type Store interface {
DiffSize(from, to string) (int64, error)
// Diff returns the tarstream which would specify the changes returned by
// Changes.
Diff(from, to string) (io.ReadCloser, error)
// Changes. If options are passed in, they can override default behaviors.
Diff(from, to string, options *DiffOptions) (io.ReadCloser, error)
// ApplyDiff applies a tarstream to a layer. Information about the tarstream
// is cached with the layer. Typically, a layer which is populated using a
@@ -307,6 +308,18 @@ type Store interface {
// before or after the diff is applied.
ApplyDiff(to string, diff archive.Reader) (int64, error)
// LayersByCompressedDigest returns a slice of the layers with the
// specified compressed digest value recorded for them.
LayersByCompressedDigest(d digest.Digest) ([]Layer, error)
// LayersByUncompressedDigest returns a slice of the layers with the
// specified uncompressed digest value recorded for them.
LayersByUncompressedDigest(d digest.Digest) ([]Layer, error)
// LayerSize returns a cached approximation of the layer's size, or -1
// if we don't have a value on hand.
LayerSize(id string) (int64, error)
// Layers returns a list of the currently known layers.
Layers() ([]Layer, error)
@@ -422,6 +435,9 @@ type Store interface {
// ImageOptions is used for passing options to a Store's CreateImage() method.
type ImageOptions struct {
// CreationDate, if not zero, will override the default behavior of marking the image as having been
// created when CreateImage() was called, recording CreationDate instead.
CreationDate time.Time
}
// ContainerOptions is used for passing options to a Store's CreateContainer() method.
@@ -793,7 +809,13 @@ func (s *store) CreateImage(id string, names []string, layer, metadata string, o
if modified, err := ristore.Modified(); modified || err != nil {
ristore.Load()
}
return ristore.Create(id, names, layer, metadata)
creationDate := time.Now().UTC()
if options != nil {
creationDate = options.CreationDate
}
return ristore.Create(id, names, layer, metadata, creationDate)
}
func (s *store) CreateContainer(id string, names []string, image, layer, metadata string, options *ContainerOptions) (*Container, error) {
@@ -1747,7 +1769,7 @@ func (s *store) DiffSize(from, to string) (int64, error) {
return -1, ErrLayerUnknown
}
func (s *store) Diff(from, to string) (io.ReadCloser, error) {
func (s *store) Diff(from, to string, options *DiffOptions) (io.ReadCloser, error) {
rlstore, err := s.LayerStore()
if err != nil {
return nil, err
@@ -1764,7 +1786,7 @@ func (s *store) Diff(from, to string) (io.ReadCloser, error) {
rlstore.Load()
}
if rlstore.Exists(to) {
return rlstore.Diff(from, to)
return rlstore.Diff(from, to, options)
}
}
return nil, ErrLayerUnknown
@@ -1786,6 +1808,65 @@ func (s *store) ApplyDiff(to string, diff archive.Reader) (int64, error) {
return -1, ErrLayerUnknown
}
func (s *store) layersByMappedDigest(m func(ROLayerStore, digest.Digest) ([]Layer, error), d digest.Digest) ([]Layer, error) {
var layers []Layer
rlstore, err := s.LayerStore()
if err != nil {
return nil, err
}
stores, err := s.ROLayerStores()
if err != nil {
return nil, err
}
stores = append([]ROLayerStore{rlstore}, stores...)
for _, rlstore := range stores {
rlstore.Lock()
defer rlstore.Unlock()
if modified, err := rlstore.Modified(); modified || err != nil {
rlstore.Load()
}
slayers, err := m(rlstore, d)
if err != nil {
return nil, err
}
layers = append(layers, slayers...)
}
return layers, nil
}
func (s *store) LayersByCompressedDigest(d digest.Digest) ([]Layer, error) {
return s.layersByMappedDigest(func(r ROLayerStore, d digest.Digest) ([]Layer, error) { return r.LayersByCompressedDigest(d) }, d)
}
func (s *store) LayersByUncompressedDigest(d digest.Digest) ([]Layer, error) {
return s.layersByMappedDigest(func(r ROLayerStore, d digest.Digest) ([]Layer, error) { return r.LayersByUncompressedDigest(d) }, d)
}
func (s *store) LayerSize(id string) (int64, error) {
lstore, err := s.LayerStore()
if err != nil {
return -1, err
}
lstores, err := s.ROLayerStores()
if err != nil {
return -1, err
}
lstores = append([]ROLayerStore{lstore}, lstores...)
for _, rlstore := range lstores {
rlstore.Lock()
defer rlstore.Unlock()
if modified, err := rlstore.Modified(); modified || err != nil {
rlstore.Load()
}
if rlstore.Exists(id) {
return rlstore.Size(id)
}
}
return -1, ErrLayerUnknown
}
func (s *store) Layers() ([]Layer, error) {
var layers []Layer
rlstore, err := s.LayerStore()


@@ -14,7 +14,11 @@
package v1
import "time"
import (
"time"
digest "github.com/opencontainers/go-digest"
)
// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.
type ImageConfig struct {
@@ -40,7 +44,10 @@ type ImageConfig struct {
WorkingDir string `json:"WorkingDir,omitempty"`
// Labels contains arbitrary metadata for the container.
Labels map[string]string `json:"labels,omitempty"`
Labels map[string]string `json:"Labels,omitempty"`
// StopSignal contains the system call signal that will be sent to the container to exit.
StopSignal string `json:"StopSignal,omitempty"`
}
// RootFS describes a layer's content addresses.
@@ -49,13 +56,13 @@ type RootFS struct {
Type string `json:"type"`
// DiffIDs is an array of layer content hashes (DiffIDs), in order from bottom-most to top-most.
DiffIDs []string `json:"diff_ids"`
DiffIDs []digest.Digest `json:"diff_ids"`
}
// History describes the history of a layer.
type History struct {
// Created is the combined date and time at which the layer was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// CreatedBy is the command which created the layer.
CreatedBy string `json:"created_by,omitempty"`
@@ -74,7 +81,7 @@ type History struct {
// This provides the `application/vnd.oci.image.config.v1+json` mediatype when marshalled to JSON.
type Image struct {
// Created is the combined date and time at which the image was created, formatted as defined by RFC 3339, section 5.6.
Created time.Time `json:"created,omitempty"`
Created *time.Time `json:"created,omitempty"`
// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.
Author string `json:"author,omitempty"`


@@ -17,7 +17,8 @@ package v1
import digest "github.com/opencontainers/go-digest"
// Descriptor describes the disposition of targeted content.
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype when marshalled to JSON
// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype
// when marshalled to JSON.
type Descriptor struct {
// MediaType is the media type of the object this schema refers to.
MediaType string `json:"mediaType,omitempty"`
@@ -33,4 +34,31 @@ type Descriptor struct {
// Annotations contains arbitrary metadata relating to the targeted content.
Annotations map[string]string `json:"annotations,omitempty"`
// Platform describes the platform which the image in the manifest runs on.
//
// This should only be used when referring to a manifest.
Platform *Platform `json:"platform,omitempty"`
}
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example on Windows `10.0.14393.1066`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `v7` to specify ARMv7 when architecture is `arm`.
Variant string `json:"variant,omitempty"`
}


@@ -1,63 +0,0 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import "github.com/opencontainers/image-spec/specs-go"
// Platform describes the platform which the image in the manifest runs on.
type Platform struct {
// Architecture field specifies the CPU architecture, for example
// `amd64` or `ppc64`.
Architecture string `json:"architecture"`
// OS specifies the operating system, for example `linux` or `windows`.
OS string `json:"os"`
// OSVersion is an optional field specifying the operating system
// version, for example `10.0.10586`.
OSVersion string `json:"os.version,omitempty"`
// OSFeatures is an optional field specifying an array of strings,
// each listing a required OS feature (for example on Windows `win32k`).
OSFeatures []string `json:"os.features,omitempty"`
// Variant is an optional field specifying a variant of the CPU, for
// example `ppc64le` to specify a little-endian version of a PowerPC CPU.
Variant string `json:"variant,omitempty"`
// Features is an optional field specifying an array of strings, each
// listing a required CPU feature (for example `sse4` or `aes`).
Features []string `json:"features,omitempty"`
}
// ManifestDescriptor describes a platform specific manifest.
type ManifestDescriptor struct {
Descriptor
// Platform describes the platform which the image in the manifest runs on.
Platform Platform `json:"platform"`
}
// ImageIndex references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type ImageIndex struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []ManifestDescriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -0,0 +1,29 @@
// Copyright 2016 The Linux Foundation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package v1
import "github.com/opencontainers/image-spec/specs-go"
// Index references manifests for various platforms.
// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
type Index struct {
specs.Versioned
// Manifests references platform specific manifests.
Manifests []Descriptor `json:"manifests"`
// Annotations contains arbitrary metadata for the image index.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -27,6 +27,6 @@ type Manifest struct {
// Layers is an indexed list of layers referenced by the manifest.
Layers []Descriptor `json:"layers"`
-// Annotations contains arbitrary metadata for the manifest.
+// Annotations contains arbitrary metadata for the image manifest.
Annotations map[string]string `json:"annotations,omitempty"`
}


@@ -18,6 +18,9 @@ const (
// MediaTypeDescriptor specifies the media type for a content descriptor.
MediaTypeDescriptor = "application/vnd.oci.descriptor.v1+json"
// MediaTypeLayoutHeader specifies the media type for the oci-layout.
MediaTypeLayoutHeader = "application/vnd.oci.layout.header.v1+json"
// MediaTypeImageManifest specifies the media type for an image manifest.
MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"


@@ -25,7 +25,7 @@ const (
VersionPatch = 0
// VersionDev indicates development branch. Releases will be empty string.
-VersionDev = "-rc5"
+VersionDev = "-rc6-dev"
)
// Version is the specification version that the package types support.

vendor/github.com/ostreedev/ostree-go/LICENSE generated vendored Normal file

@@ -0,0 +1,17 @@
Portions of this code are derived from:
https://github.com/dradtke/gotk3
Copyright (c) 2013 Conformal Systems LLC.
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.


@@ -0,0 +1,60 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
import (
"unsafe"
)
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
/*
* GBoolean
*/
// GBoolean is a Go representation of glib's gboolean
type GBoolean C.gboolean
func NewGBoolean() GBoolean {
return GBoolean(0)
}
func GBool(b bool) GBoolean {
if b {
return GBoolean(1)
}
return GBoolean(0)
}
func (b GBoolean) Ptr() unsafe.Pointer {
return unsafe.Pointer(&b)
}
func GoBool(b GBoolean) bool {
if b != 0 {
return true
}
return false
}


@@ -0,0 +1,47 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"unsafe"
)
// GIO types
type GCancellable struct {
*GObject
}
func (self *GCancellable) native() *C.GCancellable {
return (*C.GCancellable)(unsafe.Pointer(self))
}
func (self *GCancellable) Ptr() unsafe.Pointer {
return unsafe.Pointer(self)
}
// At the moment, no cancellable API, just pass nil


@@ -0,0 +1,71 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"errors"
"unsafe"
)
/*
* GError
*/
// GError is a representation of GLib's GError
type GError struct {
ptr unsafe.Pointer
}
func NewGError() GError {
return GError{nil}
}
func (e GError) Ptr() unsafe.Pointer {
if e.ptr == nil {
return nil
}
return e.ptr
}
func (e GError) Nil() {
e.ptr = nil
}
func (e *GError) native() *C.GError {
if e == nil || e.ptr == nil {
return nil
}
return (*C.GError)(e.ptr)
}
func ToGError(ptr unsafe.Pointer) GError {
return GError{ptr}
}
func ConvertGError(e GError) error {
defer C.g_error_free(e.native())
return errors.New(C.GoString((*C.char)(C._g_error_get_message(e.native()))))
}


@@ -0,0 +1,52 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"unsafe"
)
/*
* GFile
*/
type GFile struct {
ptr unsafe.Pointer
}
func (f GFile) Ptr() unsafe.Pointer {
return f.ptr
}
func NewGFile() *GFile {
return &GFile{nil}
}
func ToGFile(ptr unsafe.Pointer) *GFile {
gf := NewGFile()
gf.ptr = ptr
return gf
}


@@ -0,0 +1,53 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"unsafe"
)
/*
* GFileInfo
*/
type GFileInfo struct {
ptr unsafe.Pointer
}
func (fi GFileInfo) Ptr() unsafe.Pointer {
return fi.ptr
}
func NewGFileInfo() GFileInfo {
var fi GFileInfo = GFileInfo{nil}
return fi
}
func ToGFileInfo(p unsafe.Pointer) *GFileInfo {
var fi *GFileInfo = &GFileInfo{}
fi.ptr = p
return fi
}


@@ -0,0 +1,50 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
import (
"unsafe"
)
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
/*
* GHashTable
*/
type GHashTable struct {
ptr unsafe.Pointer
}
func (ht *GHashTable) Ptr() unsafe.Pointer {
return ht.ptr
}
func (ht *GHashTable) native() *C.GHashTable {
return (*C.GHashTable)(ht.ptr)
}
func ToGHashTable(ptr unsafe.Pointer) *GHashTable {
return &GHashTable{ptr}
}


@@ -0,0 +1,50 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
import (
"unsafe"
)
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
/*
* GHashTableIter
*/
type GHashTableIter struct {
ptr unsafe.Pointer
}
func (ht *GHashTableIter) Ptr() unsafe.Pointer {
return ht.ptr
}
func (ht *GHashTableIter) native() *C.GHashTableIter {
return (*C.GHashTableIter)(ht.ptr)
}
func ToGHashTableIter(ptr unsafe.Pointer) *GHashTableIter {
return &GHashTableIter{ptr}
}


@@ -0,0 +1,27 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"


@@ -0,0 +1,17 @@
#include <glib.h>
static char *
_g_error_get_message (GError *error)
{
g_assert (error != NULL);
return error->message;
}
static const char *
_g_variant_lookup_string (GVariant *v, const char *key)
{
const char *r;
if (g_variant_lookup (v, key, "&s", &r))
return r;
return NULL;
}


@@ -0,0 +1,79 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"unsafe"
)
/*
* GObject
*/
// IObject is an interface type implemented by Object and all types which embed
// an Object. It is meant to be used as a type for function arguments which
// require GObjects or any subclasses thereof.
type IObject interface {
toGObject() *C.GObject
ToObject() *GObject
}
// GObject is a representation of GLib's GObject.
type GObject struct {
ptr unsafe.Pointer
}
func (v *GObject) Ptr() unsafe.Pointer {
return v.ptr
}
func (v *GObject) native() *C.GObject {
if v == nil {
return nil
}
return (*C.GObject)(v.ptr)
}
func (v *GObject) Ref() {
C.g_object_ref(C.gpointer(v.Ptr()))
}
func (v *GObject) Unref() {
C.g_object_unref(C.gpointer(v.Ptr()))
}
func (v *GObject) RefSink() {
C.g_object_ref_sink(C.gpointer(v.native()))
}
func (v *GObject) IsFloating() bool {
c := C.g_object_is_floating(C.gpointer(v.native()))
return GoBool(GBoolean(c))
}
func (v *GObject) ForceFloating() {
C.g_object_force_floating(v.native())
}


@@ -0,0 +1,51 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
import (
"unsafe"
)
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
/*
* GOptionContext
*/
type GOptionContext struct {
ptr unsafe.Pointer
}
func (oc *GOptionContext) Ptr() unsafe.Pointer {
return oc.ptr
}
func (oc *GOptionContext) native() *C.GOptionContext {
return (*C.GOptionContext)(oc.ptr)
}
func ToGOptionContext(ptr unsafe.Pointer) GOptionContext {
return GOptionContext{ptr}
}


@@ -0,0 +1,97 @@
/*
* Copyright (c) 2013 Conformal Systems <info@conformal.com>
*
* This file originated from: http://opensource.conformal.com/
*
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
* ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
* ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
* OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
package glibobject
// #cgo pkg-config: glib-2.0 gobject-2.0
// #include <glib.h>
// #include <glib-object.h>
// #include <gio/gio.h>
// #include "glibobject.go.h"
// #include <stdlib.h>
import "C"
import (
"fmt"
"unsafe"
)
/*
* GVariant
*/
type GVariant struct {
ptr unsafe.Pointer
}
//func GVariantNew(p unsafe.Pointer) *GVariant {
//o := &GVariant{p}
//runtime.SetFinalizer(o, (*GVariant).Unref)
//return o;
//}
//func GVariantNewSink(p unsafe.Pointer) *GVariant {
//o := &GVariant{p}
//runtime.SetFinalizer(o, (*GVariant).Unref)
//o.RefSink()
//return o;
//}
func (v *GVariant) native() *C.GVariant {
return (*C.GVariant)(v.ptr)
}
func (v *GVariant) Ptr() unsafe.Pointer {
return v.ptr
}
func (v *GVariant) Ref() {
C.g_variant_ref(v.native())
}
func (v *GVariant) Unref() {
C.g_variant_unref(v.native())
}
func (v *GVariant) RefSink() {
C.g_variant_ref_sink(v.native())
}
func (v *GVariant) TypeString() string {
cs := (*C.char)(C.g_variant_get_type_string(v.native()))
return C.GoString(cs)
}
func (v *GVariant) GetChildValue(i int) *GVariant {
cchild := C.g_variant_get_child_value(v.native(), C.gsize(i))
return (*GVariant)(unsafe.Pointer(cchild))
}
func (v *GVariant) LookupString(key string) (string, error) {
ckey := C.CString(key)
defer C.free(unsafe.Pointer(ckey))
// TODO: Find a way to have constant C strings in golang
cstr := C._g_variant_lookup_string(v.native(), ckey)
if cstr == nil {
return "", fmt.Errorf("No such key: %s", key)
}
return C.GoString(cstr), nil
}
func ToGVariant(ptr unsafe.Pointer) *GVariant {
return &GVariant{ptr}
}

Some files were not shown because too many files have changed in this diff.