Compare commits

...

115 Commits
v0.3 ... v0.6

Author SHA1 Message Date
Daniel J Walsh
de0fb93f3d Bump to 0.6
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-15 17:47:53 +00:00
Urvashi Mohnani
4419612150 Add manifest type conversion to buildah push
buildah push supports manifest type conversion when pushing using the 'dir' transport.
Manifest types include oci, v2s1, and v2s2. For example:
buildah push --format v2s2 alpine dir:my-directory

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #321
Approved by: rhatdan
2017-11-15 13:38:28 +00:00
Urvashi Mohnani
5ececfad2c Vendor in latest container/image
Adds support for converting manifest types when using the dir transport

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #321
Approved by: rhatdan
2017-11-15 13:38:28 +00:00
TomSweeneyRedHat
4f376bbb5e Set option.terminal appropriately in run
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #323
Approved by: rhatdan
2017-11-14 19:28:51 +00:00
Anthony Green
d03123204d Add RHEL build instructions.
Signed-off-by: Anthony Green <green@redhat.com>

Closes: #322
Approved by: rhatdan
2017-11-10 11:36:11 +00:00
Nalin Dahyabhai
0df1c44b12 tests: check $status whenever we use run
Always be sure to check $status after using the run helper.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
75fbb8483e Test that "run" fails with unresolvable names
Add a test that makes sure that "buildah run" fails if it can't resolve
the name of the user for the container.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
52e2737460 Rework how we do UID resolution in images
* Use chroot() instead of trying to read the right file ourselves.

This should resolve #66.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
c83cd3fba9 Accept numeric USER values with no group ID
Change our behavior when we're given USER with a numeric UID and no GID,
so that we no longer error out when the UID doesn't correspond to a known
user whose primary GID we could use. Instead, fall back to GID 0.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
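The commit above changes lookup behavior rather than adding an API, so the following is only a rough sketch of the described fallback; the resolveUserGroup helper and the caller-supplied passwd path are hypothetical stand-ins for buildah's real resolver. The point it illustrates: a numeric UID with no explicit GID and no matching passwd entry now yields GID 0 instead of an error.

```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// resolveUserGroup is a hypothetical sketch, not buildah's actual resolver:
// given a USER value and the path to the container's passwd file, return a
// UID/GID pair.
func resolveUserGroup(userSpec, passwdPath string) (uint32, uint32, error) {
	parts := strings.SplitN(userSpec, ":", 2)
	// Explicit "uid:gid" form: parse both halves and we're done.
	if len(parts) == 2 {
		u, uerr := strconv.ParseUint(parts[0], 10, 32)
		g, gerr := strconv.ParseUint(parts[1], 10, 32)
		if uerr != nil || gerr != nil {
			return 0, 0, fmt.Errorf("cannot parse %q as numeric uid:gid", userSpec)
		}
		return uint32(u), uint32(g), nil
	}
	u, numErr := strconv.ParseUint(parts[0], 10, 32)
	if numErr != nil {
		return 0, 0, fmt.Errorf("%q is not numeric; a name lookup would be needed", userSpec)
	}
	// Try to find the user's primary GID in the container's passwd file.
	if f, ferr := os.Open(passwdPath); ferr == nil {
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			// passwd format: name:passwd:uid:gid:...
			fields := strings.Split(scanner.Text(), ":")
			if len(fields) >= 4 && fields[2] == parts[0] {
				if g, gerr := strconv.ParseUint(fields[3], 10, 32); gerr == nil {
					return uint32(u), uint32(g), nil
				}
			}
		}
	}
	// No matching entry (or unreadable passwd file): previously an error,
	// now fall back to GID 0.
	return uint32(u), 0, nil
}

func main() {
	uid, gid, err := resolveUserGroup("1234", "/etc/passwd") // illustrative values
	fmt.Println(uid, gid, err)
}
```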
Nalin Dahyabhai
d41ac23a03 Add a test for USER symlink resolution
Add a test that makes sure we catch cases where we attempt to open a
file in the container's tree that's actually a symlink that points out
of the tree.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
dbebeb7235 Never use host methods for parsing USER values
Drop fallbacks for resolving USER values that attempt to look up names
on the host, since that's never predictable.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
9e129fd653 fopenContainerFile: scope filename lookups better
Switch fopenContainerFile from using Stat/Lstat after opening the file
to using openat() to walk the given path, resolving links to keep them
from escaping the container's root fs.  This should resolve #66.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
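A minimal sketch of the openat()-based walk described above, assuming golang.org/x/sys/unix; the helper name openInRoot and the example mount point are made up, and unlike the real fopenContainerFile this sketch simply rejects symlinks with O_NOFOLLOW instead of resolving in-tree ones.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/sys/unix"
)

// openInRoot walks relPath one component at a time using openat(2), starting
// from rootDir, so the lookup can never follow a path out of rootDir.
func openInRoot(rootDir, relPath string) (*os.File, error) {
	dirfd, err := unix.Open(rootDir, unix.O_RDONLY|unix.O_DIRECTORY, 0)
	if err != nil {
		return nil, err
	}
	components := strings.Split(filepath.Clean("/"+relPath), "/")[1:]
	for i, comp := range components {
		flags := unix.O_RDONLY | unix.O_NOFOLLOW
		if i < len(components)-1 {
			flags |= unix.O_DIRECTORY // intermediate components must be directories
		}
		fd, err := unix.Openat(dirfd, comp, flags, 0)
		unix.Close(dirfd) // done with the parent either way
		if err != nil {
			return nil, fmt.Errorf("opening %q under %q: %w", comp, rootDir, err)
		}
		dirfd = fd
	}
	return os.NewFile(uintptr(dirfd), filepath.Join(rootDir, relPath)), nil
}

func main() {
	// Hypothetical container mount point, for illustration only.
	f, err := openInRoot("/var/lib/containers/some-rootfs", "etc/passwd")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	fmt.Println("opened", f.Name())
}
```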
Nalin Dahyabhai
0a44c7f162 "run --hostname test": do less setup
We don't need to mount the container for this test or add files to it,
and switching to a smaller base image that already includes a "hostname"
command means we don't need to run a package installer in the container.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:58 +00:00
Nalin Dahyabhai
b12735358a "run --hostname test": print $output more
Make it easier to troubleshoot the "run --hostname" test.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:58 +00:00
Nalin Dahyabhai
318beaa720 integration tests: default to /var/tmp
Default to running integration tests using /var/tmp as scratch space,
since it's more likely to support proper SELinux labeling than /tmp,
which is more likely to be on a tmpfs.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:57 +00:00
Nalin Dahyabhai
f7dc659e52 Bump github.com/vbatts/tar-split
Update github.com/vbatts/tar-split to v0.10.2 and pin that version
instead of master, to pick up https://github.com/vbatts/tar-split/pull/42

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #318
Approved by: rhatdan
2017-11-08 17:36:54 +00:00
Daniel J Walsh
35afa1c1f4 Merge pull request #317 from rhatdan/master
Bump to v0.5
2017-11-07 19:51:29 -05:00
Daniel J Walsh
c71b655cfc Bump to v0.5
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-08 00:50:37 +00:00
Daniel J Walsh
ec9db747d9 Merge pull request #316 from rhatdan/selinux
Add secrets patch to buildah
2017-11-07 19:40:04 -05:00
Daniel J Walsh
3e8ded8646 Add secrets patch to buildah
Signed-off-by: umohnani8 <umohnani@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-08 00:01:57 +00:00
Daniel J Walsh
966f32b2ac Add proper SELinux labeling to buildah run
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #294
Approved by: nalind
2017-11-07 22:40:29 +00:00
Daniel J Walsh
cde99f8517 Merge pull request #308 from TomSweeneyRedHat/dev/tsweeney/tip2
Add go tip to build, but allow it to have failures
2017-11-07 14:14:13 -05:00
TomSweeneyRedHat
01db066498 Add tls-verify to bud command
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #297
Approved by: nalind
2017-11-07 19:07:30 +00:00
TomSweeneyRedHat
9653e2ba9a Add go tip to build, but allow it to have failures
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-11-02 16:59:37 -04:00
Daniel J Walsh
4d87007327 Merge pull request #307 from ipbabble/tutorial-fix
Made some edits based on tsweeney feedback.
2017-11-02 13:40:05 -04:00
William Henry
dbea38b440 Add root prompt and some other minor changes
Signed-off-by: William Henry <whenry@redhat.com>
2017-11-02 10:30:11 -06:00
William Henry
0bc120edda Made some edits based on tsweeney feedback.
Signed-off-by: William Henry <whenry@redhat.com>
2017-11-02 10:13:30 -06:00
Daniel J Walsh
297bfa6b30 Merge pull request #305 from TomSweeneyRedHat/dev/tsweeney/tip
Add go 1.9.x to .travis.yml and remove go tip build temporarily
2017-11-02 11:57:17 -04:00
TomSweeneyRedHat
58c078fc88 Add go 1.9.x to .travis.yml and fix tip
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-11-02 11:09:38 -04:00
William Henry
79663fe1a0 Added file tutorials/01-intro.md
A basic introduction to buildah tutorial

Topics covered:
buildah from a base
briefly describes containers/storage and containers/image
buildah run
buildah from scratch
installing packages and files to a scratch image
buildah push and running a buildah built container in docker
buildah bud

Signed-off-by: William Henry <whenry@redhat.com>

Closes: #302
Approved by: rhatdan
2017-10-31 13:06:51 +00:00
Daniel J Walsh
9a4e0e8a28 Merge pull request #299 from rhatdan/logos
Add logos for buildah
2017-10-31 08:47:07 -04:00
TomSweeneyRedHat
515386e1a7 Fix for rpm.bats test issue
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #303
Approved by: nalind
2017-10-30 20:04:51 +00:00
Daniel J Walsh
49bf6fc095 Merge pull request #298 from rhatdan/contributing
Add CONTRIBUTING.md document from skopeo
2017-10-26 15:23:55 -07:00
Daniel J Walsh
d63314d737 Add logos for buildah
Thanks to Máirín Duffy for building these logos for the Buildah project.

The patch also fixes references to the project name Buildah to use upper case.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-26 15:23:24 -07:00
Daniel J Walsh
b186786563 add CONTRIBUTING.md
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-25 12:06:06 -07:00
Daniel J Walsh
3cc0218280 Merge pull request #296 from TomSweeneyRedHat/dev/tsweeney/docfix/17
Add required runc version to README.md
2017-10-22 06:42:48 -04:00
TomSweeneyRedHat
b794edef6a Add required runc version to README.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-10-21 14:41:49 -04:00
Ace-Tang
5cc3c510c5 Fix list images with digest not aligned
Signed-off-by: Ace-Tang <aceapril@126.com>

Closes: #289
Approved by: rhatdan
2017-10-18 13:58:31 +00:00
Daniel J Walsh
5aec4fe722 Vendor in latest code from containers/storage
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #288
Approved by: rhatdan
2017-10-13 19:53:54 +00:00
Daniel J Walsh
1513b82eed Merge pull request #237 from rhatdan/vendor
Vendor in latest containers/storage and vendor/github.com/sirupsen
2017-10-10 14:44:28 -04:00
Daniel J Walsh
7d5e57f7ff Optimize regex matching
`make lint` is showing that we should compile the regex once, before using it
in a for loop, rather than recompiling it on every iteration.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-10 17:36:24 +00:00
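A minimal example of the lint finding mentioned above, not the buildah code itself; the pattern and function names are hypothetical. The point is simply that the regexp is compiled once rather than on every pass through the loop.

```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Compiling the pattern once, at package scope, avoids re-parsing the same
// regular expression on every loop iteration.
var imageIDPattern = regexp.MustCompile(`^[a-f0-9]{64}$`) // hypothetical pattern

func countIDs(values []string) int {
	count := 0
	for _, v := range values {
		// Before the change this would have been
		// regexp.MustCompile(`^[a-f0-9]{64}$`).MatchString(v),
		// recompiling the pattern on each iteration.
		if imageIDPattern.MatchString(v) {
			count++
		}
	}
	return count
}

func main() {
	fmt.Println(countIDs([]string{"not-an-id", strings.Repeat("a", 64)}))
}
```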
Daniel J Walsh
8ecefa978c Vendor in changes to support sirupsen/logrus
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-10 17:30:11 +00:00
Daniel J Walsh
a673ac7ae6 Merge pull request #284 from rhatdan/cleanup
Fix validateOptions to match kpod
2017-10-10 08:22:36 -04:00
Daniel J Walsh
99e512e3f2 Merge pull request #282 from nalind/unit-tests
Run unit tests in CI
2017-10-09 16:20:56 -04:00
Daniel J Walsh
166d4db597 Fix validateOptions to match kpod
This patch will allow for the possibility of a valid "-"
option value. This is often used for taking input from STDIN,
and should future-proof us against this breaking if it is ever added.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-09 16:12:53 +00:00
Nalin Dahyabhai
c04748f3fb Allow specifying store locations for unit tests
Add options for specifying the root location when we're running unit
tests, so that we don't try to use the system's default location, which
we should avoid messing with if we can.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-09 11:29:49 -04:00
Nalin Dahyabhai
63e314ea22 Add make targets for unit tests, and run them
Add targets to the top-level Makefile for running unit and integration
tests, and start having Travis run the unit tests.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 15:44:06 -04:00
Nalin Dahyabhai
0d6bf94eb6 Fix some validation errors
Fix some validation errors flagged by metalinter.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
f88cddfb4d unit tests: always make sure we have images
In unit tests that assume that one or more images are present, make sure
we actually have pulled some images.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
0814bc19bd Make filtering by date use the image's date
When filtering "images" output by the date of an image, use the creation
date recorded in the image.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
422ad51afb Fix a compile error in the unit tests
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
a5a3a7be11 Update test for output formatting changes
When we added more spaces between the columns of output from the
"images" command, we didn't update the test to expect it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
a4b830a9fc images: don't list unnamed images twice
The "images" command was erroneously listing images that don't have
names twice, once with no name, and a second time with "<none>" as a
placeholder.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #280
Approved by: rhatdan
2017-10-06 15:35:10 +00:00
Daniel J Walsh
68ccdd77fe Fix timeout issue
We are seeing some weird timeouts in testing. If we eliminate
the bridged networking in the container runtime, we hope to be
able to eliminate this problem.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #283
Approved by: nalind
2017-10-06 14:30:56 +00:00
TomSweeneyRedHat
6124673bbc Add further tty verbiage to buildah run
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #276
Approved by: rhatdan
2017-10-03 10:44:07 +00:00
TomSweeneyRedHat
cac2dd4dd8 Make inspect try an image on failure if type not specified
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #277
Approved by: rhatdan
2017-10-03 10:31:45 +00:00
Lokesh Mandvekar
70b57afda6 use Makefile var for go compiler
This will allow compilation with a custom go binary,
for example /usr/lib/go-1.8/bin/go instead of /usr/bin/go on Ubuntu
16.04, which still ships Go 1.6.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>

Closes: #278
Approved by: rhatdan
2017-10-03 10:31:07 +00:00
Daniel J Walsh
f6c2a1e24e Make sure pushing ends up with CLI on a fresh new line
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #275
Approved by: rhatdan
2017-09-29 15:58:39 +00:00
Daniel J Walsh
480befa88f Add CHANGELOG.md to document buildah progress.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #273
Approved by: rhatdan
2017-09-28 17:08:11 +00:00
Daniel J Walsh
a3fef4879e Validate options
If a string option is passed in and it is not followed by a value,
then error out with a message saying the option requires a value.

For example

buildah from --creds --pull dan
option --creds requires a value

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #270
Approved by: rhatdan
2017-09-28 17:07:03 +00:00
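The example in the commit message above can be illustrated with a small stand-alone sketch; the helper below is hypothetical and not buildah's actual validateFlags, but it shows the same check: a string option whose next token looks like another option is rejected as missing its value.

```
package main

import (
	"fmt"
	"os"
	"strings"
)

// validateStringOptions checks that each known string option is followed by a
// value rather than by another option (or nothing at all).
func validateStringOptions(args []string, stringOpts map[string]bool) error {
	for i, arg := range args {
		if !stringOpts[arg] {
			continue
		}
		if i+1 >= len(args) || strings.HasPrefix(args[i+1], "-") {
			return fmt.Errorf("option %s requires a value", arg)
		}
	}
	return nil
}

func main() {
	// Mirrors the example from the commit message:
	//   buildah from --creds --pull dan  ->  option --creds requires a value
	args := []string{"--creds", "--pull", "dan"}
	if err := validateStringOptions(args, map[string]bool{"--creds": true}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```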
Daniel J Walsh
330cfc923c Only print logrus if in debug mode
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #270
Approved by: rhatdan
2017-09-28 17:07:03 +00:00
Daniel J Walsh
0fc0551edd Cleanup buildah-run code
Part of the general buildah code cleanup.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
296a752555 Cleanup buildah-rmi code
Part of the general buildah code cleanup.

1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
3fbfb56001 Cleanup buildah-mount code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
50a6a566ca Cleanup buildah-inspect code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
aca2c96602 Cleanup buildah-config code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
57a0f38db6 Cleanup buildah-images code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
ff39bf0b80 Cleanup buildah code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
4b38cff005 Cleanup buildah-push code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
89949a1156 Cleanup buildah-from code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
97ec4563b4 Cleanup buildah-containers code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
47665ad777 Cleanup buildah-commit code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
e1e58584a9 Cleanup buildah-bud code
1. Sort options so they are in alphabetical order.
2. Remove extra lines of code for options parsing that really do not accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
62fc48433c Add support for buildah run --hostname
Need to set the hostname inside of a container.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #266
Approved by: nalind
2017-09-24 09:55:24 +00:00
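A rough sketch of what honoring --hostname involves, assuming the github.com/opencontainers/runtime-spec/specs-go package: the requested name is recorded in the OCI runtime configuration handed to runc, which sets it inside the container's UTS namespace. The helper and the example value below are illustrative only, not buildah's actual code.

```
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// applyHostname records the requested hostname in the runtime spec that will
// be passed to runc.
func applyHostname(spec *specs.Spec, hostname string) {
	if hostname != "" {
		spec.Hostname = hostname
	}
}

func main() {
	spec := &specs.Spec{Version: specs.Version}
	applyHostname(spec, "build-host") // hypothetical value from --hostname
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```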
Daniel J Walsh
a72aaa2268 Update buildah spec file to match new version
Match the version 0.4 in the spec file and add the
comments that went into the Fedora release.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #269
Approved by: rhatdan
2017-09-22 14:30:46 +00:00
Daniel J Walsh
9cbccf88cf Merge pull request #268 from rhatdan/master
Bump to version 0.4
2017-09-22 06:10:31 -04:00
Daniel J Walsh
de0d8cbdcf Bump to version 0.4
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-22 10:05:03 +00:00
Daniel J Walsh
333899deb6 Merge pull request #264 from lsm5/fix-readme-ubuntu
update README.md to remove Debian mentions
2017-09-22 05:59:07 -04:00
TomSweeneyRedHat
1d0b48d7da Add default transport to push if not provided
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #260
Approved by: rhatdan
2017-09-21 21:02:23 +00:00
Lokesh Mandvekar
a2765bb1be update README.md to remove Debian mentions
ppa install steps mentioned in README.md work only for Ubuntu xenial and
zesty so far.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2017-09-20 02:47:36 -04:00
Daniel J Walsh
c19c8f9503 Merge pull request #261 from vbatts/example
examples: adding a basic lighttpd example
2017-09-18 10:42:04 -04:00
Vincent Batts
f17bfb937f examples: adding a basic lighttpd example
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2017-09-18 08:43:52 -04:00
Daniel J Walsh
4e4ceff6cf Merge pull request #244 from rhatdan/unbuntu
Add build information for Ubuntu
2017-09-08 07:56:11 -04:00
Daniel J Walsh
ef532adb2f Add build information for Ubuntu
We should document required packages for installing on Ubuntu and Debian
to match up with the use on Fedora.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-06 06:30:27 -04:00
Nalin Dahyabhai
9327431e97 Avoid trying to print a nil ImageReference
When we fail to pull an image, don't try to include the name of the
image that pullImage() returned in the error text - it will have
returned nil for the pulled reference in most cases.  Instead, use the
name of the image as it was given to us.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #255
Approved by: nalind
2017-08-31 21:24:35 +00:00
TomSweeneyRedHat
c9c735e20d Add authentication to commit and push
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #250
Approved by: rhatdan
2017-08-29 15:20:19 +00:00
Nalin Dahyabhai
f28dcb3751 Auto-set build tags for ostree and selinux
Try to use pkg-config to check for 'ostree-1' and 'libselinux'.

If ostree-1 is not found, use the containers_image_ostree_stub build tag
to not require it, at the cost of not being able to use or write images
to the 'ostree' transport.

If libselinux is found, build with the 'selinux' tag.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #252
Approved by: rhatdan
2017-08-29 13:22:53 +00:00
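The repository implements this probe with small shell scripts (ostree_tag.sh and selinux_tag.sh, visible in the Makefile diff further down); purely as an illustration of the same logic, a Go version might look like this:

```
package main

import (
	"fmt"
	"os/exec"
)

// buildTags asks pkg-config whether ostree-1 and libselinux are available and
// picks build tags accordingly. Illustration only; the real repo uses shell
// scripts invoked from the Makefile.
func buildTags() []string {
	var tags []string
	if err := exec.Command("pkg-config", "--exists", "ostree-1").Run(); err != nil {
		// Without ostree-1, stub out the ostree transport so it isn't required.
		tags = append(tags, "containers_image_ostree_stub")
	}
	if err := exec.Command("pkg-config", "--exists", "libselinux").Run(); err == nil {
		tags = append(tags, "selinux")
	}
	return tags
}

func main() {
	fmt.Println(buildTags())
}
```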
Daniel J Walsh
9e088bd41d Add information on transports to the buildah from man page
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Daniel J Walsh
52087ca1c5 Remove --transport flag
This is no simpler than putting the transport in the image name;
we should default to the registry specified in containers/image
and not override it. People are confused by this option, and I
see no value in it.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Nalin Dahyabhai
0de0d23df4 Run: don't complain about missing volume locations
Don't worry about not being able to populate temporary volumes using the
contents of the location in the image where they're expected to be
mounted if we fail to do so because that location doesn't exist.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #248
Approved by: rhatdan
2017-08-24 10:41:29 +00:00
TomSweeneyRedHat
498f0ae9d7 Add credentials to buildah from
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #204
Approved by: nalind
2017-08-22 18:55:38 +00:00
Daniel J Walsh
ee91e6b981 Remove export command
We have implemented most of this code in kpod export, and we now
have kpod import/load/save.  No reason to implement them in both
commands.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #245
Approved by: nalind
2017-08-17 19:40:47 +00:00
Nalin Dahyabhai
265d2da6cf Always free signature.PolicyContexts
Whenever we create a containers/image/signature.PolicyContext, make sure
we don't forget to destroy it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #231
Approved by: rhatdan
2017-08-14 12:02:07 +00:00
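A minimal sketch of the pattern the commit above enforces, using the containers/image import path of that era (github.com/containers/image/signature): create the PolicyContext, check the error, and immediately defer Destroy() so no early return can leak it. The surrounding function and policy path are hypothetical.

```
package main

import (
	"fmt"
	"os"

	"github.com/containers/image/signature"
)

func withPolicyContext(policyPath string) error {
	policy, err := signature.NewPolicyFromFile(policyPath)
	if err != nil {
		return err
	}
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		return err
	}
	// Destroy() releases the context; deferring it right after the error
	// check means no early return can leak it.
	defer func() {
		if err := policyContext.Destroy(); err != nil {
			fmt.Fprintf(os.Stderr, "error destroying policy context: %v\n", err)
		}
	}()

	// ... use policyContext for image copy/verification here ...
	return nil
}

func main() {
	if err := withPolicyContext("/etc/containers/policy.json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```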
Nalin Dahyabhai
8eb7d6d610 Run(): create the right working directory
When ensuring that the working directory exists before running a
command, make sure we create the location that we set in the
configuration file that we pass to runc.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #241
Approved by: rhatdan
2017-08-10 20:14:54 +00:00
Nalin Dahyabhai
94f2bf025a Replace --registry with --transport
Replace --registry command line flags with --transport.  For backward
compatibility, add Transport as an additional setting that we prepend to
the still-optional Registry setting if the Transport and image name
alone don't provide a parseable image reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
Nalin Dahyabhai
262b43a866 Improve "from" behavior with unnamed references
Fix our instantiation behavior when the source image reference is not a
named reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
Daniel J Walsh
5259a84b7a atomic transport is being deprecated, so we should not document it.
containers/image now fully supports pushing images and signatures to an
openshift/atomic registry using the docker:// transport.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #238
Approved by: rhatdan
2017-08-02 12:01:26 +00:00
TomSweeneyRedHat
bf83bc208d Add quiet description and touch ups
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #233
Approved by: rhatdan
2017-07-30 12:34:36 +00:00
Nalin Dahyabhai
e616dc116a Avoid parsing image metadata for dates and layers
Avoid parsing metadata that the image library keeps in order to find an
image's digest and creation date; instead, compute the digest from the
manifest, and read the creation date value by inspecting the image,
logging a debug-level diagnostic if it doesn't match the value that the
storage library has on record.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
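The "compute the digest from the manifest" part of the commit above boils down to hashing the raw manifest bytes, for example with github.com/opencontainers/go-digest; the snippet below is only an illustration with a placeholder manifest body.

```
package main

import (
	"fmt"

	digest "github.com/opencontainers/go-digest"
)

// manifestDigest hashes the raw manifest bytes directly instead of trusting
// separately stored metadata.
func manifestDigest(rawManifest []byte) digest.Digest {
	return digest.FromBytes(rawManifest) // sha256 by default
}

func main() {
	manifest := []byte(`{"schemaVersion": 2}`) // placeholder manifest body
	fmt.Println(manifestDigest(manifest))
}
```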
Nalin Dahyabhai
933c18f2ad Read the image's creation date from public API
Use the storage library's new public field for retrieving an image's
creation date.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
Nalin Dahyabhai
be5bcd549d Bump containers/storage and containers/image
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #227
Approved by: rhatdan
2017-07-28 12:10:46 +00:00
Jonathan Lebon
c845d7a5fe ci: use Fedora registry and mount repos
Use the official Fedora 26 image from the Fedora registry rather than
from the Docker Hub.

Also mount yum repos from the host. This will speed up provisioning
because PAPR injects mirror repos that are much closer and faster.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Jonathan Lebon
16d9d97d8c ci: rename files to the new PAPR name
Rename the YAML file and its auxiliary files to the newly supported
name.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Nalin Dahyabhai
8e36b22a71 Makefile: "clean" should also remove test helpers
Make "clean" also remove the imgtype helper.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #216
Approved by: rhatdan
2017-07-26 20:59:38 +00:00
Nalin Dahyabhai
98c4e0d970 Don't panic if an image's ID can't be parsed
Return a "doesn't match" result if an image's ID can't be turned into a
valid reference for any reason.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #217
Approved by: rhatdan
2017-07-26 20:35:47 +00:00
Nalin Dahyabhai
83fe25ca4e Turn on --enable-gc when running gometalinter
It looks like the metalinter is running out of memory while running
tests under PAPR, so give this a try.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #221
Approved by: rhatdan
2017-07-26 17:43:05 +00:00
Nalin Dahyabhai
b7e9966fb2 Make sure that we can build an RPM
Add a CI test that ensures that we can build an RPM package on the
current version (as of this writing, 26) of Fedora, using the .spec file
under contrib.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #208
Approved by: jlebon
2017-07-25 21:03:38 +00:00
Nalin Dahyabhai
8a3ccb53c4 rmi: handle truncated image IDs
Have storageImageID() use a lower-level image lookup to let it handle
truncated IDs correctly.  Wrap errors in getImage().  When reporting
that an image is in use, report its ID correctly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
Nalin Dahyabhai
3e9a075b48 Don't leak containers/image Image references
In-memory image objects created using an ImageReference's NewImage()
method need to be Close()d.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
Nalin Dahyabhai
95d9d22949 Prefer higher-level storage APIs
Prefer higher-level storage APIs (Store) over lower-level storage APIs
(LayerStore, ImageStore, and ContainerStore objects), so that we don't
bypass synchronization and locking.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
TomSweeneyRedHat
728f641179 Update README.md with buildah pronunciation
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #210
Approved by: rhatdan
2017-07-25 17:17:54 +00:00
Nalin Dahyabhai
8ce683f4fe Build our own libseccomp in Travis
The libseccomp2 in Travis (or rather, in the default repositories that
we have) is too old to support conditional filtering, so we need to
supply our own in order to not get "conditional filtering requires
libseccomp version >= 2.2.1" errors from runc.

That version also appears to be happy to translate the syscall name
_llseek from a name to a number that it doesn't recognize, triggering
"unrecognized syscall" errors.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
fd7762b7e2 Try to enable "run" tests in CI
Try to ensure that we have runc, so that we can test the "run" command
in CI.  In the absence of a compatible packaged version of runc, we may
have to build our own.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
9333e5369d Switch to running PAPR tests on Fedora 26
Run PAPR tests using Fedora 26 instead of Fedora 25.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
e92020a4db Keep the version in the .spec file current
Add a test to compare the version we claim to be with the version
recorded in the RPM .spec file.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
859 changed files with 95828 additions and 36126 deletions


@@ -15,16 +15,19 @@ dnf install -y \
findutils \
git \
glib2-devel \
gnupg \
golang \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
libselinux-devel \
libselinux-utils \
make \
ostree-devel \
which
# Red Hat CI adds a merge commit, for testing, which fails the
# PAPR adds a merge commit, for testing, which fails the
# short-commit-subject validation test, so tell git-validate.sh to only check
# up to, but not including, the merge commit.
export GITVALIDATE_TIP=$(cd $GOSRC; git log -2 --pretty='%H' | tail -n 1)
make -C $GOSRC install.tools all validate
$GOSRC/tests/test_runner.sh
make -C $GOSRC install.tools runc all validate test-unit test-integration TAGS="seccomp"

.papr.yml (new file)

@@ -0,0 +1,15 @@
branches:
- master
- auto
- try
host:
distro: fedora/26/atomic
required: true
tests:
# mount yum repos to inherit injected mirrors from PAPR
- docker run --net=host --privileged -v /etc/yum.repos.d:/etc/yum.repos.d.host:ro
-v $PWD:/code registry.fedoraproject.org/fedora:26 sh -c
"cp -fv /etc/yum.repos.d{.host/*.repo,} && /code/.papr.sh"


@@ -1,12 +0,0 @@
branches:
- master
- auto
- try
host:
distro: fedora/25/atomic
required: true
tests:
- docker run --privileged -v $PWD:/code fedora:25 /code/.redhat-ci.sh


@@ -1,14 +1,30 @@
language: go
go:
- 1.7
- 1.8
- tip
dist: trusty
sudo: required
go:
- 1.7
- 1.8
- 1.9.x
- tip
matrix:
# If the latest unstable development version of go fails, that's OK.
allow_failures:
- go: tip
# Don't hold on the tip tests to finish. Mark tests green if the
# stable versions pass.
fast_finish: true
services:
- docker
before_install:
- sudo add-apt-repository -y ppa:duggan/bats
- sudo apt-get -qq update
- sudo apt-get -qq install bats btrfs-tools git libdevmapper-dev libglib2.0-dev libgpgme11-dev
- sudo apt-get -qq install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libselinux1-dev
- sudo apt-get -qq remove libseccomp2
script:
- make install.tools all validate TAGS=containers_image_ostree_stub
- make install.tools install.libseccomp.sudo all runc validate TAGS="apparmor seccomp containers_image_ostree_stub"
- go test -c -tags "apparmor seccomp `./btrfs_tag.sh` `./libdm_tag.sh` `./ostree_tag.sh` `./selinux_tag.sh`" ./cmd/buildah
- tmp=`mktemp -d`; mkdir $tmp/root $tmp/runroot; sudo PATH="$PATH" ./buildah.test -test.v -root $tmp/root -runroot $tmp/runroot -storage-driver vfs -signature-policy `pwd`/tests/policy.json
- cd tests; sudo PATH="$PATH" ./test_runner.sh

CHANGELOG.md (new file)

@@ -0,0 +1,80 @@
# Changelog
## 0.5 - 2017-11-07
Add secrets patch to buildah
Add proper SELinux labeling to buildah run
Add tls-verify to bud command
Make filtering by date use the image's date
images: don't list unnamed images twice
Fix timeout issue
Add further tty verbiage to buildah run
Make inspect try an image on failure if type not specified
Add support for `buildah run --hostname`
Tons of bug fixes and code cleanup
## 0.4 - 2017-09-22
### Added
Update buildah spec file to match new version
Bump to version 0.4
Add default transport to push if not provided
Add authentication to commit and push
Remove --transport flag
Run: don't complain about missing volume locations
Add credentials to buildah from
Remove export command
Bump containers/storage and containers/image
## 0.3 - 2017-07-20
## 0.2 - 2017-07-18
### Added
Vendor in latest containers/image and containers/storage
Update image-spec and runtime-spec to v1.0.0
Add support for -- ending options parsing to buildah run
Add/Copy need to support glob syntax
Add flag to remove containers on commit
Add buildah export support
update 'buildah images' and 'buildah rmi' commands
buildah containers/image: Add JSON output option
Add 'buildah version' command
Handle "run" without an explicit command correctly
Ensure volume points get created, and with perms
Add a -a/--all option to "buildah containers"
## 0.1 - 2017-06-14
### Added
Vendor in latest container/storage container/image
Add a "push" command
Add an option to specify a Create date for images
Allow building a source image from another image
Improve buildah commit performance
Add a --volume flag to "buildah run"
Fix inspect/tag-by-truncated-image-ID
Include image-spec and runtime-spec versions
buildah mount command should list mounts when no arguments are given.
Make the output image format selectable
commit images in multiple formats
Also import configurations from V2S1 images
Add a "tag" command
Add an "inspect" command
Update reference comments for docker types origins
Improve configuration preservation in imagebuildah
Report pull/commit progress by default
Contribute buildah.spec
Remove --mount from buildah-from
Add a build-using-dockerfile command (alias: bud)
Create manpages for the buildah project
Add installation for buildah and bash completions
Rename "list"/"delete" to "containers"/"rm"
Switch `buildah list quiet` option to only list container id's
buildah delete should be able to delete multiple containers
Correctly set tags on the names of pulled images
Don't mix "config" in with "run" and "commit"
Add a "list" command, for listing active builders
Add "add" and "copy" commands
Add a "run" command, using runc
Massive refactoring
Make a note to distinguish compression of layers
## 0.0 - 2017-01-26
### Added
Initial version, needs work

CONTRIBUTING.md (new file)

@@ -0,0 +1,142 @@
# Contributing to Buildah
We'd love to have you join the community! Below summarizes the processes
that we follow.
## Topics
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
* [Becoming a Maintainer](#becoming-a-maintainer)
## Reporting Issues
Before reporting an issue, check our backlog of
[open issues](https://github.com/projectatomic/buildah/issues)
to see if someone else has already reported it. If so, feel free to add
your scenario, or additional information, to the discussion. Or simply
"subscribe" to it to be notified when it is updated.
If you find a new issue with the project we'd love to hear about it! The most
important aspect of a bug report is that it includes enough information for
us to reproduce it. So, please include as much detail as possible and try
to remove the extra stuff that doesn't really relate to the issue itself.
The easier it is for us to reproduce it, the faster it'll be fixed!
Please don't include any private/sensitive information in your issue!
## Submitting Pull Requests
No Pull Request (PR) is too small! Typos, additional comments in the code,
new testcases, bug fixes, new features, more documentation, ... it's all
welcome!
While bug fixes can first be identified via an "issue", that is not required.
It's ok to just open up a PR with the fix, but make sure you include the same
information you would have included in an issue - like how to reproduce it.
PRs for new features should include some background on what use cases the
new code is trying to address. When possible and when it makes sense, try to break-up
larger PRs into smaller ones - it's easier to review smaller
code changes. But only if those smaller ones make sense as stand-alone PRs.
Regardless of the type of PR, all PRs should include:
* well documented code changes
* additional testcases. Ideally, they should fail w/o your code change applied
* documentation changes
Squash your commits into logical pieces of work that might want to be reviewed
separate from the rest of the PRs. But, squashing down to just one commit is ok
too since in the end the entire PR will be reviewed anyway. When in doubt,
squash.
PRs that fix issues should include a reference like `Closes #XXXX` in the
commit message so that github will automatically close the referenced issue
when the PR is merged.
<!--
All PRs require at least two LGTMs (Looks Good To Me) from maintainers.
-->
### Sign your PRs
The sign-off is a line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
## Communications
For general questions, or discussions, please use the
IRC group on `irc.freenode.net` called `cri-o`
that has been setup.
For discussions around issues/bugs and features, you can use the github
[issues](https://github.com/projectatomic/buildah/issues)
and
[PRs](https://github.com/projectatomic/buildah/pulls)
tracking system.
<!--
## Becoming a Maintainer
To become a maintainer you must first be nominated by an existing maintainer.
If a majority (>50%) of maintainers agree then the proposal is adopted and
you will be added to the list.
Removing a maintainer requires at least 75% of the remaining maintainers
approval, or if the person requests to be removed then it is automatic.
Normally, a maintainer will only be removed if they are considered to be
inactive for a long period of time or are viewed as disruptive to the community.
The current list of maintainers can be found in the
[MAINTAINERS](MAINTAINERS) file.
-->


@@ -1,25 +1,30 @@
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh)
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh) $(shell ./ostree_tag.sh) $(shell ./selinux_tag.sh)
TAGS := seccomp
PREFIX := /usr/local
BINDIR := $(PREFIX)/bin
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
BUILDFLAGS := -tags "$(AUTOTAGS) $(TAGS)"
GO := go
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_INFO := $(shell date +%s)
RUNC_COMMIT := c5ec25487693612aed95673800863e134785f946
LIBSECCOMP_COMMIT := release-2.3
LDFLAGS := -ldflags '-X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO}'
all: buildah imgtype docs
buildah: *.go imagebuildah/*.go cmd/buildah/*.go docker/*.go util/*.go
go build $(LDFLAGS) -o buildah $(BUILDFLAGS) ./cmd/buildah
$(GO) build $(LDFLAGS) -o buildah $(BUILDFLAGS) ./cmd/buildah
imgtype: *.go docker/*.go util/*.go tests/imgtype.go
go build $(LDFLAGS) -o imgtype $(BUILDFLAGS) ./tests/imgtype.go
$(GO) build $(LDFLAGS) -o imgtype $(BUILDFLAGS) ./tests/imgtype.go
.PHONY: clean
clean:
$(RM) buildah
$(RM) buildah imgtype
$(MAKE) -C docs clean
.PHONY: docs
@@ -46,11 +51,24 @@ validate:
.PHONY: install.tools
install.tools:
go get -u $(BUILDFLAGS) github.com/cpuguy83/go-md2man
go get -u $(BUILDFLAGS) github.com/vbatts/git-validation
go get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
$(GO) get -u $(BUILDFLAGS) github.com/cpuguy83/go-md2man
$(GO) get -u $(BUILDFLAGS) github.com/vbatts/git-validation
$(GO) get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 -i
.PHONY: runc
runc: gopath
rm -rf ../../opencontainers/runc
git clone https://github.com/opencontainers/runc ../../opencontainers/runc
cd ../../opencontainers/runc && git checkout $(RUNC_COMMIT) && $(GO) build -tags "$(AUTOTAGS) $(TAGS)"
ln -sf ../../opencontainers/runc/runc
.PHONY: install.libseccomp.sudo
install.libseccomp.sudo: gopath
rm -rf ../../seccomp/libseccomp
git clone https://github.com/seccomp/libseccomp ../../seccomp/libseccomp
cd ../../seccomp/libseccomp && git checkout $(LIBSECCOMP_COMMIT) && ./autogen.sh && ./configure --prefix=/usr && make all && sudo make install
.PHONY: install
install:
install -D -m0755 buildah $(DESTDIR)/$(BINDIR)/buildah
@@ -59,3 +77,13 @@ install:
.PHONY: install.completions
install.completions:
install -m 644 -D contrib/completions/bash/buildah $(DESTDIR)/${BASHINSTALLDIR}/buildah
.PHONY: test-integration
test-integration:
cd tests; ./test_runner.sh
.PHONY: test-unit
test-unit:
tmp=$(shell mktemp -d) ; \
mkdir -p $$tmp/root $$tmp/runroot; \
$(GO) test -v -tags "$(AUTOTAGS) $(TAGS)" ./cmd/buildah -args -root $$tmp/root -runroot $$tmp/runroot -storage-driver vfs -signature-policy $(shell pwd)/tests/policy.json


@@ -1,4 +1,6 @@
buildah - a tool which facilitates building OCI container images
![buildah logo](https://cdn.rawgit.com/projectatomic/buildah/master/logos/buildah.svg)
# [Buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool which facilitates building OCI container images
================================================================
[![Go Report Card](https://goreportcard.com/badge/github.com/projectatomic/buildah)](https://goreportcard.com/report/github.com/projectatomic/buildah)
@@ -6,7 +8,7 @@ buildah - a tool which facilitates building OCI container images
Note: this package is in alpha, but is close to being feature-complete.
The buildah package provides a command line tool which can be used to
The Buildah package provides a command line tool which can be used to
* create a working container, either from scratch or using an image as a starting point
* create an image, either from a working container or via the instructions in a Dockerfile
* images can be built in either the OCI image format or the traditional upstream docker image format
@@ -15,9 +17,11 @@ The buildah package provides a command line tool which can be used to
* use the updated contents of a container's root filesystem as a filesystem layer to create a new image
* delete a working container or an image
**[Changelog](CHANGELOG.md)**
**Installation notes**
Prior to installing buildah, install the following packages on your linux distro:
Prior to installing Buildah, install the following packages on your linux distro:
* make
* golang (Requires version 1.8.1 or higher.)
* bats
@@ -30,7 +34,7 @@ Prior to installing buildah, install the following packages on your linux distro
* glib2-devel
* libassuan-devel
* ostree-devel
* runc
* runc (Requires version 1.0 RC4 or higher.)
* skopeo-containers
In Fedora, you can use this command:
@@ -53,7 +57,8 @@ In Fedora, you can use this command:
skopeo-containers
```
Then to install buildah follow the steps in this example:
Then to install Buildah on Fedora follow the steps in this example:
```
mkdir ~/buildah
@@ -66,9 +71,56 @@ Then to install buildah follow the steps in this example:
buildah --help
```
buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
In RHEL 7, ensure that you are subscribed to `rhel-7-server-rpms`,
`rhel-7-server-extras-rpms`, and `rhel-7-server-optional-rpms`, then
run this command:
```
yum -y install \
make \
golang \
bats \
btrfs-progs-devel \
device-mapper-devel \
glib2-devel \
gpgme-devel \
libassuan-devel \
ostree-devel \
git \
bzip2 \
go-md2man \
runc \
skopeo-containers
```
The build steps for Buildah on RHEL are the same as Fedora, above.
In Ubuntu zesty and xenial, you can use this command:
```
apt-get -y install software-properties-common
add-apt-repository -y ppa:alexlarsson/flatpak
add-apt-repository -y ppa:gophers/archive
apt-add-repository -y ppa:projectatomic/ppa
apt-get -y -qq update
apt-get -y install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man
apt-get -y install golang-1.8
```
Then to install Buildah on Ubuntu follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
PATH=/usr/lib/go-1.8/bin:$PATH make runc all TAGS="apparmor seccomp"
make install
buildah --help
```
Buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for buildah to call for those cases.
[runc](https://github.com/opencontainers/runc) for Buildah to call for those cases.
## Commands
| Command | Description |
@@ -79,7 +131,6 @@ encounters a `RUN` instruction, so you'll also need to build and install a compa
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-export(1)](/docs/buildah-export.md) | Export the contents of a container's filesystem as a tar archive |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |

add.go

@@ -11,10 +11,9 @@ import (
"syscall"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// addURL copies the contents of the source URL to the destination. This is
@@ -144,7 +143,7 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
return errors.Wrapf(err, "error ensuring directory %q exists", d)
}
logrus.Debugf("copying %q to %q", gsrc+string(os.PathSeparator)+"*", d+string(os.PathSeparator)+"*")
if err := chrootarchive.CopyWithTar(gsrc, d); err != nil {
if err := copyWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
continue
@@ -159,14 +158,14 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
}
// Copy the file, preserving attributes.
logrus.Debugf("copying %q to %q", gsrc, d)
if err := chrootarchive.CopyFileWithTar(gsrc, d); err != nil {
if err := copyFileWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
continue
}
// We're extracting an archive into the destination directory.
logrus.Debugf("extracting contents of %q into %q", gsrc, dest)
if err := chrootarchive.UntarPath(gsrc, dest); err != nil {
if err := untarPath(gsrc, dest); err != nil {
return errors.Wrapf(err, "error extracting %q into %q", gsrc, dest)
}
}


@@ -7,6 +7,7 @@ import (
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/ioutils"
"github.com/opencontainers/image-spec/specs-go/v1"
@@ -19,7 +20,7 @@ const (
// identify working containers.
Package = "buildah"
// Version for the Package
Version = "0.3"
Version = "0.6"
// The value we use to identify what type of information, currently a
// serialized Builder structure, we are using as per-container state.
// This should only be changed when we make incompatible changes to
@@ -76,6 +77,10 @@ type Builder struct {
// MountPoint is the last location where the container's root
// filesystem was mounted. It should not be modified.
MountPoint string `json:"mountpoint,omitempty"`
// ProcessLabel is the SELinux process label associated with the container
ProcessLabel string `json:"process-label,omitempty"`
// MountLabel is the SELinux mount label associated with the container
MountLabel string `json:"mount-label,omitempty"`
// ImageAnnotations is a set of key-value pairs which is stored in the
// image's manifest.
@@ -86,6 +91,8 @@ type Builder struct {
// Image metadata and runtime settings, in multiple formats.
OCIv1 v1.Image `json:"ociv1,omitempty"`
Docker docker.V2Image `json:"docker,omitempty"`
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string `json:"defaultMountsFilePath,omitempty"`
}
// BuilderOptions are used to initialize a new Builder.
@@ -103,8 +110,13 @@ type BuilderOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// Mount signals to NewBuilder() that the container should be mounted
// immediately.
Mount bool
@@ -117,6 +129,11 @@ type BuilderOptions struct {
// ReportWriter is an io.Writer which will be used to log the reading
// of the source image from a registry, if we end up pulling the image.
ReportWriter io.Writer
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string
}
// ImportOptions are used to initialize a Builder from an existing container


@@ -5,22 +5,25 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/imagebuildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
budFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "refrain from announcing build instructions and image read/write progress",
cli.StringSliceFlag{
Name: "build-arg",
Usage: "`argument=value` to supply to the builder",
},
cli.StringSliceFlag{
Name: "file, f",
Usage: "`pathname or URL` of a Dockerfile",
},
cli.StringFlag{
Name: "registry",
Usage: "prefix to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
Name: "format",
Usage: "`format` of the built image's manifest and metadata",
},
cli.BoolTFlag{
Name: "pull",
@@ -30,13 +33,9 @@ var (
Name: "pull-always",
Usage: "pull the image, even if a version is present",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringSliceFlag{
Name: "build-arg",
Usage: "`argument=value` to supply to the builder",
cli.BoolFlag{
Name: "quiet, q",
Usage: "refrain from announcing build instructions and image read/write progress",
},
cli.StringFlag{
Name: "runtime",
@@ -48,18 +47,19 @@ var (
Usage: "add global flags for the container runtime",
},
cli.StringFlag{
Name: "format",
Usage: "`format` of the built image's manifest and metadata",
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringSliceFlag{
Name: "tag, t",
Usage: "`tag` to apply to the built image",
},
cli.StringSliceFlag{
Name: "file, f",
Usage: "`pathname or URL` of a Dockerfile",
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
}
budDescription = "Builds an OCI image using instructions in one or more Dockerfiles."
budCommand = cli.Command{
Name: "build-using-dockerfile",
@@ -82,39 +82,14 @@ func budCmd(c *cli.Context) error {
tags = tags[1:]
}
}
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
}
pullAlways := false
if c.IsSet("pull-always") {
pull = c.Bool("pull-always")
}
runtimeFlags := []string{}
if c.IsSet("runtime-flag") {
runtimeFlags = c.StringSlice("runtime-flag")
}
runtime := ""
if c.IsSet("runtime") {
runtime = c.String("runtime")
}
pullPolicy := imagebuildah.PullNever
if pull {
if c.BoolT("pull") {
pullPolicy = imagebuildah.PullIfMissing
}
if pullAlways {
if c.Bool("pull-always") {
pullPolicy = imagebuildah.PullAlways
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
args := make(map[string]string)
if c.IsSet("build-arg") {
for _, arg := range c.StringSlice("build-arg") {
@@ -126,14 +101,8 @@ func budCmd(c *cli.Context) error {
}
}
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
dockerfiles := []string{}
if c.IsSet("file") || c.IsSet("f") {
dockerfiles = c.StringSlice("file")
}
dockerfiles := c.StringSlice("file")
format := "oci"
if c.IsSet("format") {
format = strings.ToLower(c.String("format"))
@@ -199,6 +168,9 @@ func budCmd(c *cli.Context) error {
if len(dockerfiles) == 0 {
dockerfiles = append(dockerfiles, filepath.Join(contextDir, "Dockerfile"))
}
if err := validateFlags(c, budFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
@@ -208,18 +180,18 @@ func budCmd(c *cli.Context) error {
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Registry: registry,
Compression: imagebuildah.Gzip,
Quiet: quiet,
SignaturePolicyPath: signaturePolicy,
Quiet: c.Bool("quiet"),
SignaturePolicyPath: c.String("signature-policy"),
SkipTLSVerify: !c.Bool("tls-verify"),
Args: args,
Output: output,
AdditionalTags: tags,
Runtime: runtime,
RuntimeArgs: runtimeFlags,
Runtime: c.String("runtime"),
RuntimeArgs: c.StringSlice("runtime-flag"),
OutputFormat: format,
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
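
Note on the budCmd hunk above: the boolean --pull and --pull-always flags now collapse directly into a single pull policy, with --pull-always taking precedence over --pull (which defaults to true). A minimal, self-contained sketch of that precedence; the PullPolicy constants below are local stand-ins, not the imagebuildah ones:

package main

import "fmt"

type PullPolicy int

const (
	PullNever PullPolicy = iota
	PullIfMissing
	PullAlways
)

// resolvePullPolicy mirrors the precedence in the hunk above: --pull-always
// wins outright, otherwise --pull selects pull-if-missing, and turning both
// off leaves pull-never.
func resolvePullPolicy(pull, pullAlways bool) PullPolicy {
	policy := PullNever
	if pull {
		policy = PullIfMissing
	}
	if pullAlways {
		policy = PullAlways
	}
	return policy
}

func main() {
	fmt.Println(resolvePullPolicy(true, false)) // 1 (PullIfMissing)
	fmt.Println(resolvePullPolicy(false, true)) // 2 (PullAlways)
}
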

View File

@@ -15,31 +15,46 @@ import (
var (
commitFlags = []cli.Flag{
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringFlag{
Name: "format, f",
Usage: "`format` of the image manifest and metadata",
Value: "oci",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when writing images",
},
cli.StringFlag{
Name: "reference-time",
Usage: "set the timestamp on the image to match the named `file`",
Hidden: true,
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when writing images",
},
cli.BoolFlag{
Name: "rm",
Usage: "remove the container and its content after committing it to an image. Default leaves the container and its content in place.",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
}
commitDescription = "Writes a new image using the container's read-write layer and, if it is based\n on an image, the layers of that image"
commitCommand = cli.Command{
@@ -66,22 +81,13 @@ func commitCmd(c *cli.Context) error {
return errors.Errorf("too many arguments specified")
}
image := args[0]
if err := validateFlags(c, commitFlags); err != nil {
return err
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
compress := archive.Uncompressed
if !c.IsSet("disable-compression") || !c.Bool("disable-compression") {
compress = archive.Gzip
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
format := "oci"
if c.IsSet("format") {
format = c.String("format")
compress := archive.Gzip
if c.Bool("disable-compression") {
compress = archive.Uncompressed
}
timestamp := time.Now().UTC()
if c.IsSet("reference-time") {
@@ -92,6 +98,8 @@ func commitCmd(c *cli.Context) error {
}
timestamp = finfo.ModTime().UTC()
}
format := c.String("format")
if strings.HasPrefix(strings.ToLower(format), "oci") {
format = buildah.OCIv1ImageManifest
} else if strings.HasPrefix(strings.ToLower(format), "docker") {
@@ -118,13 +126,19 @@ func commitCmd(c *cli.Context) error {
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
options := buildah.CommitOptions{
PreferredManifestType: format,
Compression: compress,
SignaturePolicyPath: signaturePolicy,
SignaturePolicyPath: c.String("signature-policy"),
HistoryTimestamp: &timestamp,
SystemContext: systemContext,
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
err = builder.Commit(dest, options)
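
The destination handling visible in the commit hunk above retries parsing with a "docker://" prefix when the given spec has no transport. A rough, standalone sketch of that fallback, with parseName as a hypothetical stand-in for alltransports.ParseImageName:

package main

import (
	"fmt"
	"strings"
)

// parseName stands in for alltransports.ParseImageName: it only accepts
// specs that carry an explicit transport prefix.
func parseName(spec string) (string, error) {
	if !strings.Contains(spec, "://") {
		return "", fmt.Errorf("no transport in %q", spec)
	}
	return spec, nil
}

// resolveDestination retries with "docker://" prepended only when the caller
// did not supply a transport themselves.
func resolveDestination(destSpec string) (string, error) {
	dest, err := parseName(destSpec)
	if err != nil {
		if strings.Contains(destSpec, "://") {
			return "", err // a transport was given but still failed to parse
		}
		return parseName("docker://" + destSpec)
	}
	return dest, nil
}

func main() {
	d, _ := resolveDestination("registry.example.com/repo:tag")
	fmt.Println(d) // docker://registry.example.com/repo:tag
}
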

View File

@@ -1,28 +1,22 @@
package main
import (
"encoding/json"
"os"
"reflect"
"regexp"
"strings"
"syscall"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
type imageMetadata struct {
Tag string `json:"tag"`
CreatedTime time.Time `json:"created-time"`
ID string `json:"id"`
Blobs []types.BlobInfo `json:"blob-list"`
Layers map[string][]string `json:"layers"`
SignatureSizes []string `json:"signature-sizes"`
}
var needToShutdownStore = false
func getStore(c *cli.Context) (storage.Store, error) {
@@ -85,30 +79,120 @@ func openImage(store storage.Store, name string) (builder *buildah.Builder, err
return builder, nil
}
func parseMetadata(image storage.Image) (imageMetadata, error) {
var im imageMetadata
dec := json.NewDecoder(strings.NewReader(image.Metadata))
if err := dec.Decode(&im); err != nil {
return imageMetadata{}, err
}
return im, nil
}
func getSize(image storage.Image, store storage.Store) (int64, error) {
func getDateAndDigestAndSize(image storage.Image, store storage.Store) (time.Time, string, int64, error) {
created := time.Time{}
is.Transport.SetStore(store)
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return -1, err
return created, "", -1, err
}
img, err := storeRef.NewImage(nil)
if err != nil {
return -1, err
return created, "", -1, err
}
imgSize, err := img.Size()
if err != nil {
return -1, err
defer img.Close()
imgSize, sizeErr := img.Size()
if sizeErr != nil {
imgSize = -1
}
return imgSize, nil
manifest, _, manifestErr := img.Manifest()
manifestDigest := ""
if manifestErr == nil && len(manifest) > 0 {
manifestDigest = digest.Canonical.FromBytes(manifest).String()
}
inspectInfo, inspectErr := img.Inspect()
if inspectErr == nil && inspectInfo != nil {
created = inspectInfo.Created
}
if sizeErr != nil {
err = sizeErr
} else if manifestErr != nil {
err = manifestErr
} else if inspectErr != nil {
err = inspectErr
}
return created, manifestDigest, imgSize, err
}
// systemContextFromOptions returns a SystemContext populated with values
// per the input parameters provided by the caller, for use in authentication.
func systemContextFromOptions(c *cli.Context) (*types.SystemContext, error) {
ctx := &types.SystemContext{
DockerCertPath: c.String("cert-dir"),
}
if c.IsSet("tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT("tls-verify")
}
if c.IsSet("creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String("creds"))
if err != nil {
return nil, err
}
}
if c.IsSet("signature-policy") {
ctx.SignaturePolicyPath = c.String("signature-policy")
}
return ctx, nil
}
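
systemContextFromOptions maps a handful of shared CLI flags onto a containers/image SystemContext. A minimal sketch of that mapping using a local stand-in struct rather than the real types.SystemContext, with credential handling omitted; note that --tls-verify only flips the "insecure" field when the flag was set explicitly:

package main

import "fmt"

// systemContext is a stand-in for types.SystemContext, holding only the
// fields touched by the function above.
type systemContext struct {
	DockerCertPath              string
	DockerInsecureSkipTLSVerify bool
	SignaturePolicyPath         string
}

// fromFlags sketches the mapping: --cert-dir and --signature-policy are
// copied through, and --tls-verify is inverted into the skip-verify field
// only when it was given on the command line.
func fromFlags(certDir string, tlsVerifySet, tlsVerify bool, policy string) systemContext {
	ctx := systemContext{DockerCertPath: certDir, SignaturePolicyPath: policy}
	if tlsVerifySet {
		ctx.DockerInsecureSkipTLSVerify = !tlsVerify
	}
	return ctx
}

func main() {
	fmt.Printf("%+v\n", fromFlags("/etc/docker/certs.d", true, false, ""))
}
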
func parseCreds(creds string) (string, string, error) {
if creds == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "credentials can't be empty")
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
}
if up[0] == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "username can't be empty")
}
return up[0], up[1], nil
}
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
}
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}
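
parseCreds splits a `username[:password]` string at the first colon and rejects an empty username. A standalone sketch of the same splitting rules (splitCreds is hypothetical, not part of the codebase):

package main

import (
	"fmt"
	"strings"
)

// splitCreds: "user:pass" yields both parts, a bare "user" yields an empty
// password, and an empty username (":pass" or "") is rejected.
func splitCreds(creds string) (string, string, error) {
	if creds == "" {
		return "", "", fmt.Errorf("credentials can't be empty")
	}
	up := strings.SplitN(creds, ":", 2)
	if len(up) == 1 {
		return up[0], "", nil
	}
	if up[0] == "" {
		return "", "", fmt.Errorf("username can't be empty")
	}
	return up[0], up[1], nil
}

func main() {
	user, pass, _ := splitCreds("alice:s3cret")
	fmt.Println(user, pass) // alice s3cret
}
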
// validateFlags searches for StringFlags or StringSlice flags that never had
// a value set. This commonly occurs when the CLI mistakenly takes the next
// option and uses it as a value.
func validateFlags(c *cli.Context, flags []cli.Flag) error {
re, err := regexp.Compile("^-.+")
if err != nil {
return errors.Wrap(err, "compiling regex failed")
}
for _, flag := range flags {
switch reflect.TypeOf(flag).String() {
case "cli.StringSliceFlag":
{
f := flag.(cli.StringSliceFlag)
name := strings.Split(f.Name, ",")
val := c.StringSlice(name[0])
for _, v := range val {
if ok := re.MatchString(v); ok {
return errors.Errorf("option --%s requires a value", name[0])
}
}
}
case "cli.StringFlag":
{
f := flag.(cli.StringFlag)
name := strings.Split(f.Name, ",")
val := c.String(name[0])
if ok := re.MatchString(val); ok {
return errors.Errorf("option --%s requires a value", name[0])
}
}
}
}
return nil
}
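
validateFlags guards against a string flag silently swallowing the next option (for example, `--signature-policy --quiet` would leave "--quiet" as the signature-policy value). A minimal sketch of the core check it applies to each string and string-slice flag:

package main

import (
	"fmt"
	"regexp"
)

// looksLikeOption matches values that begin with "-" followed by anything,
// which is the tell-tale sign that a flag consumed the next option instead
// of a real value.
var looksLikeOption = regexp.MustCompile("^-.+")

func missingValue(value string) bool {
	return looksLikeOption.MatchString(value)
}

func main() {
	fmt.Println(missingValue("--quiet"))                      // true: flag ate the next option
	fmt.Println(missingValue("/etc/containers/policy.json")) // false: a normal value
}
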

View File

@@ -1,25 +1,59 @@
package main
import (
"flag"
"os"
"os/user"
"testing"
"flag"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
signaturePolicyPath = ""
storeOptions = storage.DefaultStoreOptions
)
func TestMain(m *testing.M) {
flag.StringVar(&signaturePolicyPath, "signature-policy", "", "pathname of signature policy file (not usually used)")
options := storage.StoreOptions{}
debug := false
flag.StringVar(&options.GraphRoot, "root", "", "storage root dir")
flag.StringVar(&options.RunRoot, "runroot", "", "storage state dir")
flag.StringVar(&options.GraphDriverName, "storage-driver", "", "storage driver")
flag.BoolVar(&debug, "debug", false, "turn on debug logging")
flag.Parse()
if options.GraphRoot != "" || options.RunRoot != "" || options.GraphDriverName != "" {
storeOptions = options
}
if buildah.InitReexec() {
return
}
logrus.SetLevel(logrus.ErrorLevel)
if debug {
logrus.SetLevel(logrus.DebugLevel)
}
os.Exit(m.Run())
}
func TestGetStore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
set := flag.NewFlagSet("test", 0)
globalSet := flag.NewFlagSet("test", 0)
globalSet.String("root", "", "path to the root directory in which data, including images, is stored")
globalSet.String("root", "", "path to the directory in which data, including images, is stored")
globalSet.String("runroot", "", "path to the directory in which state is stored")
globalSet.String("storage-driver", "", "storage driver")
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "imagesCommand"}
globalCtx.GlobalSet("root", storeOptions.GraphRoot)
globalCtx.GlobalSet("runroot", storeOptions.RunRoot)
globalCtx.GlobalSet("storage-driver", storeOptions.GraphDriverName)
command := cli.Command{Name: "TestGetStore"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
@@ -29,47 +63,29 @@ func TestGetStore(t *testing.T) {
}
}
func TestParseMetadata(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
} else if len(images) == 0 {
t.Fatalf("no images with metadata to parse")
}
_, err = parseMetadata(images[0])
if err != nil {
t.Error(err)
}
}
func TestGetSize(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
_, err = getSize(images[0], store)
_, _, _, err = getDateAndDigestAndSize(images[0], store)
if err != nil {
t.Error(err)
}
@@ -84,16 +100,24 @@ func failTestIfNotRoot(t *testing.T) {
}
}
func pullTestImage(imageName string) error {
set := flag.NewFlagSet("test", 0)
set.Bool("pull", true, "pull the image if not present")
globalSet := flag.NewFlagSet("globaltest", 0)
globalCtx := cli.NewContext(nil, globalSet, nil)
command := cli.Command{Name: "imagesCommand"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
c.Set("pull", "true")
c.Args = append(c.Args, imageName)
func pullTestImage(t *testing.T, imageName string) (string, error) {
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
}
options := buildah.BuilderOptions{
FromImage: imageName,
SignaturePolicyPath: signaturePolicyPath,
}
return fromCommand(c)
b, err := buildah.NewBuilder(store, options)
if err != nil {
t.Fatal(err)
}
id := b.FromImageID
err = b.Delete()
if err != nil {
t.Fatal(err)
}
return id, nil
}

View File

@@ -3,10 +3,10 @@ package main
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/mattn/go-shellwords"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -18,58 +18,58 @@ const (
var (
configFlags = []cli.Flag{
cli.StringFlag{
Name: "author",
Usage: "image author contact `information`",
},
cli.StringFlag{
Name: "created-by",
Usage: "`description` of how the image was created",
Value: DefaultCreatedBy,
cli.StringSliceFlag{
Name: "annotation, a",
Usage: "add `annotation` e.g. annotation=value, for the target image",
},
cli.StringFlag{
Name: "arch",
Usage: "`architecture` of the target image",
Usage: "set `architecture` of the target image",
},
cli.StringFlag{
Name: "os",
Usage: "`operating system` of the target image",
},
cli.StringFlag{
Name: "user, u",
Usage: "`user` to run containers based on image as",
},
cli.StringSliceFlag{
Name: "port, p",
Usage: "`port` to expose when running containers based on image",
},
cli.StringSliceFlag{
Name: "env, e",
Usage: "`environment variable` to set when running containers based on image",
},
cli.StringFlag{
Name: "entrypoint",
Usage: "`entry point` for containers based on image",
Name: "author",
Usage: "set image author contact `information`",
},
cli.StringFlag{
Name: "cmd",
Usage: "`command` for containers based on image",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "`volume` to create for containers based on image",
Usage: "sets the default `command` to run for containers based on the image",
},
cli.StringFlag{
Name: "workingdir",
Usage: "working `directory` for containers based on image",
Name: "created-by",
Usage: "add `description` of how the image was created",
Value: DefaultCreatedBy,
},
cli.StringFlag{
Name: "entrypoint",
Usage: "set `entry point` for containers based on image",
},
cli.StringSliceFlag{
Name: "env, e",
Usage: "add `environment variable` to be set when running containers based on image",
},
cli.StringSliceFlag{
Name: "label, l",
Usage: "image configuration `label` e.g. label=value",
Usage: "add image configuration `label` e.g. label=value",
},
cli.StringFlag{
Name: "os",
Usage: "set `operating system` of the target image",
},
cli.StringSliceFlag{
Name: "annotation, a",
Usage: "`annotation` e.g. annotation=value, for the target image",
Name: "port, p",
Usage: "add `port` to expose when running containers based on image",
},
cli.StringFlag{
Name: "user, u",
Usage: "set default `user` to run inside containers based on image",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "add default `volume` path to be created for containers based on image",
},
cli.StringFlag{
Name: "workingdir",
Usage: "set working `directory` for containers based on image",
},
}
configDescription = "Modifies the configuration values which will be saved to the image"
@@ -171,6 +171,9 @@ func configCmd(c *cli.Context) error {
return errors.Errorf("too many arguments specified")
}
name := args[0]
if err := validateFlags(c, configFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {

View File

@@ -20,8 +20,12 @@ type jsonContainer struct {
var (
containersFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only container IDs",
Name: "all, a",
Usage: "also list non-buildah containers",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "noheading, n",
@@ -32,12 +36,8 @@ var (
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "all, a",
Usage: "also list non-buildah containers",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
Name: "quiet, q",
Usage: "display only container IDs",
},
}
containersDescription = "Lists containers which appear to be " + buildah.Package + " working containers, their\n names and IDs, and the names and IDs of the images from which they were\n initialized"
@@ -52,39 +52,26 @@ var (
)
func containersCmd(c *cli.Context) error {
if err := validateFlags(c, containersFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
noheading := false
if c.IsSet("noheading") {
noheading = c.Bool("noheading")
}
truncate := true
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
}
all := false
if c.IsSet("all") {
all = c.Bool("all")
}
jsonOut := false
quiet := c.Bool("quiet")
truncate := !c.Bool("notruncate")
JSONContainers := []jsonContainer{}
if c.IsSet("json") {
jsonOut = c.Bool("json")
}
jsonOut := c.Bool("json")
list := func(n int, containerID, imageID, image, container string, isBuilder bool) {
if jsonOut {
JSONContainers = append(JSONContainers, jsonContainer{ID: containerID, Builder: isBuilder, ImageID: imageID, ImageName: image, ContainerName: container})
return
}
if n == 0 && !noheading && !quiet {
if n == 0 && !c.Bool("noheading") && !quiet {
if truncate {
fmt.Printf("%-12s %-8s %-12s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
@@ -125,7 +112,7 @@ func containersCmd(c *cli.Context) error {
if err != nil {
return errors.Wrapf(err, "error reading build containers")
}
if !all {
if !c.Bool("all") {
for i, builder := range builders {
image := imageNameForID(builder.FromImageID)
list(i, builder.ContainerID, builder.FromImageID, image, builder.Container, true)

View File

@@ -1,87 +0,0 @@
package main
import (
"fmt"
"io"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
var (
exportFlags = []cli.Flag{
cli.StringFlag{
Name: "output, o",
Usage: "write to a file, instead of STDOUT",
},
}
exportCommand = cli.Command{
Name: "export",
Usage: "Export container's filesystem contents as a tar archive",
Description: `This command exports the full or shortened container ID or container name to
STDOUT and should be redirected to a tar file.
`,
Flags: exportFlags,
Action: exportCmd,
ArgsUsage: "CONTAINER",
}
)
func exportCmd(c *cli.Context) error {
var builder *buildah.Builder
args := c.Args()
if len(args) == 0 {
return errors.Errorf("container name must be specified")
}
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
name := args[0]
store, err := getStore(c)
if err != nil {
return err
}
builder, err = openBuilder(store, name)
if err != nil {
return errors.Wrapf(err, "error reading build container %q", name)
}
mountPoint, err := builder.Mount("")
if err != nil {
return errors.Wrapf(err, "error mounting %q container %q", name, builder.Container)
}
defer func() {
if err := builder.Unmount(); err != nil {
fmt.Printf("Failed to umount %q: %v\n", builder.Container, err)
}
}()
input, err := archive.Tar(mountPoint, 0)
if err != nil {
return errors.Wrapf(err, "error reading directory %q", name)
}
outFile := os.Stdout
if c.IsSet("output") {
outfile := c.String("output")
outFile, err = os.Create(outfile)
if err != nil {
return errors.Wrapf(err, "error creating file %q", outfile)
}
defer outFile.Close()
}
if logrus.IsTerminal(outFile) {
return errors.Errorf("Refusing to save to a terminal. Use the -o flag or redirect.")
}
_, err = io.Copy(outFile, input)
return err
}

View File

@@ -9,15 +9,18 @@ import (
"github.com/urfave/cli"
)
const (
// DefaultRegistry is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultRegistry = "docker://"
)
var (
fromFlags = []cli.Flag{
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.StringFlag{
Name: "name",
Usage: "`name` for the working container",
@@ -28,20 +31,19 @@ var (
},
cli.BoolFlag{
Name: "pull-always",
Usage: "pull the image even if one with the same name is already present",
Usage: "pull the image even if named image is present in store (supersedes pull option)",
},
cli.StringFlag{
Name: "registry",
Usage: "`prefix` to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pulling images",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pulling images",
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
}
fromDescription = "Creates a new working container, either from scratch or using a specified\n image as a starting point"
@@ -65,42 +67,24 @@ func fromCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
image := args[0]
if err := validateFlags(c, fromFlags); err != nil {
return err
}
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
}
pullAlways := false
if c.IsSet("pull-always") {
pull = c.Bool("pull-always")
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
pullPolicy := buildah.PullNever
if pull {
if c.BoolT("pull") {
pullPolicy = buildah.PullIfMissing
}
if pullAlways {
if c.Bool("pull-always") {
pullPolicy = buildah.PullAlways
}
name := ""
if c.IsSet("name") {
name = c.String("name")
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
signaturePolicy := c.String("signature-policy")
store, err := getStore(c)
if err != nil {
@@ -108,13 +92,14 @@ func fromCmd(c *cli.Context) error {
}
options := buildah.BuilderOptions{
FromImage: image,
Container: name,
PullPolicy: pullPolicy,
Registry: registry,
SignaturePolicyPath: signaturePolicy,
FromImage: args[0],
Container: c.String("name"),
PullPolicy: pullPolicy,
SignaturePolicyPath: signaturePolicy,
SystemContext: systemContext,
DefaultMountsFilePath: c.GlobalString("default-mounts-file"),
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}

View File

@@ -12,6 +12,7 @@ import (
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -41,8 +42,20 @@ type filterParams struct {
var (
imagesFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only image IDs",
Name: "digests",
Usage: "show digests",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print images using a Go template. will override --quiet",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "noheading, n",
@@ -53,20 +66,8 @@ var (
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "digests",
Usage: "show digests",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print images using a Go template. will override --quiet",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided (default [])",
Name: "quiet, q",
Usage: "display only image IDs",
},
}
@@ -82,6 +83,9 @@ var (
)
func imagesCmd(c *cli.Context) error {
if err := validateFlags(c, imagesFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
@@ -92,28 +96,10 @@ func imagesCmd(c *cli.Context) error {
return errors.Wrapf(err, "error reading images")
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
noheading := false
if c.IsSet("noheading") {
noheading = c.Bool("noheading")
}
truncate := true
if c.IsSet("no-trunc") {
truncate = !c.Bool("no-trunc")
}
digests := false
if c.IsSet("digests") {
digests = c.Bool("digests")
}
formatString := ""
hasTemplate := false
if c.IsSet("format") {
formatString = c.String("format")
hasTemplate = true
}
quiet := c.Bool("quiet")
truncate := !c.Bool("no-trunc")
digests := c.Bool("digests")
hasTemplate := c.IsSet("format")
name := ""
if len(c.Args()) == 1 {
@@ -135,7 +121,7 @@ func imagesCmd(c *cli.Context) error {
}
var params *filterParams
if c.IsSet("filter") {
params, err = parseFilter(images, c.String("filter"))
params, err = parseFilter(store, images, c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
@@ -143,14 +129,14 @@ func imagesCmd(c *cli.Context) error {
params = nil
}
if len(images) > 0 && !noheading && !quiet && !hasTemplate {
if len(images) > 0 && !c.Bool("noheading") && !quiet && !hasTemplate {
outputHeader(truncate, digests)
}
return outputImages(images, formatString, store, params, name, hasTemplate, truncate, digests, quiet)
return outputImages(images, c.String("format"), store, params, name, hasTemplate, truncate, digests, quiet)
}
func parseFilter(images []storage.Image, filter string) (*filterParams, error) {
func parseFilter(store storage.Store, images []storage.Image, filter string) (*filterParams, error) {
params := new(filterParams)
filterStrings := strings.Split(filter, ",")
for _, param := range filterStrings {
@@ -165,17 +151,19 @@ func parseFilter(images []storage.Image, filter string) (*filterParams, error) {
case "label":
params.label = pair[1]
case "before":
beforeDate, err := setFilterDate(images, pair[1])
beforeDate, err := setFilterDate(store, images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[0])
}
params.beforeDate = beforeDate
params.beforeImage = pair[1]
case "since":
sinceDate, err := setFilterDate(images, pair[1])
sinceDate, err := setFilterDate(store, images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[0])
}
params.sinceDate = sinceDate
params.sinceImage = pair[1]
case "reference":
params.referencePattern = pair[1]
default:
@@ -185,16 +173,25 @@ func parseFilter(images []storage.Image, filter string) (*filterParams, error) {
return params, nil
}
func setFilterDate(images []storage.Image, imgName string) (time.Time, error) {
func setFilterDate(store storage.Store, images []storage.Image, imgName string) (time.Time, error) {
for _, image := range images {
for _, name := range image.Names {
if matchesReference(name, imgName) {
// Set the date to this image
im, err := parseMetadata(image)
ref, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return time.Time{}, errors.Wrapf(err, "could not get creation date for image %q", imgName)
return time.Time{}, fmt.Errorf("error parsing reference to image %q: %v", image.ID, err)
}
date := im.CreatedTime
img, err := ref.NewImage(nil)
if err != nil {
return time.Time{}, fmt.Errorf("error reading image %q: %v", image.ID, err)
}
defer img.Close()
inspect, err := img.Inspect()
if err != nil {
return time.Time{}, fmt.Errorf("error inspecting image %q: %v", image.ID, err)
}
date := inspect.Created
return date, nil
}
}
@@ -210,7 +207,7 @@ func outputHeader(truncate, digests bool) {
}
if digests {
fmt.Printf("%-64s ", "DIGEST")
fmt.Printf("%-71s ", "DIGEST")
}
fmt.Printf("%-22s %s\n", "CREATED AT", "SIZE")
@@ -218,18 +215,17 @@ func outputHeader(truncate, digests bool) {
func outputImages(images []storage.Image, format string, store storage.Store, filters *filterParams, argName string, hasTemplate, truncate, digests, quiet bool) error {
for _, image := range images {
imageMetadata, err := parseMetadata(image)
if err != nil {
fmt.Println(err)
}
createdTime := imageMetadata.CreatedTime.Format("Jan 2, 2006 15:04")
digest := ""
if len(imageMetadata.Blobs) > 0 {
digest = string(imageMetadata.Blobs[0].Digest)
}
size, _ := getSize(image, store)
createdTime := image.Created
names := []string{""}
inspectedTime, digest, size, _ := getDateAndDigestAndSize(image, store)
if !inspectedTime.IsZero() {
if createdTime != inspectedTime {
logrus.Debugf("image record and configuration disagree on the image's creation time for %q, using the one from the configuration", image)
createdTime = inspectedTime
}
}
names := []string{}
if len(image.Names) > 0 {
names = image.Names
} else {
@@ -250,12 +246,11 @@ func outputImages(images []storage.Image, format string, store storage.Store, fi
ID: image.ID,
Name: name,
Digest: digest,
CreatedAt: createdTime,
CreatedAt: createdTime.Format("Jan 2, 2006 15:04"),
Size: formattedSize(size),
}
if hasTemplate {
err = outputUsingTemplate(format, params)
if err != nil {
if err := outputUsingTemplate(format, params); err != nil {
return err
}
continue
@@ -297,12 +292,13 @@ func matchesDangling(name string, dangling string) bool {
func matchesLabel(image storage.Image, store storage.Store, label string) bool {
storeRef, err := is.Transport.ParseStoreReference(store, "@"+image.ID)
if err != nil {
return false
}
img, err := storeRef.NewImage(nil)
if err != nil {
return false
}
defer img.Close()
info, err := img.Inspect()
if err != nil {
return false
@@ -326,11 +322,7 @@ func matchesLabel(image storage.Image, store storage.Store, label string) bool {
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesBeforeImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.Before(params.beforeDate) {
if image.Created.IsZero() || image.Created.Before(params.beforeDate) {
return true
}
return false
@@ -339,18 +331,14 @@ func matchesBeforeImage(image storage.Image, name string, params *filterParams)
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesSinceImage(image storage.Image, name string, params *filterParams) bool {
im, err := parseMetadata(image)
if err != nil {
return false
}
if im.CreatedTime.After(params.sinceDate) {
if image.Created.IsZero() || image.Created.After(params.sinceDate) {
return true
}
return false
}
func matchesID(id, argID string) bool {
return strings.HasPrefix(argID, id)
func matchesID(imageID, argID string) bool {
return strings.HasPrefix(imageID, argID)
}
func matchesReference(name, argName string) bool {
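
The outputImages change above reconciles two creation times: the one recorded in the store and the one reported by the image configuration, preferring the configuration when they disagree. A small sketch of that rule (pickCreated is hypothetical):

package main

import (
	"fmt"
	"time"
)

// pickCreated prefers the time reported by the image configuration whenever
// it is non-zero and differs from the store's record.
func pickCreated(recorded, inspected time.Time) time.Time {
	if !inspected.IsZero() && !recorded.Equal(inspected) {
		return inspected
	}
	return recorded
}

func main() {
	recorded := time.Date(2017, 11, 1, 0, 0, 0, 0, time.UTC)
	inspected := time.Date(2017, 10, 30, 12, 0, 0, 0, time.UTC)
	fmt.Println(pickCreated(recorded, inspected)) // the configuration's time wins
}
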

View File

@@ -62,7 +62,7 @@ func TestFormatStringOutput(t *testing.T) {
output := captureOutput(func() {
outputUsingFormatString(true, true, params)
})
expectedOutput := fmt.Sprintf("%-12.12s %-40s %-64s %-22s %s\n", params.ID, params.Name, params.Digest, params.CreatedAt, params.Size)
expectedOutput := fmt.Sprintf("%-20.12s %-56s %-64s %-22s %s\n", params.ID, params.Name, params.Digest, params.CreatedAt, params.Size)
if output != expectedOutput {
t.Errorf("Error outputting using format string:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
@@ -89,7 +89,7 @@ func TestOutputHeader(t *testing.T) {
output := captureOutput(func() {
outputHeader(true, false)
})
expectedOutput := fmt.Sprintf("%-12s %-40s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
expectedOutput := fmt.Sprintf("%-20s %-56s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
@@ -97,7 +97,7 @@ func TestOutputHeader(t *testing.T) {
output = captureOutput(func() {
outputHeader(true, true)
})
expectedOutput = fmt.Sprintf("%-12s %-40s %-64s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "DIGEST", "CREATED AT", "SIZE")
expectedOutput = fmt.Sprintf("%-20s %-56s %-71s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "DIGEST", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
@@ -105,7 +105,7 @@ func TestOutputHeader(t *testing.T) {
output = captureOutput(func() {
outputHeader(false, false)
})
expectedOutput = fmt.Sprintf("%-64s %-40s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
expectedOutput = fmt.Sprintf("%-64s %-56s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
@@ -163,7 +163,7 @@ func TestOutputImagesQuietTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -175,6 +175,12 @@ func TestOutputImagesQuietTruncated(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
// Tests quiet and truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, true, false, true)
@@ -191,13 +197,19 @@ func TestOutputImagesQuietNotTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -219,13 +231,19 @@ func TestOutputImagesFormatString(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -247,13 +265,19 @@ func TestOutputImagesFormatTemplate(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -275,13 +299,19 @@ func TestOutputImagesArgNoMatch(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -305,13 +335,23 @@ func TestOutputMultipleImages(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull two images so that we know we have at least two
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
_, err = pullTestImage(t, "alpine:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -333,7 +373,7 @@ func TestParseFilterAllParams(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -344,18 +384,40 @@ func TestParseFilterAllParams(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=true,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
params, err := parseFilter(images, label)
params, err := parseFilter(store, images, label)
if err != nil {
t.Fatalf("error parsing filter")
t.Fatalf("error parsing filter: %v", err)
}
expectedParams := &filterParams{dangling: "true", label: "a=b", beforeImage: "busybox:latest", sinceImage: "busybox:latest", referencePattern: "abcdef"}
ref, err := is.Transport.ParseStoreReference(store, "busybox:latest")
if err != nil {
t.Fatalf("error parsing store reference: %v", err)
}
img, err := ref.NewImage(nil)
if err != nil {
t.Fatalf("error reading image from store: %v", err)
}
defer img.Close()
inspect, err := img.Inspect()
if err != nil {
t.Fatalf("error inspecting image in store: %v", err)
}
expectedParams := &filterParams{
dangling: "true",
label: "a=b",
beforeImage: "busybox:latest",
beforeDate: inspect.Created,
sinceImage: "busybox:latest",
sinceDate: inspect.Created,
referencePattern: "abcdef",
}
if *params != *expectedParams {
t.Errorf("filter did not return expected result\n\tExpected: %v\n\tReceived: %v", expectedParams, params)
}
@@ -365,7 +427,7 @@ func TestParseFilterInvalidDangling(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -376,13 +438,13 @@ func TestParseFilterInvalidDangling(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=NO,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
_, err = parseFilter(images, label)
_, err = parseFilter(store, images, label)
if err == nil || err.Error() != "invalid filter: 'dangling=[NO]'" {
t.Fatalf("expected error parsing filter")
}
@@ -392,7 +454,7 @@ func TestParseFilterInvalidBefore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -403,13 +465,13 @@ func TestParseFilterInvalidBefore(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=:,since=busybox:latest,reference=abcdef"
_, err = parseFilter(images, label)
_, err = parseFilter(store, images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
@@ -419,7 +481,7 @@ func TestParseFilterInvalidSince(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -430,13 +492,13 @@ func TestParseFilterInvalidSince(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=busybox:latest,since=:,reference=abcdef"
_, err = parseFilter(images, label)
_, err = parseFilter(store, images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
@@ -446,7 +508,7 @@ func TestParseFilterInvalidFilter(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
@@ -457,13 +519,13 @@ func TestParseFilterInvalidFilter(t *testing.T) {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "foo=bar"
_, err = parseFilter(images, label)
_, err = parseFilter(store, images, label)
if err == nil || err.Error() != "invalid filter: 'foo'" {
t.Fatalf("expected error parsing filter")
}
@@ -501,12 +563,19 @@ func TestMatchesBeforeImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -525,12 +594,17 @@ func TestMatchesBeforeImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -550,12 +624,17 @@ func TestMatchesSinceeImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
@@ -574,12 +653,17 @@ func TestMatchesSinceImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storage.DefaultStoreOptions)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)

View File

@@ -21,14 +21,15 @@ ID: {{.ContainerID}}
var (
inspectFlags = []cli.Flag{
cli.StringFlag{
Name: "type, t",
Usage: "look at the item of the specified `type` (container or image) and name",
},
cli.StringFlag{
Name: "format, f",
Usage: "use `format` as a Go template to format the output",
},
cli.StringFlag{
Name: "type, t",
Usage: "look at the item of the specified `type` (container or image) and name",
Value: inspectTypeContainer,
},
}
inspectDescription = "Inspects a build container's or built image's configuration."
inspectCommand = cli.Command{
@@ -51,23 +52,13 @@ func inspectCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
itemType := inspectTypeContainer
if c.IsSet("type") {
itemType = c.String("type")
}
switch itemType {
case inspectTypeContainer:
case inspectTypeImage:
default:
return errors.Errorf("the only recognized types are %q and %q", inspectTypeContainer, inspectTypeImage)
if err := validateFlags(c, inspectFlags); err != nil {
return err
}
format := defaultFormat
if c.IsSet("format") {
if c.String("format") != "" {
format = c.String("format")
}
if c.String("format") != "" {
format = c.String("format")
}
t := template.Must(template.New("format").Parse(format))
@@ -78,17 +69,25 @@ func inspectCmd(c *cli.Context) error {
return err
}
switch itemType {
switch c.String("type") {
case inspectTypeContainer:
builder, err = openBuilder(store, name)
if err != nil {
return errors.Wrapf(err, "error reading build container %q", name)
if c.IsSet("type") {
return errors.Wrapf(err, "error reading build container %q", name)
}
builder, err = openImage(store, name)
if err != nil {
return errors.Wrapf(err, "error reading build object %q", name)
}
}
case inspectTypeImage:
builder, err = openImage(store, name)
if err != nil {
return errors.Wrapf(err, "error reading image %q", name)
}
default:
return errors.Errorf("the only recognized types are %q and %q", inspectTypeContainer, inspectTypeImage)
}
if c.IsSet("format") {
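
The inspect hunk above is truncated, but the fallback it introduces is simple: with the default type ("container"), a failed container lookup is retried as an image lookup, while an explicit --type container fails outright. A rough sketch with hypothetical openContainer/openImage stand-ins for the real openBuilder/openImage helpers:

package main

import (
	"errors"
	"fmt"
)

// inspectTarget tries the container lookup first; only when --type was left
// at its default does a failure fall back to treating the name as an image.
func inspectTarget(name string, typeWasSet bool, openContainer, openImage func(string) error) error {
	if err := openContainer(name); err != nil {
		if typeWasSet {
			return fmt.Errorf("error reading build container %q: %v", name, err)
		}
		return openImage(name)
	}
	return nil
}

func main() {
	noContainer := func(string) error { return errors.New("no such container") }
	imageOK := func(string) error { return nil }
	fmt.Println(inspectTarget("busybox", false, noContainer, imageOK)) // <nil>: fell back to the image
	fmt.Println(inspectTarget("busybox", true, noContainer, imageOK))  // error: explicit --type container
}
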

View File

@@ -4,15 +4,17 @@ import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/storage"
ispecs "github.com/opencontainers/image-spec/specs-go"
rspecs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
func main() {
debug := false
var defaultStoreDriverOptions *cli.StringSlice
if buildah.InitReexec() {
return
@@ -27,6 +29,10 @@ func main() {
defaultStoreDriverOptions = &optionSlice
}
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "print debugging information",
},
cli.StringFlag{
Name: "root",
Usage: "storage root dir",
@@ -47,17 +53,17 @@ func main() {
Usage: "storage driver option",
Value: defaultStoreDriverOptions,
},
cli.BoolFlag{
Name: "debug",
Usage: "print debugging information",
cli.StringFlag{
Name: "default-mounts-file",
Usage: "path to default mounts file",
Value: buildah.DefaultMountsFile,
},
}
app.Before = func(c *cli.Context) error {
logrus.SetLevel(logrus.ErrorLevel)
if c.GlobalIsSet("debug") {
if c.GlobalBool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
if c.GlobalBool("debug") {
debug = true
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
@@ -78,7 +84,6 @@ func main() {
configCommand,
containersCommand,
copyCommand,
exportCommand,
fromCommand,
imagesCommand,
inspectCommand,
@@ -93,7 +98,11 @@ func main() {
}
err := app.Run(os.Args)
if err != nil {
logrus.Errorf("%v", err)
os.Exit(1)
if debug {
logrus.Errorf(err.Error())
} else {
fmt.Fprintln(os.Stderr, err.Error())
}
cli.OsExiter(1)
}
}

View File

@@ -30,15 +30,15 @@ func mountCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
if err := validateFlags(c, mountFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
truncate := true
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
}
truncate := !c.Bool("notruncate")
if len(args) == 1 {
name := args[0]

View File

@@ -5,9 +5,11 @@ import (
"os"
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/storage/pkg/archive"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
@@ -15,18 +17,36 @@ import (
var (
pushFlags = []cli.Flag{
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
Name: "format, f",
Usage: "manifest type (oci, v2s1, or v2s2) to use when saving image using the 'dir:' transport (default is manifest type of source)",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pushing images",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
}
pushDescription = fmt.Sprintf(`
Pushes an image to a specified location.
@@ -54,37 +74,64 @@ func pushCmd(c *cli.Context) error {
if len(args) < 2 {
return errors.New("source and destination image IDs must be specified")
}
if err := validateFlags(c, pushFlags); err != nil {
return err
}
src := args[0]
destSpec := args[1]
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
compress := archive.Uncompressed
if !c.IsSet("disable-compression") || !c.Bool("disable-compression") {
compress = archive.Gzip
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
compress := archive.Gzip
if c.Bool("disable-compression") {
compress = archive.Uncompressed
}
store, err := getStore(c)
if err != nil {
return err
}
dest, err := alltransports.ParseImageName(destSpec)
// add the docker:// transport to see if they neglected it.
if err != nil {
return err
if strings.Contains(destSpec, "://") {
return err
}
destSpec = "docker://" + destSpec
dest2, err2 := alltransports.ParseImageName(destSpec)
if err2 != nil {
return err
}
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
var manifestType string
if c.IsSet("format") {
switch c.String("format") {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2", "docker":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose on of the supported formats: 'oci', 'v2s1', or 'v2s2'", c.String("format"))
}
}
options := buildah.PushOptions{
Compression: compress,
SignaturePolicyPath: signaturePolicy,
ManifestType: manifestType,
SignaturePolicyPath: c.String("signature-policy"),
Store: store,
SystemContext: systemContext,
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
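
The --format switch in the push hunk above maps the short names to concrete manifest media types. A standalone sketch of that mapping; the media-type strings are written out literally here as stand-ins for imgspecv1.MediaTypeImageManifest and the containers/image manifest constants:

package main

import "fmt"

// mediaTypeForFormat mirrors the mapping above: "oci" selects the OCI image
// manifest, "v2s1" the signed Docker schema 1 manifest, and "v2s2" (or
// "docker") the Docker schema 2 manifest.
func mediaTypeForFormat(format string) (string, error) {
	switch format {
	case "oci":
		return "application/vnd.oci.image.manifest.v1+json", nil
	case "v2s1":
		return "application/vnd.docker.distribution.manifest.v1+prettyjws", nil
	case "v2s2", "docker":
		return "application/vnd.docker.distribution.manifest.v2+json", nil
	}
	return "", fmt.Errorf("unknown format %q, choose one of 'oci', 'v2s1', or 'v2s2'", format)
}

func main() {
	mediaType, err := mediaTypeForFormat("v2s2")
	fmt.Println(mediaType, err)
}
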

View File

@@ -9,6 +9,7 @@ import (
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -31,16 +32,15 @@ var (
)
func rmiCmd(c *cli.Context) error {
force := false
if c.IsSet("force") {
force = c.Bool("force")
}
force := c.Bool("force")
args := c.Args()
if len(args) == 0 {
return errors.Errorf("image name or ID must be specified")
}
if err := validateFlags(c, rmiFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
@@ -59,9 +59,12 @@ func rmiCmd(c *cli.Context) error {
}
if len(ctrIDs) > 0 && len(image.Names) <= 1 {
if force {
removeContainers(ctrIDs, store)
err = removeContainers(ctrIDs, store)
if err != nil {
return errors.Wrapf(err, "error removing containers %v for image %q", ctrIDs, id)
}
} else {
for ctrID := range ctrIDs {
for _, ctrID := range ctrIDs {
return fmt.Errorf("Could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
}
}
@@ -76,7 +79,7 @@ func rmiCmd(c *cli.Context) error {
} else {
name, err2 := untagImage(id, image, store)
if err2 != nil {
return err
return errors.Wrapf(err, "error removing tag %q from image %q", id, image.ID)
}
fmt.Printf("untagged: %s\n", name)
}
@@ -86,7 +89,7 @@ func rmiCmd(c *cli.Context) error {
}
id, err := removeImage(image, store)
if err != nil {
return err
return errors.Wrapf(err, "error removing image %q", image.ID)
}
fmt.Printf("%s\n", id)
}
@@ -99,22 +102,22 @@ func getImage(id string, store storage.Store) (*storage.Image, error) {
var ref types.ImageReference
ref, err := properImageRef(id)
if err != nil {
//logrus.Debug(err)
logrus.Debug(err)
}
if ref == nil {
if ref, err = storageImageRef(store, id); err != nil {
//logrus.Debug(err)
logrus.Debug(err)
}
}
if ref == nil {
if ref, err = storageImageID(store, id); err != nil {
//logrus.Debug(err)
logrus.Debug(err)
}
}
if ref != nil {
image, err2 := is.Transport.GetStoreImage(store, ref)
if err2 != nil {
return nil, err2
return nil, errors.Wrapf(err2, "error reading image using reference %q", transports.ImageName(ref))
}
return image, nil
}
@@ -122,11 +125,6 @@ func getImage(id string, store storage.Store) (*storage.Image, error) {
}
func untagImage(imgArg string, image *storage.Image, store storage.Store) (string, error) {
// Remove name from image.Names and set the new name in the ImageStore
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrap(err, "could not untag image")
}
newNames := []string{}
removedName := ""
for _, name := range image.Names {
@@ -136,23 +134,17 @@ func untagImage(imgArg string, image *storage.Image, store storage.Store) (strin
}
newNames = append(newNames, name)
}
imgStore.SetNames(image.ID, newNames)
err = imgStore.Save()
return removedName, err
if removedName != "" {
if err := store.SetNames(image.ID, newNames); err != nil {
return "", errors.Wrapf(err, "error removing name %q from image %q", removedName, image.ID)
}
}
return removedName, nil
}
func removeImage(image *storage.Image, store storage.Store) (string, error) {
imgStore, err := store.ImageStore()
if err != nil {
return "", errors.Wrapf(err, "could not open image store")
}
err = imgStore.Delete(image.ID)
if err != nil {
return "", errors.Wrapf(err, "could not remove image")
}
err = imgStore.Save()
if err != nil {
return "", errors.Wrapf(err, "could not save image store")
if _, err := store.DeleteImage(image.ID, true); err != nil {
return "", errors.Wrapf(err, "could not remove image %q", image.ID)
}
return image.ID, nil
}
@@ -160,12 +152,7 @@ func removeImage(image *storage.Image, store storage.Store) (string, error) {
// Returns a list of running containers associated with the given ImageReference
func runningContainers(image *storage.Image, store storage.Store) ([]string, error) {
ctrIDs := []string{}
ctrStore, err := store.ContainerStore()
if err != nil {
return nil, err
}
containers, err := ctrStore.Containers()
containers, err := store.Containers()
if err != nil {
return nil, err
}
@@ -178,12 +165,8 @@ func runningContainers(image *storage.Image, store storage.Store) ([]string, err
}
func removeContainers(ctrIDs []string, store storage.Store) error {
ctrStore, err := store.ContainerStore()
if err != nil {
return err
}
for _, ctrID := range ctrIDs {
if err = ctrStore.Delete(ctrID); err != nil {
if err := store.DeleteContainer(ctrID); err != nil {
return errors.Wrapf(err, "could not remove container %q", ctrID)
}
}
@@ -193,46 +176,46 @@ func removeContainers(ctrIDs []string, store storage.Store) error {
// If it looks like a proper image reference, parse it and check if it
// corresponds to an image that actually exists.
func properImageRef(id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = alltransports.ParseImageName(id); err == nil {
if ref, err := alltransports.ParseImageName(id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as an image reference: %v", id, err)
return nil, errors.Wrapf(err, "error parsing %q as an image reference", id)
}
// If it looks like an image reference that's relative to our storage, parse
// it and check if it corresponds to an image that actually exists.
func storageImageRef(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, id); err == nil {
if ref, err := is.Transport.ParseStoreReference(store, id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", id, err)
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference", id)
}
// If it might be an ID that's relative to our storage, parse it and check if it
// corresponds to an image that actually exists. This _should_ be redundant,
// since we already tried deleting the image using the ID directly above, but it
// can't hurt either.
// If it might be an ID that's relative to our storage, truncated or not,
// parse it and check whether it corresponds to an image that we have stored
// locally.
func storageImageID(store storage.Store, id string) (types.ImageReference, error) {
var ref types.ImageReference
var err error
if ref, err = is.Transport.ParseStoreReference(store, "@"+id); err == nil {
imageID := id
if img, err := store.Image(id); err == nil && img != nil {
imageID = img.ID
}
if ref, err := is.Transport.ParseStoreReference(store, "@"+imageID); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, fmt.Errorf("error confirming presence of storage image reference %q: %v", transports.ImageName(ref), err)
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, fmt.Errorf("error parsing %q as a storage image reference: %v", "@"+id, err)
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference: %v", "@"+id)
}
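Taken together, these three helpers are typically tried in order when the argument given to `buildah rmi` cannot be resolved directly. The following is a minimal sketch of that chain, assuming it lives in the same package as the helpers above; the wrapper name resolveImageRef is illustrative and not part of this changeset.
// resolveImageRef is a hypothetical wrapper that tries each resolver in turn.
func resolveImageRef(store storage.Store, id string) (types.ImageReference, error) {
	// First, try the argument as a fully specified image reference.
	if ref, err := properImageRef(id); err == nil {
		return ref, nil
	}
	// Next, try it as a reference that is relative to our storage.
	if ref, err := storageImageRef(store, id); err == nil {
		return ref, nil
	}
	// Finally, treat it as a (possibly truncated) image ID.
	return storageImageID(store, id)
}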

View File

@@ -10,7 +10,7 @@ import (
func TestProperImageRefTrue(t *testing.T) {
// Pull an image so we know we have it
err := pullTestImage("busybox:latest")
_, err := pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove")
}
@@ -25,7 +25,7 @@ func TestProperImageRefTrue(t *testing.T) {
func TestProperImageRefFalse(t *testing.T) {
// Pull an image so we know we have it
err := pullTestImage("busybox:latest")
_, err := pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatal("could not pull image to remove")
}
@@ -40,8 +40,7 @@ func TestStorageImageRefTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
@@ -49,7 +48,7 @@ func TestStorageImageRefTrue(t *testing.T) {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
@@ -65,8 +64,7 @@ func TestStorageImageRefFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
@@ -74,7 +72,7 @@ func TestStorageImageRefFalse(t *testing.T) {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
@@ -88,8 +86,7 @@ func TestStorageImageIDTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
@@ -97,7 +94,7 @@ func TestStorageImageIDTrue(t *testing.T) {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
err = pullTestImage("busybox:latest")
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
@@ -126,8 +123,7 @@ func TestStorageImageIDFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
options := storage.DefaultStoreOptions
store, err := storage.GetStore(options)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}

View File

@@ -6,15 +6,19 @@ import (
"strings"
"syscall"
"github.com/Sirupsen/logrus"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
runFlags = []cli.Flag{
cli.StringFlag{
Name: "hostname",
Usage: "Set the hostname inside of the container",
},
cli.StringFlag{
Name: "runtime",
Usage: "`path` to an alternate runtime",
@@ -24,14 +28,14 @@ var (
Name: "runtime-flag",
Usage: "add global flags for the container runtime",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "bind mount a host location into the container while running the command",
},
cli.BoolFlag{
Name: "tty",
Usage: "allocate a pseudo-TTY in the container",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "bind mount a host location into the container while running the command",
},
}
runDescription = "Runs a specified command using the container's root filesystem as a root\n filesystem, using configuration settings inherited from the container's\n image or as specified using previous calls to the config command"
runCommand = cli.Command{
@@ -50,24 +54,15 @@ func runCmd(c *cli.Context) error {
return errors.Errorf("container ID must be specified")
}
name := args[0]
if err := validateFlags(c, runFlags); err != nil {
return err
}
args = args.Tail()
if len(args) > 0 && args[0] == "--" {
args = args[1:]
}
runtime := ""
if c.IsSet("runtime") {
runtime = c.String("runtime")
}
flags := []string{}
if c.IsSet("runtime-flag") {
flags = c.StringSlice("runtime-flag")
}
volumes := []string{}
if c.IsSet("v") || c.IsSet("volume") {
volumes = c.StringSlice("volume")
}
store, err := getStore(c)
if err != nil {
return err
@@ -78,14 +73,10 @@ func runCmd(c *cli.Context) error {
return errors.Wrapf(err, "error reading build container %q", name)
}
hostname := ""
if c.IsSet("hostname") {
hostname = c.String("hostname")
}
options := buildah.RunOptions{
Hostname: hostname,
Runtime: runtime,
Args: flags,
Hostname: c.String("hostname"),
Runtime: c.String("runtime"),
Args: c.StringSlice("runtime-flag"),
}
if c.IsSet("tty") {
@@ -96,7 +87,7 @@ func runCmd(c *cli.Context) error {
}
}
for _, volumeSpec := range volumes {
for _, volumeSpec := range c.StringSlice("volume") {
volSpec := strings.Split(volumeSpec, ":")
if len(volSpec) >= 2 {
mountOptions := "bind"

View File

@@ -2,10 +2,10 @@ package buildah
import (
"bytes"
"fmt"
"io"
"time"
"github.com/Sirupsen/logrus"
cp "github.com/containers/image/copy"
"github.com/containers/image/signature"
is "github.com/containers/image/storage"
@@ -16,6 +16,7 @@ import (
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/util"
"github.com/sirupsen/logrus"
)
var (
@@ -54,6 +55,9 @@ type CommitOptions struct {
// HistoryTimestamp is the timestamp used when creating new items in the
// image's history. If unset, the current time will be used.
HistoryTimestamp *time.Time
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
}
// PushOptions can be used to alter how an image is copied somewhere.
@@ -73,6 +77,12 @@ type PushOptions struct {
ReportWriter io.Writer
// Store is the local storage store which holds the source image.
Store storage.Store
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
// ManifestType is the format to use when saving the image using the 'dir' transport
// possible options are oci, v2s1, and v2s2
ManifestType string
}
// shallowCopy copies the most recent layer, the configuration, and the manifest from one image to another.
@@ -229,12 +239,17 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error {
policy, err := signature.DefaultPolicy(getSystemContext(options.SignaturePolicyPath))
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
// Check if we're keeping everything in local storage. If so, we can take certain shortcuts.
_, destIsStorage := dest.Transport().(is.StoreTransport)
exporting := !destIsStorage
@@ -244,7 +259,7 @@ func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error
}
if exporting {
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext, ""))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}
@@ -279,12 +294,17 @@ func Push(image string, dest types.ImageReference, options PushOptions) error {
systemContext := getSystemContext(options.SignaturePolicyPath)
policy, err := signature.DefaultPolicy(systemContext)
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
importOptions := ImportFromImageOptions{
Image: image,
SignaturePolicyPath: options.SignaturePolicyPath,
@@ -311,9 +331,12 @@ func Push(image string, dest types.ImageReference, options PushOptions) error {
return errors.Wrapf(err, "error recomputing layer digests and building metadata")
}
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext, options.ManifestType))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}
if options.ReportWriter != nil {
fmt.Fprintf(options.ReportWriter, "\n")
}
return nil
}
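For context, here is a rough sketch of how a caller might populate the new SystemContext and ManifestType fields when pushing to a directory; the option values, the credentials, and the helper name pushExample are illustrative assumptions, not code from this changeset.
package main

import (
	"os"

	"github.com/containers/image/manifest"
	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
	"github.com/containers/storage"
	"github.com/projectatomic/buildah"
)

// pushExample pushes "alpine" from local storage to a directory, converting
// the manifest to Docker schema 2 (v2s2) on the way out.
func pushExample(store storage.Store) error {
	dest, err := alltransports.ParseImageName("dir:/tmp/my-directory")
	if err != nil {
		return err
	}
	options := buildah.PushOptions{
		Store:        store,
		ReportWriter: os.Stderr,
		// Registry credentials travel in the SystemContext; they are only
		// consulted for registry transports and are shown here for illustration.
		SystemContext: &types.SystemContext{
			DockerAuthConfig: &types.DockerAuthConfig{Username: "username", Password: "password"},
		},
		// Convert the manifest while saving with the 'dir' transport.
		ManifestType: manifest.DockerV2Schema2MediaType,
	}
	return buildah.Push("alpine", dest, options)
}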

View File

@@ -7,9 +7,12 @@ import (
"github.com/containers/image/types"
)
func getCopyOptions(reportWriter io.Writer) *cp.Options {
func getCopyOptions(reportWriter io.Writer, sourceSystemContext *types.SystemContext, destinationSystemContext *types.SystemContext, manifestType string) *cp.Options {
return &cp.Options{
ReportWriter: reportWriter,
ReportWriter: reportWriter,
SourceCtx: sourceSystemContext,
DestinationCtx: destinationSystemContext,
ForceManifestMIMEType: manifestType,
}
}

View File

@@ -553,3 +553,8 @@ func (b *Builder) Domainname() string {
func (b *Builder) SetDomainname(name string) {
b.Docker.Config.Domainname = name
}
// SetDefaultMountsFilePath sets the mounts file path for testing purposes
func (b *Builder) SetDefaultMountsFilePath(path string) {
b.DefaultMountsFilePath = path
}

View File

@@ -168,6 +168,7 @@ return 1
--runroot
--storage-driver
--storage-opt
--default-mounts-file
"
case "$prev" in
@@ -291,9 +292,12 @@ return 1
--quiet
-q
--rm
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--signature-policy
--format
-f
@@ -337,10 +341,10 @@ return 1
--pull-always
--quiet
-q
--tls-verify
"
local options_with_args="
--registry
--signature-policy
--runtime
--runtime-flag
@@ -382,6 +386,7 @@ return 1
"
local options_with_args="
--hostname
--runtime
--runtime-flag
--volume
@@ -472,9 +477,14 @@ return 1
-D
--quiet
-q
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--format
-f
--signature-policy
"
@@ -615,11 +625,13 @@ return 1
--pull-always
--quiet
-q
--tls-verify
"
local options_with_args="
--cert-dir
--creds
--name
--registry
--signature-policy
"
@@ -644,18 +656,6 @@ return 1
"
}
_buildah_export() {
local boolean_options="
--help
-h
"
local options_with_args="
-o
--output
"
}
_buildah() {
local previous_extglob_setting=$(shopt -p extglob)
shopt -s extglob
@@ -668,7 +668,6 @@ return 1
config
containers
copy
export
from
images
inspect

View File

@@ -25,7 +25,7 @@
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: buildah
Version: 0.2
Version: 0.6
Release: 1.git%{shortcommit}%{?dist}
Summary: A command line tool used for creating OCI Images
License: ASL 2.0
@@ -44,6 +44,7 @@ BuildRequires: libassuan-devel
BuildRequires: glib2-devel
BuildRequires: ostree-devel
Requires: runc >= 1.0.0-6
Requires: container-selinux
Requires: skopeo-containers
Provides: %{repo} = %{version}-%{release}
@@ -69,7 +70,7 @@ popd
mv vendor src
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}
make all
make all GIT_COMMIT=%{shortcommit}
%install
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}
@@ -87,6 +88,45 @@ make DESTDIR=%{buildroot} PREFIX=%{_prefix} install install.completions
%{_datadir}/bash-completion/completions/*
%changelog
* Wed Nov 15 2017 Dan Walsh <dwalsh@redhat.com> 0.6-1
- Adds support for converting manifest types when using the dir transport
- Rework how we do UID resolution in images
- Bump github.com/vbatts/tar-split
- Set option.terminal appropriately in run
* Tue Nov 07 2017 Dan Walsh <dwalsh@redhat.com> 0.5-1
- Add secrets patch to buildah
- Add proper SELinux labeling to buildah run
- Add tls-verify to bud command
- Make filtering by date use the image's date
- images: don't list unnamed images twice
- Fix timeout issue
- Add further tty verbiage to buildah run
- Make inspect try an image on failure if type not specified
- Add support for `buildah run --hostname`
- Tons of bug fixes and code cleanup
* Fri Sep 22 2017 Dan Walsh <dwalsh@redhat.com> 0.4-1.git9cbccf88c
- Add default transport to push if not provided
- Avoid trying to print a nil ImageReference
- Add authentication to commit and push
- Add information on buildah from man page on transports
- Remove --transport flag
- Run: do not complain about missing volume locations
- Add credentials to buildah from
- Remove export command
- Run(): create the right working directory
- Improve "from" behavior with unnamed references
- Avoid parsing image metadata for dates and layers
- Read the image's creation date from public API
- Bump containers/storage and containers/image
- Don't panic if an image's ID can't be parsed
- Turn on --enable-gc when running gometalinter
- rmi: handle truncated image IDs
* Thu Jul 20 2017 Dan Walsh <dwalsh@redhat.com> 0.3.0-1
- Bump for inclusion of OCI 1.0 Runtime and Image Spec
* Tue Jul 18 2017 Dan Walsh <dwalsh@redhat.com> 0.2.0-1
- buildah run: Add support for -- ending options parsing
- buildah Add/Copy support for glob syntax

View File

@@ -1,6 +1,7 @@
package buildah
import (
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
@@ -13,5 +14,5 @@ func (b *Builder) Delete() error {
b.MountPoint = ""
b.Container = ""
b.ContainerID = ""
return nil
return label.ReleaseLabel(b.ProcessLabel)
}

View File

@@ -14,6 +14,13 @@ to a temporary location.
## OPTIONS
**--build-arg** *arg=value*
Specifies a build argument and its value, which will be interpolated in
instructions read from the Dockerfiles in the same way that environment
variables are, but which will not be added to environment variable list in the
resulting image's configuration.
**-f, --file** *Dockerfile*
Specifies a Dockerfile which contains instructions for building the image,
@@ -25,6 +32,12 @@ If a build context is not specified, and at least one Dockerfile is a
local file, the directory in which it resides will be used as the build
context.
**--format**
Control the format for the built image's manifest and configuration data.
Recognized formats include *oci* (OCI image-spec v1.0, the default) and
*docker* (version 2, using schema format 2 for the manifest).
**--pull**
Pull the image if it is not present. If this flag is disabled (with
@@ -35,23 +48,11 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--registry** *registry*
**--quiet**
A prefix to prepend to the image name in order to pull the image. Default
value is "docker://"
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--build-arg** *arg=value*
Specifies a build argument and its value, which will be interpolated in
instructions read from the Dockerfiles in the same way that environment
variables are, but which will not be added to environment variable list in the
resulting image's configuration.
Suppress output messages which indicate which instruction is being processed,
and of progress when pulling images from a registry, and when writing the
output image.
**--runtime** *path*
@@ -62,22 +63,20 @@ commands specified by the **RUN** instruction.
Adds global flags for the container runtime.
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**-t, --tag** *imageName*
Specifies the name which will be assigned to the resulting image if the build
process completes successfully.
**--format**
**--tls-verify** *bool-value*
Control the format for the built image's manifest and configuration data.
Recognized formats include *oci* (OCI image-spec v1.0, the default) and
*docker* (version 2, using schema format 2 for the manifest).
**--quiet**
Suppress output messages which indicate which instruction is being processed,
and of progress when pulling images from a registry, and when writing the
output image.
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
@@ -89,7 +88,9 @@ buildah bud -f Dockerfile.simple -f Dockerfile.notsosimple
buildah bud -t imageName .
buildah bud -t imageName -f Dockerfile.simple
buildah bud --tls-verify=true -t imageName -f Dockerfile.simple
buildah bud --tls-verify=false -t imageName .
## SEE ALSO
buildah(1)

View File

@@ -13,19 +13,18 @@ specified, an ID is assigned, but no name is assigned to the image.
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--disable-compression, -D**
Don't compress filesystem layers when building the image.
**--signature-policy**
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--quiet**
When writing the output image, suppress progress output.
**--format**
@@ -33,19 +32,44 @@ Control the format for the image manifest and configuration data. Recognized
formats include *oci* (OCI image-spec v1.0, the default) and *docker* (version
2, using schema format 2 for the manifest).
**--quiet**
When writing the output image, suppress progress output.
**--rm**
Remove the container and its content after committing it to an image.
Default leaves the container and its content in place.
**--signature-policy**
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
buildah commit containerID
This example saves an image based on the container.
`buildah commit containerID`
buildah commit --rm containerID newImageName
This example saves an image named newImageName based on the container.
`buildah commit --rm containerID newImageName`
buildah commit --disable-compression --signature-policy '/etc/containers/policy.json' containerID
buildah commit --disable-compression --signature-policy '/etc/containers/policy.json' containerID newImageName
This example saves an image based on the container disabling compression.
`buildah commit --disable-compression containerID`
This example saves an image named newImageName based on the container disabling compression.
`buildah commit --disable-compression containerID newImageName`
This example commits the container to the image on the local registry while turning off tls verification.
`buildah commit --tls-verify=false containerID docker://localhost:5000/imageId`
This example commits the container to the image on the local registry using credentials and certificates for authentication.
`buildah commit --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
## SEE ALSO
buildah(1)

View File

@@ -7,11 +7,16 @@ buildah containers - List the working containers and their base images.
**buildah** **containers** [*options* [...]]
## DESCRIPTION
Lists containers which appear to be buildah working containers, their names and
Lists containers which appear to be Buildah working containers, their names and
IDs, and the names and IDs of the images from which they were initialized.
## OPTIONS
**--all, -a**
List information about all containers, including those which were not created
by and are not being used by Buildah.
**--json**
Output in JSON format.
@@ -28,11 +33,6 @@ Do not truncate IDs in output.
Displays only the container IDs.
**--all, -a**
List information about all containers, including those which were not created
by and are not being used by buildah.
## EXAMPLE
buildah containers

View File

@@ -1,40 +0,0 @@
% BUILDAH(1) Buildah User Manuals
% Buildah Community
% JUNE 2017
# NAME
buildah-export - Export container's filesystem content as a tar archive
# SYNOPSIS
**buildah export**
[**--help**]
[**-o**|**--output**[=*""*]]
CONTAINER
# DESCRIPTION
**buildah export** exports the full or shortened container ID or container name
to STDOUT and should be redirected to a tar file.
# OPTIONS
**--help**
Print usage statement
**-o**, **--output**=""
Write to a file, instead of STDOUT
# EXAMPLES
Export the contents of the container called angry_bell to a tar file
called angry_bell.tar:
# buildah export angry_bell > angry_bell.tar
# buildah export --output=angry_bell-latest.tar angry_bell
# ls -sh angry_bell.tar
321M angry_bell.tar
# ls -sh angry_bell-latest.tar
321M angry_bell-latest.tar
# See also
**buildah-import(1)** to create an empty filesystem image
and import the contents of the tarball into it, then optionally tag it.
# HISTORY
July 2017, Originally copied from docker project docker-export.1.md

View File

@@ -8,13 +8,42 @@ buildah from - Creates a new working container, either from scratch or using a s
## DESCRIPTION
Creates a working container based upon the specified image name. If the
supplied image name is "scratch" a new empty container is created.
supplied image name is "scratch" a new empty container is created. Image names
uses a "transport":"details" format.
Multiple transports are supported:
**dir:**_path_
An existing local directory _path_ retrieving the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_ (Default)
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$HOME/.docker/config.json`, which is set e.g. using `(docker login)`.
**docker-archive:**_path_
An image is retrieved as a `docker load` formatted file.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
## RETURN VALUE
The container ID of the container that was created. On error, -1 is returned and errno is set.
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--name** *name*
A *name* for the working container
@@ -29,10 +58,9 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--registry** *registry*
**--quiet**
A prefix to prepend to the image name in order to pull the image. Default
value is "docker://"
If an image needs to be pulled from the registry, suppress progress output.
**--signature-policy** *signaturepolicy*
@@ -40,19 +68,23 @@ Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--quiet**
**--tls-verify** *bool-value*
If an image needs to be pulled from the registry, suppress progress output.
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
buildah from imagename --pull --registry "myregistry://"
buildah from imagename --pull
buildah from myregistry://imagename --pull
buildah from docker://myregistry.example.com/imagename --pull
buildah from imagename --signature-policy /etc/containers/policy.json
buildah from imagename --pull-always --registry "myregistry://" --name "mycontainer"
buildah from docker://myregistry.example.com/imagename --pull-always --name "mycontainer"
buildah from myregistry/myrepository/imagename:imagetag --tls-verify=false
buildah from myregistry/myrepository/imagename:imagetag --creds=myusername:mypassword --cert-dir ~/auth
## SEE ALSO
buildah(1)

View File

@@ -11,21 +11,23 @@ Displays locally stored images, their names, and their IDs.
## OPTIONS
**--json**
Display the output in JSON format.
**--digests**
Show image digests
Show the image digests.
**--filter, -f=[]**
Filter output based on conditions provided (default [])
Filter output based on conditions provided (default []). Valid
keywords are 'dangling', 'label', 'before' and 'since'.
**--format="TEMPLATE"**
Pretty-print images using a Go template. Will override --quiet
**--json**
Display the output in JSON format.
**--noheading, -n**
Omit the table headings from the listing of images.
@@ -36,6 +38,7 @@ Do not truncate output.
**--quiet, -q**
Displays only the image IDs.
## EXAMPLE
@@ -47,5 +50,7 @@ buildah images --quiet
buildah images -q --noheading --notruncate
buildah images --filter dangling=true
## SEE ALSO
buildah(1)

View File

@@ -7,7 +7,7 @@ buildah inspect - Display information about working containers or images.
**buildah** **inspect** [*options* [...] --] **ID**
## DESCRIPTION
Prints the low-level information on buildah object(s) (e.g. container, images) identified by name or ID. By default, this will render all results in a
Prints the low-level information on Buildah object(s) (e.g. container, images) identified by name or ID. By default, this will render all results in a
JSON array. If the container and image have the same name, this will return container JSON for unspecified type. If a format is specified,
the given template will be executed for each result.
@@ -19,7 +19,7 @@ Use *template* as a Go template when formatting the output.
Users of this option should be familiar with the [*text/template*
package](https://golang.org/pkg/text/template/) in the Go standard library, and
of internals of buildah's implementation.
of internals of Buildah's implementation.
**--type** *container* | *image*

View File

@@ -20,9 +20,6 @@ Image stored in local container/storage
Multiple transports are supported:
**atomic:**_hostname_**/**_namespace_**/**_stream_**:**_tag_
An image served by an OpenShift(Atomic) Registry server. The current OpenShift project and OpenShift Registry instance are by default read from `$HOME/.kube/config`, which is set e.g. using `(oc login)`.
**dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
@@ -43,19 +40,35 @@ Image stored in local container/storage
## OPTIONS
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
**--disable-compression, -D**
Don't compress copies of filesystem layers which will be pushed.
**--format, -f**
Manifest Type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--quiet**
When writing the output image, suppress progress output.
**--signature-policy**
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--quiet**
**--tls-verify** *bool-value*
When writing the output image, suppress progress output.
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
@@ -67,17 +80,19 @@ This example extracts the imageID image to a local directory in oci format.
`# buildah push imageID oci:/path/to/layout`
This example extracts the imageID image to a container registry named registry.example.com
This example extracts the imageID image to a container registry named registry.example.com.
`# buildah push imageID docker://registry.example.com/repository:tag`
This example extracts the imageID image and puts into the local docker container store
This example extracts the imageID image and puts it into the local docker container store.
`# buildah push imageID docker-daemon:image:tag`
This example extracts the imageID image and pushes it to an OpenShift(Atomic) registry
This example extracts the imageID image and puts it into the registry on the localhost while turning off tls verification.
`# buildah push --tls-verify=false imageID docker://localhost:5000/my-imageID`
`# buildah push imageID atomic:registry.example.com/company/image:tag`
This example extracts the imageID image and puts it into the registry on the localhost using credentials and certificates for authentication.
`# buildah push --cert-dir ~/auth --tls-verify=true --creds=username:password imageID docker://localhost:5000/my-imageID`
## SEE ALSO
buildah(1)

View File

@@ -1,4 +1,3 @@
## buildah-run "1" "March 2017" "buildah"
## NAME
@@ -11,16 +10,13 @@ buildah run - Run a command inside of the container.
Launches a container and runs the specified command in that container using the
container's root filesystem as a root filesystem, using configuration settings
inherited from the container's image or as specified using previous calls to
the *buildah config* command.
the *buildah config* command. If you execute *buildah run* and expect an
interactive shell, you need to specify the --tty flag.
## OPTIONS
**--tty**
By default a pseudo-TTY is allocated only when buildah's standard input is
attached to a pseudo-TTY. Setting the `--tty` option to `true` will cause a
pseudo-TTY to be allocated inside the container. Setting the `--tty` option to
`false` will prevent the pseudo-TTY from being allocated.
**--hostname**
Set the hostname inside of the running container.
**--runtime** *path*
@@ -32,6 +28,14 @@ Adds global flags for the container runtime. To list the supported flags, please
consult manpages of your selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`)
**--tty**
By default a pseudo-TTY is allocated only when buildah's standard input is
attached to a pseudo-TTY. Setting the `--tty` option to `true` will cause a
pseudo-TTY to be allocated inside the container connecting the user's "terminal"
with the stdin and stdout stream of the container. Setting the `--tty` option to
`false` will prevent the pseudo-TTY from being allocated.
**--volume, -v** *source*:*destination*:*flags*
Bind mount a location from the host into the container for its lifetime.
@@ -43,7 +47,13 @@ options to the command inside of the container
buildah run containerID -- ps -auxw
buildah run containerID --hostname myhost -- ps -auxw
buildah run containerID --runtime-flag --no-new-keyring -- ps -auxw
buildah run --tty containerID /bin/bash
buildah run --tty=false containerID ls /
## SEE ALSO
buildah(1)

View File

@@ -1,4 +1,4 @@
## buildah-version "1" "June 2017" "buildah"
## buildah-version "1" "June 2017" "Buildah"
## NAME
buildah version - Display the Buildah Version Information.

View File

@@ -1,14 +1,14 @@
## buildah "1" "March 2017" "buildah"
## NAME
buildah - A command line tool to facilitate working with containers and using them to build images.
Buildah - A command line tool to facilitate working with containers and using them to build images.
## SYNOPSIS
buildah [OPTIONS] COMMAND [ARG...]
## DESCRIPTION
The buildah package provides a command line tool which can be used to:
The Buildah package provides a command line tool which can be used to:
* Create a working container, either from scratch or using an image as a starting point.
* Mount a working container's root filesystem for manipulation.
@@ -18,6 +18,18 @@ The buildah package provides a command line tool which can be used to:
## OPTIONS
**--debug**
Print debugging information
**--default-mounts-file**
Path to default mounts file (default path: "/usr/share/containers/mounts.conf")
**--help, -h**
Show help
**--root** **value**
Storage root dir (default: "/var/lib/containers/storage")
@@ -34,14 +46,6 @@ Storage driver
Storage driver option
**--debug**
Print debugging information
**--help, -h**
Show help
**--version, -v**
Print the version
@@ -58,7 +62,6 @@ Print the version
| buildah-config(1) | Update image configuration settings. |
| buildah-containers(1) | List the working containers and their base images. |
| buildah-copy(1) | Copies the contents of a file, URL, or directory into a container's working directory. |
| buildah-export(1) | Export the contents of a container's filesystem as a tar archive |
| buildah-from(1) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| buildah-images(1) | List images in local storage. |
| buildah-inspect(1) | Inspects the configuration of a container or image |

236
docs/tutorials/01-intro.md Normal file
View File

@@ -0,0 +1,236 @@
# Buildah Tutorial 1
## Building OCI container images
The purpose of this tutorial is to demonstrate how Buildah can be used to build container images compliant with the [Open Container Initiative](https://www.opencontainers.org/) (OCI) [image specification](https://github.com/opencontainers/image-spec). Images can be built from existing images, from scratch, and using Dockerfiles. OCI images built using the Buildah command line tool (CLI) and the underlying OCI based technologies (e.g. [containers/image](https://github.com/containers/image) and [containers/storage](https://github.com/containers/storage)) are portable and can therefore run in a Docker environment.
In brief, the `containers/image` project provides mechanisms to copy, push, pull, inspect and sign container images. The `containers/storage` project provides mechanisms for storing filesystem layers, container images, and containers. Buildah is a CLI that takes advantage of these underlying projects and therefore allows you to build, move, and manage container images and containers.
The first step is to install Buildah. Run as root, because you will need root privileges to run Buildah commands:
# dnf -y install buildah
After installing Buildah we can see there are no images installed. The `buildah images` command will list all the images:
# buildah images
We can also see that there are no containers yet by running:
# buildah containers
When you build a working container from an existing image, Buildah defaults to appending '-working-container' to the image's name to construct a name for the container. The Buildah CLI conveniently returns the name of the new container. You can take advantage of this by assigning the returned value to a shell variable using standard shell assignment:
# container=$(buildah from fedora)
It is not required to assign a shell variable. Running `buildah from fedora` is sufficient. It just helps simplify commands later. To see the name of the container that we stored in the shell variable:
# echo $container
What can we do with this new container? Let's try running bash:
# buildah run $container bash
Notice we get a new shell prompt because we are running a bash shell inside of the container. It should be noted that `buildah run` is primarily intended for helping debug during the build process. A runtime like runc or a container interface like [CRI-O](https://github.com/kubernetes-incubator/cri-o) is more suited for starting containers in production.
Be sure to `exit` out of the container and let's try running something else:
# buildah run $container java
Oops. Java is not installed. A message containing something like the following was returned.
container_linux.go:274: starting container process caused "exec: \"java\": executable file not found in $PATH"
Let's try installing it using:
# buildah run $container -- dnf -y install java
The `--` syntax basically tells Buildah: there are no more `buildah run` command options after this point. The options after this point are for the shell inside the container. It is required when the command we specify includes command line options which are not meant for Buildah.
Now running `buildah run $container java` will show that Java has been installed. It will return the standard Java `Usage` output.
## Building a container from scratch
One of the advantages of using `buildah` to build OCI compliant container images is that you can easily build a container image from scratch and therefore exclude unnecessary packages from your image. E.g. most final container images for production probably don't need a package manager like `dnf`.
Let's build a container from scratch. The special "image" name "scratch" tells Buildah to create an empty container. The container has a small amount of metadata about the container but no real Linux content.
# newcontainer=$(buildah from scratch)
You can see this new empty container by running:
# buildah containers
You should see output similar to the following:
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
82af3b9a9488 * 3d85fcda5754 docker.io/library/fedora:latest fedora-working-container
ac8fa6be0f0a * scratch working-container
Its container name is working-container by default and it's stored in the `$newcontainer` variable. Notice the image name (IMAGE NAME) is "scratch". This just indicates that there is no real image yet: the container exists in containers/storage, but there is no corresponding image in containers/image. So when we run:
# buildah images
We don't see the image listed. There is no corresponding scratch image. It is an empty container.
So does this container actually do anything? Let's see.
# buildah run $newcontainer bash
Nope. This really is empty. The package installer `dnf` is not even inside this container. It's essentially an empty layer on top of the kernel. So what can be done with that? Thankfully there is a `buildah mount` command.
# scratchmnt=$(buildah mount $newcontainer)
By echoing `$scratchmnt` we can see the path for the [overlay image](https://wiki.archlinux.org/index.php/Overlay_filesystem), which gives you a link directly to the root file system of the container.
# echo $scratchmnt
/var/lib/containers/storage/overlay/b78d0e11957d15b5d1fe776293bd40a36c28825fb6cf76f407b4d0a95b2a200d/diff
Notice that the overlay image is under `/var/lib/containers/storage` as one would expect. (See above on `containers/storage` or for more information see [containers/storage](https://github.com/containers/storage).)
Now that we have a new empty container we can install or remove software packages or simply copy content into that container. So let's install `bash` and `coreutils` so that we can run bash scripts. This could easily be `nginx` or other packages needed for your container.
# dnf install --installroot $scratchmnt --releasever 26 bash coreutils --setopt install_weak_deps=false -y
Let's try it out (showing the prompt in this example to demonstrate the difference):
# buildah run $newcontainer bash
bash-4.4# cd /usr/bin
bash-4.4# ls
bash-4.4# exit
Notice we have a `/usr/bin` directory in the newcontainer's image layer. Let's first copy a simple file from our host into the container. Create a file called runecho.sh which contains the following:
#!/bin/bash
for i in `seq 0 9`;
do
echo "This is a new container from ipbabble [" $i "]"
done
Change the permissions on the file so that it can be run:
# chmod +x runecho.sh
With `buildah` files can be copied into the new image and we can also configure the image to run commands. Let's copy this new command into the container's `/usr/bin` directory and configure the container to run the command when the container is run:
# buildah copy $newcontainer ./runecho.sh /usr/bin
# buildah config --cmd /usr/bin/runecho.sh $newcontainer
Now run the container:
# buildah run $newcontainer
This is a new container from ipbabble [ 0 ]
This is a new container from ipbabble [ 1 ]
This is a new container from ipbabble [ 2 ]
This is a new container from ipbabble [ 3 ]
This is a new container from ipbabble [ 4 ]
This is a new container from ipbabble [ 5 ]
This is a new container from ipbabble [ 6 ]
This is a new container from ipbabble [ 7 ]
This is a new container from ipbabble [ 8 ]
This is a new container from ipbabble [ 9 ]
It works! Congratulations, you have built a new OCI container from scratch that uses bash scripting. Let's add some more configuration information.
# buildah config --created-by "ipbabble" $newcontainer
# buildah config --author "wgh at redhat.com @ipbabble" --label name=fedora26-bashecho $newcontainer
We can inspect the container's metadata using the `inspect` command:
# buildah inspect $newcontainer
We should probably unmount and commit the image:
# buildah unmount $newcontainer
# buildah commit $newcontainer fedora-bashecho
# buildah images
And you can see there is a new image called `fedora-bashecho:latest`. You can inspect the new image using:
# buildah inspect --type=image fedora-bashecho
Later, when you want to create a new container or containers from this image, you simply need to run `buildah from fedora-bashecho`. This will create a new container based on this image for you.
Now that you have the new image you can remove the scratch container called working-container:
# buildah rm $newcontainer
or
# buildah rm working-container
## OCI images built using Buildah are portable
Let's test if this new OCI image is really portable to another OCI technology like Docker. First you should install Docker and start it. Notice that Docker requires a daemon process (that's quite big) in order to run any client commands. Buildah has no daemon requirement.
# dnf -y install docker
# systemctl start docker
Let's copy that image from where containers/storage stores it to where the Docker daemon stores its images, so that we can run it using Docker. We can achieve this using `buildah push`. This copies the image to Docker's repository area which is located under `/var/lib/docker`. Docker's repository is managed by the Docker daemon. This needs to be explicitly stated by telling Buildah to push to the Docker repository protocol using `docker-daemon:`.
# buildah push fedora-bashecho docker-daemon:fedora-bashecho:latest
Under the covers, the containers/image library calls into the containers/storage library to read the image's contents, and sends them to the local Docker daemon. This can take a little while. And usually you won't need to do this. If you're using `buildah` you are probably not using Docker. This is just for demo purposes. Let's try it:
# docker run fedora-bashecho
This is a new container from ipbabble [ 0 ]
This is a new container from ipbabble [ 1 ]
This is a new container from ipbabble [ 2 ]
This is a new container from ipbabble [ 3 ]
This is a new container from ipbabble [ 4 ]
This is a new container from ipbabble [ 5 ]
This is a new container from ipbabble [ 6 ]
This is a new container from ipbabble [ 7 ]
This is a new container from ipbabble [ 8 ]
This is a new container from ipbabble [ 9 ]
OCI container images built with `buildah` are completely standard as expected. So now it might be time to run:
# dnf -y remove docker
## Using Dockerfiles with Buildah
What if you have been using Docker for a while and have some existing Dockerfiles? Not a problem. Buildah can build images using a Dockerfile. The `build-using-dockerfile` command, or `bud` for short, takes a Dockerfile as input and produces an OCI image.
Find one of your Dockerfiles or create a file called Dockerfile. Use the following example or some variation if you'd like:
# Base on the Fedora
FROM fedora:latest
MAINTAINER ipbabble email buildahboy@redhat.com # not a real email
# Update image and install httpd
RUN echo "Updating all fedora packages"; dnf -y update; dnf -y clean all
RUN echo "Installing httpd"; dnf -y install httpd
# Expose the default httpd port 80
EXPOSE 80
# Run the httpd
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
Now run `buildah bud` with the name of the Dockerfile and the name to be given to the created image (e.g. fedora-httpd):
# buildah bud -f Dockerfile -t fedora-httpd
or, because `buildah bud` defaults to Dockerfile (note the period at the end of the example):
# buildah bud -t fedora-httpd .
You will see all the steps of the Dockerfile executing. Afterwards `buildah images` will show you the new image. Now we need to create the container using `buildah from` and test it with `buildah run`:
# httpcontainer=$(buildah from fedora-httpd)
# buildah run $httpcontainer
While that container is running, in another shell run:
# curl localhost
You will see the standard Apache webpage.
Why not try modifying the Dockerfile? Do not install httpd; instead, ADD the runecho.sh file and have it run as the CMD.
## Congratulations
Well done. You have learned a lot about Buildah using this short tutorial. Hopefully you followed along with the examples and found them to be sufficient. Be sure to look at Buildah's man pages to see the other useful commands you can use. Have fun playing.
If you have any suggestions or issues please post them at the [ProjectAtomic Buildah Issues page](https://github.com/projectatomic/buildah/issues).
For more information on Buildah and how you might contribute please visit the [Buildah home page on Github](https://github.com/projectatomic/buildah).

17
examples/lighttpd.sh Executable file
View File

@@ -0,0 +1,17 @@
#!/bin/bash -x
ctr1=`buildah from ${1:-fedora}`
## Get all updates and install our minimal httpd server
buildah run $ctr1 -- dnf update -y
buildah run $ctr1 -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" $ctr1
## Run our server and expose the port
buildah config $ctr1 --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf"
buildah config $ctr1 --port 80
## Commit this container to an image name
buildah commit $ctr1 ${2:-$USER/lighttpd}

View File

@@ -2,6 +2,7 @@ package buildah
import (
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
@@ -9,7 +10,6 @@ import (
"path/filepath"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
@@ -23,6 +23,7 @@ import (
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/docker"
"github.com/sirupsen/logrus"
)
const (
@@ -68,32 +69,16 @@ type containerImageSource struct {
}
func (i *containerImageRef) NewImage(sc *types.SystemContext) (types.Image, error) {
src, err := i.NewImageSource(sc, nil)
src, err := i.NewImageSource(sc)
if err != nil {
return nil, err
}
return image.FromSource(src)
}
func selectManifestType(preferred string, acceptable, supported []string) string {
selected := preferred
for _, accept := range acceptable {
if preferred == accept {
return preferred
}
for _, support := range supported {
if accept == support {
selected = accept
}
}
}
return selected
}
func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestTypes []string) (src types.ImageSource, err error) {
func (i *containerImageRef) NewImageSource(sc *types.SystemContext) (src types.ImageSource, err error) {
// Decide which type of manifest and configuration output we're going to provide.
supportedManifestTypes := []string{v1.MediaTypeImageManifest, docker.V2S2MediaTypeManifest}
manifestType := selectManifestType(i.preferredManifestType, manifestTypes, supportedManifestTypes)
manifestType := i.preferredManifestType
// If it's not a format we support, return an error.
if manifestType != v1.MediaTypeImageManifest && manifestType != docker.V2S2MediaTypeManifest {
return nil, errors.Errorf("no supported manifest types (attempted to use %q, only know %q and %q)",
@@ -417,7 +402,7 @@ func (i *containerImageSource) Reference() types.ImageReference {
return i.ref
}
func (i *containerImageSource) GetSignatures() ([][]byte, error) {
func (i *containerImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
return nil, nil
}

View File

@@ -8,7 +8,6 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
@@ -22,6 +21,7 @@ import (
"github.com/openshift/imagebuilder"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
)
const (
@@ -51,8 +51,13 @@ type BuildOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// IgnoreUnrecognizedInstructions tells us to just log instructions we
// don't recognize, and try to keep going.
IgnoreUnrecognizedInstructions bool
@@ -90,6 +95,8 @@ type BuildOptions struct {
// specified, indicating that the shared, system-wide default policy
// should be used.
SignaturePolicyPath string
// SkipTLSVerify denotes whether TLS verification should not be used.
SkipTLSVerify bool
// ReportWriter is an io.Writer which will be used to report the
// progress of the (possible) pulling of the source image and the
// writing of the new image.
@@ -108,6 +115,7 @@ type Executor struct {
builder *buildah.Builder
pullPolicy int
registry string
transport string
ignoreUnrecognizedInstructions bool
quiet bool
runtime string
@@ -130,11 +138,12 @@ type Executor struct {
reportWriter io.Writer
}
func makeSystemContext(signaturePolicyPath string) *types.SystemContext {
func makeSystemContext(signaturePolicyPath string, skipTLSVerify bool) *types.SystemContext {
sc := &types.SystemContext{}
if signaturePolicyPath != "" {
sc.SignaturePolicyPath = signaturePolicyPath
}
sc.DockerInsecureSkipTLSVerify = skipTLSVerify
return sc
}
@@ -403,6 +412,7 @@ func NewExecutor(store storage.Store, options BuildOptions) (*Executor, error) {
contextDir: options.ContextDirectory,
pullPolicy: options.PullPolicy,
registry: options.Registry,
transport: options.Transport,
ignoreUnrecognizedInstructions: options.IgnoreUnrecognizedInstructions,
quiet: options.Quiet,
runtime: options.Runtime,
@@ -413,7 +423,7 @@ func NewExecutor(store storage.Store, options BuildOptions) (*Executor, error) {
outputFormat: options.OutputFormat,
additionalTags: options.AdditionalTags,
signaturePolicyPath: options.SignaturePolicyPath,
systemContext: makeSystemContext(options.SignaturePolicyPath),
systemContext: makeSystemContext(options.SignaturePolicyPath, options.SkipTLSVerify),
volumeCache: make(map[string]string),
volumeCacheInfo: make(map[string]os.FileInfo),
log: options.Log,
@@ -458,6 +468,7 @@ func (b *Executor) Prepare(ib *imagebuilder.Builder, node *parser.Node, from str
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
Transport: b.transport,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
}
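A minimal sketch of how a caller might set the new Transport and SkipTLSVerify fields when creating an Executor; the field names and NewExecutor's signature come from the hunks above, while the concrete values and the function newExecutorExample are illustrative assumptions.
package main

import (
	"os"

	"github.com/containers/storage"
	"github.com/projectatomic/buildah/imagebuildah"
)

func newExecutorExample(store storage.Store) (*imagebuildah.Executor, error) {
	options := imagebuildah.BuildOptions{
		ContextDirectory: ".",
		// If a bare image name cannot be parsed, Registry is prepended first;
		// if that still fails, Transport is prepended in front of Registry.
		// No separator is added implicitly.
		Registry:  "registry.fedoraproject.org/",
		Transport: "docker://",
		// SkipTLSVerify flows into the SystemContext's DockerInsecureSkipTLSVerify.
		SkipTLSVerify: false,
		ReportWriter:  os.Stderr,
	}
	return imagebuildah.NewExecutor(store, options)
}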

View File

@@ -9,10 +9,10 @@ import (
"path"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
)
func cloneToDirectory(url, dir string) error {

[Binary image diffs not shown: four new images (31 KiB, 31 KiB, 13 KiB, and 7.8 KiB; filenames not displayed), plus the new logo files logos/buildah-logo.png (27 KiB), logos/buildah-logo_med.png (12 KiB), logos/buildah-logo_sm.png (7.1 KiB), logos/buildah-no-text.png (29 KiB), and logos/buildah.svg (1870 lines, 88 KiB; diff suppressed because it is too large).]
212
new.go
View File

@@ -4,32 +4,149 @@ import (
"fmt"
"strings"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/opencontainers/selinux/go-selinux"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/openshift/imagebuilder"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
// BaseImageFakeName is the "name" of a source image which we interpret
// as "no image".
BaseImageFakeName = imagebuilder.NoBaseImageSpecifier
// DefaultTransport is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultTransport = "docker://"
)
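// Illustrative sketch (not part of this changeset): the candidate names that
// newBuilder below tries, in order, when a bare image name cannot be parsed
// directly: the name itself, then Registry+name, then Transport+Registry+name,
// with Transport falling back to DefaultTransport and no separators added
// implicitly.  The helper name candidateImageNames is hypothetical.
func candidateImageNames(image, registry, transport string) []string {
	if transport == "" {
		transport = DefaultTransport
	}
	return []string{image, registry + image, transport + registry + image}
}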
func reserveSELinuxLabels(store storage.Store, id string) error {
if selinux.GetEnabled() {
containers, err := store.Containers()
if err != nil {
return err
}
for _, c := range containers {
if id == c.ID {
continue
} else {
b, err := OpenBuilder(store, c.ID)
if err != nil {
if err == storage.ErrContainerUnknown {
continue
}
return err
}
// Prevent containers from using same MCS Label
if err := label.ReserveLabel(b.ProcessLabel); err != nil {
return err
}
}
}
}
return nil
}
func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
var ref types.ImageReference
var img *storage.Image
manifest := []byte{}
config := []byte{}
name := "working-container"
if options.FromImage == BaseImageFakeName {
options.FromImage = ""
}
image := options.FromImage
if options.Transport == "" {
options.Transport = DefaultTransport
}
systemContext := getSystemContext(options.SignaturePolicyPath)
imageID := ""
if image != "" {
var err error
if options.PullPolicy == PullAlways {
pulledReference, err2 := pullImage(store, options, systemContext)
if err2 != nil {
return nil, errors.Wrapf(err2, "error pulling image %q", image)
}
ref = pulledReference
}
if ref == nil {
srcRef, err2 := alltransports.ParseImageName(image)
if err2 != nil {
srcRef2, err3 := alltransports.ParseImageName(options.Registry + image)
if err3 != nil {
srcRef3, err4 := alltransports.ParseImageName(options.Transport + options.Registry + image)
if err4 != nil {
return nil, errors.Wrapf(err4, "error parsing image name %q", options.Transport+options.Registry+image)
}
srcRef2 = srcRef3
}
srcRef = srcRef2
}
destImage, err2 := localImageNameForReference(store, srcRef)
if err2 != nil {
return nil, errors.Wrapf(err2, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destImage == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
ref, err = is.Transport.ParseStoreReference(store, destImage)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", destImage)
}
image = destImage
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if errors.Cause(err) == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
return nil, errors.Wrapf(err, "no such image %q", transports.ImageName(ref))
}
ref2, err2 := pullImage(store, options, systemContext)
if err2 != nil {
return nil, errors.Wrapf(err2, "error pulling image %q", image)
}
ref = ref2
img, err = is.Transport.GetStoreImage(store, ref)
}
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", transports.ImageName(ref))
}
imageID = img.ID
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error instantiating image for %q", transports.ImageName(ref))
}
defer src.Close()
config, err = src.ConfigBlob()
if err != nil {
return nil, errors.Wrapf(err, "error reading image configuration for %q", transports.ImageName(ref))
}
manifest, _, err = src.Manifest()
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest for %q", transports.ImageName(ref))
}
}
name := "working-container"
if options.Container != "" {
name = options.Container
} else {
var err2 error
if image != "" {
prefix := image
s := strings.Split(prefix, "/")
@@ -46,69 +163,17 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
}
name = prefix + "-" + name
}
}
if name != "" {
var err error
suffix := 1
tmpName := name
for err != storage.ErrContainerUnknown {
_, err = store.Container(tmpName)
if err == nil {
for errors.Cause(err2) != storage.ErrContainerUnknown {
_, err2 = store.Container(tmpName)
if err2 == nil {
suffix++
tmpName = fmt.Sprintf("%s-%d", name, suffix)
}
}
name = tmpName
}
systemContext := getSystemContext(options.SignaturePolicyPath)
imageID := ""
if image != "" {
if options.PullPolicy == PullAlways {
err := pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
}
ref, err := is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if err == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
return nil, errors.Wrapf(err, "no such image %q", image)
}
err = pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
ref, err = is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
}
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", image)
}
imageID = img.ID
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error instantiating image")
}
defer src.Close()
config, err = src.ConfigBlob()
if err != nil {
return nil, errors.Wrapf(err, "error reading image configuration")
}
manifest, _, err = src.Manifest()
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest")
}
}
coptions := storage.ContainerOptions{}
container, err := store.CreateContainer("", []string{name}, imageID, "", "", &coptions)
if err != nil {
@@ -123,21 +188,32 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
}
}()
if err := reserveSELinuxLabels(store, container.ID); err != nil {
return nil, err
}
processLabel, mountLabel, err := label.InitLabels(nil)
if err != nil {
return nil, err
}
builder := &Builder{
store: store,
Type: containerType,
FromImage: image,
FromImageID: imageID,
Config: config,
Manifest: manifest,
Container: name,
ContainerID: container.ID,
ImageAnnotations: map[string]string{},
ImageCreatedBy: "",
store: store,
Type: containerType,
FromImage: image,
FromImageID: imageID,
Config: config,
Manifest: manifest,
Container: name,
ContainerID: container.ID,
ImageAnnotations: map[string]string{},
ImageCreatedBy: "",
ProcessLabel: processLabel,
MountLabel: mountLabel,
DefaultMountsFilePath: options.DefaultMountsFilePath,
}
if options.Mount {
_, err = builder.Mount("")
_, err = builder.Mount(mountLabel)
if err != nil {
return nil, errors.Wrapf(err, "error mounting build container")
}

4
ostree_tag.sh Executable file
View File

@@ -0,0 +1,4 @@
#!/bin/bash
if ! pkg-config ostree-1 2> /dev/null ; then
echo containers_image_ostree_stub
fi

84
pull.go
View File

@@ -1,58 +1,114 @@
package buildah
import (
"github.com/Sirupsen/logrus"
"strings"
cp "github.com/containers/image/copy"
"github.com/containers/image/docker/reference"
"github.com/containers/image/signature"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func pullImage(store storage.Store, options BuilderOptions, sc *types.SystemContext) error {
func localImageNameForReference(store storage.Store, srcRef types.ImageReference) (string, error) {
if srcRef == nil {
return "", errors.Errorf("reference to image is empty")
}
ref := srcRef.DockerReference()
if ref == nil {
name := srcRef.StringWithinTransport()
_, err := is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
if strings.LastIndex(name, "/") != -1 {
name = name[strings.LastIndex(name, "/")+1:]
_, err = is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
}
return "", errors.Errorf("reference to image %q is not a named reference", transports.ImageName(srcRef))
}
name := ""
if named, ok := ref.(reference.Named); ok {
name = named.Name()
if namedTagged, ok := ref.(reference.NamedTagged); ok {
name = name + ":" + namedTagged.Tag()
}
if canonical, ok := ref.(reference.Canonical); ok {
name = name + "@" + canonical.Digest().String()
}
}
if _, err := is.Transport.ParseStoreReference(store, name); err != nil {
return "", errors.Wrapf(err, "error parsing computed local image name %q", name)
}
return name, nil
}
func pullImage(store storage.Store, options BuilderOptions, sc *types.SystemContext) (types.ImageReference, error) {
name := options.FromImage
spec := name
if options.Registry != "" {
spec = options.Registry + spec
}
spec2 := spec
if options.Transport != "" {
spec2 = options.Transport + spec
}
srcRef, err := alltransports.ParseImageName(name)
if err != nil {
srcRef2, err2 := alltransports.ParseImageName(spec)
if err2 != nil {
return errors.Wrapf(err2, "error parsing image name %q", spec)
srcRef3, err3 := alltransports.ParseImageName(spec2)
if err3 != nil {
return nil, errors.Wrapf(err3, "error parsing image name %q", spec2)
}
srcRef2 = srcRef3
}
srcRef = srcRef2
}
if ref := srcRef.DockerReference(); ref != nil {
name = srcRef.DockerReference().Name()
if tagged, ok := srcRef.DockerReference().(reference.NamedTagged); ok {
name = name + ":" + tagged.Tag()
}
destName, err := localImageNameForReference(store, srcRef)
if err != nil {
return nil, errors.Wrapf(err, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destName == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
destRef, err := is.Transport.ParseStoreReference(store, name)
destRef, err := is.Transport.ParseStoreReference(store, destName)
if err != nil {
return errors.Wrapf(err, "error parsing full image name %q", name)
return nil, errors.Wrapf(err, "error parsing image name %q", destName)
}
policy, err := signature.DefaultPolicy(sc)
if err != nil {
return err
return nil, errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return nil, errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature polcy context: %v", err2)
}
}()
logrus.Debugf("copying %q to %q", spec, name)
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter))
return err
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter, options.SystemContext, nil, ""))
return destRef, err
}

41
run.go
View File

@@ -8,13 +8,13 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/ioutils"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/runtime-tools/generate"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh/terminal"
)
const (
@@ -100,6 +100,26 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
Options: []string{"rbind", "ro"},
})
}
cdir, err := b.store.ContainerDirectory(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error determining work directory for container %q", b.ContainerID)
}
// Add secrets mounts
mountsFiles := []string{OverrideMountsFile, b.DefaultMountsFilePath}
for _, file := range mountsFiles {
secretMounts, err := secretMounts(file, b.MountLabel, cdir)
if err != nil {
logrus.Warn("error mounting secrets, skipping...")
}
for _, mount := range secretMounts {
if haveMount(mount.Destination) {
continue
}
mounts = append(mounts, mount)
}
}
// Add temporary copies of the contents of volume locations at the
// volume locations, unless we already have something there.
for _, volume := range volumes {
@@ -107,20 +127,15 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
// Already mounting something there, no need to bother.
continue
}
cdir, err := b.store.ContainerDirectory(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error determining work directory for container %q", b.ContainerID)
}
subdir := digest.Canonical.FromString(volume).Hex()
volumePath := filepath.Join(cdir, "buildah-volumes", subdir)
logrus.Debugf("using %q for volume at %q", volumePath, volume)
// If we need to, initialize the volume path's initial contents.
if _, err = os.Stat(volumePath); os.IsNotExist(err) {
if err = os.MkdirAll(volumePath, 0755); err != nil {
return errors.Wrapf(err, "error creating directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
srcPath := filepath.Join(mountPoint, volume)
if err = archive.CopyWithTar(srcPath, volumePath); err != nil {
if err = copyFileWithTar(srcPath, volumePath); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "error populating directory %q for volume %q in container %q using contents of %q", volumePath, volume, b.ContainerID, srcPath)
}
@@ -182,7 +197,9 @@ func (b *Builder) Run(command []string, options RunOptions) error {
} else if b.Hostname() != "" {
g.SetHostname(b.Hostname())
}
mountPoint, err := b.Mount("")
g.SetProcessSelinuxLabel(b.ProcessLabel)
g.SetLinuxMountLabel(b.MountLabel)
mountPoint, err := b.Mount(b.MountLabel)
if err != nil {
return err
}
@@ -194,7 +211,7 @@ func (b *Builder) Run(command []string, options RunOptions) error {
g.SetRootPath(mountPoint)
switch options.Terminal {
case DefaultTerminal:
g.SetProcessTerminal(logrus.IsTerminal(os.Stdout))
g.SetProcessTerminal(terminal.IsTerminal(int(os.Stdout.Fd())))
case WithTerminal:
g.SetProcessTerminal(true)
case WithoutTerminal:
@@ -219,8 +236,8 @@ func (b *Builder) Run(command []string, options RunOptions) error {
if spec.Process.Cwd == "" {
spec.Process.Cwd = DefaultWorkingDir
}
if err = os.MkdirAll(filepath.Join(mountPoint, b.WorkDir()), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", b.WorkDir())
if err = os.MkdirAll(filepath.Join(mountPoint, spec.Process.Cwd), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", spec.Process.Cwd)
}
bindFiles := []string{"/etc/hosts", "/etc/resolv.conf"}

198
secrets.go Normal file
View File

@@ -0,0 +1,198 @@
package buildah
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
rspec "github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var (
// DefaultMountsFile holds the default mount paths in the form
// "host_path:container_path"
DefaultMountsFile = "/usr/share/containers/mounts.conf"
// OverrideMountsFile holds the default mount paths in the form
// "host_path:container_path" overriden by the user
OverrideMountsFile = "/etc/containers/mounts.conf"
)
// SecretData info
type SecretData struct {
Name string
Data []byte
}
func getMounts(filePath string) []string {
file, err := os.Open(filePath)
if err != nil {
logrus.Warnf("file %q not found, skipping...", filePath)
return nil
}
defer file.Close()
scanner := bufio.NewScanner(file)
if err = scanner.Err(); err != nil {
logrus.Warnf("error reading file %q, skipping...", filePath)
return nil
}
var mounts []string
for scanner.Scan() {
mounts = append(mounts, scanner.Text())
}
return mounts
}
// SaveTo saves secret data to given directory
func (s SecretData) SaveTo(dir string) error {
path := filepath.Join(dir, s.Name)
if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil && !os.IsExist(err) {
return err
}
return ioutil.WriteFile(path, s.Data, 0700)
}
func readAll(root, prefix string) ([]SecretData, error) {
path := filepath.Join(root, prefix)
data := []SecretData{}
files, err := ioutil.ReadDir(path)
if err != nil {
if os.IsNotExist(err) {
return data, nil
}
return nil, err
}
for _, f := range files {
fileData, err := readFile(root, filepath.Join(prefix, f.Name()))
if err != nil {
// If the file did not exist, it might be a dangling symlink,
// so ignore the error
if os.IsNotExist(err) {
continue
}
return nil, err
}
data = append(data, fileData...)
}
return data, nil
}
func readFile(root, name string) ([]SecretData, error) {
path := filepath.Join(root, name)
s, err := os.Stat(path)
if err != nil {
return nil, err
}
if s.IsDir() {
dirData, err2 := readAll(root, name)
if err2 != nil {
return nil, err2
}
return dirData, nil
}
bytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
return []SecretData{{Name: name, Data: bytes}}, nil
}
// getMountsMap separates the host and container paths
func getMountsMap(path string) (string, string, error) {
arr := strings.SplitN(path, ":", 2)
if len(arr) == 2 {
return arr[0], arr[1], nil
}
return "", "", errors.Errorf("unable to get host and container dir")
}
func getHostSecretData(hostDir string) ([]SecretData, error) {
var allSecrets []SecretData
hostSecrets, err := readAll(hostDir, "")
if err != nil {
return nil, errors.Wrapf(err, "failed to read secrets from %q", hostDir)
}
return append(allSecrets, hostSecrets...), nil
}
// secretMounts copies the contents of the host directory to the container directory
// and returns a list of mounts
func secretMounts(filePath, mountLabel, containerWorkingDir string) ([]rspec.Mount, error) {
var mounts []rspec.Mount
defaultMountsPaths := getMounts(filePath)
for _, path := range defaultMountsPaths {
hostDir, ctrDir, err := getMountsMap(path)
if err != nil {
return nil, err
}
// skip if the hostDir path doesn't exist
if _, err = os.Stat(hostDir); os.IsNotExist(err) {
logrus.Warnf("%q doesn't exist, skipping", hostDir)
continue
}
ctrDirOnHost := filepath.Join(containerWorkingDir, ctrDir)
if err = os.RemoveAll(ctrDirOnHost); err != nil {
return nil, fmt.Errorf("remove container directory failed: %v", err)
}
if err = os.MkdirAll(ctrDirOnHost, 0755); err != nil {
return nil, fmt.Errorf("making container directory failed: %v", err)
}
hostDir, err = resolveSymbolicLink(hostDir)
if err != nil {
return nil, err
}
data, err := getHostSecretData(hostDir)
if err != nil {
return nil, errors.Wrapf(err, "getting host secret data failed")
}
for _, s := range data {
err = s.SaveTo(ctrDirOnHost)
if err != nil {
return nil, err
}
}
err = label.Relabel(ctrDirOnHost, mountLabel, false)
if err != nil {
return nil, errors.Wrap(err, "error applying correct labels")
}
m := rspec.Mount{
Source: ctrDirOnHost,
Destination: ctrDir,
Type: "bind",
Options: []string{"bind"},
}
mounts = append(mounts, m)
}
return mounts, nil
}
// resolveSymbolicLink resolves a possible symlink path. If the path is a symlink, returns the resolved
// path; if not, returns the original path.
func resolveSymbolicLink(path string) (string, error) {
info, err := os.Lstat(path)
if err != nil {
return "", err
}
if info.Mode()&os.ModeSymlink != os.ModeSymlink {
return path, nil
}
return filepath.EvalSymlinks(path)
}

4
selinux_tag.sh Executable file
View File

@@ -0,0 +1,4 @@
#!/bin/bash
if pkg-config libselinux 2> /dev/null ; then
echo selinux
fi

View File

@@ -110,5 +110,6 @@ load helpers
buildah rmi $id
done
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" == "" ]
}

View File

@@ -9,6 +9,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -24,6 +25,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
target=alpine-image
@@ -37,6 +39,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -52,6 +55,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
target=alpine-image
@@ -65,6 +69,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -88,6 +93,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -100,6 +106,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -112,6 +119,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -124,6 +132,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -136,6 +145,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -155,6 +165,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -168,6 +179,7 @@ load helpers
buildah --debug=false images -q
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -177,6 +189,7 @@ load helpers
target3=so-many-scratch-images
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -t ${target2} -t ${target3} ${TESTSDIR}/bud/from-scratch
run buildah --debug=false images
[ "$status" -eq 0 ]
cid=$(buildah from ${target})
buildah rm ${cid}
cid=$(buildah from library/${target2})
@@ -185,6 +198,7 @@ load helpers
buildah rm ${cid}
buildah rmi -f $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -200,10 +214,12 @@ load helpers
run test -s $root/vol/subvol/subvolfile
[ "$status" -ne 0 ]
run stat -c %f $root/vol/subvol
[ "$status" -eq 0 ]
[ "$output" = 41ed ]
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -217,5 +233,6 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}

View File

@@ -1,22 +0,0 @@
#!/usr/bin/env bats
load helpers
@test "extract" {
touch ${TESTDIR}/reference-time-file
for source in scratch alpine; do
cid=$(buildah from --pull=true --signature-policy ${TESTSDIR}/policy.json ${source})
mnt=$(buildah mount $cid)
touch ${mnt}/export.file
tar -cf - --transform s,^./,,g -C ${mnt} . | tar tf - | grep -v "^./$" | sort > ${TESTDIR}/tar.output
buildah umount $cid
buildah export "$cid" > ${TESTDIR}/${source}.tar
buildah export -o ${TESTDIR}/${source}1.tar "$cid"
diff ${TESTDIR}/${source}.tar ${TESTDIR}/${source}1.tar
tar -tf ${TESTDIR}/${source}.tar | sort > ${TESTDIR}/export.output
diff ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/tar.output ${TESTDIR}/export.output
rm -f ${TESTDIR}/${source}1.tar ${TESTDIR}/${source}.tar
buildah rm "$cid"
done
}

114
tests/from.bats Normal file
View File

@@ -0,0 +1,114 @@
#!/usr/bin/env bats
load helpers
@test "commit-to-from-elsewhere" {
elsewhere=${TESTDIR}/elsewhere-img
mkdir -p ${elsewhere}
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid dir:${elsewhere}
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid dir:${elsewhere}
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
}
@test "from-authenticate-cert" {
mkdir -p ${TESTDIR}/auth
# Create certificate via openssl
openssl req -newkey rsa:4096 -nodes -sha256 -keyout ${TESTDIR}/auth/domain.key -x509 -days 2 -out ${TESTDIR}/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
# Skopeo and buildah both require *.cert file
cp ${TESTDIR}/auth/domain.crt ${TESTDIR}/auth/domain.cert
# Create a private registry that uses certificate and creds file
# docker run -d -p 5000:5000 --name registry -v ${TESTDIR}/auth:${TESTDIR}/auth:Z -e REGISTRY_HTTP_TLS_CERTIFICATE=${TESTDIR}/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=${TESTDIR}/auth/domain.key registry:2
# When more buildah auth is in place convert the below.
# docker pull alpine
# docker tag alpine localhost:5000/my-alpine
# docker push localhost:5000/my-alpine
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
# This should work
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
rm -rf ${TESTDIR}/auth
# This should fail
run ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
[ "$status" -ne 0 ]
# Clean up
# docker rm -f $(docker ps --all -q)
# docker rmi -f localhost:5000/my-alpine
# docker rmi -f $(docker images -q)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
}
@test "from-authenticate-cert-and-creds" {
mkdir -p ${TESTDIR}/auth
# Create creds and store in ${TESTDIR}/auth/htpasswd
# docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > ${TESTDIR}/auth/htpasswd
# Create certificate via openssl
openssl req -newkey rsa:4096 -nodes -sha256 -keyout ${TESTDIR}/auth/domain.key -x509 -days 2 -out ${TESTDIR}/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
# Skopeo and buildah both require *.cert file
cp ${TESTDIR}/auth/domain.crt ${TESTDIR}/auth/domain.cert
# Create a private registry that uses certificate and creds file
# docker run -d -p 5000:5000 --name registry -v ${TESTDIR}/auth:${TESTDIR}/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=${TESTDIR}/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=${TESTDIR}/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=${TESTDIR}/auth/domain.key registry:2
# When more buildah auth is in place convert the below.
# docker pull alpine
# docker login localhost:5000 --username testuser --password testpassword
# docker tag alpine localhost:5000/my-alpine
# docker push localhost:5000/my-alpine
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
# docker logout localhost:5000
# This should fail
run ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true)
[ "$status" -ne 0 ]
# This should work
# ctrid=$(buildah from localhost:5000/my-alpine --cert-dir ${TESTDIR}/auth --tls-verify true --creds=testuser:testpassword)
# Clean up
rm -rf ${TESTDIR}/auth
# docker rm -f $(docker ps --all -q)
# docker rmi -f localhost:5000/my-alpine
# docker rmi -f $(docker images -q)
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
}

View File

@@ -4,9 +4,10 @@ BUILDAH_BINARY=${BUILDAH_BINARY:-$(dirname ${BASH_SOURCE})/../buildah}
IMGTYPE_BINARY=${IMGTYPE_BINARY:-$(dirname ${BASH_SOURCE})/../imgtype}
TESTSDIR=${TESTSDIR:-$(dirname ${BASH_SOURCE})}
STORAGE_DRIVER=${STORAGE_DRIVER:-vfs}
PATH=$(dirname ${BASH_SOURCE})/..:${PATH}
function setup() {
suffix=$(dd if=/dev/urandom bs=12 count=1 status=none | base64 | tr +/ _.)
suffix=$(dd if=/dev/urandom bs=12 count=1 status=none | base64 | tr +/ABCDEFGHIJKLMNOPQRSTUVWXYZ _.abcdefghijklmnopqrstuvwxyz)
TESTDIR=${BATS_TMPDIR}/tmp.${suffix}
rm -fr ${TESTDIR}
mkdir -p ${TESTDIR}/{root,runroot}

View File

@@ -7,13 +7,13 @@ import (
"os"
"strings"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/projectatomic/buildah"
"github.com/projectatomic/buildah/docker"
"github.com/sirupsen/logrus"
)
func main() {

View File

@@ -18,3 +18,23 @@ load helpers
buildah rm "$cid"
done
}
@test "push with manifest type conversion" {
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah push --signature-policy ${TESTSDIR}/policy.json --format oci alpine dir:my-dir
echo "$output"
[ "$status" -eq 0 ]
manifest=$(cat my-dir/manifest.json)
run grep "application/vnd.oci.image.config.v1+json" <<< "$manifest"
echo "$output"
[ "$status" -eq 0 ]
run buildah push --signature-policy ${TESTSDIR}/policy.json --format v2s2 alpine dir:my-dir
echo "$output"
[ "$status" -eq 0 ]
run grep "application/vnd.docker.distribution.manifest.v2+json" my-dir/manifest.json
echo "$output"
[ "$status" -eq 0 ]
buildah rm "$cid"
buildah rmi alpine
rm -rf my-dir
}

58
tests/rpm.bats Normal file
View File

@@ -0,0 +1,58 @@
#!/usr/bin/env bats
load helpers
@test "rpm-build" {
if ! which runc ; then
skip
fi
# Build a container to use for building the binaries.
image=registry.fedoraproject.org/fedora:26
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
root=$(buildah --debug=false mount $cid)
commit=$(git log --format=%H -n 1)
shortcommit=$(echo ${commit} | cut -c-7)
mkdir -p ${root}/rpmbuild/{SOURCES,SPECS}
# Build the tarball.
(cd ..; git archive --format tar.gz --prefix=buildah-${commit}/ ${commit}) > ${root}/rpmbuild/SOURCES/buildah-${shortcommit}.tar.gz
# Update the .spec file with the commit ID.
sed s:REPLACEWITHCOMMITID:${commit}:g ${TESTSDIR}/../contrib/rpm/buildah.spec > ${root}/rpmbuild/SPECS/buildah.spec
# Install build dependencies and build binary packages.
buildah --debug=false run $cid -- dnf -y install 'dnf-command(builddep)' rpm-build
buildah --debug=false run $cid -- dnf -y builddep --spec rpmbuild/SPECS/buildah.spec
buildah --debug=false run $cid -- rpmbuild --define "_topdir /rpmbuild" -ba /rpmbuild/SPECS/buildah.spec
# Build a second new container.
cid2=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json registry.fedoraproject.org/fedora:26)
root2=$(buildah --debug=false mount $cid2)
# Copy the binary packages from the first container to the second one, and build a list of
# their filenames relative to the root of the second container.
rpms=
mkdir -p ${root2}/packages
for rpm in ${root}/rpmbuild/RPMS/*/*.rpm ; do
cp $rpm ${root2}/packages/
rpms="$rpms "/packages/$(basename $rpm)
done
# Install the binary packages into the second container.
buildah --debug=false run $cid2 -- dnf -y install $rpms
# Run the binary package and compare its self-identified version to the one we tried to build.
id=$(buildah --debug=false run $cid2 -- buildah version | awk '/^Git Commit:/ { print $NF }')
bv=$(buildah --debug=false run $cid2 -- buildah version | awk '/^Version:/ { print $NF }')
rv=$(buildah --debug=false run $cid2 -- rpm -q --queryformat '%{version}' buildah)
echo "short commit: $shortcommit"
echo "id: $id"
echo "buildah version: $bv"
echo "buildah rpm version: $rv"
test $shortcommit = $id
test $bv = $rv
# Clean up.
buildah --debug=false rm $cid $cid2
}

View File

@@ -6,34 +6,64 @@ load helpers
if ! which runc ; then
skip
fi
runc --version
createrandom ${TESTDIR}/randomfile
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config $cid --workingdir /tmp
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /tmp ]
buildah config $cid --workingdir /root
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /root ]
cp ${TESTDIR}/randomfile $root/tmp/
buildah run $cid cp /tmp/randomfile /tmp/other-randomfile
test -s $root/tmp/other-randomfile
cmp ${TESTDIR}/randomfile $root/tmp/other-randomfile
run buildah run $cid echo -n test
[ $status != 0 ]
run buildah run $cid echo -- -n test
[ $status != 0 ]
run buildah run $cid -- echo -n -- test
[ "$output" = "-- test" ]
run buildah run $cid -- echo -- -n test --
[ "$output" = "-- -n -- test --" ]
run buildah run $cid -- echo -n "test"
[ "$output" = "test" ]
buildah unmount $cid
buildah rm $cid
}
@test "run--args" {
if ! which runc ; then
skip
fi
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
# This should fail, because buildah run doesn't have a -n flag.
run buildah --debug=false run $cid echo -n test
[ "$status" -ne 0 ]
# This should succeed, because buildah run stops caring at the --, which is preserved as part of the command.
run buildah --debug=false run $cid echo -- -n test
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- -n test" ]
# This should succeed, because buildah run stops caring at the --, which is not part of the command.
run buildah --debug=false run $cid -- echo -n -- test
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- test" ]
# This should succeed, because buildah run stops caring at the --.
run buildah --debug=false run $cid -- echo -- -n test --
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "-- -n test --" ]
# This should succeed, because buildah run stops caring at the --.
run buildah --debug=false run $cid -- echo -n "test"
[ "$status" -eq 0 ]
echo :"$output":
[ "$output" = "test" ]
buildah rm $cid
}
@test "run-cmd" {
if ! which runc ; then
skip
@@ -44,26 +74,32 @@ load helpers
buildah config $cid --entrypoint ""
buildah config $cid --cmd pwd
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = /tmp ]
buildah config $cid --entrypoint echo
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = pwd ]
buildah config $cid --cmd ""
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = "" ]
buildah config $cid --entrypoint ""
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
buildah config $cid --cmd echo
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
buildah config $cid --entrypoint echo
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
buildah rm $cid
@@ -82,8 +118,10 @@ load helpers
root=$(buildah mount $cid)
testuser=jimbo
testbogususer=nosuchuser
testgroup=jimbogroup
testuid=$RANDOM
testotheruid=$RANDOM
testgid=$RANDOM
testgroupid=$RANDOM
echo "$testuser:x:$testuid:$testgid:Jimbo Jenkins:/home/$testuser:/bin/sh" >> $root/etc/passwd
@@ -92,52 +130,116 @@ load helpers
buildah config $cid -u ""
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = 0 ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = 0 ]
buildah config $cid -u ${testuser}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgid ]
buildah config $cid -u ${testuid}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgid ]
buildah config $cid -u ${testuser}:${testgroup}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testuid}:${testgroup}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testotheruid}:${testgroup}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testotheruid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testotheruid}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testotheruid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = 0 ]
buildah config $cid -u ${testuser}:${testgroupid}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testuid}:${testgroupid}
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
[ "$output" = $testuid ]
run buildah --debug=false run -- $cid id -g
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testbogususer}
run buildah --debug=false run -- $cid id -u
[ "$status" -ne 0 ]
[[ "$output" =~ "unknown user" ]]
run buildah --debug=false run -- $cid id -g
[ "$status" -ne 0 ]
[[ "$output" =~ "unknown user" ]]
ln -vsf /etc/passwd $root/etc/passwd
buildah config $cid -u ${testuser}:${testgroup}
run buildah --debug=false run -- $cid id -u
echo "$output"
[ "$status" -ne 0 ]
[[ "$output" =~ "unknown user" ]]
buildah unmount $cid
buildah rm $cid
}
@test "run --hostname" {
if ! which runc ; then
skip
fi
runc --version
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah --debug=false run $cid hostname
echo "$output"
[ "$status" -eq 0 ]
[ "$output" != "foobar" ]
run buildah --debug=false run --hostname foobar $cid hostname
echo "$output"
[ "$status" -eq 0 ]
[ "$output" = "foobar" ]
buildah rm $cid
}

33
tests/secrets.bats Normal file
View File

@@ -0,0 +1,33 @@
#!/usr/bin/env bats
load helpers
function setup() {
mkdir $TESTSDIR/containers
touch $TESTSDIR/mounts.conf
MOUNTS_PATH=$TESTSDIR/containers/mounts.conf
echo "$TESTSDIR/rhel/secrets:/run/secrets" > $MOUNTS_PATH
mkdir $TESTSDIR/rhel
mkdir $TESTSDIR/rhel/secrets
touch $TESTSDIR/rhel/secrets/test.txt
echo "Testing secrets mounts. I am mounted!" > $TESTSDIR/rhel/secrets/test.txt
}
@test "bind secrets mounts to container" {
if ! which runc ; then
skip
fi
runc --version
cid=$(buildah --default-mounts-file "$MOUNTS_PATH" --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah --debug=false run $cid ls /run
echo "$output"
[ "$status" -eq 0 ]
mounts="$output"
run grep "secrets" <<< "$mounts"
echo "$output"
[ "$status" -eq 0 ]
buildah rm $cid
rm -rf $TESTSDIR/containers
rm -rf $TESTSDIR/rhel
}

26
tests/selinux.bats Normal file
View File

@@ -0,0 +1,26 @@
#!/usr/bin/env bats
load helpers
@test "selinux test" {
if ! which selinuxenabled ; then
skip "No selinuxenabled"
elif ! /usr/sbin/selinuxenabled; then
skip "selinux is disabled"
fi
image=alpine
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json $image)
firstlabel=$(buildah --debug=false run $cid cat /proc/1/attr/current)
run buildah --debug=false run $cid cat /proc/1/attr/current
[ "$status" -eq 0 ]
[ "$output" == $firstlabel ]
cid1=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json $image)
run buildah --debug=false run $cid1 cat /proc/1/attr/current
[ "$status" -eq 0 ]
[ "$output" != $firstlabel ]
buildah rm $cid
buildah rm $cid1
}

View File

@@ -0,0 +1,181 @@
#!/bin/bash
# test_buildah_authentication
# A script to be run at the command line with Buildah installed.
# This will test the code and should be run with this command:
#
# /bin/bash -v test_buildah_authentication.sh
########
# Create creds and store in /root/auth/htpasswd
########
registry=$(buildah from registry:2)
buildah run $registry -- htpasswd -Bbn testuser testpassword > /root/auth/htpasswd
########
# Create certificate via openssl
########
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /root/auth/domain.key -x509 -days 2 -out /root/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
########
# Skopeo and buildah both require *.cert file
########
cp /root/auth/domain.crt /root/auth/domain.cert
########
# Create a private registry that uses certificate and creds file
########
docker run -d -p 5000:5000 --name registry -v /root/auth:/root/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/root/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=/root/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=/root/auth/domain.key registry:2
########
# Pull alpine
########
buildah from alpine
buildah containers
buildah images
########
# Log into docker on local repo
########
docker login localhost:5000 --username testuser --password testpassword
########
# Push to the local repo using cached Docker creds.
########
buildah push --cert-dir /root/auth alpine docker://localhost:5000/my-alpine
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Buildah pulls using certs and cached Docker creds.
# Should show two alpine images and containers when done.
########
ctrid=$(buildah from localhost:5000/my-alpine --cert-dir /root/auth)
buildah containers
buildah images
########
# Clean up Buildah
########
buildah rm $ctrid
buildah rmi -f localhost:5000/my-alpine:latest
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Log out of local repo
########
docker logout localhost:5000
########
# Push using only certs, this should fail.
########
buildah push --cert-dir /root/auth --tls-verify=true alpine docker://localhost:5000/my-alpine
########
# Push using creds, certs and no transport, this should work.
########
buildah push --cert-dir ~/auth --tls-verify=true --creds=testuser:testpassword alpine localhost:5000/my-alpine
########
# No creds anywhere, only the certificate, this should fail.
########
buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true
########
# Log in with creds, this should work
########
ctrid=$(buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword)
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Clean up Buildah
########
buildah rm $ctrid
buildah rmi -f $(buildah --debug=false images -q)
########
# Pull alpine
########
buildah from alpine
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Let's test commit
########
########
# No credentials, this should fail.
########
buildah commit --cert-dir /root/auth --tls-verify=true alpine-working-container docker://localhost:5000/my-commit-alpine
########
# This should work, writing image in registry. Will not create an image locally.
########
buildah commit --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword alpine-working-container docker://localhost:5000/my-commit-alpine
########
# Pull the new image that we just committed
########
buildah from localhost:5000/my-commit-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Clean up
########
rm -rf ${TESTDIR}/auth
docker rm -f $(docker ps --all -q)
docker rmi -f $(docker images -q)
buildah rm $(buildah containers -q)
buildah rmi -f $(buildah --debug=false images -q)

View File

@@ -3,6 +3,10 @@ set -e
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")"
# Default to using /var/tmp for test space, since it's more likely to support
# labels than /tmp, which is often on tmpfs.
export TMPDIR=${TMPDIR:-/var/tmp}
# Load the helpers.
. helpers.bash

View File

@@ -8,6 +8,7 @@ if ! which gometalinter.v1 > /dev/null 2> /dev/null ; then
exit 1
fi
exec gometalinter.v1 \
--enable-gc \
--exclude='error return value not checked.*(Close|Log|Print).*\(errcheck\)$' \
--exclude='.*_test\.go:.*error return value not checked.*\(errcheck\)$' \
--exclude='duplicate of.*_test.go.*\(dupl\)$' \
@@ -16,5 +17,5 @@ exec gometalinter.v1 \
--disable=gas \
--disable=aligncheck \
--cyclo-over=40 \
--deadline=240s \
--deadline=480s \
--tests "$@"

View File

@@ -2,10 +2,16 @@
load helpers
@test "buildah version test" {
run buildah version
echo "$output"
[ "$status" -eq 0 ]
}
@test "buildah version up to date in .spec file" {
run buildah version
[ "$status" -eq 0 ]
bversion=$(echo "$output" | awk '/^Version:/ { print $NF }')
rversion=$(cat ${TESTSDIR}/../contrib/rpm/buildah.spec | awk '/^Version:/ { print $NF }')
test "$bversion" = "$rversion"
}

27
user.go
View File

@@ -26,39 +26,34 @@ func getUser(rootdir, userspec string) (specs.User, error) {
uid64, uerr := strconv.ParseUint(userspec, 10, 32)
if uerr == nil && groupspec == "" {
// We parsed the user name as a number, and there's no group
// component, so we need to look up the user's primary GID.
// component, so try to look up the primary GID of the user who
// has this UID.
var name string
name, gid64, gerr = lookupGroupForUIDInContainer(rootdir, uid64)
if gerr == nil {
userspec = name
} else {
if userrec, err := user.LookupId(userspec); err == nil {
gid64, gerr = strconv.ParseUint(userrec.Gid, 10, 32)
userspec = userrec.Name
}
// Leave userspec alone, but swallow the error and just
// use GID 0.
gid64 = 0
gerr = nil
}
}
if uerr != nil {
// The user ID couldn't be parsed as a number, so try to look
// up the user's UID and primary GID.
uid64, gid64, uerr = lookupUserInContainer(rootdir, userspec)
gerr = uerr
}
if uerr != nil {
if userrec, err := user.Lookup(userspec); err == nil {
uid64, uerr = strconv.ParseUint(userrec.Uid, 10, 32)
gid64, gerr = strconv.ParseUint(userrec.Gid, 10, 32)
}
}
if groupspec != "" {
// We have a group name or number, so parse it.
gid64, gerr = strconv.ParseUint(groupspec, 10, 32)
if gerr != nil {
// The group couldn't be parsed as a number, so look up
// the group's GID.
gid64, gerr = lookupGroupInContainer(rootdir, groupspec)
}
if gerr != nil {
if group, err := user.LookupGroup(groupspec); err == nil {
gid64, gerr = strconv.ParseUint(group.Gid, 10, 32)
}
}
}
if uerr == nil && gerr == nil {

View File

@@ -1,4 +1,4 @@
// +build !cgo !linux
// +build !linux
package buildah

235
user_linux.go Normal file
View File

@@ -0,0 +1,235 @@
// +build linux
package buildah
import (
"bufio"
"flag"
"fmt"
"io"
"os"
"os/exec"
"os/user"
"strconv"
"strings"
"sync"
"github.com/containers/storage/pkg/reexec"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
)
const (
openChrootedCommand = Package + "-open"
)
func init() {
reexec.Register(openChrootedCommand, openChrootedFileMain)
}
func openChrootedFileMain() {
status := 0
flag.Parse()
if len(flag.Args()) < 1 {
os.Exit(1)
}
// Our first parameter is the directory to chroot into.
if err := unix.Chdir(flag.Arg(0)); err != nil {
fmt.Fprintf(os.Stderr, "chdir(): %v", err)
os.Exit(1)
}
if err := unix.Chroot(flag.Arg(0)); err != nil {
fmt.Fprintf(os.Stderr, "chroot(): %v", err)
os.Exit(1)
}
// Anything else is a file we want to dump out.
for _, filename := range flag.Args()[1:] {
f, err := os.Open(filename)
if err != nil {
fmt.Fprintf(os.Stderr, "open(%q): %v", filename, err)
status = 1
continue
}
_, err = io.Copy(os.Stdout, f)
if err != nil {
fmt.Fprintf(os.Stderr, "read(%q): %v", filename, err)
}
f.Close()
}
os.Exit(status)
}
func openChrootedFile(rootdir, filename string) (*exec.Cmd, io.ReadCloser, error) {
// The child process expects a chroot and one or more filenames that
// will be consulted relative to the chroot directory and concatenated
// to its stdout. Start it up.
cmd := reexec.Command(openChrootedCommand, rootdir, filename)
stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, nil, err
}
err = cmd.Start()
if err != nil {
return nil, nil, err
}
// Hand back the child's stdout for reading, and the child to reap.
return cmd, stdout, nil
}
var (
lookupUser, lookupGroup sync.Mutex
)
type lookupPasswdEntry struct {
name string
uid uint64
gid uint64
}
type lookupGroupEntry struct {
name string
gid uint64
}
func readWholeLine(rc *bufio.Reader) ([]byte, error) {
line, isPrefix, err := rc.ReadLine()
if err != nil {
return nil, err
}
for isPrefix {
// We didn't get a whole line. Keep reading chunks until we find an end of line, and discard them.
for isPrefix {
logrus.Debugf("discarding partial line %q", string(line))
_, isPrefix, err = rc.ReadLine()
if err != nil {
return nil, err
}
}
// That last read was the end of a line, so now we try to read the (beginning of?) the next line.
line, isPrefix, err = rc.ReadLine()
if err != nil {
return nil, err
}
}
return line, nil
}
func parseNextPasswd(rc *bufio.Reader) *lookupPasswdEntry {
line, err := readWholeLine(rc)
if err != nil {
return nil
}
fields := strings.Split(string(line), ":")
if len(fields) < 7 {
return nil
}
uid, err := strconv.ParseUint(fields[2], 10, 32)
if err != nil {
return nil
}
gid, err := strconv.ParseUint(fields[3], 10, 32)
if err != nil {
return nil
}
return &lookupPasswdEntry{
name: fields[0],
uid: uid,
gid: gid,
}
}
func parseNextGroup(rc *bufio.Reader) *lookupGroupEntry {
line, err := readWholeLine(rc)
if err != nil {
return nil
}
fields := strings.Split(string(line), ":")
if len(fields) < 4 {
return nil
}
gid, err := strconv.ParseUint(fields[2], 10, 32)
if err != nil {
return nil
}
return &lookupGroupEntry{
name: fields[0],
gid: gid,
}
}
func lookupUserInContainer(rootdir, username string) (uid uint64, gid uint64, err error) {
cmd, f, err := openChrootedFile(rootdir, "/etc/passwd")
if err != nil {
return 0, 0, err
}
defer func() {
_ = cmd.Wait()
}()
rc := bufio.NewReader(f)
defer f.Close()
lookupUser.Lock()
defer lookupUser.Unlock()
pwd := parseNextPasswd(rc)
for pwd != nil {
if pwd.name != username {
pwd = parseNextPasswd(rc)
continue
}
return pwd.uid, pwd.gid, nil
}
return 0, 0, user.UnknownUserError(fmt.Sprintf("error looking up user %q", username))
}
func lookupGroupForUIDInContainer(rootdir string, userid uint64) (username string, gid uint64, err error) {
cmd, f, err := openChrootedFile(rootdir, "/etc/passwd")
if err != nil {
return "", 0, err
}
defer func() {
_ = cmd.Wait()
}()
rc := bufio.NewReader(f)
defer f.Close()
lookupUser.Lock()
defer lookupUser.Unlock()
pwd := parseNextPasswd(rc)
for pwd != nil {
if pwd.uid != userid {
pwd = parseNextPasswd(rc)
continue
}
return pwd.name, pwd.gid, nil
}
return "", 0, user.UnknownUserError(fmt.Sprintf("error looking up user with UID %d", userid))
}
func lookupGroupInContainer(rootdir, groupname string) (gid uint64, err error) {
cmd, f, err := openChrootedFile(rootdir, "/etc/group")
if err != nil {
return 0, err
}
defer func() {
_ = cmd.Wait()
}()
rc := bufio.NewReader(f)
defer f.Close()
lookupGroup.Lock()
defer lookupGroup.Unlock()
grp := parseNextGroup(rc)
for grp != nil {
if grp.name != groupname {
grp = parseNextGroup(rc)
continue
}
return grp.gid, nil
}
return 0, user.UnknownGroupError(fmt.Sprintf("error looking up group %q", groupname))
}

View File

@@ -1,124 +0,0 @@
// +build cgo
// +build linux
package buildah
// #include <sys/types.h>
// #include <grp.h>
// #include <pwd.h>
// #include <stdlib.h>
// #include <stdio.h>
// #include <string.h>
// typedef FILE * pFILE;
import "C"
import (
"fmt"
"os/user"
"path/filepath"
"sync"
"syscall"
"unsafe"
"github.com/pkg/errors"
)
func fopenContainerFile(rootdir, filename string) (C.pFILE, error) {
var st, lst syscall.Stat_t
ctrfile := filepath.Join(rootdir, filename)
cctrfile := C.CString(ctrfile)
defer C.free(unsafe.Pointer(cctrfile))
mode := C.CString("r")
defer C.free(unsafe.Pointer(mode))
f, err := C.fopen(cctrfile, mode)
if f == nil || err != nil {
return nil, errors.Wrapf(err, "error opening %q", ctrfile)
}
if err = syscall.Fstat(int(C.fileno(f)), &st); err != nil {
return nil, errors.Wrapf(err, "fstat(%q)", ctrfile)
}
if err = syscall.Lstat(ctrfile, &lst); err != nil {
return nil, errors.Wrapf(err, "lstat(%q)", ctrfile)
}
if st.Dev != lst.Dev || st.Ino != lst.Ino {
return nil, errors.Errorf("%q is not a regular file", ctrfile)
}
return f, nil
}
var (
lookupUser, lookupGroup sync.Mutex
)
func lookupUserInContainer(rootdir, username string) (uint64, uint64, error) {
name := C.CString(username)
defer C.free(unsafe.Pointer(name))
f, err := fopenContainerFile(rootdir, "/etc/passwd")
if err != nil {
return 0, 0, err
}
defer C.fclose(f)
lookupUser.Lock()
defer lookupUser.Unlock()
pwd := C.fgetpwent(f)
for pwd != nil {
if C.strcmp(pwd.pw_name, name) != 0 {
pwd = C.fgetpwent(f)
continue
}
return uint64(pwd.pw_uid), uint64(pwd.pw_gid), nil
}
return 0, 0, user.UnknownUserError(fmt.Sprintf("error looking up user %q", username))
}
func lookupGroupForUIDInContainer(rootdir string, userid uint64) (string, uint64, error) {
f, err := fopenContainerFile(rootdir, "/etc/passwd")
if err != nil {
return "", 0, err
}
defer C.fclose(f)
lookupUser.Lock()
defer lookupUser.Unlock()
pwd := C.fgetpwent(f)
for pwd != nil {
if uint64(pwd.pw_uid) != userid {
pwd = C.fgetpwent(f)
continue
}
return C.GoString(pwd.pw_name), uint64(pwd.pw_gid), nil
}
return "", 0, user.UnknownUserError(fmt.Sprintf("error looking up user with UID %d", userid))
}
func lookupGroupInContainer(rootdir, groupname string) (uint64, error) {
name := C.CString(groupname)
defer C.free(unsafe.Pointer(name))
f, err := fopenContainerFile(rootdir, "/etc/group")
if err != nil {
return 0, err
}
defer C.fclose(f)
lookupGroup.Lock()
defer lookupGroup.Unlock()
grp := C.fgetgrent(f)
for grp != nil {
if C.strcmp(grp.gr_name, name) != 0 {
grp = C.fgetgrent(f)
continue
}
return uint64(grp.gr_gid), nil
}
return 0, user.UnknownGroupError(fmt.Sprintf("error looking up group %q", groupname))
}

View File

@@ -1,9 +1,17 @@
package buildah
import (
"github.com/containers/storage/pkg/chrootarchive"
"github.com/containers/storage/pkg/reexec"
)
var (
// CopyWithTar defines the copy method to use.
copyWithTar = chrootarchive.NewArchiver(nil).CopyWithTar
copyFileWithTar = chrootarchive.NewArchiver(nil).CopyFileWithTar
untarPath = chrootarchive.NewArchiver(nil).UntarPath
)
// InitReexec is a wrapper for reexec.Init(). It should be called at
// the start of main(), and if it returns true, main() should return
// immediately.

View File

@@ -1,14 +1,15 @@
github.com/BurntSushi/toml master
github.com/Nvveen/Gotty master
github.com/blang/semver master
github.com/containers/image c2a797dfe5bb4a9dd7f48332ce40c6223ffba492
github.com/containers/storage 105f7c77aef0c797429e41552743bf5b03b63263
github.com/docker/distribution master
github.com/docker/docker 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/containers/image f950aa3529148eb0dea90888c24b6682da641b13
github.com/containers/storage d7921c6facc516358070a1306689eda18adaa20a
github.com/docker/distribution 5f6282db7d65e6d72ad7c2cc66310724a57be716
github.com/docker/docker 30eb4d8cdc422b023d5f11f29a82ecb73554183b
github.com/docker/engine-api master
github.com/docker/go-connections e15c02316c12de00874640cd76311849de2aeed5
github.com/docker/go-units master
github.com/docker/libtrust master
github.com/docker/go-connections 3ede32e2033de7505e6500d6c868c2b9ed9f169d
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/docker/docker-credential-helpers d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1
github.com/docker/libtrust aabc10ec26b754e797f9028f4589c5b7bd90dc20
github.com/fsouza/go-dockerclient master
github.com/ghodss/yaml master
github.com/golang/glog master
@@ -19,23 +20,23 @@ github.com/imdario/mergo master
github.com/mattn/go-runewidth master
github.com/mattn/go-shellwords master
github.com/mistifyio/go-zfs master
github.com/moby/moby 0f9ec7e47072b0c2e954b5b821bde5c1fe81bfa7
github.com/moby/moby f8806b18b4b92c5e1980f6e11c917fad201cd73c
github.com/mtrmac/gpgme master
github.com/opencontainers/go-digest aa2ec055abd10d26d539eb630a92241b781ce4bc
github.com/opencontainers/image-spec v1.0.0
github.com/opencontainers/runc master
github.com/opencontainers/runtime-spec v1.0.0
github.com/opencontainers/runtime-tools 2d270b8764c02228eeb13e36f076f5ce6f2e3591
github.com/opencontainers/selinux ba1aefe8057f1d0cfb8e88d0ec1dc85925ef987d
github.com/opencontainers/runtime-tools master
github.com/opencontainers/selinux b29023b86e4a69d1b46b7e7b4e2b6fda03f0b9cd
github.com/openshift/imagebuilder master
github.com/ostreedev/ostree-go aeb02c6b6aa2889db3ef62f7855650755befd460
github.com/pborman/uuid master
github.com/pkg/errors master
github.com/Sirupsen/logrus master
github.com/sirupsen/logrus master
github.com/syndtr/gocapability master
github.com/tchap/go-patricia master
github.com/urfave/cli master
github.com/vbatts/tar-split master
github.com/vbatts/tar-split v0.10.2
golang.org/x/crypto master
golang.org/x/net master
golang.org/x/sys master
@@ -45,3 +46,11 @@ gopkg.in/yaml.v2 cd8b52f8269e0feb286dfeef29f8fe4d5b397e0b
k8s.io/apimachinery master
k8s.io/client-go master
k8s.io/kubernetes master
github.com/hashicorp/go-multierror master
github.com/hashicorp/errwrap master
github.com/xeipuuv/gojsonschema master
github.com/xeipuuv/gojsonreference master
github.com/containerd/continuity master
github.com/gogo/protobuf master
github.com/xeipuuv/gojsonpointer master
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac

View File

@@ -1,10 +0,0 @@
// +build appengine
package logrus
import "io"
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal(f io.Writer) bool {
return true
}

View File

@@ -1,10 +0,0 @@
// +build darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import "syscall"
const ioctlReadTermios = syscall.TIOCGETA
type Termios syscall.Termios

View File

@@ -1,28 +0,0 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import (
"io"
"os"
"syscall"
"unsafe"
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal(f io.Writer) bool {
var termios Termios
switch v := f.(type) {
case *os.File:
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(v.Fd()), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0
default:
return false
}
}

View File

@@ -1,21 +0,0 @@
// +build solaris,!appengine
package logrus
import (
"io"
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(f io.Writer) bool {
switch v := f.(type) {
case *os.File:
_, err := unix.IoctlGetTermios(int(v.Fd()), unix.TCGETA)
return err == nil
default:
return false
}
}

View File

@@ -1,33 +0,0 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows,!appengine
package logrus
import (
"io"
"os"
"syscall"
"unsafe"
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal(f io.Writer) bool {
switch v := f.(type) {
case *os.File:
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(v.Fd()), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
default:
return false
}
}

vendor/github.com/containers/image/README.md

@@ -0,0 +1,79 @@
[![GoDoc](https://godoc.org/github.com/containers/image?status.svg)](https://godoc.org/github.com/containers/image) [![Build Status](https://travis-ci.org/containers/image.svg?branch=master)](https://travis-ci.org/containers/image)
=
`image` is a set of Go libraries aimed at working in various ways with
containers' images and container image registries.
The containers/image library allows applications to pull and push images from
container image registries, like the upstream docker registry. It also
implements "simple image signing".
The containers/image library also allows you to inspect a repository on a
container registry without pulling down the image. This means it fetches the
repository's manifest and can show you a `docker inspect`-like JSON
output about a whole repository or a tag. This library, in contrast to `docker
inspect`, helps you gather useful information about a repository or a tag
without requiring you to run `docker pull`.
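
A minimal Go sketch of that inspection path, assuming the `alltransports.ParseImageName`, `NewImage`, and `Inspect` entry points as vendored in this tree; the image name below is only a placeholder:

```go
package main

import (
	"fmt"
	"os"

	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

func main() {
	// Placeholder image name; any transport known to alltransports works here.
	ref, err := alltransports.ParseImageName("docker://docker.io/library/alpine:latest")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	img, err := ref.NewImage(&types.SystemContext{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer img.Close()

	// Inspect reads only the manifest and config blob; no layers are pulled.
	info, err := img.Inspect()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(info.Architecture, info.Os, len(info.Layers))
}
```
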
The containers/image library also allows you to translate from one image format
to another, for example docker container images to OCI images. It also allows
you to copy container images between various registries, possibly converting
them as necessary, and to sign and verify images.
## Command-line usage
The containers/image project is only a library with no user interface;
you can either incorporate it into your Go programs, or use the `skopeo` tool:
The [skopeo](https://github.com/projectatomic/skopeo) tool uses the
containers/image library and takes advantage of many of its features,
e.g. `skopeo copy` exposes the `containers/image/copy.Image` functionality.
## Dependencies
This library does not ship a committed version of its dependencies in a `vendor`
subdirectory. This is so you can make well-informed decisions about which
libraries you should use with this package in your own projects, and because
types defined in the `vendor` directory would be impossible to use from your projects.
What this project tests against dependencies-wise is located
[in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf).
## Building
If you want to see what the library can do, or an example of how it is called,
consider starting with the [skopeo](https://github.com/projectatomic/skopeo) tool
instead.
To integrate this library into your project, put it into `$GOPATH` or use
your preferred vendoring tool to include a copy in your project.
Ensure that the dependencies documented [in vendor.conf](https://github.com/containers/image/blob/master/vendor.conf)
are also available
(using those exact versions or different versions of your choosing).
This library, by default, also depends on the GpgME and libostree C libraries. Either install them:
```sh
Fedora$ dnf install gpgme-devel libassuan-devel libostree-devel
macOS$ brew install gpgme
```
or use the build tags described below to avoid the dependencies (e.g. using `go build -tags …`)
### Supported build tags
- `containers_image_openpgp`: Use a Golang-only OpenPGP implementation for signature verification instead of the default cgo/gpgme-based implementation;
the primary downside is that creating new signatures with the Golang-only implementation is not supported.
- `containers_image_ostree_stub`: Instead of importing `ostree:` transport in `github.com/containers/image/transports/alltransports`, use a stub which reports that the transport is not supported. This allows building the library without requiring the `libostree` development libraries. The `github.com/containers/image/ostree` package is completely disabled
and impossible to import when this build tag is in use.
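
A rough sketch of what the `containers_image_ostree_stub` tag changes at run time, assuming the stub reports the transport as unsupported; the reference string and error wording are illustrative, not exact:

```go
package main

import (
	"fmt"

	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Placeholder ostree reference. Built normally this resolves to the real
	// ostree transport; built with
	//     go build -tags containers_image_ostree_stub
	// the stub is compiled in instead, so parsing is expected to fail rather
	// than require the libostree development libraries at build time.
	_, err := alltransports.ParseImageName("ostree:busybox:latest@/ostree/repo")
	fmt.Println(err) // non-nil when built with the stub tag (assumed behaviour)
}
```
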
## Contributing
When developing this library, please use `make` (or `make … BUILDTAGS=…`) to take advantage of the tests and validation.
## License
ASL 2.0
## Contact
- Mailing list: [containers-dev](https://groups.google.com/forum/?hl=en#!forum/containers-dev)
- IRC: #[container-projects](irc://irc.freenode.net:6667/#container-projects) on freenode.net


@@ -3,6 +3,7 @@ package copy
import (
"bytes"
"compress/gzip"
"context"
"fmt"
"io"
"io/ioutil"
@@ -11,9 +12,6 @@ import (
"strings"
"time"
pb "gopkg.in/cheggaaa/pb.v1"
"github.com/Sirupsen/logrus"
"github.com/containers/image/image"
"github.com/containers/image/pkg/compression"
"github.com/containers/image/signature"
@@ -21,6 +19,8 @@ import (
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
pb "gopkg.in/cheggaaa/pb.v1"
)
type digestingReader struct {
@@ -94,6 +94,8 @@ type Options struct {
DestinationCtx *types.SystemContext
ProgressInterval time.Duration // time to wait between reports to signal the progress channel
Progress chan types.ProgressProperties // Reported to when ProgressInterval has arrived for a single artifact+offset.
// manifest MIME type of the image set by the user. "" is the default and means use autodetection for the manifest MIME type
ForceManifestMIMEType string
}
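
A hedged, illustrative sketch of a caller pinning the output manifest type through the new `Options.ForceManifestMIMEType` field; the references, the permissive signature policy, and the chosen media type are placeholders, not recommendations from this change:

```go
package main

import (
	"log"

	"github.com/containers/image/copy"
	"github.com/containers/image/manifest"
	"github.com/containers/image/signature"
	"github.com/containers/image/transports/alltransports"
)

func main() {
	// Placeholder references; any transports supported by alltransports work here.
	srcRef, err := alltransports.ParseImageName("docker://docker.io/library/alpine:latest")
	if err != nil {
		log.Fatal(err)
	}
	destRef, err := alltransports.ParseImageName("dir:/var/tmp/alpine-v2s2")
	if err != nil {
		log.Fatal(err)
	}

	// "Accept anything" policy purely for illustration; real callers should load
	// a proper policy (e.g. via signature.DefaultPolicy).
	policy := &signature.Policy{Default: signature.PolicyRequirements{signature.NewPRInsecureAcceptAnything()}}
	policyContext, err := signature.NewPolicyContext(policy)
	if err != nil {
		log.Fatal(err)
	}
	defer policyContext.Destroy()

	// Pin the output manifest type instead of relying on autodetection.
	err = copy.Image(policyContext, destRef, srcRef, &copy.Options{
		ForceManifestMIMEType: manifest.DockerV2Schema2MediaType,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Leaving the field empty keeps the previous autodetection behaviour, so existing callers of copy.Image are unaffected.
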
// Image copies image from srcRef to destRef, using policyContext to validate
@@ -128,9 +130,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
}
}()
destSupportedManifestMIMETypes := dest.SupportedManifestMIMETypes()
rawSource, err := srcRef.NewImageSource(options.SourceCtx, destSupportedManifestMIMETypes)
rawSource, err := srcRef.NewImageSource(options.SourceCtx)
if err != nil {
return errors.Wrapf(err, "Error initializing source %s", transports.ImageName(srcRef))
}
@@ -171,7 +171,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
sigs = [][]byte{}
} else {
writeReport("Getting image source signatures\n")
s, err := src.Signatures()
s, err := src.Signatures(context.TODO())
if err != nil {
return errors.Wrap(err, "Error reading signatures")
}
@@ -194,7 +194,7 @@ func Image(policyContext *signature.PolicyContext, destRef, srcRef types.ImageRe
// We compute preferredManifestMIMEType only to show it in error messages.
// Without having to add this context in an error message, we would be happy enough to know only that no conversion is needed.
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, destSupportedManifestMIMETypes, canModifyManifest)
preferredManifestMIMEType, otherManifestMIMETypeCandidates, err := determineManifestConversion(&manifestUpdates, src, dest.SupportedManifestMIMETypes(), canModifyManifest, options.ForceManifestMIMEType)
if err != nil {
return err
}
@@ -582,7 +582,7 @@ func (ic *imageCopier) copyBlobFromStream(srcStream io.Reader, srcInfo types.Blo
bar.ShowPercent = false
bar.Start()
destStream = bar.NewProxyReader(destStream)
defer fmt.Fprint(ic.reportWriter, "\n")
defer bar.Finish()
// === Send a copy of the original, uncompressed, stream, to a separate path if necessary.
var originalLayerReader io.Reader // DO NOT USE this other than to drain the input if no other consumer in the pipeline has done so.


@@ -3,10 +3,10 @@ package copy
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/image/manifest"
"github.com/containers/image/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// preferredManifestMIMETypes lists manifest MIME types in order of our preference, if we can't use the original manifest and need to convert.
@@ -41,12 +41,16 @@ func (os *orderedSet) append(s string) {
// Note that the conversion will only happen later, through src.UpdatedImage
// Returns the preferred manifest MIME type (whether we are converting to it or using it unmodified),
// and a list of other possible alternatives, in order.
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool) (string, []string, error) {
func determineManifestConversion(manifestUpdates *types.ManifestUpdateOptions, src types.Image, destSupportedManifestMIMETypes []string, canModifyManifest bool, forceManifestMIMEType string) (string, []string, error) {
_, srcType, err := src.Manifest()
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
if forceManifestMIMEType != "" {
destSupportedManifestMIMETypes = []string{forceManifestMIMEType}
}
if len(destSupportedManifestMIMETypes) == 0 {
return srcType, []string{}, nil // Anything goes; just use the original as is, do not try any conversions.
}


@@ -4,19 +4,77 @@ import (
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const version = "Directory Transport Version: 1.0\n"
// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
// using the 'dir' transport
var ErrNotContainerImageDir = errors.New("not a containers image directory, don't want to overwrite important data")
type dirImageDestination struct {
ref dirReference
ref dirReference
compress bool
}
// newImageDestination returns an ImageDestination for writing to an existing directory.
func newImageDestination(ref dirReference) types.ImageDestination {
return &dirImageDestination{ref}
// newImageDestination returns an ImageDestination for writing to a directory.
func newImageDestination(ref dirReference, compress bool) (types.ImageDestination, error) {
d := &dirImageDestination{ref: ref, compress: compress}
// If directory exists check if it is empty
// if not empty, check whether the contents match those of a container image directory and overwrite the contents
// if the contents don't match throw an error
dirExists, err := pathExists(d.ref.resolvedPath)
if err != nil {
return nil, errors.Wrapf(err, "error checking for path %q", d.ref.resolvedPath)
}
if dirExists {
isEmpty, err := isDirEmpty(d.ref.resolvedPath)
if err != nil {
return nil, err
}
if !isEmpty {
versionExists, err := pathExists(d.ref.versionPath())
if err != nil {
return nil, errors.Wrapf(err, "error checking if path exists %q", d.ref.versionPath())
}
if versionExists {
contents, err := ioutil.ReadFile(d.ref.versionPath())
if err != nil {
return nil, err
}
// check if the contents of the version file are what we expect them to be
if string(contents) != version {
return nil, ErrNotContainerImageDir
}
} else {
return nil, ErrNotContainerImageDir
}
// delete directory contents so that only one image is in the directory at a time
if err = removeDirContents(d.ref.resolvedPath); err != nil {
return nil, errors.Wrapf(err, "error erasing contents in %q", d.ref.resolvedPath)
}
logrus.Debugf("overwriting existing container image directory %q", d.ref.resolvedPath)
}
} else {
// create directory if it doesn't exist
if err := os.MkdirAll(d.ref.resolvedPath, 0755); err != nil {
return nil, errors.Wrapf(err, "unable to create directory %q", d.ref.resolvedPath)
}
}
// create version file
err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0755)
if err != nil {
return nil, errors.Wrapf(err, "error creating version file %q", d.ref.versionPath())
}
return d, nil
}
// Reference returns the reference used to set up this destination. Note that this should directly correspond to user's intent,
@@ -42,7 +100,7 @@ func (d *dirImageDestination) SupportsSignatures() error {
// ShouldCompressLayers returns true iff it is desirable to compress layer blobs written to this destination.
func (d *dirImageDestination) ShouldCompressLayers() bool {
return false
return d.compress
}
// AcceptsForeignLayerURLs returns false iff foreign layers in manifest should be actually
@@ -147,3 +205,39 @@ func (d *dirImageDestination) PutSignatures(signatures [][]byte) error {
func (d *dirImageDestination) Commit() error {
return nil
}
// returns true if path exists
func pathExists(path string) (bool, error) {
_, err := os.Stat(path)
if err == nil {
return true, nil
}
if err != nil && os.IsNotExist(err) {
return false, nil
}
return false, err
}
// returns true if directory is empty
func isDirEmpty(path string) (bool, error) {
files, err := ioutil.ReadDir(path)
if err != nil {
return false, err
}
return len(files) == 0, nil
}
// deletes the contents of a directory
func removeDirContents(path string) error {
files, err := ioutil.ReadDir(path)
if err != nil {
return err
}
for _, file := range files {
if err := os.RemoveAll(filepath.Join(path, file.Name())); err != nil {
return err
}
}
return nil
}


@@ -1,6 +1,7 @@
package directory
import (
"context"
"io"
"io/ioutil"
"os"
@@ -59,7 +60,7 @@ func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, err
return r, fi.Size(), nil
}
func (s *dirImageSource) GetSignatures() ([][]byte, error) {
func (s *dirImageSource) GetSignatures(ctx context.Context) ([][]byte, error) {
signatures := [][]byte{}
for i := 0; ; i++ {
signature, err := ioutil.ReadFile(s.ref.signaturePath(i))


@@ -143,18 +143,20 @@ func (ref dirReference) NewImage(ctx *types.SystemContext) (types.Image, error)
return image.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref dirReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref dirReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ref), nil
}
// NewImageDestination returns a types.ImageDestination for this reference.
// The caller must call .Close() on the returned ImageDestination.
func (ref dirReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return newImageDestination(ref), nil
compress := false
if ctx != nil {
compress = ctx.DirForceCompress
}
return newImageDestination(ref, compress)
}
// DeleteImage deletes the named image from the registry, if supported.
@@ -177,3 +179,8 @@ func (ref dirReference) layerPath(digest digest.Digest) string {
func (ref dirReference) signaturePath(index int) string {
return filepath.Join(ref.path, fmt.Sprintf("signature-%d", index+1))
}
// versionPath returns a path for the version file within a directory using our conventions.
func (ref dirReference) versionPath() string {
return filepath.Join(ref.path, "version")
}
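
A small, assumed usage sketch of the new `SystemContext.DirForceCompress` knob together with this destination's create-or-overwrite behaviour; the path is a placeholder and error handling is minimal:

```go
package main

import (
	"log"

	"github.com/containers/image/directory"
	"github.com/containers/image/types"
)

func main() {
	// Placeholder path; the destination directory is created (or, if it already
	// holds an image written by this transport, wiped and reused) on open.
	ref, err := directory.NewReference("/var/tmp/exported-image")
	if err != nil {
		log.Fatal(err)
	}
	ctx := &types.SystemContext{DirForceCompress: true} // opt in to compressed layer blobs
	dest, err := ref.NewImageDestination(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer dest.Close()

	log.Printf("compressing layers: %v", dest.ShouldCompressLayers())
}
```

With the field unset (or a nil context), the destination keeps its previous behaviour and writes uncompressed blobs.
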


@@ -19,15 +19,24 @@ func newImageDestination(ctx *types.SystemContext, ref archiveReference) (types.
if ref.destinationRef == nil {
return nil, errors.Errorf("docker-archive: destination reference not supplied (must be of form <path>:<reference:tag>)")
}
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_EXCL|os.O_CREATE, 0644)
// ref.path can be either a pipe or a regular file
// in the case of a pipe, we require that we can open it for write
// in the case of a regular file, we don't want to overwrite any pre-existing file
// so we check for Size() == 0 below (This is racy, but using O_EXCL would also be racy,
only in a different way. Either way, it's up to the user to not have two writers to the same path.)
fh, err := os.OpenFile(ref.path, os.O_WRONLY|os.O_CREATE, 0644)
if err != nil {
// FIXME: It should be possible to modify archives, but the only really
// sane way of doing it is to create a copy of the image, modify
// it and then do a rename(2).
if os.IsExist(err) {
err = errors.New("docker-archive doesn't support modifying existing images")
}
return nil, err
return nil, errors.Wrapf(err, "error opening file %q", ref.path)
}
fhStat, err := fh.Stat()
if err != nil {
return nil, errors.Wrapf(err, "error statting file %q", ref.path)
}
if fhStat.Mode().IsRegular() && fhStat.Size() != 0 {
return nil, errors.New("docker-archive doesn't support modifying existing images")
}
return &archiveImageDestination{


@@ -1,9 +1,9 @@
package archive
import (
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/tarfile"
"github.com/containers/image/types"
"github.com/sirupsen/logrus"
)
type archiveImageSource struct {


@@ -134,11 +134,9 @@ func (ref archiveReference) NewImage(ctx *types.SystemContext) (types.Image, err
return ctrImage.FromSource(src)
}
// NewImageSource returns a types.ImageSource for this reference,
// asking the backend to use a manifest from requestedManifestMIMETypes if possible.
// nil requestedManifestMIMETypes means manifest.DefaultRequestedManifestMIMETypes.
// NewImageSource returns a types.ImageSource for this reference.
// The caller must call .Close() on the returned ImageSource.
func (ref archiveReference) NewImageSource(ctx *types.SystemContext, requestedManifestMIMETypes []string) (types.ImageSource, error) {
func (ref archiveReference) NewImageSource(ctx *types.SystemContext) (types.ImageSource, error) {
return newImageSource(ctx, ref), nil
}


@@ -0,0 +1,69 @@
package daemon
import (
"net/http"
"path/filepath"
"github.com/containers/image/types"
dockerclient "github.com/docker/docker/client"
"github.com/docker/go-connections/tlsconfig"
)
const (
// The default API version to be used in case none is explicitly specified
defaultAPIVersion = "1.22"
)
// newDockerClient initializes a new API client based on the passed SystemContext.
func newDockerClient(ctx *types.SystemContext) (*dockerclient.Client, error) {
host := dockerclient.DefaultDockerHost
if ctx != nil && ctx.DockerDaemonHost != "" {
host = ctx.DockerDaemonHost
}
// Sadly, unix:// sockets don't work transparently with dockerclient.NewClient.
// They work fine with a nil httpClient; with a non-nil httpClient, the transport's
// TLSClientConfig must be nil (or the client will try using HTTPS over the PF_UNIX socket
// regardless of the values in the *tls.Config), and we would have to call sockets.ConfigureTransport.
//
// We don't really want to configure anything for unix:// sockets, so just pass a nil *http.Client.
proto, _, _, err := dockerclient.ParseHost(host)
if err != nil {
return nil, err
}
var httpClient *http.Client
if proto != "unix" {
hc, err := tlsConfig(ctx)
if err != nil {
return nil, err
}
httpClient = hc
}
return dockerclient.NewClient(host, defaultAPIVersion, httpClient, nil)
}
func tlsConfig(ctx *types.SystemContext) (*http.Client, error) {
options := tlsconfig.Options{}
if ctx != nil && ctx.DockerDaemonInsecureSkipTLSVerify {
options.InsecureSkipVerify = true
}
if ctx != nil && ctx.DockerDaemonCertPath != "" {
options.CAFile = filepath.Join(ctx.DockerDaemonCertPath, "ca.pem")
options.CertFile = filepath.Join(ctx.DockerDaemonCertPath, "cert.pem")
options.KeyFile = filepath.Join(ctx.DockerDaemonCertPath, "key.pem")
}
tlsc, err := tlsconfig.Client(options)
if err != nil {
return nil, err
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: tlsc,
},
CheckRedirect: dockerclient.CheckRedirect,
}, nil
}
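
For orientation only, an assumed caller-side sketch of the `SystemContext` fields this client setup reads (`DockerDaemonHost`, `DockerDaemonCertPath`, `DockerDaemonInsecureSkipTLSVerify`); the host, certificate path, and image name are placeholders:

```go
package main

import (
	"log"

	"github.com/containers/image/transports/alltransports"
	"github.com/containers/image/types"
)

func main() {
	// Placeholder image name; "docker-daemon:" references the local engine's store.
	ref, err := alltransports.ParseImageName("docker-daemon:docker.io/library/alpine:latest")
	if err != nil {
		log.Fatal(err)
	}
	ctx := &types.SystemContext{
		// All three fields are optional; left unset, the client talks to
		// dockerclient.DefaultDockerHost with no extra TLS configuration.
		DockerDaemonHost:                  "tcp://127.0.0.1:2376",
		DockerDaemonCertPath:              "/etc/docker/certs", // expects ca.pem, cert.pem, key.pem
		DockerDaemonInsecureSkipTLSVerify: false,
	}
	src, err := ref.NewImageSource(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
}
```
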

Some files were not shown because too many files have changed in this diff.