Compare commits

...

279 Commits
v0.1 ... v0.15

Author SHA1 Message Date
Daniel J Walsh
d1330a5c46 Bump to v0.15
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #503
Approved by: rhatdan
2018-02-27 12:54:29 +00:00
Daniel J Walsh
b75bf0a5b3 Currently buildah run is not handling command options correctly
This patch will allow commands like

buildah run $ctr ls -lZ /

to work correctly.

Need to update vendor of urfave cli.

Also changed all commands to no longer accept global options after the COMMAND.
Single boolean options can now be passed together.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #493
Approved by: rhatdan
2018-02-27 12:08:45 +00:00
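The behavior described above — global options are no longer accepted after the COMMAND, so options like `-lZ` pass through to the command run inside the container — amounts to stopping flag parsing at the first non-flag argument. A minimal illustrative sketch (not buildah's actual code, which relies on the urfave/cli vendor update):

```go
package main

import (
	"fmt"
	"strings"
)

// splitArgs separates buildah-level flags from the command to run inside
// the container: parsing stops at the first argument that is not a flag,
// so options like "-lZ" after "ls" are left for the container command.
func splitArgs(args []string) (flags []string, command []string) {
	for i, a := range args {
		if !strings.HasPrefix(a, "-") {
			return args[:i], args[i:]
		}
	}
	return args, nil
}

func main() {
	flags, cmd := splitArgs([]string{"--tty", "ls", "-lZ", "/"})
	fmt.Println(flags, cmd)
}
```

With this split, `buildah run $ctr ls -lZ /` hands `ls -lZ /` to the container untouched.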
TomSweeneyRedHat
cb42905a7f Add selinux test from #486
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #501
Approved by: rhatdan
2018-02-27 00:47:42 +00:00
Daniel J Walsh
ee11d75f3e Bump to v0.14
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #499
Approved by: rhatdan
2018-02-27 00:28:50 +00:00
Daniel J Walsh
9bf5a5e52a Breaking change on CommonBuildOpts
Just have to refuse to use previously created containers when doing a run.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #500
Approved by: rhatdan
2018-02-27 00:05:12 +00:00
Daniel J Walsh
873ecd8791 If commonOpts do not exist, we should return rather than segfault
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #498
Approved by: TomSweeneyRedHat
2018-02-26 22:49:44 +00:00
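Returning early when the options struct is missing, as in the commit above, is the standard Go nil-guard pattern. An illustrative sketch (the field here is invented for the example, not buildah's real CommonBuildOpts layout):

```go
package main

import (
	"errors"
	"fmt"
)

// CommonBuildOpts stands in for buildah's options struct; the field is
// invented for illustration.
type CommonBuildOpts struct {
	ShmSize string
}

// shmSize returns the configured size, guarding against a nil options
// struct instead of dereferencing it and crashing.
func shmSize(opts *CommonBuildOpts) (string, error) {
	if opts == nil {
		return "", errors.New("common build options not set")
	}
	return opts.ShmSize, nil
}

func main() {
	if _, err := shmSize(nil); err != nil {
		fmt.Println("handled:", err)
	}
}
```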
Shyukri Shyukriev
99066e0104 Add openSUSE in install section
Signed-off-by: Shyukri Shyukriev <shshyukriev@suse.com>

Closes: #492
Approved by: rhatdan
2018-02-26 00:20:33 +00:00
TomSweeneyRedHat
68a6c0a4c0 Display full error string instead of just status
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #485
Approved by: rhatdan
2018-02-24 09:12:53 +00:00
Daniel J Walsh
963c1d95c0 Merge pull request #494 from umohnani8/secrets
Fix secrets patch for buildah bud
2018-02-23 13:49:47 -05:00
umohnani8
4bbe6e7cc0 Implement --volume and --shm-size for bud and from
Add the remaining --volume and --shm-size flags to buildah bud and from
--volume supports the following options: rw, ro, z, Z, private, slave, shared

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #491
Approved by: rhatdan
2018-02-23 17:53:00 +00:00
umohnani8
fb14850b50 Fix secrets patch for buildah bud
buildah bud was failing to get the secrets data.
The issue was that buildah bud was not being given the /usr/share/containers/mounts.conf file path,
so it had no secrets to mount.
Also reworked the way the secrets data was being copied from the host to the container.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-23 12:38:39 -05:00
umohnani8
669ffddd99 Vendor in latest containers/image
Fixes the naming issue of blobs and config for the dir transport
by removing the .tar extension

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #489
Approved by: rhatdan
2018-02-22 18:57:31 +00:00
TomSweeneyRedHat
ac093aecd1 Add libseccomp-devel to packages to install
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #490
Approved by: rhatdan
2018-02-22 18:01:09 +00:00
Daniel J Walsh
ef0ca9cd2d Bump to v0.13
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #488
Approved by: TomSweeneyRedHat
2018-02-22 14:25:16 +00:00
Daniel J Walsh
5ca91c0eb7 Vendor in latest containers/storage
This fixes a large SELinux bug.  Currently if you do the following
commands

ctr=$(buildah from scratch)
mnt=$(buildah mount $ctr)
dnf install --installroot=$mnt httpd
buildah run $ctr touch /test

The last command fails.  The reason for this is the SELinux labels are getting applied
to the mount point, since it was not being mounted as an overlay file system.

Containers/storage was updated to always mount an overlay even if the lower layer is empty.
This then causes the mount point to use a context mount, and changes dnf to not apply
labels.  This change then allows buildah run to create confined containers to run code.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #486
Approved by: TomSweeneyRedHat
2018-02-22 13:55:26 +00:00
baude
e623e5c004 Initial ginkgo framework
This is an initial attempt at bringing in the ginkgo test framework into
buildah.  The inspect bats file was also imported.

Signed-off-by: baude <bbaude@redhat.com>

Closes: #472
Approved by: rhatdan
2018-02-22 13:06:08 +00:00
Giuseppe Scrivano
9d163a50d1 run: do not open /etc/hosts if not needed
Avoid opening the file in write mode if we are not going to write
anything.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

Closes: #487
Approved by: rhatdan
2018-02-22 13:04:38 +00:00
umohnani8
93a3c89943 Add the following flags to buildah bud and from
--add-host
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--memory
--memory-swap
--security-opt
--ulimit

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #477
Approved by: rhatdan
2018-02-19 17:00:29 +00:00
umohnani8
b23f145416 Vendor in packages
vendor in profiles from github.com/docker/docker/profiles to support seccomp.
vendor in latest runtime-tools to support bind mounting.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #477
Approved by: rhatdan
2018-02-19 17:00:29 +00:00
TomSweeneyRedHat
43d1102d02 Touchup doc in rpm spec
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #483
Approved by: rhatdan
2018-02-19 14:52:38 +00:00
TomSweeneyRedHat
95f16ab260 Remove you/your from manpages
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #484
Approved by: rhatdan
2018-02-19 14:30:16 +00:00
Michael Gugino
09e7ddf544 Fix README link to tutorials
Current link is broken.

This commit corrects the link to point at the
correct file.

Signed-off-by: Michael Gugino <mgugino@redhat.com>

Closes: #480
Approved by: nalind
2018-02-15 15:15:15 +00:00
TomSweeneyRedHat
ee383ec9cf Add redis test to baseline
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #476
Approved by: nalind
2018-02-14 14:18:08 +00:00
TomSweeneyRedHat
d59af12866 Fix versioning to 0.12
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #478
Approved by: rhatdan
2018-02-14 10:06:43 +00:00
Daniel J Walsh
e073df11aa Merge pull request #473 from rhatdan/master
Bump version to 0.12
2018-02-12 12:08:24 -05:00
Daniel J Walsh
8badcc2d02 Bump version to 0.12
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-02-12 11:48:48 -05:00
Daniel J Walsh
a586779353 We are copying a directory not a single file
When populating a container from a container image with a
volume directory, we need to copy the content of the source
directory into the target.  The code was mistakenly looking
for a file not a directory.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #471
Approved by: nalind
2018-02-12 15:57:23 +00:00
umohnani8
4eb654f10c Removing docs and completions for run options
Figured that these options need to be in from and bud instead.
Removed the options from the documentation of run and bud for now.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #470
Approved by: rhatdan
2018-02-12 15:22:09 +00:00
Boaz Shuster
f29314579d Return multi errors in buildah-rm
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #458
Approved by: rhatdan
2018-02-12 12:02:37 +00:00
TomSweeneyRedHat
9e20c3d948 Don't drop error on mixed-case image name
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #468
Approved by: rhatdan
2018-02-11 11:41:44 +00:00
TomSweeneyRedHat
46c1a54b15 Revert to using latest go-md2man
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #463
Approved by: rhatdan
2018-02-10 12:24:54 +00:00
Benjamin Kircher
67b565da7b Docs: note that buildah needs to run as root
You have to be root to run buildah. This commit adds a notice to the
buildah(1) man-page and improves the front-page README.md a bit so that
this is more obvious to the user.

Fixes issue #420.

Signed-off-by: Benjamin Kircher <benjamin.kircher@gmail.com>

Closes: #462
Approved by: rhatdan
2018-02-10 12:07:07 +00:00
baude
dd4a6aea97 COPR enablement
For COPR builds, we will use a slightly modified spec and the
makesrpm method over SCM builds so we can have dynamic package
names.

Signed-off-by: baude <bbaude@redhat.com>

Closes: #460
Approved by: rhatdan
2018-02-10 11:49:46 +00:00
William Henry
9116598a2e Added handling for a simpler error message for unknown Dockerfile instructions.
Signed-off-by: William Henry <whenry@redhat.com>

Closes: #457
Approved by: rhatdan
2018-02-10 11:32:38 +00:00
TomSweeneyRedHat
df2a10d43f Add limitation to buildah rmi man page
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #464
Approved by: rhatdan
2018-02-10 11:15:34 +00:00
TomSweeneyRedHat
e9915937ac Rename tutorials.md to README.md in tutorial dir
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #459
Approved by: rhatdan
2018-02-10 11:14:52 +00:00
Daniel J Walsh
5a9c591abf Merge pull request #455 from umohnani8/certs
Make /etc/containers/certs.d the default certs directory
2018-02-07 08:44:32 -05:00
Daniel J Walsh
531ef9159d Merge pull request #446 from umohnani8/flags_docs
Add documentation and completions for the following flags
2018-02-06 17:21:41 -05:00
umohnani8
811cf927d7 Change default certs directory to /etc/containers/certs.d
Made changes to the man pages to reflect this.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 17:04:34 -05:00
umohnani8
032b56ee8d Vendor in latest containers/image
Adds support for default certs directory to be /etc/containers/certs.d

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 17:04:34 -05:00
Daniel J Walsh
6af847dd2a Vendor in latest containers/storage
A patch got merged into containers/storage that makes sure SELinux labels
are applied when committing to storage.  This prevents a failure condition
which arises from leaked mount points between the time a container is mounted
and the time it is committed.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #453
Approved by: TomSweeneyRedHat
2018-02-06 20:26:45 +00:00
Nalin Dahyabhai
6b207f7b0c Fix unintended reversal of the ignoreUnrecognizedInstructions flag
We were interpreting the ignoreUnrecognizedInstructions incorrectly, so
fix that, and call out the unrecognized instruction keyword in the error
message (or debug message, if we're ignoring it).

Should fix #451.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #452
Approved by: rhatdan
2018-02-06 18:58:25 +00:00
umohnani8
c14697ebe4 Add documentation and completions for the following flags
--add-host
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-mems
--memory
--memory-swap
--security-opt
--ulimit

These flags are going to be used by buildah run and bud.
The implementation will follow in another PR.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 12:45:17 -05:00
Nalin Dahyabhai
c35493248e build-using-dockerfile: set the 'author' field for MAINTAINER
When we encounter the MAINTAINER keyword in a Dockerfile, imagebuilder
updates the Author field in the imagebuilder.Builder structure.  Pick up
that value when we go to commit the image.

Should fix #448.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #450
Approved by: rhatdan
2018-02-06 01:18:55 +00:00
umohnani8
d03a894969 Fix md2man issues
Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #447
Approved by: rhatdan
2018-02-05 21:52:41 +00:00
Boaz Shuster
fbb8b702bc Return exit code 1 when buildah-rmi fails
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #412
Approved by: rhatdan
2018-02-05 13:50:30 +00:00
Boaz Shuster
815cedfc71 Trim the image reference to just its name before calling getImageName
When setting a container name, the getImageName function goes through
all the names of the resolved image and finds the one that contains
the name given by the user.

However, if the user is specifying "docker.io/tagged-image"
the Docker transport returns "docker.io/library/tagged-image" which
makes getImageName returns the original image name because it does
not find a match.

To resolve this issue before calling getImageName the image given
by the user will be trimmed to be just the name.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #422
Approved by: rhatdan
2018-02-04 11:26:43 +00:00
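The fix above trims a reference such as "docker.io/library/tagged-image" down to just "tagged-image" before matching. A simplified sketch of that trimming (buildah's real code goes through the containers/image reference parser; this only illustrates the idea):

```go
package main

import (
	"fmt"
	"strings"
)

// trimToName drops the registry and repository path components from an
// image reference, leaving only the final name component.
func trimToName(ref string) string {
	parts := strings.Split(ref, "/")
	return parts[len(parts)-1]
}

func main() {
	fmt.Println(trimToName("docker.io/library/tagged-image"))
}
```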
TomSweeneyRedHat
1c97f6ac2c Bump Fedora 26 to 27 in rpm test
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #444
Approved by: rhatdan
2018-02-04 11:25:25 +00:00
umohnani8
bc9d574c10 Add new line after executing template for buildah inspect
No new line was returned when using the --format flag for buildah
inspect.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #442
Approved by: rhatdan
2018-02-02 21:11:52 +00:00
TomSweeneyRedHat
c84db980ae Touch up rmi -f usage statement
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #441
Approved by: rhatdan
2018-02-02 20:15:08 +00:00
umohnani8
85a37b39e8 Add --format and --filter to buildah containers
buildah containers now supports oretty-printing using a Go template
with the --format flag. And output can be filtered based on id, name, or
ancestor.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #437
Approved by: rhatdan
2018-02-02 19:32:06 +00:00
Arthur Mello
49095a83f8 Add --prune,-p option to rmi command
Allows rmi to remove all dangling images (images without a tag and without a child image).
Adds a new test case.

Signed-off-by: Arthur Mello <amello@redhat.com>

Closes: #418
Approved by: rhatdan
2018-02-01 10:50:33 +00:00
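"Dangling" in the commit above means an image with no tag and no child image. A hedged sketch of that filter over invented stand-in types (buildah's real image records come from containers/storage):

```go
package main

import "fmt"

// image is an invented stand-in for the stored-image record.
type image struct {
	ID       string
	Names    []string
	Children []string
}

// dangling selects images that have neither a name/tag nor a child
// image: the set that rmi --prune would remove.
func dangling(images []image) []string {
	var ids []string
	for _, img := range images {
		if len(img.Names) == 0 && len(img.Children) == 0 {
			ids = append(ids, img.ID)
		}
	}
	return ids
}

func main() {
	imgs := []image{
		{ID: "aaa", Names: []string{"busybox:latest"}},
		{ID: "bbb"},
	}
	fmt.Println(dangling(imgs))
}
```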
TomSweeneyRedHat
6c05a352df Add authfile param to commit
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #433
Approved by: rhatdan
2018-02-01 05:48:09 +00:00
umohnani8
1849466827 Fix --runtime-flag for buildah run and bud
The --runtime-flag flag for buildah run and bud would fail
whenever the global flags of the runtime were passed to it.
Changed it to accept the format [global-flag]=[value] where
global-flag would be converted to --[global-flag] in the code.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #431
Approved by: rhatdan
2018-01-30 18:21:46 +00:00
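The [global-flag]=[value] conversion described above can be sketched in a few lines; this is an illustrative reimplementation, not buildah's actual parsing code:

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeFlag converts the [global-flag]=[value] form accepted by
// --runtime-flag into the arguments handed to the runtime:
// "log-level=debug" becomes ["--log-level", "debug"], and a bare
// "debug" becomes ["--debug"].
func runtimeFlag(spec string) []string {
	name, value, found := strings.Cut(spec, "=")
	if !found {
		return []string{"--" + name}
	}
	return []string{"--" + name, value}
}

func main() {
	fmt.Println(runtimeFlag("log-level=debug"))
}
```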
umohnani8
4f38267342 format should override quiet for images
quiet was overriding format, but we want format to override quiet
if both the flags are set for buildah images.

Changed it so that it errors out if both quiet and format are set.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #426
Approved by: rhatdan
2018-01-30 17:06:51 +00:00
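Rather than letting one flag silently win, the commit above makes --quiet and --format mutually exclusive. A minimal sketch of that check (names simplified for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// checkFlags rejects --quiet combined with --format: instead of picking
// a winner, the combination is reported as an error.
func checkFlags(quiet bool, format string) error {
	if quiet && format != "" {
		return errors.New("quiet and format are mutually exclusive")
	}
	return nil
}

func main() {
	fmt.Println(checkFlags(true, "{{.ID}}"))
}
```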
TomSweeneyRedHat
9790b89771 Allow all auth params to work with bud
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #419
Approved by: rhatdan
2018-01-30 15:41:52 +00:00
Fabio Bertinatto
61f5319504 Don't overwrite directory permissions on --chown
Signed-off-by: Fabio Bertinatto <fbertina@redhat.com>

Closes: #389
Approved by: rhatdan
2018-01-30 05:09:06 +00:00
Boaz Shuster
947714fbd2 Unescape HTML characters output into the terminal
By default, the JSON encoder from the Go standard library
escapes HTML characters, which makes the maintainer output
look strange:

"maintainer": "NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e"
Instead of:
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"

This patch fixes this issue in "buildah-inspect" only as this is
the only place that such characters are displayed.

Note: if the output of "buildah-inspect" is piped or redirected
then the HTML characters are not escaped.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #421
Approved by: rhatdan
2018-01-30 04:51:17 +00:00
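The escaping comes from encoding/json's default behavior and can be disabled on a json.Encoder, which is roughly what the inspect fix does. A self-contained demonstration:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

// unescapedJSON encodes v with HTML escaping disabled, so characters
// like < and > come out literally instead of as \u003c and \u003e.
func unescapedJSON(v interface{}) string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(false)
	if err := enc.Encode(v); err != nil {
		return ""
	}
	return strings.TrimSpace(buf.String())
}

func main() {
	m := map[string]string{"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"}
	escaped, _ := json.Marshal(m) // default: < and > become \u003c and \u003e
	fmt.Println(string(escaped))
	fmt.Println(unescapedJSON(m)) // readable < and >
}
```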
Boaz Shuster
c615c3e23d Fix typo then->than
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #423
Approved by: rhatdan
2018-01-29 05:50:24 +00:00
Daniel J Walsh
d0e1ad1a1a Merge pull request #415 from TomSweeneyRedHat/dev/tsweeney/pwdprompt
Prompt for un/pwd if not supplied with --creds
2018-01-27 09:34:51 +01:00
Boaz Shuster
b68f88c53d Fix: setting the container name to the image
In commit 47ac96155f the image name that is used
for setting the container name is taken from the resolved image
unless it is empty.

The image has the "Names" field and right now the first name is
taken. However, when the image is a tagged image, the container name
will end up using the original name instead of the given one.

For example:

$ buildah tag busybox busybox1
$ buildah from busybox1

Will set the name of the container as "busybox-working-container"
while it was expected to be "busybox1-working-container".

This patch fixes this particular issue.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #399
Approved by: rhatdan
2018-01-26 08:07:58 +00:00
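The fix above prefers the stored image name matching what the user typed over the first name in the list. A simplified sketch of that selection plus buildah's "-working-container" naming convention (the matching and trimming logic here is illustrative, not the real implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// containerName picks, from the image's stored names, the one matching
// what the user typed; taking names[0] unconditionally is the bug
// described above, where "buildah from busybox1" produced
// "busybox-working-container".
func containerName(given string, names []string) string {
	chosen := given
	for _, n := range names {
		if strings.Contains(n, given) {
			chosen = n
			break
		}
	}
	// Strip any registry/repository path and tag: "name:tag" -> "name".
	parts := strings.Split(chosen, "/")
	last := parts[len(parts)-1]
	return strings.SplitN(last, ":", 2)[0] + "-working-container"
}

func main() {
	names := []string{"docker.io/library/busybox:latest", "docker.io/library/busybox1:latest"}
	fmt.Println(containerName("busybox1", names))
}
```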
TomSweeneyRedHat
7dc787a9c7 Prompt for un/pwd if not supplied with --creds
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2018-01-25 15:03:59 -05:00
TomSweeneyRedHat
2dbb2a13ed Make bud be really quiet
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #408
Approved by: rhatdan
2018-01-24 15:11:30 +00:00
Boaz Shuster
ad49b24d0b Return a better error message when failed to resolve an image
During the creation of a new builder object there are errors
that are only logged into "logrus.Debugf".

If in the end of the process "ref" or "img" are nil and "options.FromImage"
is set then it means that there was an issue.
By default, it was assumed that the image name is wrong. Yet,
this assumption isn't always correct. For example, it might fail due to
authorization or connection errors.

In this patch, I am attempting to fix this problem by checking the
last error stored in the "err" variable and returning the cause
of the failure.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #406
Approved by: rhatdan
2018-01-24 14:03:28 +00:00
Boaz Shuster
ba128004ca Fix "make validate" warnings
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #405
Approved by: rhatdan
2018-01-22 14:46:54 +00:00
TomSweeneyRedHat
5179733c63 Update auth tests and fix bud man page
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #404
Approved by: rhatdan
2018-01-22 13:34:50 +00:00
TomSweeneyRedHat
40c3a57d5a Try to fix buildah-containers.md CI test issue
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #395
Approved by: rhatdan
2018-01-19 20:20:13 +00:00
Daniel J Walsh
de9e71dda7 Drop support for 1.7
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #400
Approved by: rhatdan
2018-01-18 23:00:15 +00:00
TomSweeneyRedHat
1052f3ba40 Create Buildah issue template
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #397
Approved by: rhatdan
2018-01-18 11:56:31 +00:00
Daniel J Walsh
6bad262ff1 Bump to version 0.11
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #394
Approved by: @ripcurld0
2018-01-17 13:42:19 +00:00
TomSweeneyRedHat
092591620b Show ctrid when doing rm -all
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #392
Approved by: rhatdan
2018-01-16 18:38:28 +00:00
Boaz Shuster
4d6c90e902 vendor containers/image to 386d6c33c9d622ed84baf14f4b1ff1be86800ccd
Signed-off-by: Boaz Shuster <bshuster@redhat.com>

Closes: #393
Approved by: rhatdan
2018-01-16 16:44:34 +00:00
TomSweeneyRedHat
17d9a73329 Touchup rm and container man pages
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #391
Approved by: rhatdan
2018-01-16 15:48:29 +00:00
Boaz Shuster
fe2de4f491 Handle commit error gracefully
This change gives a better error message when the commit fails
because of bad authentication.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #385
Approved by: rhatdan
2018-01-15 15:48:20 +00:00
TomSweeneyRedHat
adfb256a0f Remove new errors and use established ones in rmi
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #388
Approved by: rhatdan
2018-01-10 17:01:41 +00:00
Huamin Chen
029bdbcbd0 fix buildah push description
Signed-off-by: Huamin Chen <hchen@redhat.com>

Closes: #387
Approved by: rhatdan
2018-01-09 18:45:53 +00:00
TomSweeneyRedHat
fd995e6166 Add --all functionality to rmi
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #384
Approved by: rhatdan
2018-01-08 21:07:25 +00:00
Nalin Dahyabhai
ae7d2f3547 Ignore sequential duplicate layers when reading v2s1
When a v2s1 image is stored to disk, some of the layer blobs listed in
its manifest may be discarded as duplicates.  Account for this.

Start treating a failure to decode v1compat information as a fatal error
instead of trying to fake it.

Tweak how we build the created-by field in history when generating one
from v2s1 information to better match what we see in v2s2 images.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #383
Approved by: rhatdan
2018-01-08 21:06:35 +00:00
Nalin Dahyabhai
86fa0803e8 Sanity check the history/diffid list sizes
When building an image's config blob, add a sanity check that the number
of diffIDs that we're including matches the number of entries in the
history which don't claim to be empty layers.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #383
Approved by: rhatdan
2018-01-08 21:06:35 +00:00
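The sanity check described above compares the diffID count against the number of history entries that actually carry a layer. An illustrative sketch over a stand-in history type (the real code works on OCI image structures):

```go
package main

import "fmt"

// historyEntry mirrors the relevant part of an image history record.
type historyEntry struct {
	CreatedBy  string
	EmptyLayer bool
}

// checkHistory verifies that the number of diffIDs matches the number of
// history entries that don't claim to be empty layers: the sanity check
// added when building a config blob.
func checkHistory(history []historyEntry, diffIDs []string) error {
	layers := 0
	for _, h := range history {
		if !h.EmptyLayer {
			layers++
		}
	}
	if layers != len(diffIDs) {
		return fmt.Errorf("%d diffIDs for %d layer-bearing history entries", len(diffIDs), layers)
	}
	return nil
}

func main() {
	h := []historyEntry{{CreatedBy: "ADD file", EmptyLayer: false}, {CreatedBy: "ENV x=y", EmptyLayer: true}}
	fmt.Println(checkHistory(h, []string{"sha256:abc"}))
}
```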
Nalin Dahyabhai
81dfe0a964 When we say we skip a secrets config file, do so
When we warn about not processing a secrets configuration file, actually
skip anything we might have salvaged from it to make our behavior match
the warning.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #380
Approved by: rhatdan
2018-01-05 16:09:53 +00:00
Nalin Dahyabhai
9bff989832 Use NewImageSource() instead of NewImage()
Use NewImageSource() instead of NewImage() when checking if an image is
actually there, since it makes the image library do less work while
answering the same question for us.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #381
Approved by: rhatdan
2018-01-05 15:54:34 +00:00
TomSweeneyRedHat
b8740e386e Add --all to remove containers
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #382
Approved by: rhatdan
2018-01-05 15:53:19 +00:00
Daniel J Walsh
9f5e1b3a77 Make lint was complaining about some vetshadowed err
We often use err as a variable inside of subblocks, and
we don't want golint to complain about it.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #379
Approved by: nalind
2018-01-03 21:10:28 +00:00
Daniel J Walsh
01f8c7afee Remove chrootuser handling and use libpod/pkg
I have made a subpackage of libpod to handle chrootuser,
using the user code from buildah.

This patch removes user handling from buildah and uses
projectatomic/libpod/pkg/chrootuser

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #377
Approved by: nalind
2018-01-03 15:36:10 +00:00
TomSweeneyRedHat
67e5341846 Add kernel version requirement to install.md and touchups
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #372
Approved by: rhatdan
2018-01-03 13:00:36 +00:00
Daniel J Walsh
123493895f Merge pull request #370 from rhatdan/master
Bump to version 0.10
2017-12-26 07:18:41 -05:00
Daniel J Walsh
129fb109d5 Bump to version 0.10
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-12-24 06:59:23 -05:00
Boaz Shuster
979c945674 Display Config and Manifest as strings
Builder has the Config and the Manifest fields which are []byte.
The type is []byte because it's easier to serialize them that way.
However, when these fields are displayed in the following way:

buildah inspect -f '{{.Config}} {{.Manifest}}' IMAGE

they might be shown as a byte slice.

This patch uses a struct that wraps Builder's exposed
fields such as Config, Manifest, and Container as strings, and
thus makes them readable when inspecting a specific field.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #368
Approved by: rhatdan
2017-12-19 22:16:37 +00:00
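Wrapping the raw bytes as strings, as the commit above does, is a one-line conversion in Go; the Go template engine then prints text instead of a byte-slice dump. A sketch with invented field names:

```go
package main

import "fmt"

// builderView wraps raw config/manifest bytes as strings so template
// output (e.g. buildah inspect -f '{{.Config}}') is readable text rather
// than a byte slice. Field names are illustrative.
type builderView struct {
	Config   string
	Manifest string
}

func view(config, manifest []byte) builderView {
	return builderView{Config: string(config), Manifest: string(manifest)}
}

func main() {
	raw := []byte(`{"architecture":"amd64"}`)
	fmt.Println(view(raw, nil).Config)
}
```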
TomSweeneyRedHat
c77a8d39f1 Create Registry in Docker for Travis CI
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #290
Approved by: rhatdan
2017-12-18 21:26:57 +00:00
TomSweeneyRedHat
f4151372e5 Create Tutorials menu and add link to README.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #367
Approved by: rhatdan
2017-12-15 21:28:32 +00:00
Nalin Dahyabhai
0705787a07 Bump containers/image
Bump containers/image to 4fdf9c9b8a1e014705581ff77df6446e67a8318d.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #366
Approved by: nalind
2017-12-15 17:54:40 +00:00
Nalin Dahyabhai
a5129ec3eb Add a test that 'rmi' works with truncated image IDs
Add a test to ensure that 'buildah rmi' works if passed a truncated
version of an image's ID.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #361
Approved by: rhatdan
2017-12-15 12:13:47 +00:00
Nalin Dahyabhai
47ac96155f Use configured registries to resolve image names
When locating an image for pulling, inspection, or pushing, if we're
given an image name that doesn't include a domain/registry, try building
a set of candidate names using the configured registries as domains, and
then pull/inspect/push using the first of those names that works.

If a name that we're given corresponds to a prefix of the ID of a local
image, skip completion and use the ID directly instead.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #360
Approved by: rhatdan
2017-12-14 22:21:16 +00:00
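Building candidate names from the configured registries, as described above, can be sketched simply; this ignores the already-qualified and ID-prefix cases the real code also handles:

```go
package main

import "fmt"

// candidates expands a short image name into the fully-qualified names
// to try, one per configured registry, in order.
func candidates(name string, registries []string) []string {
	out := make([]string, 0, len(registries))
	for _, r := range registries {
		out = append(out, r+"/"+name)
	}
	return out
}

func main() {
	fmt.Println(candidates("busybox", []string{"docker.io", "quay.io"}))
}
```

The first candidate that resolves is the one pulled, inspected, or pushed.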
Nalin Dahyabhai
8b2b56d9b8 Update to work with newer image library
Update shallowCopy() to work with the newer version of image.
Remove things from Push() that we don't need to do any more.
Preserve digests in image names, make sure we update creation times, and
add a test to ensure that we can pull, commit, and push using such names
as sources.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #187
Approved by: rhatdan
2017-12-14 20:57:13 +00:00
Nalin Dahyabhai
544e63de42 Bump containers/image and containers/storage
Update containers/image and containers/storage to current master
(17449738f2bb4c6375c20dcdcfe2a6cccf03f312 and
0d32dfce498e06c132c60dac945081bf44c22464, respectively).

Also updates github.com/docker/docker, golang.org/x/sys, and
golang.org/x/text.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #187
Approved by: rhatdan
2017-12-14 20:57:13 +00:00
Máirín Duffy
43a025ebf9 updating logo reference in README
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Removing old 'underwear hat' logo, replacing with updated logo artwork

Closes: #359
Approved by: rhatdan
2017-12-13 17:02:17 +00:00
Antoine Beaupré
6116d6a9bc install golang metapackage from backports
`golang-1.8` *is* available in stretch, but /usr/bin/go does *not*
point to it by default, *unless* the `golang` meta-package is
installed from backports.

Signed-off-by: Antoine Beaupré <anarcat@debian.org>

Closes: #354
Approved by: rhatdan
2017-12-13 12:38:52 +00:00
Antoine Beaupré
e5aa6c9fc5 add install.runc target
We need this for platforms like Ubuntu and Debian that do not ship a
standard (post 1.0rc4) version of runc.

I'm assuming here this is why we're building `runc` on our own here -
but it doesn't make sense to just do that while leaving only a symlink
in $PWD. We want to actually install the thing as well. So we add an
`install.runc` target, similar to `install.libseccomp.sudo` to make
sure we install `runc` in the right location.

An alternative to this would be to change the documentation to do the
`install` command by hand, but this is more error-prone. As runc
trickles down to the Debian distros, we can then just remove the `make
install.runc` call and, eventually, the target itself.

Closes: #355

Signed-off-by: Antoine Beaupré <anarcat@debian.org>

Closes: #354
Approved by: rhatdan
2017-12-13 12:38:52 +00:00
Antoine Beaupré
95ca6c1e1f run make install as sudo
Explicitly using `sudo` in the `make install` line makes it clear that
the rest of the commands can (and probably should) be run as non-root.

Signed-off-by: Antoine Beaupré <anarcat@debian.org>

Closes: #354
Approved by: rhatdan
2017-12-13 12:38:52 +00:00
Antoine Beaupré
7244ef44fb add Debian stable install instructions
Note that the instructions may seem unusual to people used to
`apt-key`, but they conform to the [emerging standard](https://wiki.debian.org/DebianRepository/UseThirdParty) for
third-party repositories in Debian.

We use ostree from backports because it matches the version in the
Ubuntu Flatpak PPA. We also explicitly require golang 1.8, which gives
us a 1.8.1 runtime in stretch. We otherwise use the Project Atomic
repository, but that's only because of Skopeo and similar tools.

Signed-off-by: Antoine Beaupré <anarcat@debian.org>

Closes: #354
Approved by: rhatdan
2017-12-13 12:38:52 +00:00
Antoine Beaupré
9df6f62a4c add headings for different OSes
Signed-off-by: Antoine Beaupré <anarcat@debian.org>

Closes: #354
Approved by: rhatdan
2017-12-13 12:38:52 +00:00
Daniel J Walsh
bf01a80b2b Merge pull request #349 from TomSweeneyRedHat/dev/tsweeney/baseline2
Touchup baseline and rpm tests
2017-12-11 12:00:57 -06:00
Daniel J Walsh
ccd3b3fedb Merge pull request #351 from ripcurld0/small_nitpick
Small nitpick at matches(Since,Before)Image in cmd/buildah/images
2017-12-08 20:49:49 -06:00
Nalin Dahyabhai
4b4e25868c Merge pull request #350 from ipbabble/tutorial2
Add a new tutorial for using Buildah with registries
2017-12-07 17:12:36 -05:00
William Henry
4d943752fe Fixed some more Tom nits.
Signed-off-by: William Henry <whenry@redhat.com>
2017-12-07 10:57:14 -07:00
William Henry
9128a40ada Fixed some Tom nits.
Signed-off-by: William Henry <whenry@redhat.com>
2017-12-07 09:18:13 -07:00
William Henry
8910199181 Add a new tutorial for using Buildah with registries
Signed-off-by: William Henry <whenry@redhat.com>
2017-12-07 08:40:03 -07:00
Fabio Bertinatto
1fc5a49958 Add --chown option to add/copy commands
Signed-off-by: Fabio Bertinatto <fbertina@redhat.com>

Closes: #336
Approved by: rhatdan
2017-12-07 13:45:12 +00:00
Boaz Shuster
98f1533731 Small nitpick at matches(Since,Before)Image in cmd/buildah/images
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>
2017-12-07 14:09:45 +02:00
TomSweeneyRedHat
aae843123f Touchup baseline and rpm tests
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-12-06 11:14:09 -05:00
Nalin Dahyabhai
77804bf256 Merge pull request #348 from vbatts/readme
README: better first glance idea
2017-12-05 13:40:28 -05:00
Vincent Batts
7aaa21d70a README: better first glance idea
Fixes #347

Make the project's first-glance easier to digest.

Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2017-12-05 09:09:44 -05:00
TomSweeneyRedHat
ee9b8cde5a Create rpm and baseline test script
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #346
Approved by: rhatdan
2017-12-04 17:01:00 +00:00
TomSweeneyRedHat
04ea079130 Bump version to 0.9
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #345
Approved by: rhatdan
2017-12-02 11:51:00 +00:00
Nalin Dahyabhai
2dd03d6741 tests/rpm.bats: use Fedora 27
Update tests/rpm.bats to use Fedora 27.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #342
Approved by: rhatdan
2017-12-01 13:22:58 +00:00
TomSweeneyRedHat
1680a5f0a0 Fix iterator and a few typos in baseline test
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #335
Approved by: rhatdan
2017-12-01 00:02:53 +00:00
TomSweeneyRedHat
5dd1a5f3c9 Touchup test scripts for some minor nits
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #335
Approved by: rhatdan
2017-12-01 00:02:53 +00:00
TomSweeneyRedHat
15792b227a Allow push to use the image id
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #341
Approved by: nalind
2017-11-30 23:47:13 +00:00
Daniel J Walsh
38d3cddb0c Make sure builtin volumes have the correct label
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #339
Approved by: nalind
2017-11-28 21:44:17 +00:00
Nalin Dahyabhai
a99d5f0798 Bump the GIT_VALIDATION_EPOCH to a newer version
Bump the GIT_VALIDATION_EPOCH in tests/validate/git-validation.sh to a
later commit.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #340
Approved by: rhatdan
2017-11-28 19:39:37 +00:00
Nalin Dahyabhai
53c3e6434d Bump RPM version to 0.8
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #340
Approved by: rhatdan
2017-11-28 19:39:37 +00:00
Daniel J Walsh
bf40000e72 Bump to v0.8 2017-11-22 16:35:41 +00:00
Daniel J Walsh
fb99d85b76 Need to block access to kernel file systems in /proc and /sys
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #333
Approved by: TomSweeneyRedHat
2017-11-22 16:13:50 +00:00
Daniel J Walsh
85476bf093 Buildah bud does not work with SELinux
buildah bud was not setting the mount label on the image,
so SELinux in enforcing mode was blocking writes to the image

This patch also fixes a similar problem with the `buildah mount`
command

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #332
Approved by: TomSweeneyRedHat
2017-11-22 15:36:51 +00:00
Urvashi Mohnani
819c227bf2 Mention docker login in documentation for authentication
Since we fall back to reading the credentials from $HOME/.docker/config
set by docker login when kpod login doesn't have the credentials

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #331
Approved by: rhatdan
2017-11-21 18:06:44 +00:00
TomSweeneyRedHat
4b23819189 Touchup test scripts for some minor nits
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #330
Approved by: rhatdan
2017-11-21 15:39:39 +00:00
Daniel J Walsh
b893112a90 Merge pull request #328 from TomSweeneyRedHat/dev/tsweeney/baselinetest
Create baseline test script
2017-11-21 09:41:03 -05:00
TomSweeneyRedHat
9fa477e303 Create baseline test script
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-11-19 14:27:38 -05:00
Daniel J Walsh
b7e3320fe4 Bump to 0.7
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-16 22:00:38 +00:00
Daniel J Walsh
58025ee1be Ignore errors when trying to read containers buildah.json
Since containers can be created using other tools than buildah,
we cannot fail when they don't have a buildah config.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #327
Approved by: nalind
2017-11-16 21:12:38 +00:00
Urvashi Mohnani
7a3bc6efd4 Use credentials from kpod login for buildah
buildah push and from now use the credentials stored in ${XDG_RUNTIME_DIR}/containers/auth.json by kpod login.
If the auth file path is changed, buildah push and from can get the credentials from the custom auth file
using the --authfile flag,
e.g. buildah push --authfile /tmp/auths/myauths.json alpine docker://username/image

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #325
Approved by: rhatdan
2017-11-16 18:08:52 +00:00
Daniel J Walsh
de0fb93f3d Bump to 0.6
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-15 17:47:53 +00:00
Urvashi Mohnani
4419612150 Add manifest type conversion to buildah push
buildah push supports manifest type conversion when pushing using the 'dir' transport.
Manifest types include oci, v2s1, and v2s2,
e.g. buildah push --format v2s2 alpine dir:my-directory

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #321
Approved by: rhatdan
2017-11-15 13:38:28 +00:00
Urvashi Mohnani
5ececfad2c Vendor in latest container/image
Adds support for converting manifest types when using the dir transport

Signed-off-by: Urvashi Mohnani <umohnani@redhat.com>

Closes: #321
Approved by: rhatdan
2017-11-15 13:38:28 +00:00
TomSweeneyRedHat
4f376bbb5e Set option.terminal appropriately in run
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #323
Approved by: rhatdan
2017-11-14 19:28:51 +00:00
Anthony Green
d03123204d Add RHEL build instructions.
Signed-off-by: Anthony Green <green@redhat.com>

Closes: #322
Approved by: rhatdan
2017-11-10 11:36:11 +00:00
Nalin Dahyabhai
0df1c44b12 tests: check $status whenever we use run
Always be sure to check $status after using the run helper.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
75fbb8483e Test that "run" fails with unresolvable names
Add a test that makes sure that "buildah run" fails if it can't resolve
the name of the user for the container.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
52e2737460 Rework how we do UID resolution in images
* Use chroot() instead of trying to read the right file ourselves.

This should resolve #66.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
c83cd3fba9 Accept numeric USER values with no group ID
Change our behavior when we're given USER with a numeric UID and no GID:
we no longer error out if the UID doesn't correspond to a known user
whose primary GID we could use.  Instead, use GID 0.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
d41ac23a03 Add a test for USER symlink resolution
Add a test that makes sure we catch cases where we attempt to open a
file in the container's tree that's actually a symlink that points out
of the tree.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
dbebeb7235 Never use host methods for parsing USER values
Drop fallbacks for resolving USER values that attempt to look up names
on the host, since that's never predictable.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
9e129fd653 fopenContainerFile: scope filename lookups better
Switch fopenContainerFile from using Stat/Lstat after opening the file
to using openat() to walk the given path, resolving links to keep them
from escaping the container's root fs.  This should resolve #66.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #313
Approved by: rhatdan
2017-11-10 09:58:08 +00:00
Nalin Dahyabhai
0a44c7f162 "run --hostname test": do less setup
We don't need to mount the container for this test or add files to it,
and switching to a smaller base image that already includes a "hostname"
command means we don't need to run a package installer in the container.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:58 +00:00
Nalin Dahyabhai
b12735358a "run --hostname test": print $output more
Make it easier to troubleshoot the "run --hostname" test.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:58 +00:00
Nalin Dahyabhai
318beaa720 integration tests: default to /var/tmp
Default to running integration tests using /var/tmp as scratch space,
since it's more likely to support proper SELinux labeling than /tmp,
which is more likely to be on a tmpfs.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #320
Approved by: nalind
2017-11-09 20:27:57 +00:00
Nalin Dahyabhai
f7dc659e52 Bump github.com/vbatts/tar-split
Update github.com/vbatts/tar-split to v0.10.2 and pin that version
instead of master, to pick up https://github.com/vbatts/tar-split/pull/42

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #318
Approved by: rhatdan
2017-11-08 17:36:54 +00:00
Daniel J Walsh
35afa1c1f4 Merge pull request #317 from rhatdan/master
Bump to v0.5
2017-11-07 19:51:29 -05:00
Daniel J Walsh
c71b655cfc Bump to v0.5
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-08 00:50:37 +00:00
Daniel J Walsh
ec9db747d9 Merge pull request #316 from rhatdan/selinux
Add secrets patch to buildah
2017-11-07 19:40:04 -05:00
Daniel J Walsh
3e8ded8646 Add secrets patch to buildah
Signed-off-by: umohnani8 <umohnani@redhat.com>
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-11-08 00:01:57 +00:00
Daniel J Walsh
966f32b2ac Add proper SELinux labeling to buildah run
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #294
Approved by: nalind
2017-11-07 22:40:29 +00:00
Daniel J Walsh
cde99f8517 Merge pull request #308 from TomSweeneyRedHat/dev/tsweeney/tip2
Add go tip to build, but allow it to have failures
2017-11-07 14:14:13 -05:00
TomSweeneyRedHat
01db066498 Add tls-verify to bud command
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #297
Approved by: nalind
2017-11-07 19:07:30 +00:00
TomSweeneyRedHat
9653e2ba9a Add go tip to build, but allow it to have failures
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-11-02 16:59:37 -04:00
Daniel J Walsh
4d87007327 Merge pull request #307 from ipbabble/tutorial-fix
Made some edits based on tsweeney feedback.
2017-11-02 13:40:05 -04:00
William Henry
dbea38b440 Add root prompt and some other minor changes
Signed-off-by: William Henry <whenry@redhat.com>
2017-11-02 10:30:11 -06:00
William Henry
0bc120edda Made some edits based on tsweeney feedback.
Signed-off-by: William Henry <whenry@redhat.com>
2017-11-02 10:13:30 -06:00
Daniel J Walsh
297bfa6b30 Merge pull request #305 from TomSweeneyRedHat/dev/tsweeney/tip
Add go 1.9.x to .travis.yml and remove go tip build temporarily
2017-11-02 11:57:17 -04:00
TomSweeneyRedHat
58c078fc88 Add go 1.9.x to .travis.yml and fix tip
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-11-02 11:09:38 -04:00
William Henry
79663fe1a0 Added file tutorials/01-intro.md
A basic introduction to buildah tutorial

Topics covered:
buildah from a base
briefly describes containers/storage and containers/image
buildah run
buildah from scratch
installing packages and files to a scratch image
buildah push and running a buildah built container in docker
buildah bud

Signed-off-by: William Henry <whenry@redhat.com>

Closes: #302
Approved by: rhatdan
2017-10-31 13:06:51 +00:00
Daniel J Walsh
9a4e0e8a28 Merge pull request #299 from rhatdan/logos
Add logos for buildah
2017-10-31 08:47:07 -04:00
TomSweeneyRedHat
515386e1a7 Fix for rpm.bats test issue
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #303
Approved by: nalind
2017-10-30 20:04:51 +00:00
Daniel J Walsh
49bf6fc095 Merge pull request #298 from rhatdan/contributing
Add CONTRIBUTING.md document from skopeo
2017-10-26 15:23:55 -07:00
Daniel J Walsh
d63314d737 Add logos for buildah
Thanks to Máirín Duffy for building these logos for the Buildah project.

Patch also fixes references to the project Buildah to be upper case.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-26 15:23:24 -07:00
Daniel J Walsh
b186786563 add CONTRIBUTING.md
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-25 12:06:06 -07:00
Daniel J Walsh
3cc0218280 Merge pull request #296 from TomSweeneyRedHat/dev/tsweeney/docfix/17
Add required runc version to README.md
2017-10-22 06:42:48 -04:00
TomSweeneyRedHat
b794edef6a Add required runc version to README.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2017-10-21 14:41:49 -04:00
Ace-Tang
5cc3c510c5 Fix misaligned output when listing images with digests
Signed-off-by: Ace-Tang <aceapril@126.com>

Closes: #289
Approved by: rhatdan
2017-10-18 13:58:31 +00:00
Daniel J Walsh
5aec4fe722 Vendor in latest code from containers/storage
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #288
Approved by: rhatdan
2017-10-13 19:53:54 +00:00
Daniel J Walsh
1513b82eed Merge pull request #237 from rhatdan/vendor
Vendor in latest containers/storage and vendor/github.com/sirupsen
2017-10-10 14:44:28 -04:00
Daniel J Walsh
7d5e57f7ff Optimize regex matching
make lint is showing that we should compile the regex before using it
in a for loop.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-10 17:36:24 +00:00
Daniel J Walsh
8ecefa978c Vendor in changes to support sirupsen/logrus
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-10 17:30:11 +00:00
Daniel J Walsh
a673ac7ae6 Merge pull request #284 from rhatdan/cleanup
Fix validateOptions to match kpod
2017-10-10 08:22:36 -04:00
Daniel J Walsh
99e512e3f2 Merge pull request #282 from nalind/unit-tests
Run unit tests in CI
2017-10-09 16:20:56 -04:00
Daniel J Walsh
166d4db597 Fix validateOptions to match kpod
This patch will allow for the possibility of a valid "-"
option value.  This is often used for reading from STDIN,
and should future-proof us against breakage if that is ever added.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-10-09 16:12:53 +00:00
Nalin Dahyabhai
c04748f3fb Allow specifying store locations for unit tests
Add options for specifying the root location when we're running unit
tests, so that we don't try to use the system's default location, which
we should avoid messing with if we can.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-09 11:29:49 -04:00
Nalin Dahyabhai
63e314ea22 Add make targets for unit tests, and run them
Add targets to the top-level Makefile for running unit and integration
tests, and start having Travis run the unit tests.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 15:44:06 -04:00
Nalin Dahyabhai
0d6bf94eb6 Fix some validation errors
Fix some validation errors flagged by metalinter.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
f88cddfb4d unit tests: always make sure we have images
In unit tests that assume that one or more images are present, make sure
we actually have pulled some images.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
0814bc19bd Make filtering by date use the image's date
When filtering "images" output by the date of an image, use the creation
date recorded in the image.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
422ad51afb Fix a compile error in the unit tests
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
a5a3a7be11 Update test for output formatting changes
When we added more spaces between the columns of output from the
"images" command, we didn't update the test to expect it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
2017-10-06 12:52:15 -04:00
Nalin Dahyabhai
a4b830a9fc images: don't list unnamed images twice
The "images" command was erroneously listing images that don't have
names twice, once with no name, and a second time with "<none>" as a
placeholder.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #280
Approved by: rhatdan
2017-10-06 15:35:10 +00:00
Daniel J Walsh
68ccdd77fe Fix timeout issue
We are seeing some weird timeouts in testing; if we eliminate
the bridged networking in the container runtime, we hope to be
able to eliminate this problem.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #283
Approved by: nalind
2017-10-06 14:30:56 +00:00
TomSweeneyRedHat
6124673bbc Add further tty verbiage to buildah run
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #276
Approved by: rhatdan
2017-10-03 10:44:07 +00:00
TomSweeneyRedHat
cac2dd4dd8 Make inspect try an image on failure if type not specified
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #277
Approved by: rhatdan
2017-10-03 10:31:45 +00:00
Lokesh Mandvekar
70b57afda6 use Makefile var for go compiler
This will allow compilation with a custom go binary,
for example /usr/lib/go-1.8/bin/go instead of /usr/bin/go on Ubuntu
16.04 which is still version 1.6

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>

Closes: #278
Approved by: rhatdan
2017-10-03 10:31:07 +00:00
Daniel J Walsh
f6c2a1e24e Make sure pushing ends up with CLI on a fresh new line
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #275
Approved by: rhatdan
2017-09-29 15:58:39 +00:00
Daniel J Walsh
480befa88f Add CHANGELOG.md to document buildah progress.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #273
Approved by: rhatdan
2017-09-28 17:08:11 +00:00
Daniel J Walsh
a3fef4879e Validate options
If a string option is passed in and it is not followed by a value,
then error out with a message saying the option requires a value.

For example

buildah from --creds --pull dan
option --creds requires a value

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #270
Approved by: rhatdan
2017-09-28 17:07:03 +00:00
Daniel J Walsh
330cfc923c Only print logrus if in debug mode
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #270
Approved by: rhatdan
2017-09-28 17:07:03 +00:00
Daniel J Walsh
0fc0551edd Cleanup buildah-run code
Part of the general buildah code cleanup.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
296a752555 Cleanup buildah-rmi code
Part of the general buildah code cleanup.

1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
3fbfb56001 Cleanup buildah-mount code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
50a6a566ca Cleanup buildah-inspect code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
aca2c96602 Cleanup buildah-config code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
57a0f38db6 Cleanup buildah-images code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
ff39bf0b80 Cleanup buildah code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
4b38cff005 Cleanup buildah-push code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
89949a1156 Cleanup buildah-from code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
97ec4563b4 Cleanup buildah-containers code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
47665ad777 Cleanup buildah-commit code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
e1e58584a9 Cleanup buildah-bud code
1. Sort options so they are in alphabetical order
2. Remove extra lines of code for options parsing that do not really accomplish anything.
3. Remove variables when they are not necessary, i.e., don't create a variable to hold an
option that is only used once; use the option instead.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #267
Approved by: <username_without_leading_@>
2017-09-26 18:11:26 +00:00
Daniel J Walsh
62fc48433c Add support for buildah run --hostname
Need to set the hostname inside of a container.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #266
Approved by: nalind
2017-09-24 09:55:24 +00:00
Daniel J Walsh
a72aaa2268 Update buildah spec file to match new version
Match the version 0.4 in the spec file and add the
comments that went into the Fedora release

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #269
Approved by: rhatdan
2017-09-22 14:30:46 +00:00
Daniel J Walsh
9cbccf88cf Merge pull request #268 from rhatdan/master
Bump to version 0.4
2017-09-22 06:10:31 -04:00
Daniel J Walsh
de0d8cbdcf Bump to version 0.4
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-22 10:05:03 +00:00
Daniel J Walsh
333899deb6 Merge pull request #264 from lsm5/fix-readme-ubuntu
update README.md to remove Debian mentions
2017-09-22 05:59:07 -04:00
TomSweeneyRedHat
1d0b48d7da Add default transport to push if not provided
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #260
Approved by: rhatdan
2017-09-21 21:02:23 +00:00
Lokesh Mandvekar
a2765bb1be update README.md to remove Debian mentions
PPA install steps mentioned in README.md work only for Ubuntu xenial and
zesty so far.

Signed-off-by: Lokesh Mandvekar <lsm5@fedoraproject.org>
2017-09-20 02:47:36 -04:00
Daniel J Walsh
c19c8f9503 Merge pull request #261 from vbatts/example
examples: adding a basic lighttpd example
2017-09-18 10:42:04 -04:00
Vincent Batts
f17bfb937f examples: adding a basic lighttpd example
Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>
2017-09-18 08:43:52 -04:00
Daniel J Walsh
4e4ceff6cf Merge pull request #244 from rhatdan/unbuntu
Add build information for Ubuntu
2017-09-08 07:56:11 -04:00
Daniel J Walsh
ef532adb2f Add build information for Ubuntu
We should document required packages for installing on Ubuntu and Debian
to match up with the use on Fedora.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2017-09-06 06:30:27 -04:00
Nalin Dahyabhai
9327431e97 Avoid trying to print a nil ImageReference
When we fail to pull an image, don't try to include the name of the
image that pullImage() returned in the error text - it will have
returned nil for the pulled reference in most cases.  Instead, use the
name of the image as it was given to us.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #255
Approved by: nalind
2017-08-31 21:24:35 +00:00
TomSweeneyRedHat
c9c735e20d Add authentication to commit and push
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #250
Approved by: rhatdan
2017-08-29 15:20:19 +00:00
Nalin Dahyabhai
f28dcb3751 Auto-set build tags for ostree and selinux
Try to use pkg-config to check for 'ostree-1' and 'libselinux'.

If ostree-1 is not found, use the containers_image_ostree_stub build tag
to not require it, at the cost of not being able to use or write images
to the 'ostree' transport.

If libselinux is found, build with the 'selinux' tag.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #252
Approved by: rhatdan
2017-08-29 13:22:53 +00:00
Daniel J Walsh
9e088bd41d Add information on buildah from man page on transports
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Daniel J Walsh
52087ca1c5 Remove --transport flag
This is no simpler than putting the transport in the image name;
we should default to the registry specified in containers/image
and not override it.  People are confused by this option, and I
see no value in it.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #234
Approved by: rhatdan
2017-08-29 10:37:54 +00:00
Nalin Dahyabhai
0de0d23df4 Run: don't complain about missing volume locations
Don't worry about being unable to populate temporary volumes using the
contents of the location in the image where they're expected to be
mounted, when that location doesn't exist.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #248
Approved by: rhatdan
2017-08-24 10:41:29 +00:00
TomSweeneyRedHat
498f0ae9d7 Add credentials to buildah from
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #204
Approved by: nalind
2017-08-22 18:55:38 +00:00
Daniel J Walsh
ee91e6b981 Remove export command
We have implemented most of this code in kpod export, and we now
have kpod import/load/save.  No reason to implement them in both
commands.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #245
Approved by: nalind
2017-08-17 19:40:47 +00:00
Nalin Dahyabhai
265d2da6cf Always free signature.PolicyContexts
Whenever we create a containers/image/signature.PolicyContext, make sure
we don't forget to destroy it.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #231
Approved by: rhatdan
2017-08-14 12:02:07 +00:00
Nalin Dahyabhai
8eb7d6d610 Run(): create the right working directory
When ensuring that the working directory exists before running a
command, make sure we create the location that we set in the
configuration file that we pass to runc.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #241
Approved by: rhatdan
2017-08-10 20:14:54 +00:00
Nalin Dahyabhai
94f2bf025a Replace --registry with --transport
Replace --registry command line flags with --transport.  For backward
compatibility, add Transport as an additional setting that we prepend to
the still-optional Registry setting if the Transport and image name
alone don't provide a parseable image reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
Nalin Dahyabhai
262b43a866 Improve "from" behavior with unnamed references
Fix our instantiation behavior when the source image reference is not a
named reference.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #235
Approved by: rhatdan
2017-08-03 15:55:13 +00:00
Daniel J Walsh
5259a84b7a atomic transport is being deprecated, so we should not document it.
containers/image now fully supports pushing images and signatures to an
openshift/atomic registry using the docker:// transport.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #238
Approved by: rhatdan
2017-08-02 12:01:26 +00:00
TomSweeneyRedHat
bf83bc208d Add quiet description and touch ups
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #233
Approved by: rhatdan
2017-07-30 12:34:36 +00:00
Nalin Dahyabhai
e616dc116a Avoid parsing image metadata for dates and layers
Avoid parsing metadata that the image library keeps in order to find an
image's digest and creation date; instead, compute the digest from the
manifest, and read the creation date value by inspecting the image,
logging a debug-level diagnostic if it doesn't match the value that the
storage library has on record.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
Nalin Dahyabhai
933c18f2ad Read the image's creation date from public API
Use the storage library's new public field for retrieving an image's
creation date.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #218
Approved by: rhatdan
2017-07-28 12:23:07 +00:00
Nalin Dahyabhai
be5bcd549d Bump containers/storage and containers/image
Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #227
Approved by: rhatdan
2017-07-28 12:10:46 +00:00
Jonathan Lebon
c845d7a5fe ci: use Fedora registry and mount repos
Use the official Fedora 26 image from the Fedora registry rather than
from the Docker Hub.

Also mount yum repos from the host. This will speed up provisioning
because PAPR injects mirror repos that are much closer and faster.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Jonathan Lebon
16d9d97d8c ci: rename files to the new PAPR name
Rename the YAML file and its auxiliary files to the newly supported
name.

Signed-off-by: Jonathan Lebon <jlebon@redhat.com>

Closes: #225
Approved by: rhatdan
2017-07-27 18:39:31 +00:00
Nalin Dahyabhai
8e36b22a71 Makefile: "clean" should also remove test helpers
Make "clean" also remove the imgtype helper.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #216
Approved by: rhatdan
2017-07-26 20:59:38 +00:00
Nalin Dahyabhai
98c4e0d970 Don't panic if an image's ID can't be parsed
Return a "doesn't match" result if an image's ID can't be turned into a
valid reference for any reason.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #217
Approved by: rhatdan
2017-07-26 20:35:47 +00:00
Nalin Dahyabhai
83fe25ca4e Turn on --enable-gc when running gometalinter
It looks like the metalinter is running out of memory while running
tests under PAPR, so give this a try.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #221
Approved by: rhatdan
2017-07-26 17:43:05 +00:00
Nalin Dahyabhai
b7e9966fb2 Make sure that we can build an RPM
Add a CI test that ensures that we can build an RPM package on the
current version (as of this writing, 26) of Fedora, using the .spec file
under contrib.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #208
Approved by: jlebon
2017-07-25 21:03:38 +00:00
Nalin Dahyabhai
8a3ccb53c4 rmi: handle truncated image IDs
Have storageImageID() use a lower-level image lookup to let it handle
truncated IDs correctly.  Wrap errors in getImage().  When reporting
that an image is in use, report its ID correctly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
Nalin Dahyabhai
3e9a075b48 Don't leak containers/image Image references
In-memory image objects created using an ImageReference's NewImage()
method need to be Close()d.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
Nalin Dahyabhai
95d9d22949 Prefer higher-level storage APIs
Prefer higher-level storage APIs (Store) over lower-level storage APIs
(LayerStore, ImageStore, and ContainerStore objects), so that we don't
bypass synchronization and locking.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #211
Approved by: rhatdan
2017-07-25 20:52:33 +00:00
TomSweeneyRedHat
728f641179 Update README.md with buildah pronunciation
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #210
Approved by: rhatdan
2017-07-25 17:17:54 +00:00
Nalin Dahyabhai
8ce683f4fe Build our own libseccomp in Travis
The libseccomp2 in Travis (or rather, in the default repositories that
we have) is too old to support conditional filtering, so we need to
supply our own in order to not get "conditional filtering requires
libseccomp version >= 2.2.1" errors from runc.

That version also appears to be happy to translate the syscall name
_llseek from a name to a number that it doesn't recognize, triggering
"unrecognized syscall" errors.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
fd7762b7e2 Try to enable "run" tests in CI
Try to ensure that we have runc, so that we can test the "run" command
in CI.  In the absence of a compatible packaged version of runc, we may
have to build our own.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
9333e5369d Switch to running PAPR tests on Fedora 26
Run PAPR tests using Fedora 26 instead of Fedora 25.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Nalin Dahyabhai
e92020a4db Keep the version in the .spec file current
Add a test to compare the version we claim to be with the version
recorded in the RPM .spec file.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #202
Approved by: rhatdan
2017-07-24 13:03:42 +00:00
Dan Walsh
b9b2a8a7ef Vendor in latest containers/image and bump to version 0.3
The OCI 1.0 image specification has now been released, so we want to bump
the buildah version to support 1.0 images.

Bump to version 0.3

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #203
Approved by: rhatdan
2017-07-20 20:23:32 +00:00
Nalin Dahyabhai
b37a981500 Stop trying to set the Platform in runtime specs
run: The latest version of runtime-spec dropped the Platform field, so
stop trying to set it when generating a configuration for a runtime.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #201
Approved by: rhatdan
2017-07-20 18:38:19 +00:00
Nalin Dahyabhai
a500e22104 Update image-spec and runtime-spec to v1.0.0
Update to just-released versions of image-spec and runtime-spec, and the
latest version of runtime-tools.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #201
Approved by: rhatdan
2017-07-20 18:38:19 +00:00
Daniel J Walsh
ac2aad6343 Update the version number we identify as to 0.2.
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #199
Approved by: nalind
2017-07-18 20:55:06 +00:00
Dan Walsh
c8a887f512 Add support for -- ending options parsing to buildah run
If you specify an option in a buildah run command, the command fails.
The proper syntax for this is to add --

buildah run $ctr -- ls -l /

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #197
Approved by: nalind
2017-07-18 19:19:45 +00:00
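The `--` end-of-options convention adopted here is the standard one most CLI tools follow. A generic, non-buildah illustration using grep (the file path is illustrative):

```shell
# Create a file whose first line looks like an option.
printf -- '-foo\nbar\n' > /tmp/dashdash_demo.txt

# Without --, grep would try to parse "-foo" as one of its own flags.
# With --, option parsing stops and "-foo" is treated as the pattern.
grep -- -foo /tmp/dashdash_demo.txt
```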
Dan Walsh
f4a5511e83 Only print heading once when executing buildah images
The current buildah images command prints the heading twice; this
bug was introduced when the --json flag was added.

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #195
Approved by: rhatdan
2017-07-17 20:37:08 +00:00
Daniel J Walsh
a6f7d725a0 Add/Copy need to support glob syntax
This patch allows users to do
buildah add $ctr * /dest

Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #194
Approved by: nalind
2017-07-17 20:11:48 +00:00
Daniel J Walsh
dd98523b8d Add flag to remove containers on commit
I think this would be good practice to eliminate wasted disk space.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Signed-off-by: Dan Walsh <dwalsh@redhat.com>

Closes: #189
Approved by: rhatdan
2017-07-17 19:07:21 +00:00
Daniel J Walsh
98ca81073e Improve buildah push man page and help information
This better documents the options available to the user.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #186
Approved by: rhatdan
2017-07-13 19:58:04 +00:00
Tomas Tomecek
5f80a1033b add a way to disable PTY allocation
Fixes #179

Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #181
Approved by: nalind
2017-07-13 14:18:08 +00:00
Tomas Tomecek
70518e7093 clarify --runtime-flag of run command
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Fixes #178

Closes: #182
Approved by: rhatdan
2017-07-10 19:12:48 +00:00
Tomas Tomecek
0c70609031 gitignore build artifacts
* manpages
* buildah binary
* imgtype binary

Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #183
Approved by: rhatdan
2017-07-10 19:02:13 +00:00
Daniel J Walsh
b1c6243f8a Need \n for Printing untagged message
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #176
Approved by: rhatdan
2017-06-29 13:32:47 +00:00
Nalin Dahyabhai
12a3abf6fa Update to match newer storage and image-spec APIs
Update to adjust to new types and method signatures in just-updated
vendored code.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
f46ed32a11 Build imgtype independently
Just build imgtype once, and reuse the flags we use for the main binary.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
be2d536f52 Bump containers/storage and containers/image
Bump containers/storage and containers/image, and pin them to their
current versions.  This requires that we update image-spec to rc6 and
add github.com/ostreedev/ostree-go, which adds build-time requirements
on glib2-devel and ostree-devel.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #174
Approved by: rhatdan
2017-06-28 21:05:58 +00:00
Nalin Dahyabhai
72253654d5 imgtype: don't log at Fatal level
Logging at Fatal calls os.Exit(), which keeps us from shutting down
storage properly, which prevents test cleanup from succeeding.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
Nalin Dahyabhai
a2bd274d11 imgtype: add debugging, reexec, an optimization
In the imgtype test helper, add a -debug flag, correctly handle things
on the off chance that we need to call a reexec handler, and read the
manifest using the Manifest() method of an image that we're already
opening, rather than creating a source image just so that we can call
its GetManifest() method.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
Nalin Dahyabhai
416301306a Make it possible to run tests with non-vfs drivers
Make the tests use the storage driver named in $STORAGE_DRIVER, if one's
set, instead of hard-coding the default of "vfs".

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #162
Approved by: rhatdan
2017-06-28 20:16:31 +00:00
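The env-var-with-default pattern the tests use can be sketched in a line of shell. The variable name comes from the commit; everything else is illustrative:

```shell
# Use the driver named in $STORAGE_DRIVER, falling back to "vfs".
driver="${STORAGE_DRIVER:-vfs}"
echo "using storage driver: $driver"
```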
Daniel J Walsh
a49a32f55f Add buildah export support
Will export the contents of a container as a tarball.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #170
Approved by: rhatdan
2017-06-28 20:06:42 +00:00
Daniel J Walsh
7af6ab2351 Add the missing buildah version to the readme and the buildah man page
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #173
Approved by: rhatdan
2017-06-28 15:59:35 +00:00
Ryan Cole
6d85cd3f7d update 'buildah images' and 'buildah rmi' commands
add more flags to `buildah images` and `buildah rmi`, and write tests

Signed-off-by: Ryan Cole <rcyoalne@gmail.com>

Closes: #155
Approved by: rhatdan
2017-06-28 15:36:19 +00:00
Tomas Tomecek
d9a77b38fc readme: link to actual manpages
Signed-off-by: Tomas Tomecek <ttomecek@redhat.com>

Closes: #165
Approved by: rhatdan
2017-06-27 16:10:40 +00:00
Brent Baude
e50fee5738 cmd/buildah/containers.go: Add JSON output option
Consumers of the buildah output will need structured output such as
JSON.  This commit adds a --json option to
buildah containers.

Example output:
```
[
    {
        "ID": "8911b523771cb2e0a26ab9bb324fb5be4e992764fdd5ead86a936aa6de964d9a",
        "Builder": true,
        "ImageId": "26db5ad6e82d85265d1609e6bffc04331537fdceb9740d36f576e7ee4e8d1be3",
        "ImageName": "docker.io/library/alpine:latest",
        "ContainerName": "alpine-working-container"
    }
]

```

Signed-off-by: Brent Baude <bbaude@redhat.com>

Closes: #164
Approved by: rhatdan
2017-06-27 16:01:07 +00:00
umohnani8
63ca9028bc Add 'buildah version' command
Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #157
Approved by: rhatdan
2017-06-27 15:50:36 +00:00
Brent Baude
62845372ad cmd/buildah/images.go: Add JSON output option
The Atomic CLI will eventually need to be able to consume
structured output (in something like JSON).  This commit
adds a -j option to trigger JSON output of images.

Example output:
```
[
    {
        "id": "aa66247d48aedfa3e9b74e4a41d2c9e5d2529122c8f0d43417012028a66f4f3b",
        "names": [
            "docker.io/library/busybox:latest"
        ]
    },
    {
        "id": "26db5ad6e82d85265d1609e6bffc04331537fdceb9740d36f576e7ee4e8d1be3",
        "names": [
            "docker.io/library/alpine:latest"
        ]
    }
]
```

Signed-off-by: Brent Baude <bbaude@redhat.com>

Closes: #161
Approved by: rhatdan
2017-06-26 16:05:58 +00:00
Nalin Dahyabhai
5458250462 Update the example commit target to skip transport
Update the target name that we use when committing an image in the
example script to not mention the local storage transport, since the
default is to use it anyway.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #163
Approved by: rhatdan
2017-06-26 15:56:18 +00:00
Nalin Dahyabhai
8efeb7f4ac Handle "run" without an explicit command correctly
When "run" isn't explicitly given a command, mix the command and
entrypoint options and configured values together correctly.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #160
Approved by: rhatdan
2017-06-26 13:21:53 +00:00
Nalin Dahyabhai
303a8df35d Ensure volume points get created, and with perms
Ensure that volume points are created, if they don't exist, when they're
defined in a Dockerfile (#151), and that if we create them, we create
them with 0755 permissions (#152).

When processing RUN instructions or the run command, if we're not
mounting something in a volume's location, create a copy of the volume's
initial contents under the container directory and bind mount that.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #154
Approved by: rhatdan
2017-06-24 10:37:13 +00:00
Nalin Dahyabhai
21b1a9349d Add a -a/--all option to "buildah containers"
Add a --all option to "buildah containers" that causes it to go through
the full list of containers, providing information about the ones that
aren't buildah containers in addition to the ones that are.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #148
Approved by: rhatdan
2017-06-22 21:01:33 +00:00
TomSweeneyRedHat
8b99eae5e8 Add runc to required packages in README.md
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #153
Approved by: rhatdan
2017-06-20 21:04:51 +00:00
Nalin Dahyabhai
cd6b5870e2 Make shallowCopy() not use a temporary image
Modify shallowCopy() to not use a temporary image.  Assume that the big
data items that we formerly added to the temporary image are small
enough that we can just hang on to them.

Write everything to the destination reference instead of a temporary
image, read it all back using the low level APIs, delete the image, and
then recreate it using the new layer and the saved items and names.

This lets us lift the requirement that we shallowCopy only to images
with names, so that build-using-dockerfile will work without them again.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #150
Approved by: rhatdan
2017-06-16 15:34:53 +00:00
Nalin Dahyabhai
02f5235773 Add a quick note about state version values
Add a note that the version we record in our state file isn't
necessarily the package version, but is meant to change when we make
incompatible changes to the contents of the state file.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #147
Approved by: rhatdan
2017-06-14 17:47:07 +00:00
1066 changed files with 117046 additions and 56586 deletions

.copr/Makefile Normal file

@@ -0,0 +1,28 @@
#!/usr/bin/make -f
spec := contrib/rpm/buildah_copr.spec
outdir := $(CURDIR)
tmpdir := build
gitdir := $(PWD)/.git
rev := $(shell sed 's/\(.......\).*/\1/' $(gitdir)/$$(sed -n '/^ref:/{s/.* //;p}' $(gitdir)/HEAD))
date := $(shell date +%Y%m%d.%H%M)
version := $(shell sed -n '/Version:/{s/.* //;p}' $(spec))
release := $(date).git.$(rev)
srpm: $(outdir)/buildah-$(version)-$(release).src.rpm
$(tmpdir)/buildah.spec: $(spec)
@mkdir -p $(tmpdir)
sed '/^Release:/s/\(: *\).*/\1$(release)%{?dist}/' $< >$@
$(tmpdir)/$(version).tar.gz: $(gitdir)/..
@mkdir -p $(tmpdir)
tar c --exclude-vcs --exclude-vcs-ignores -C $< --transform 's|^\.|buildah-$(version)|' . | gzip -9 >$@
$(outdir)/buildah-$(version)-$(release).src.rpm: $(tmpdir)/buildah.spec $(tmpdir)/$(version).tar.gz
@mkdir -p $(outdir)
rpmbuild -D'_srcrpmdir $(outdir)' -D'_sourcedir $(tmpdir)' -bs $(tmpdir)/buildah.spec
.PHONY: srpm

.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,65 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Output of `rpm -q buildah` or `apt list buildah`:**
```
(paste your output here)
```
**Output of `buildah version`:**
```
(paste your output here)
```
**Output of `cat /etc/*release`:**
```
(paste your output here)
```
**Output of `uname -a`:**
```
(paste your output here)
```
**Output of `cat /etc/containers/storage.conf`:**
```
(paste your output here)
```

.gitignore vendored Normal file

@@ -0,0 +1,4 @@
docs/buildah*.1
/buildah
/imgtype
/build/


@@ -14,15 +14,26 @@ dnf install -y \
device-mapper-devel \
findutils \
git \
glib2-devel \
gnupg \
golang \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
libselinux-devel \
libselinux-utils \
make \
openssl \
ostree-devel \
skopeo-containers \
which
# Red Hat CI adds a merge commit, for testing, which fails the
# Install gomega
go get github.com/onsi/gomega/...
# PAPR adds a merge commit, for testing, which fails the
# short-commit-subject validation test, so tell git-validate.sh to only check
# up to, but not including, the merge commit.
export GITVALIDATE_TIP=$(cd $GOSRC; git log -2 --pretty='%H' | tail -n 1)
make -C $GOSRC install.tools all validate
$GOSRC/tests/test_runner.sh
make -C $GOSRC install.tools runc all validate test-unit test-integration TAGS="seccomp"

.papr.yml Normal file

@@ -0,0 +1,49 @@
branches:
- master
- auto
- try
host:
distro: fedora/26/atomic
required: true
tests:
# Let's create a self signed certificate and get it in the right places
- hostname
- ip a
- ping -c 3 localhost
- cat /etc/hostname
- mkdir -p /home/travis/auth
- openssl req -newkey rsa:4096 -nodes -sha256 -keyout /home/travis/auth/domain.key -x509 -days 2 -out /home/travis/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
- cp /home/travis/auth/domain.crt /home/travis/auth/domain.cert
- sudo mkdir -p /etc/docker/certs.d/docker.io/
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/docker.io/ca.crt
- sudo mkdir -p /etc/docker/certs.d/localhost:5000/
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/localhost:5000/ca.crt
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/localhost:5000/domain.crt
# Create the credentials file, then start up the Docker registry
- docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > /home/travis/auth/htpasswd
- docker run -d -p 5000:5000 --name registry -v /home/travis/auth:/home/travis/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/home/travis/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=/home/travis/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=/home/travis/auth/domain.key registry:2
# Test Docker setup
- docker ps --all
- docker images
- ls -alF /home/travis/auth
- docker pull alpine
- docker login localhost:5000 --username testuser --password testpassword
- docker tag alpine localhost:5000/my-alpine
- docker push localhost:5000/my-alpine
- docker ps --all
- docker images
- docker rmi docker.io/alpine
- docker rmi localhost:5000/my-alpine
- docker pull localhost:5000/my-alpine
- docker ps --all
- docker images
- docker rmi localhost:5000/my-alpine
# mount yum repos to inherit injected mirrors from PAPR
- docker run --net=host --privileged -v /etc/yum.repos.d:/etc/yum.repos.d.host:ro
-v $PWD:/code registry.fedoraproject.org/fedora:26 sh -c
"cp -fv /etc/yum.repos.d{.host/*.repo,} && /code/.papr.sh"


@@ -1,12 +0,0 @@
branches:
- master
- auto
- try
host:
distro: fedora/25/atomic
required: true
tests:
- docker run --privileged -v $PWD:/code fedora:25 /code/.redhat-ci.sh


@@ -1,14 +1,65 @@
language: go
go:
- 1.7
- 1.8
- tip
dist: trusty
sudo: required
go:
- 1.8
- 1.9.x
- tip
matrix:
# If the latest unstable development version of go fails, that's OK.
allow_failures:
- go: tip
# Don't hold on the tip tests to finish. Mark tests green if the
# stable versions pass.
fast_finish: true
services:
- docker
before_install:
- sudo add-apt-repository -y ppa:duggan/bats
- sudo apt-get update
- sudo apt-get -qq install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libselinux1-dev
- sudo apt-get -qq remove libseccomp2
- sudo apt-get -qq update
- sudo apt-get -qq install bats btrfs-tools git libdevmapper-dev libgpgme11-dev
- sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
- mkdir /home/travis/auth
install:
# Let's create a self signed certificate and get it in the right places
- hostname
- ip a
- ping -c 3 localhost
- cat /etc/hostname
- openssl req -newkey rsa:4096 -nodes -sha256 -keyout /home/travis/auth/domain.key -x509 -days 2 -out /home/travis/auth/domain.crt -subj "/C=US/ST=Foo/L=Bar/O=Red Hat, Inc./CN=localhost"
- cp /home/travis/auth/domain.crt /home/travis/auth/domain.cert
- sudo mkdir -p /etc/docker/certs.d/docker.io/
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/docker.io/ca.crt
- sudo mkdir -p /etc/docker/certs.d/localhost:5000/
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/localhost:5000/ca.crt
- sudo cp /home/travis/auth/domain.crt /etc/docker/certs.d/localhost:5000/domain.crt
# Create the credentials file, then start up the Docker registry
- docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > /home/travis/auth/htpasswd
- docker run -d -p 5000:5000 --name registry -v /home/travis/auth:/home/travis/auth:Z -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/home/travis/auth/htpasswd -e REGISTRY_HTTP_TLS_CERTIFICATE=/home/travis/auth/domain.crt -e REGISTRY_HTTP_TLS_KEY=/home/travis/auth/domain.key registry:2
script:
- make install.tools all validate
# Let's do some docker stuff just for verification purposes
- docker ps --all
- docker images
- ls -alF /home/travis/auth
- docker pull alpine
- docker login localhost:5000 --username testuser --password testpassword
- docker tag alpine localhost:5000/my-alpine
- docker push localhost:5000/my-alpine
- docker ps --all
- docker images
- docker rmi docker.io/alpine
- docker rmi localhost:5000/my-alpine
- docker pull localhost:5000/my-alpine
- docker ps --all
- docker images
- docker rmi localhost:5000/my-alpine
# Setting up Docker Registry is complete, let's do Buildah testing!
- make install.tools install.libseccomp.sudo all runc validate TAGS="apparmor seccomp containers_image_ostree_stub"
- go test -c -tags "apparmor seccomp `./btrfs_tag.sh` `./libdm_tag.sh` `./ostree_tag.sh` `./selinux_tag.sh`" ./cmd/buildah
- tmp=`mktemp -d`; mkdir $tmp/root $tmp/runroot; sudo PATH="$PATH" ./buildah.test -test.v -root $tmp/root -runroot $tmp/runroot -storage-driver vfs -signature-policy `pwd`/tests/policy.json
- cd tests; sudo PATH="$PATH" ./test_runner.sh

CHANGELOG.md Normal file

@@ -0,0 +1,80 @@
# Changelog
## 0.5 - 2017-11-07
Add secrets patch to buildah
Add proper SELinux labeling to buildah run
Add tls-verify to bud command
Make filtering by date use the image's date
images: don't list unnamed images twice
Fix timeout issue
Add further tty verbiage to buildah run
Make inspect try an image on failure if type not specified
Add support for `buildah run --hostname`
Tons of bug fixes and code cleanup
## 0.4 - 2017-09-22
### Added
Update buildah spec file to match new version
Bump to version 0.4
Add default transport to push if not provided
Add authentication to commit and push
Remove --transport flag
Run: don't complain about missing volume locations
Add credentials to buildah from
Remove export command
Bump containers/storage and containers/image
## 0.3 - 2017-07-20
## 0.2 - 2017-07-18
### Added
Vendor in latest containers/image and containers/storage
Update image-spec and runtime-spec to v1.0.0
Add support for -- ending options parsing to buildah run
Add/Copy need to support glob syntax
Add flag to remove containers on commit
Add buildah export support
update 'buildah images' and 'buildah rmi' commands
buildah containers/image: Add JSON output option
Add 'buildah version' command
Handle "run" without an explicit command correctly
Ensure volume points get created, and with perms
Add a -a/--all option to "buildah containers"
## 0.1 - 2017-06-14
### Added
Vendor in latest container/storage container/image
Add a "push" command
Add an option to specify a Create date for images
Allow building a source image from another image
Improve buildah commit performance
Add a --volume flag to "buildah run"
Fix inspect/tag-by-truncated-image-ID
Include image-spec and runtime-spec versions
buildah mount command should list mounts when no arguments are given.
Make the output image format selectable
commit images in multiple formats
Also import configurations from V2S1 images
Add a "tag" command
Add an "inspect" command
Update reference comments for docker types origins
Improve configuration preservation in imagebuildah
Report pull/commit progress by default
Contribute buildah.spec
Remove --mount from buildah-from
Add a build-using-dockerfile command (alias: bud)
Create manpages for the buildah project
Add installation for buildah and bash completions
Rename "list"/"delete" to "containers"/"rm"
Switch `buildah list quiet` option to only list container id's
buildah delete should be able to delete multiple containers
Correctly set tags on the names of pulled images
Don't mix "config" in with "run" and "commit"
Add a "list" command, for listing active builders
Add "add" and "copy" commands
Add a "run" command, using runc
Massive refactoring
Make a note to distinguish compression of layers
## 0.0 - 2017-01-26
### Added
Initial version, needs work

CONTRIBUTING.md Normal file

@@ -0,0 +1,142 @@
# Contributing to Buildah
We'd love to have you join the community! Below summarizes the processes
that we follow.
## Topics
* [Reporting Issues](#reporting-issues)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Communications](#communications)
* [Becoming a Maintainer](#becoming-a-maintainer)
## Reporting Issues
Before reporting an issue, check our backlog of
[open issues](https://github.com/projectatomic/buildah/issues)
to see if someone else has already reported it. If so, feel free to add
your scenario, or additional information, to the discussion. Or simply
"subscribe" to it to be notified when it is updated.
If you find a new issue with the project we'd love to hear about it! The most
important aspect of a bug report is that it includes enough information for
us to reproduce it. So, please include as much detail as possible and try
to remove the extra stuff that doesn't really relate to the issue itself.
The easier it is for us to reproduce it, the faster it'll be fixed!
Please don't include any private/sensitive information in your issue!
## Submitting Pull Requests
No Pull Request (PR) is too small! Typos, additional comments in the code,
new testcases, bug fixes, new features, more documentation, ... it's all
welcome!
While bug fixes can first be identified via an "issue", that is not required.
It's ok to just open up a PR with the fix, but make sure you include the same
information you would have included in an issue - like how to reproduce it.
PRs for new features should include some background on what use cases the
new code is trying to address. When possible and when it makes sense, try to break up
larger PRs into smaller ones - it's easier to review smaller
code changes. But only if those smaller ones make sense as stand-alone PRs.
Regardless of the type of PR, all PRs should include:
* well documented code changes
* additional testcases. Ideally, they should fail w/o your code change applied
* documentation changes
Squash your commits into logical pieces of work that might want to be reviewed
separately from the rest of the PRs. But, squashing down to just one commit is ok
too since in the end the entire PR will be reviewed anyway. When in doubt,
squash.
PRs that fix issues should include a reference like `Closes #XXXX` in the
commit message so that github will automatically close the referenced issue
when the PR is merged.
<!--
All PRs require at least two LGTMs (Looks Good To Me) from maintainers.
-->
### Sign your PRs
The sign-off is a line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name (sorry, no pseudonyms or anonymous contributions).
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
## Communications
For general questions or discussions, please use the
IRC group on `irc.freenode.net` called `cri-o`
that has been set up.
For discussions around issues/bugs and features, you can use the github
[issues](https://github.com/projectatomic/buildah/issues)
and
[PRs](https://github.com/projectatomic/buildah/pulls)
tracking system.
<!--
## Becoming a Maintainer
To become a maintainer you must first be nominated by an existing maintainer.
If a majority (>50%) of maintainers agree then the proposal is adopted and
you will be added to the list.
Removing a maintainer requires at least 75% of the remaining maintainers
approval, or if the person requests to be removed then it is automatic.
Normally, a maintainer will only be removed if they are considered to be
inactive for a long period of time or are viewed as disruptive to the community.
The current list of maintainers can be found in the
[MAINTAINERS](MAINTAINERS) file.
-->


@@ -1,17 +1,30 @@
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh)
AUTOTAGS := $(shell ./btrfs_tag.sh) $(shell ./libdm_tag.sh) $(shell ./ostree_tag.sh) $(shell ./selinux_tag.sh)
TAGS := seccomp
PREFIX := /usr/local
BINDIR := $(PREFIX)/bin
BASHINSTALLDIR=${PREFIX}/share/bash-completion/completions
BUILDFLAGS := -tags "$(AUTOTAGS) $(TAGS)"
GO := go
all: buildah docs
GIT_COMMIT := $(shell git rev-parse --short HEAD)
BUILD_INFO := $(shell date +%s)
RUNC_COMMIT := c5ec25487693612aed95673800863e134785f946
LIBSECCOMP_COMMIT := release-2.3
LDFLAGS := -ldflags '-X main.gitCommit=${GIT_COMMIT} -X main.buildInfo=${BUILD_INFO}'
all: buildah imgtype docs
buildah: *.go imagebuildah/*.go cmd/buildah/*.go docker/*.go util/*.go
go build -o buildah $(BUILDFLAGS) ./cmd/buildah
$(GO) build $(LDFLAGS) -o buildah $(BUILDFLAGS) ./cmd/buildah
imgtype: *.go docker/*.go util/*.go tests/imgtype.go
$(GO) build $(LDFLAGS) -o imgtype $(BUILDFLAGS) ./tests/imgtype.go
.PHONY: clean
clean:
$(RM) buildah
$(RM) buildah imgtype build
$(MAKE) -C docs clean
.PHONY: docs
@@ -38,11 +51,25 @@ validate:
.PHONY: install.tools
install.tools:
go get -u $(BUILDFLAGS) github.com/cpuguy83/go-md2man
go get -u $(BUILDFLAGS) github.com/vbatts/git-validation
go get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
$(GO) get -u $(BUILDFLAGS) github.com/cpuguy83/go-md2man
$(GO) get -u $(BUILDFLAGS) github.com/vbatts/git-validation
$(GO) get -u $(BUILDFLAGS) github.com/onsi/ginkgo/ginkgo
$(GO) get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 -i
.PHONY: runc
runc: gopath
rm -rf ../../opencontainers/runc
git clone https://github.com/opencontainers/runc ../../opencontainers/runc
cd ../../opencontainers/runc && git checkout $(RUNC_COMMIT) && $(GO) build -tags "$(AUTOTAGS) $(TAGS)"
ln -sf ../../opencontainers/runc/runc
.PHONY: install.libseccomp.sudo
install.libseccomp.sudo: gopath
rm -rf ../../seccomp/libseccomp
git clone https://github.com/seccomp/libseccomp ../../seccomp/libseccomp
cd ../../seccomp/libseccomp && git checkout $(LIBSECCOMP_COMMIT) && ./autogen.sh && ./configure --prefix=/usr && make all && sudo make install
.PHONY: install
install:
install -D -m0755 buildah $(DESTDIR)/$(BINDIR)/buildah
@@ -51,3 +78,18 @@ install:
.PHONY: install.completions
install.completions:
install -m 644 -D contrib/completions/bash/buildah $(DESTDIR)/${BASHINSTALLDIR}/buildah
.PHONY: install.runc
install.runc:
install -m 755 ../../opencontainers/runc/runc $(DESTDIR)/$(BINDIR)/
.PHONY: test-integration
test-integration:
ginkgo -v tests/e2e/.
cd tests; ./test_runner.sh
.PHONY: test-unit
test-unit:
tmp=$(shell mktemp -d) ; \
mkdir -p $$tmp/root $$tmp/runroot; \
$(GO) test -v -tags "$(AUTOTAGS) $(TAGS)" ./cmd/buildah -args -root $$tmp/root -runroot $$tmp/runroot -storage-driver vfs -signature-policy $(shell pwd)/tests/policy.json

README.md

@@ -1,4 +1,6 @@
buildah - a tool which facilitates building OCI container images
![buildah logo](https://cdn.rawgit.com/projectatomic/buildah/master/logos/buildah-logo_large.png)
# [Buildah](https://www.youtube.com/embed/YVk5NgSiUw8) - a tool which facilitates building OCI container images
================================================================
[![Go Report Card](https://goreportcard.com/badge/github.com/projectatomic/buildah)](https://goreportcard.com/report/github.com/projectatomic/buildah)
@@ -6,7 +8,7 @@ buildah - a tool which facilitates building OCI container images
Note: this package is in alpha, but is close to being feature-complete.
The buildah package provides a command line tool which can be used to
The Buildah package provides a command line tool which can be used to
* create a working container, either from scratch or using an image as a starting point
* create an image, either from a working container or via the instructions in a Dockerfile
* images can be built in either the OCI image format or the traditional upstream docker image format
@@ -15,74 +17,61 @@ The buildah package provides a command line tool which can be used to
* use the updated contents of a container's root filesystem as a filesystem layer to create a new image
* delete a working container or an image
**Installation notes**
**[Changelog](CHANGELOG.md)**
Prior to installing buildah, install the following packages on your linux distro:
* make
* golang (Requires version 1.8.1 or higher.)
* bats
* btrfs-progs-devel
* device-mapper-devel
* gpgme-devel
* libassuan-devel
* git
* bzip2
* go-md2man
* skopeo-containers
**[Installation notes](install.md)**
In Fedora, you can use this command:
**[Tutorials](docs/tutorials/README.md)**
## Example
From [`./examples/lighttpd.sh`](examples/lighttpd.sh):
```bash
$ cat > lighttpd.sh <<EOF
#!/bin/bash -x
ctr1=`buildah from ${1:-fedora}`
## Get all updates and install our minimal httpd server
buildah run $ctr1 -- dnf update -y
buildah run $ctr1 -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" $ctr1
## Run our server and expose the port
buildah config $ctr1 --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf"
buildah config $ctr1 --port 80
## Commit this container to an image name
buildah commit $ctr1 ${2:-$USER/lighttpd}
EOF
$ chmod +x lighttpd.sh
$ sudo ./lighttpd.sh
```
dnf -y install \
make \
golang \
bats \
btrfs-progs-devel \
device-mapper-devel \
gpgme-devel \
libassuan-devel \
git \
bzip2 \
go-md2man \
skopeo-containers
```
Then to install buildah follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
make
make install
buildah --help
```
buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for buildah to call for those cases.
## Commands
| Command | Description |
| --------------------- | --------------------------------------------------- |
| buildah-add(1) | Add the contents of a file, URL, or a directory to the container. |
| buildah-bud(1) | Build an image using instructions from Dockerfiles. |
| buildah-commit(1) | Create an image from a working container. |
| buildah-config(1) | Update image configuration settings. |
| buildah-containers(1) | List the working containers and their base images. |
| buildah-copy(1) | Copies the contents of a file, URL, or directory into a container's working directory. |
| buildah-from(1) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| buildah-images(1) | List images in local storage. |
| buildah-inspect(1) | Inspects the configuration of a container or image. |
| buildah-mount(1) | Mount the working container's root filesystem. |
| buildah-push(1) | Copies an image from local storage. |
| buildah-rm(1) | Removes one or more working containers. |
| buildah-rmi(1) | Removes one or more images. |
| buildah-run(1) | Run a command inside of the container. |
| buildah-tag(1) | Add an additional name to a local image. |
| buildah-umount(1) | Unmount a working container's root file system. |
| Command | Description |
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| [buildah-add(1)](/docs/buildah-add.md) | Add the contents of a file, URL, or a directory to the container. |
| [buildah-bud(1)](/docs/buildah-bud.md) | Build an image using instructions from Dockerfiles. |
| [buildah-commit(1)](/docs/buildah-commit.md) | Create an image from a working container. |
| [buildah-config(1)](/docs/buildah-config.md) | Update image configuration settings. |
| [buildah-containers(1)](/docs/buildah-containers.md) | List the working containers and their base images. |
| [buildah-copy(1)](/docs/buildah-copy.md) | Copies the contents of a file, URL, or directory into a container's working directory. |
| [buildah-from(1)](/docs/buildah-from.md) | Creates a new working container, either from scratch or using a specified image as a starting point. |
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |
| [buildah-mount(1)](/docs/buildah-mount.md) | Mount the working container's root filesystem. |
| [buildah-push(1)](/docs/buildah-push.md) | Push an image from local storage to elsewhere. |
| [buildah-rm(1)](/docs/buildah-rm.md) | Removes one or more working containers. |
| [buildah-rmi(1)](/docs/buildah-rmi.md) | Removes one or more images. |
| [buildah-run(1)](/docs/buildah-run.md) | Run a command inside of the container. |
| [buildah-tag(1)](/docs/buildah-tag.md) | Add an additional name to a local image. |
| [buildah-umount(1)](/docs/buildah-umount.md) | Unmount a working container's root file system. |
| [buildah-version(1)](/docs/buildah-version.md)       | Display the Buildah version information.                                                             |
**Future goals include:**
* more CI tests

add.go

@@ -8,14 +8,21 @@ import (
"path"
"path/filepath"
"strings"
"syscall"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/projectatomic/libpod/pkg/chrootuser"
"github.com/sirupsen/logrus"
)
//AddAndCopyOptions holds options for add and copy commands.
type AddAndCopyOptions struct {
Chown string
}
// addURL copies the contents of the source URL to the destination. This is
// its own function so that deferred closes happen after we're done pulling
// down each item of potentially many.
@@ -58,8 +65,8 @@ func addURL(destination, srcurl string) error {
// Add copies the contents of the specified sources into the container's root
// filesystem, optionally extracting contents of local files that look like
// non-empty archives.
func (b *Builder) Add(destination string, extract bool, source ...string) error {
mountPoint, err := b.Mount("")
func (b *Builder) Add(destination string, extract bool, options AddAndCopyOptions, source ...string) error {
mountPoint, err := b.Mount(b.MountLabel)
if err != nil {
return err
}
@@ -68,12 +75,17 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
logrus.Errorf("error unmounting container: %v", err2)
}
}()
// Find out which user (and group) the destination should belong to.
user, err := b.user(mountPoint, options.Chown)
if err != nil {
return err
}
dest := mountPoint
if destination != "" && filepath.IsAbs(destination) {
dest = filepath.Join(dest, destination)
} else {
if err = os.MkdirAll(filepath.Join(dest, b.WorkDir()), 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists)", filepath.Join(dest, b.WorkDir()))
if err = ensureDir(filepath.Join(dest, b.WorkDir()), user, 0755); err != nil {
return err
}
dest = filepath.Join(dest, b.WorkDir(), destination)
}
@@ -81,8 +93,8 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
// with a '/', create it so that we can be sure that it's a directory,
// and any files we're copying will be placed in the directory.
if len(destination) > 0 && destination[len(destination)-1] == os.PathSeparator {
if err = os.MkdirAll(dest, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", dest)
if err = ensureDir(dest, user, 0755); err != nil {
return err
}
}
// Make sure the destination's parent directory is usable.
@@ -118,46 +130,123 @@ func (b *Builder) Add(destination string, extract bool, source ...string) error
if err := addURL(d, src); err != nil {
return err
}
if err := setOwner("", d, user); err != nil {
return err
}
continue
}
srcfi, err := os.Stat(src)
glob, err := filepath.Glob(src)
if err != nil {
return errors.Wrapf(err, "error reading %q", src)
return errors.Wrapf(err, "invalid glob %q", src)
}
if srcfi.IsDir() {
// The source is a directory, so copy the contents of
// the source directory into the target directory. Try
// to create it first, so that if there's a problem,
// we'll discover why that won't work.
d := dest
if err := os.MkdirAll(d, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", d)
}
logrus.Debugf("copying %q to %q", src+string(os.PathSeparator)+"*", d+string(os.PathSeparator)+"*")
if err := chrootarchive.CopyWithTar(src, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", src, d)
}
continue
if len(glob) == 0 {
return errors.Wrapf(syscall.ENOENT, "no files found matching %q", src)
}
if !extract || !archive.IsArchivePath(src) {
// This source is a file, and either it's not an
// archive, or we don't care whether or not it's an
// archive.
d := dest
if destfi != nil && destfi.IsDir() {
d = filepath.Join(dest, filepath.Base(src))
for _, gsrc := range glob {
srcfi, err := os.Stat(gsrc)
if err != nil {
return errors.Wrapf(err, "error reading %q", gsrc)
}
// Copy the file, preserving attributes.
logrus.Debugf("copying %q to %q", src, d)
if err := chrootarchive.CopyFileWithTar(src, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", src, d)
if srcfi.IsDir() {
// The source is a directory, so copy the contents of
// the source directory into the target directory. Try
// to create it first, so that if there's a problem,
// we'll discover why that won't work.
if err = ensureDir(dest, user, 0755); err != nil {
return err
}
logrus.Debugf("copying %q to %q", gsrc+string(os.PathSeparator)+"*", dest+string(os.PathSeparator)+"*")
if err := copyWithTar(gsrc, dest); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, dest)
}
if err := setOwner(gsrc, dest, user); err != nil {
return err
}
continue
}
if !extract || !archive.IsArchivePath(gsrc) {
// This source is a file, and either it's not an
// archive, or we don't care whether or not it's an
// archive.
d := dest
if destfi != nil && destfi.IsDir() {
d = filepath.Join(dest, filepath.Base(gsrc))
}
// Copy the file, preserving attributes.
logrus.Debugf("copying %q to %q", gsrc, d)
if err := copyFileWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
if err := setOwner(gsrc, d, user); err != nil {
return err
}
continue
}
// We're extracting an archive into the destination directory.
logrus.Debugf("extracting contents of %q into %q", gsrc, dest)
if err := untarPath(gsrc, dest); err != nil {
return errors.Wrapf(err, "error extracting %q into %q", gsrc, dest)
}
continue
}
// We're extracting an archive into the destination directory.
logrus.Debugf("extracting contents of %q into %q", src, dest)
if err := chrootarchive.UntarPath(src, dest); err != nil {
return errors.Wrapf(err, "error extracting %q into %q", src, dest)
}
}
return nil
}
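The glob handling introduced above can be sketched in isolation. This is a minimal sketch using current-stdlib helpers; `expandSources` is a hypothetical name, not buildah's API. The pattern it illustrates is the one `Add` now uses: each source spec is expanded with `filepath.Glob`, and an empty match set is surfaced as an `ENOENT`-style error rather than silently skipped.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// expandSources expands each source spec with filepath.Glob and treats
// an empty match set as an error, mirroring Add's new behavior.
func expandSources(specs ...string) ([]string, error) {
	var out []string
	for _, spec := range specs {
		matches, err := filepath.Glob(spec)
		if err != nil {
			return nil, fmt.Errorf("invalid glob %q: %w", spec, err)
		}
		if len(matches) == 0 {
			return nil, fmt.Errorf("no files found matching %q: %w", spec, syscall.ENOENT)
		}
		out = append(out, matches...)
	}
	return out, nil
}

func main() {
	dir, err := os.MkdirTemp("", "globdemo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	for _, name := range []string{"a.txt", "b.txt"} {
		if err := os.WriteFile(filepath.Join(dir, name), []byte("x"), 0644); err != nil {
			panic(err)
		}
	}
	matches, err := expandSources(filepath.Join(dir, "*.txt"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(matches)) // both temp files matched
	_, err = expandSources(filepath.Join(dir, "*.log"))
	fmt.Println(err != nil) // an unmatched pattern is an error, not a no-op
}
```

Note that `filepath.Glob` itself returns no error for a pattern that matches nothing, which is why the explicit length check (and the `ENOENT` wrap seen in the diff) is needed.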
// user returns the user (and group) information which the destination should belong to.
func (b *Builder) user(mountPoint string, userspec string) (specs.User, error) {
if userspec == "" {
userspec = b.User()
}
uid, gid, err := chrootuser.GetUser(mountPoint, userspec)
u := specs.User{
UID: uid,
GID: gid,
Username: userspec,
}
return u, err
}
// setOwner sets the uid and gid owners of a given path.
func setOwner(src, dest string, user specs.User) error {
fid, err := os.Stat(dest)
if err != nil {
return errors.Wrapf(err, "error reading %q", dest)
}
if !fid.IsDir() || src == "" {
if err := os.Lchown(dest, int(user.UID), int(user.GID)); err != nil {
return errors.Wrapf(err, "error setting ownership of %q", dest)
}
return nil
}
err = filepath.Walk(src, func(p string, info os.FileInfo, we error) error {
relPath, err2 := filepath.Rel(src, p)
if err2 != nil {
return errors.Wrapf(err2, "error getting relative path of %q to set ownership on destination", p)
}
if relPath != "." {
absPath := filepath.Join(dest, relPath)
if err2 := os.Lchown(absPath, int(user.UID), int(user.GID)); err2 != nil {
return errors.Wrapf(err2, "error setting ownership of %q", absPath)
}
}
return nil
})
if err != nil {
return errors.Wrapf(err, "error walking dir %q to set ownership", src)
}
return nil
}
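The walk in `setOwner` depends on translating each visited source path to its counterpart under the destination via `filepath.Rel` and `filepath.Join`. A small sketch of just that path arithmetic (`mappedDest` is an illustrative helper, not part of buildah):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// mappedDest translates a path p visited while walking src into the
// corresponding path under dest, the way setOwner does before chowning.
func mappedDest(src, dest, p string) (string, error) {
	rel, err := filepath.Rel(src, p)
	if err != nil {
		return "", err
	}
	if rel == "." {
		// The walk root maps to dest itself; setOwner skips it here
		// because the top-level destination is chowned separately.
		return "", nil
	}
	return filepath.Join(dest, rel), nil
}

func main() {
	d, err := mappedDest("/tmp/src", "/container/dest", "/tmp/src/etc/passwd")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // the nested file lands under the destination tree
}
```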
// ensureDir creates a directory if it doesn't exist, setting ownership and permissions as passed by user and perm.
func ensureDir(path string, user specs.User, perm os.FileMode) error {
if _, err := os.Stat(path); os.IsNotExist(err) {
if err := os.MkdirAll(path, perm); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", path)
}
if err := os.Chown(path, int(user.UID), int(user.GID)); err != nil {
return errors.Wrapf(err, "error setting ownership of %q", path)
}
}
return nil


@@ -7,6 +7,7 @@ import (
"os"
"path/filepath"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/ioutils"
"github.com/opencontainers/image-spec/specs-go/v1"
@@ -18,10 +19,19 @@ const (
// Package is the name of this package, used in help output and to
// identify working containers.
Package = "buildah"
// Version for the Package
Version = "0.1"
// Version for the Package. Bump version in contrib/rpm/buildah.spec
// too.
Version = "0.15"
// The value we use to identify what type of information, currently a
// serialized Builder structure, we are using as per-container state.
// This should only be changed when we make incompatible changes to
// that data structure, as it's used to distinguish containers which
// are "ours" from ones that aren't.
containerType = Package + " 0.0.1"
stateFile = Package + ".json"
// The file in the per-container directory which we use to store our
// per-container state. If it isn't there, then the container isn't
// one of our build containers.
stateFile = Package + ".json"
)
const (
@@ -68,6 +78,10 @@ type Builder struct {
// MountPoint is the last location where the container's root
// filesystem was mounted. It should not be modified.
MountPoint string `json:"mountpoint,omitempty"`
// ProcessLabel is the SELinux process label associated with the container
ProcessLabel string `json:"process-label,omitempty"`
// MountLabel is the SELinux mount label associated with the container
MountLabel string `json:"mount-label,omitempty"`
// ImageAnnotations is a set of key-value pairs which is stored in the
// image's manifest.
@@ -78,6 +92,81 @@ type Builder struct {
// Image metadata and runtime settings, in multiple formats.
OCIv1 v1.Image `json:"ociv1,omitempty"`
Docker docker.V2Image `json:"docker,omitempty"`
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string `json:"defaultMountsFilePath,omitempty"`
CommonBuildOpts *CommonBuildOptions
}
// BuilderInfo is used as an object to display container information
type BuilderInfo struct {
Type string
FromImage string
FromImageID string
Config string
Manifest string
Container string
ContainerID string
MountPoint string
ProcessLabel string
MountLabel string
ImageAnnotations map[string]string
ImageCreatedBy string
OCIv1 v1.Image
Docker docker.V2Image
DefaultMountsFilePath string
}
// GetBuildInfo gets a pointer to a Builder object and returns a BuilderInfo object from it.
// This is used in the inspect command to display Manifest and Config as string and not []byte.
func GetBuildInfo(b *Builder) BuilderInfo {
return BuilderInfo{
Type: b.Type,
FromImage: b.FromImage,
FromImageID: b.FromImageID,
Config: string(b.Config),
Manifest: string(b.Manifest),
Container: b.Container,
ContainerID: b.ContainerID,
MountPoint: b.MountPoint,
ProcessLabel: b.ProcessLabel,
ImageAnnotations: b.ImageAnnotations,
ImageCreatedBy: b.ImageCreatedBy,
OCIv1: b.OCIv1,
Docker: b.Docker,
DefaultMountsFilePath: b.DefaultMountsFilePath,
}
}
// CommonBuildOptions are resources that can be defined by flags for both buildah from and bud
type CommonBuildOptions struct {
// AddHost is the list of hostnames to add to the resolv.conf
AddHost []string
//CgroupParent is the path to cgroups under which the cgroup for the container will be created.
CgroupParent string
//CPUPeriod limits the CPU CFS (Completely Fair Scheduler) period
CPUPeriod uint64
//CPUQuota limits the CPU CFS (Completely Fair Scheduler) quota
CPUQuota int64
//CPUShares (relative weight)
CPUShares uint64
//CPUSetCPUs in which to allow execution (0-3, 0,1)
CPUSetCPUs string
//CPUSetMems memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
CPUSetMems string
//Memory limit
Memory int64
//MemorySwap limit value equal to memory plus swap.
MemorySwap int64
//SecurityOpts modify the way container security is enforced
LabelOpts []string
SeccompProfilePath string
ApparmorProfile string
//ShmSize is the shared memory size
ShmSize string
//Ulimit options
Ulimit []string
//Volumes to bind mount into the container
Volumes []string
}
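Entries in `Volumes` arrive from the CLI as `host-path:container-path[:options]` strings (the same shape `DefaultMountsFilePath` documents). A hedged sketch of splitting one entry — illustrative only, not the parser buildah ships:

```go
package main

import (
	"fmt"
	"strings"
)

// splitVolume breaks a volume spec into its host path, container path,
// and optional mount options; fewer than two fields is rejected.
func splitVolume(spec string) (host, ctr, opts string, err error) {
	parts := strings.SplitN(spec, ":", 3)
	if len(parts) < 2 {
		return "", "", "", fmt.Errorf("invalid volume %q, expected host-path:container-path[:options]", spec)
	}
	host, ctr = parts[0], parts[1]
	if len(parts) == 3 {
		opts = parts[2]
	}
	return host, ctr, opts, nil
}

func main() {
	h, c, o, err := splitVolume("/mnt/data:/data:ro")
	if err != nil {
		panic(err)
	}
	fmt.Println(h, c, o) // host path, container path, options
}
```

`SplitN` with a limit of 3 keeps any colons inside the options field intact, which matters for multi-option suffixes.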
// BuilderOptions are used to initialize a new Builder.
@@ -95,8 +184,13 @@ type BuilderOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// Mount signals to NewBuilder() that the container should be mounted
// immediately.
Mount bool
@@ -109,6 +203,12 @@ type BuilderOptions struct {
// ReportWriter is an io.Writer which will be used to log the reading
// of the source image from a registry, if we end up pulling the image.
ReportWriter io.Writer
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string
CommonBuildOpts *CommonBuildOptions
}
// ImportOptions are used to initialize a Builder from an existing container
@@ -134,6 +234,10 @@ type ImportFromImageOptions struct {
// specified, indicating that the shared, system-wide default policy
// should be used.
SignaturePolicyPath string
// github.com/containers/image/types SystemContext to hold information
// about which registries we should check for completing image names
// that don't include a domain portion.
SystemContext *types.SystemContext
}
// NewBuilder creates a new build container.


@@ -2,27 +2,38 @@ package main
import (
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
var (
addAndCopyFlags = []cli.Flag{
cli.StringFlag{
Name: "chown",
Usage: "Set the user and group ownership of the destination content",
},
}
addDescription = "Adds the contents of a file, URL, or directory to a container's working\n directory. If a local file appears to be an archive, its contents are\n extracted and added instead of the archive file itself."
copyDescription = "Copies the contents of a file, URL, or directory into a container's working\n directory"
addCommand = cli.Command{
Name: "add",
Usage: "Add content to the container",
Description: addDescription,
Action: addCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
Name: "add",
Usage: "Add content to the container",
Description: addDescription,
Flags: addAndCopyFlags,
Action: addCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
SkipArgReorder: true,
}
copyCommand = cli.Command{
Name: "copy",
Usage: "Copy content into the container",
Description: copyDescription,
Action: copyCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
Name: "copy",
Usage: "Copy content into the container",
Description: copyDescription,
Flags: addAndCopyFlags,
Action: copyCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
SkipArgReorder: true,
}
)
@@ -34,7 +45,11 @@ func addAndCopyCmd(c *cli.Context, extractLocalArchives bool) error {
name := args[0]
args = args.Tail()
// If list is greater then one, the last item is the destination
if err := validateFlags(c, addAndCopyFlags); err != nil {
return err
}
// If list is greater than one, the last item is the destination
dest := ""
size := len(args)
if size > 1 {
@@ -52,8 +67,11 @@ func addAndCopyCmd(c *cli.Context, extractLocalArchives bool) error {
return errors.Wrapf(err, "error reading build container %q", name)
}
err = builder.Add(dest, extractLocalArchives, args...)
if err != nil {
options := buildah.AddAndCopyOptions{
Chown: c.String("chown"),
}
if err := builder.Add(dest, extractLocalArchives, options, args...); err != nil {
return errors.Wrapf(err, "error adding content to container %q", builder.Container)
}


@@ -5,22 +5,39 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/imagebuildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
budFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "refrain from announcing build instructions and image read/write progress",
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringSliceFlag{
Name: "build-arg",
Usage: "`argument=value` to supply to the builder",
},
cli.StringFlag{
Name: "registry",
Usage: "prefix to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.StringSliceFlag{
Name: "file, f",
Usage: "`pathname or URL` of a Dockerfile",
},
cli.StringFlag{
Name: "format",
Usage: "`format` of the built image's manifest and metadata",
},
cli.BoolTFlag{
Name: "pull",
@@ -30,13 +47,9 @@ var (
Name: "pull-always",
Usage: "pull the image, even if a version is present",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringSliceFlag{
Name: "build-arg",
Usage: "`argument=value` to supply to the builder",
cli.BoolFlag{
Name: "quiet, q",
Usage: "refrain from announcing build instructions and image read/write progress",
},
cli.StringFlag{
Name: "runtime",
@@ -48,27 +61,29 @@ var (
Usage: "add global flags for the container runtime",
},
cli.StringFlag{
Name: "format",
Usage: "`format` of the built image's manifest and metadata",
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringSliceFlag{
Name: "tag, t",
Usage: "`tag` to apply to the built image",
},
cli.StringSliceFlag{
Name: "file, f",
Usage: "`pathname or URL` of a Dockerfile",
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when accessing the registry",
},
}
budDescription = "Builds an OCI image using instructions in one or more Dockerfiles."
budCommand = cli.Command{
Name: "build-using-dockerfile",
Aliases: []string{"bud"},
Usage: "Build an image using instructions in a Dockerfile",
Description: budDescription,
Flags: budFlags,
Action: budCmd,
ArgsUsage: "CONTEXT-DIRECTORY | URL",
Name: "build-using-dockerfile",
Aliases: []string{"bud"},
Usage: "Build an image using instructions in a Dockerfile",
Description: budDescription,
Flags: append(budFlags, fromAndBudFlags...),
Action: budCmd,
ArgsUsage: "CONTEXT-DIRECTORY | URL",
SkipArgReorder: true,
}
)
@@ -82,39 +97,14 @@ func budCmd(c *cli.Context) error {
tags = tags[1:]
}
}
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
}
pullAlways := false
if c.IsSet("pull-always") {
pull = c.Bool("pull-always")
}
runtimeFlags := []string{}
if c.IsSet("runtime-flag") {
runtimeFlags = c.StringSlice("runtime-flag")
}
runtime := ""
if c.IsSet("runtime") {
runtime = c.String("runtime")
}
pullPolicy := imagebuildah.PullNever
if pull {
if c.BoolT("pull") {
pullPolicy = imagebuildah.PullIfMissing
}
if pullAlways {
if c.Bool("pull-always") {
pullPolicy = imagebuildah.PullAlways
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
args := make(map[string]string)
if c.IsSet("build-arg") {
for _, arg := range c.StringSlice("build-arg") {
@@ -126,14 +116,8 @@ func budCmd(c *cli.Context) error {
}
}
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
dockerfiles := []string{}
if c.IsSet("file") || c.IsSet("f") {
dockerfiles = c.StringSlice("file")
}
dockerfiles := c.StringSlice("file")
format := "oci"
if c.IsSet("format") {
format = strings.ToLower(c.String("format"))
@@ -199,27 +183,48 @@ func budCmd(c *cli.Context) error {
if len(dockerfiles) == 0 {
dockerfiles = append(dockerfiles, filepath.Join(contextDir, "Dockerfile"))
}
if err := validateFlags(c, budFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Registry: registry,
Compression: imagebuildah.Gzip,
Quiet: quiet,
SignaturePolicyPath: signaturePolicy,
Args: args,
Output: output,
AdditionalTags: tags,
Runtime: runtime,
RuntimeArgs: runtimeFlags,
OutputFormat: format,
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
if !quiet {
runtimeFlags := []string{}
for _, arg := range c.StringSlice("runtime-flag") {
runtimeFlags = append(runtimeFlags, "--"+arg)
}
commonOpts, err := parseCommonBuildOptions(c)
if err != nil {
return err
}
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Compression: imagebuildah.Gzip,
Quiet: c.Bool("quiet"),
SignaturePolicyPath: c.String("signature-policy"),
Args: args,
Output: output,
AdditionalTags: tags,
Runtime: c.String("runtime"),
RuntimeArgs: runtimeFlags,
OutputFormat: format,
SystemContext: systemContext,
CommonBuildOpts: commonOpts,
DefaultMountsFilePath: c.GlobalString("default-mounts-file"),
}
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}


@@ -10,22 +10,38 @@ import (
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/projectatomic/buildah/util"
"github.com/urfave/cli"
)
var (
commitFlags = []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.StringFlag{
Name: "format, f",
Usage: "`format` of the image manifest and metadata",
Value: "oci",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when writing images",
},
cli.StringFlag{
Name: "reference-time",
@@ -33,18 +49,27 @@ var (
Hidden: true,
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when writing images",
Name: "rm",
Usage: "remove the container and its content after committing it to an image. Default leaves the container and its content in place.",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "Require HTTPS and verify certificates when accessing the registry",
},
}
commitDescription = "Writes a new image using the container's read-write layer and, if it is based\n on an image, the layers of that image"
commitCommand = cli.Command{
Name: "commit",
Usage: "Create an image from a working container",
Description: commitDescription,
Flags: commitFlags,
Action: commitCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID IMAGE",
Name: "commit",
Usage: "Create an image from a working container",
Description: commitDescription,
Flags: commitFlags,
Action: commitCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID IMAGE",
SkipArgReorder: true,
}
)
@@ -62,22 +87,13 @@ func commitCmd(c *cli.Context) error {
return errors.Errorf("too many arguments specified")
}
image := args[0]
if err := validateFlags(c, commitFlags); err != nil {
return err
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
compress := archive.Uncompressed
if !c.IsSet("disable-compression") || !c.Bool("disable-compression") {
compress = archive.Gzip
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
format := "oci"
if c.IsSet("format") {
format = c.String("format")
compress := archive.Gzip
if c.Bool("disable-compression") {
compress = archive.Uncompressed
}
timestamp := time.Now().UTC()
if c.IsSet("reference-time") {
@@ -88,6 +104,8 @@ func commitCmd(c *cli.Context) error {
}
timestamp = finfo.ModTime().UTC()
}
format := c.String("format")
if strings.HasPrefix(strings.ToLower(format), "oci") {
format = buildah.OCIv1ImageManifest
} else if strings.HasPrefix(strings.ToLower(format), "docker") {
@@ -114,19 +132,31 @@ func commitCmd(c *cli.Context) error {
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
options := buildah.CommitOptions{
PreferredManifestType: format,
Compression: compress,
SignaturePolicyPath: signaturePolicy,
SignaturePolicyPath: c.String("signature-policy"),
HistoryTimestamp: &timestamp,
SystemContext: systemContext,
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
err = builder.Commit(dest, options)
if err != nil {
return errors.Wrapf(err, "error committing container %q to %q", builder.Container, image)
return util.GetFailureCause(
err,
errors.Wrapf(err, "error committing container %q to %q", builder.Container, image),
)
}
if c.Bool("rm") {
return builder.Delete()
}
return nil
}


@@ -1,13 +1,30 @@
package main
import (
"fmt"
"net"
"os"
"reflect"
"regexp"
"strings"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
units "github.com/docker/go-units"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
const (
// SeccompDefaultPath defines the default seccomp path
SeccompDefaultPath = "/usr/share/containers/seccomp.json"
// SeccompOverridePath, if it exists, overrides the default seccomp path
SeccompOverridePath = "/etc/crio/seccomp.json"
)
var needToShutdownStore = false
@@ -58,9 +75,10 @@ func openBuilders(store storage.Store) (builders []*buildah.Builder, err error)
return buildah.OpenAllBuilders(store)
}
func openImage(store storage.Store, name string) (builder *buildah.Builder, err error) {
func openImage(sc *types.SystemContext, store storage.Store, name string) (builder *buildah.Builder, err error) {
options := buildah.ImportFromImageOptions{
Image: name,
Image: name,
SystemContext: sc,
}
builder, err = buildah.ImportBuilderFromImage(store, options)
if err != nil {
@@ -71,3 +89,380 @@ func openImage(store storage.Store, name string) (builder *buildah.Builder, err
}
return builder, nil
}
func getDateAndDigestAndSize(image storage.Image, store storage.Store) (time.Time, string, int64, error) {
created := time.Time{}
is.Transport.SetStore(store)
storeRef, err := is.Transport.ParseStoreReference(store, image.ID)
if err != nil {
return created, "", -1, err
}
img, err := storeRef.NewImage(nil)
if err != nil {
return created, "", -1, err
}
defer img.Close()
imgSize, sizeErr := img.Size()
if sizeErr != nil {
imgSize = -1
}
manifest, _, manifestErr := img.Manifest()
manifestDigest := ""
if manifestErr == nil && len(manifest) > 0 {
manifestDigest = digest.Canonical.FromBytes(manifest).String()
}
inspectInfo, inspectErr := img.Inspect()
if inspectErr == nil && inspectInfo != nil {
created = inspectInfo.Created
}
if sizeErr != nil {
err = sizeErr
} else if manifestErr != nil {
err = manifestErr
} else if inspectErr != nil {
err = inspectErr
}
return created, manifestDigest, imgSize, err
}
// systemContextFromOptions returns a SystemContext populated with values
// from the options provided by the caller, for use in authentication.
func systemContextFromOptions(c *cli.Context) (*types.SystemContext, error) {
ctx := &types.SystemContext{
DockerCertPath: c.String("cert-dir"),
}
if c.IsSet("tls-verify") {
ctx.DockerInsecureSkipTLSVerify = !c.BoolT("tls-verify")
}
if c.IsSet("creds") {
var err error
ctx.DockerAuthConfig, err = getDockerAuth(c.String("creds"))
if err != nil {
return nil, err
}
}
if c.IsSet("signature-policy") {
ctx.SignaturePolicyPath = c.String("signature-policy")
}
if c.IsSet("authfile") {
ctx.AuthFilePath = c.String("authfile")
}
if c.GlobalIsSet("registries-conf") {
ctx.SystemRegistriesConfPath = c.GlobalString("registries-conf")
}
if c.GlobalIsSet("registries-conf-dir") {
ctx.RegistriesDirPath = c.GlobalString("registries-conf-dir")
}
return ctx, nil
}
func parseCreds(creds string) (string, string) {
if creds == "" {
return "", ""
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], ""
}
if up[0] == "" {
return "", up[1]
}
return up[0], up[1]
}
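For reference, the splitting behavior of parseCreds (missing password, missing username, empty input) can be exercised with a standalone sketch; the sample credentials are invented:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCreds splits a "username:password" credential string, tolerating a
// missing password ("user") and a missing username (":pass").
func parseCreds(creds string) (string, string) {
	if creds == "" {
		return "", ""
	}
	up := strings.SplitN(creds, ":", 2)
	if len(up) == 1 {
		return up[0], ""
	}
	if up[0] == "" {
		return "", up[1]
	}
	return up[0], up[1]
}

func main() {
	u, p := parseCreds("alice:s3cret")
	fmt.Println(u, p) // alice s3cret
	u, p = parseCreds("alice")
	fmt.Println(u, p == "") // alice true
}
```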
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password := parseCreds(creds)
if username == "" {
fmt.Print("Username: ")
fmt.Scanln(&username)
}
if password == "" {
fmt.Print("Password: ")
termPassword, err := terminal.ReadPassword(0)
if err != nil {
return nil, errors.Wrapf(err, "could not read password from terminal")
}
password = string(termPassword)
}
return &types.DockerAuthConfig{
Username: username,
Password: password,
}, nil
}
// validateFlags searches for string or string-slice flags that were passed
// without a value. This commonly happens when the CLI mistakenly consumes
// the next option on the command line as the flag's value.
func validateFlags(c *cli.Context, flags []cli.Flag) error {
re, err := regexp.Compile("^-.+")
if err != nil {
return errors.Wrap(err, "compiling regex failed")
}
for _, flag := range flags {
switch reflect.TypeOf(flag).String() {
case "cli.StringSliceFlag":
{
f := flag.(cli.StringSliceFlag)
name := strings.Split(f.Name, ",")
val := c.StringSlice(name[0])
for _, v := range val {
if ok := re.MatchString(v); ok {
return errors.Errorf("option --%s requires a value", name[0])
}
}
}
case "cli.StringFlag":
{
f := flag.(cli.StringFlag)
name := strings.Split(f.Name, ",")
val := c.String(name[0])
if ok := re.MatchString(val); ok {
return errors.Errorf("option --%s requires a value", name[0])
}
}
}
}
return nil
}
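The heart of validateFlags is the `^-.+` regexp: a flag "value" that itself starts with a dash almost certainly means the user forgot the real value and the parser swallowed the next option. A minimal sketch of that check (the helper name looksLikeFlag is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// looksLikeFlag reports whether a would-be flag value looks like the next
// CLI option instead, e.g. "--creds --tls-verify" where --creds consumed
// "--tls-verify" as its value. Mirrors the regexp used by validateFlags.
func looksLikeFlag(val string) bool {
	return regexp.MustCompile("^-.+").MatchString(val)
}

func main() {
	fmt.Println(looksLikeFlag("--tls-verify")) // true: a swallowed option
	fmt.Println(looksLikeFlag("alice:pw"))     // false: a real value
	fmt.Println(looksLikeFlag("-"))            // false: regexp needs a char after "-"
}
```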
var fromAndBudFlags = []cli.Flag{
cli.StringSliceFlag{
Name: "add-host",
Usage: "add a custom host-to-IP mapping (host:ip) (default [])",
},
cli.StringFlag{
Name: "cgroup-parent",
Usage: "optional parent cgroup for the container",
},
cli.Uint64Flag{
Name: "cpu-period",
Usage: "limit the CPU CFS (Completely Fair Scheduler) period",
},
cli.Int64Flag{
Name: "cpu-quota",
Usage: "limit the CPU CFS (Completely Fair Scheduler) quota",
},
cli.Uint64Flag{
Name: "cpu-shares",
Usage: "CPU shares (relative weight)",
},
cli.StringFlag{
Name: "cpuset-cpus",
Usage: "CPUs in which to allow execution (0-3, 0,1)",
},
cli.StringFlag{
Name: "cpuset-mems",
Usage: "memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.",
},
cli.StringFlag{
Name: "memory, m",
Usage: "memory limit (format: <number>[<unit>], where unit = b, k, m or g)",
},
cli.StringFlag{
Name: "memory-swap",
Usage: "swap limit equal to memory plus swap: '-1' to enable unlimited swap",
},
cli.StringSliceFlag{
Name: "security-opt",
Usage: "security options (default [])",
},
cli.StringFlag{
Name: "shm-size",
Usage: "size of `/dev/shm`. The format is `<number><unit>`.",
Value: "65536k",
},
cli.StringSliceFlag{
Name: "ulimit",
Usage: "ulimit options (default [])",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "bind mount a volume into the container (default [])",
},
}
func parseCommonBuildOptions(c *cli.Context) (*buildah.CommonBuildOptions, error) {
var (
memoryLimit int64
memorySwap int64
err error
)
if c.String("memory") != "" {
memoryLimit, err = units.RAMInBytes(c.String("memory"))
if err != nil {
return nil, errors.Wrapf(err, "invalid value for memory")
}
}
if c.String("memory-swap") != "" {
memorySwap, err = units.RAMInBytes(c.String("memory-swap"))
if err != nil {
return nil, errors.Wrapf(err, "invalid value for memory-swap")
}
}
if len(c.StringSlice("add-host")) > 0 {
for _, host := range c.StringSlice("add-host") {
if err := validateExtraHost(host); err != nil {
return nil, errors.Wrapf(err, "invalid value for add-host")
}
}
}
if _, err := units.FromHumanSize(c.String("shm-size")); err != nil {
return nil, errors.Wrapf(err, "invalid --shm-size")
}
if err := parseVolumes(c.StringSlice("volume")); err != nil {
return nil, err
}
commonOpts := &buildah.CommonBuildOptions{
AddHost: c.StringSlice("add-host"),
CgroupParent: c.String("cgroup-parent"),
CPUPeriod: c.Uint64("cpu-period"),
CPUQuota: c.Int64("cpu-quota"),
CPUSetCPUs: c.String("cpuset-cpus"),
CPUSetMems: c.String("cpuset-mems"),
CPUShares: c.Uint64("cpu-shares"),
Memory: memoryLimit,
MemorySwap: memorySwap,
ShmSize: c.String("shm-size"),
Ulimit: c.StringSlice("ulimit"),
Volumes: c.StringSlice("volume"),
}
if err := parseSecurityOpts(c.StringSlice("security-opt"), commonOpts); err != nil {
return nil, err
}
return commonOpts, nil
}
func parseSecurityOpts(securityOpts []string, commonOpts *buildah.CommonBuildOptions) error {
for _, opt := range securityOpts {
if opt == "no-new-privileges" {
return errors.Errorf("no-new-privileges is not supported")
}
con := strings.SplitN(opt, "=", 2)
if len(con) != 2 {
return errors.Errorf("Invalid --security-opt 1: %q", opt)
}
switch con[0] {
case "label":
commonOpts.LabelOpts = append(commonOpts.LabelOpts, con[1])
case "apparmor":
commonOpts.ApparmorProfile = con[1]
case "seccomp":
commonOpts.SeccompProfilePath = con[1]
default:
return errors.Errorf("Invalid --security-opt 2: %q", opt)
}
}
if commonOpts.SeccompProfilePath == "" {
if _, err := os.Stat(SeccompOverridePath); err == nil {
commonOpts.SeccompProfilePath = SeccompOverridePath
} else {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "can't check if %q exists", SeccompOverridePath)
}
if _, err := os.Stat(SeccompDefaultPath); err != nil {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "can't check if %q exists", SeccompDefaultPath)
}
} else {
commonOpts.SeccompProfilePath = SeccompDefaultPath
}
}
}
return nil
}
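The key=value dispatch in parseSecurityOpts can be isolated from the seccomp default-path probing (which needs the filesystem). A standalone sketch, with a hypothetical classifySecurityOpt helper standing in for the switch above:

```go
package main

import (
	"fmt"
	"strings"
)

// classifySecurityOpt mirrors the key=value dispatch in parseSecurityOpts:
// exactly one "=", and a key of label, apparmor, or seccomp.
func classifySecurityOpt(opt string) (kind, value string, err error) {
	con := strings.SplitN(opt, "=", 2)
	if len(con) != 2 {
		return "", "", fmt.Errorf("invalid --security-opt: %q", opt)
	}
	switch con[0] {
	case "label", "apparmor", "seccomp":
		return con[0], con[1], nil
	}
	return "", "", fmt.Errorf("invalid --security-opt: %q", opt)
}

func main() {
	k, v, _ := classifySecurityOpt("seccomp=/tmp/profile.json")
	fmt.Println(k, v) // seccomp /tmp/profile.json
}
```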
func parseVolumes(volumes []string) error {
if len(volumes) == 0 {
return nil
}
for _, volume := range volumes {
arr := strings.SplitN(volume, ":", 3)
if len(arr) < 2 {
return errors.Errorf("incorrect volume format %q, should be host-dir:ctr-dir[:option]", volume)
}
if err := validateVolumeHostDir(arr[0]); err != nil {
return err
}
if err := validateVolumeCtrDir(arr[1]); err != nil {
return err
}
if len(arr) > 2 {
if err := validateVolumeOpts(arr[2]); err != nil {
return err
}
}
}
return nil
}
func validateVolumeHostDir(hostDir string) error {
if _, err := os.Stat(hostDir); err != nil {
return errors.Wrapf(err, "error checking path %q", hostDir)
}
return nil
}
func validateVolumeCtrDir(ctrDir string) error {
if ctrDir[0] != '/' {
return errors.Errorf("invalid container directory path %q", ctrDir)
}
return nil
}
func validateVolumeOpts(option string) error {
var foundRootPropagation, foundRWRO, foundLabelChange int
options := strings.Split(option, ",")
for _, opt := range options {
switch opt {
case "rw", "ro":
if foundRWRO > 0 {
return errors.Errorf("invalid options %q, can only specify 1 'rw' or 'ro' option", option)
}
foundRWRO++
case "z", "Z":
if foundLabelChange > 0 {
return errors.Errorf("invalid options %q, can only specify 1 'z' or 'Z' option", option)
}
foundLabelChange++
case "private", "rprivate", "shared", "rshared", "slave", "rslave":
if foundRootPropagation > 0 {
return errors.Errorf("invalid options %q, can only specify 1 '[r]shared', '[r]private' or '[r]slave' option", option)
}
foundRootPropagation++
default:
return errors.Errorf("invalid option type %q", option)
}
}
return nil
}
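The intent of validateVolumeOpts is that each category (rw/ro, z/Z, propagation mode) appears at most once per volume. A standalone sketch that enforces this, erroring as soon as a category repeats (the checkVolumeOpts name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// checkVolumeOpts validates a comma-separated volume option string:
// at most one of rw/ro, one of z/Z, and one propagation mode.
func checkVolumeOpts(option string) error {
	var rootProp, rwro, label int
	for _, opt := range strings.Split(option, ",") {
		switch opt {
		case "rw", "ro":
			if rwro > 0 {
				return fmt.Errorf("invalid options %q, can only specify 1 'rw' or 'ro' option", option)
			}
			rwro++
		case "z", "Z":
			if label > 0 {
				return fmt.Errorf("invalid options %q, can only specify 1 'z' or 'Z' option", option)
			}
			label++
		case "private", "rprivate", "shared", "rshared", "slave", "rslave":
			if rootProp > 0 {
				return fmt.Errorf("invalid options %q, can only specify 1 '[r]shared', '[r]private' or '[r]slave' option", option)
			}
			rootProp++
		default:
			return fmt.Errorf("invalid option type %q", option)
		}
	}
	return nil
}

func main() {
	fmt.Println(checkVolumeOpts("ro,Z,rshared")) // <nil>
	fmt.Println(checkVolumeOpts("rw,ro"))        // error: rw and ro conflict
}
```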
// validateExtraHost validates that the specified string is a valid extra-host
// entry for the add-host flag, in the form name:ip where the ip must be a
// valid IPv4 or IPv6 address.
func validateExtraHost(val string) error {
// allow for IPv6 addresses in extra hosts by only splitting on first ":"
arr := strings.SplitN(val, ":", 2)
if len(arr) != 2 || len(arr[0]) == 0 {
return fmt.Errorf("bad format for add-host: %q", val)
}
if _, err := validateIPAddress(arr[1]); err != nil {
return fmt.Errorf("invalid IP address in add-host: %q", arr[1])
}
return nil
}
// validateIPAddress validates an IP address;
// also used for the dns, ip, and ip6 flags.
func validateIPAddress(val string) (string, error) {
var ip = net.ParseIP(strings.TrimSpace(val))
if ip != nil {
return ip.String(), nil
}
return "", fmt.Errorf("%s is not an ip address", val)
}
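Splitting on only the first ":" is what lets add-host entries carry IPv6 addresses, since the address part may itself contain colons. A standalone sketch of the validation (checkExtraHost is a hypothetical stand-in combining the two functions above):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// checkExtraHost mirrors validateExtraHost: split on the first ":" only,
// so IPv6 addresses such as "::1" keep their colons in the address part,
// then verify the address with net.ParseIP.
func checkExtraHost(val string) error {
	arr := strings.SplitN(val, ":", 2)
	if len(arr) != 2 || len(arr[0]) == 0 {
		return fmt.Errorf("bad format for add-host: %q", val)
	}
	if net.ParseIP(strings.TrimSpace(arr[1])) == nil {
		return fmt.Errorf("invalid IP address in add-host: %q", arr[1])
	}
	return nil
}

func main() {
	fmt.Println(checkExtraHost("db:10.0.0.5")) // <nil>
	fmt.Println(checkExtraHost("db:::1"))      // <nil> (IPv6 "::1")
	fmt.Println(checkExtraHost("db:not-an-ip") != nil) // true
}
```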

cmd/buildah/common_test.go Normal file

@@ -0,0 +1,127 @@
package main
import (
"flag"
"os"
"os/user"
"testing"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
signaturePolicyPath = ""
storeOptions = storage.DefaultStoreOptions
)
func TestMain(m *testing.M) {
flag.StringVar(&signaturePolicyPath, "signature-policy", "", "pathname of signature policy file (not usually used)")
options := storage.StoreOptions{}
debug := false
flag.StringVar(&options.GraphRoot, "root", "", "storage root dir")
flag.StringVar(&options.RunRoot, "runroot", "", "storage state dir")
flag.StringVar(&options.GraphDriverName, "storage-driver", "", "storage driver")
flag.BoolVar(&debug, "debug", false, "turn on debug logging")
flag.Parse()
if options.GraphRoot != "" || options.RunRoot != "" || options.GraphDriverName != "" {
storeOptions = options
}
if buildah.InitReexec() {
return
}
logrus.SetLevel(logrus.ErrorLevel)
if debug {
logrus.SetLevel(logrus.DebugLevel)
}
os.Exit(m.Run())
}
func TestGetStore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
set := flag.NewFlagSet("test", 0)
globalSet := flag.NewFlagSet("test", 0)
globalSet.String("root", "", "path to the directory in which data, including images, is stored")
globalSet.String("runroot", "", "path to the directory in which state is stored")
globalSet.String("storage-driver", "", "storage driver")
globalCtx := cli.NewContext(nil, globalSet, nil)
globalCtx.GlobalSet("root", storeOptions.GraphRoot)
globalCtx.GlobalSet("runroot", storeOptions.RunRoot)
globalCtx.GlobalSet("storage-driver", storeOptions.GraphDriverName)
command := cli.Command{Name: "TestGetStore"}
c := cli.NewContext(nil, set, globalCtx)
c.Command = command
_, err := getStore(c)
if err != nil {
t.Error(err)
}
}
func TestGetSize(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
_, _, _, err = getDateAndDigestAndSize(images[0], store)
if err != nil {
t.Error(err)
}
}
func failTestIfNotRoot(t *testing.T) {
u, err := user.Current()
if err != nil {
t.Log("Could not determine user. Running without root may cause tests to fail")
} else if u.Uid != "0" {
t.Fatal("tests will fail unless run as root")
}
}
func pullTestImage(t *testing.T, imageName string) (string, error) {
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
}
commonOpts := &buildah.CommonBuildOptions{
LabelOpts: nil,
}
options := buildah.BuilderOptions{
FromImage: imageName,
SignaturePolicyPath: signaturePolicyPath,
CommonBuildOpts: commonOpts,
}
b, err := buildah.NewBuilder(store, options)
if err != nil {
t.Fatal(err)
}
id := b.FromImageID
err = b.Delete()
if err != nil {
t.Fatal(err)
}
return id, nil
}


@@ -3,10 +3,10 @@ package main
import (
"strings"
"github.com/Sirupsen/logrus"
"github.com/mattn/go-shellwords"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
@@ -18,68 +18,69 @@ const (
var (
configFlags = []cli.Flag{
cli.StringFlag{
Name: "author",
Usage: "image author contact `information`",
},
cli.StringFlag{
Name: "created-by",
Usage: "`description` of how the image was created",
Value: DefaultCreatedBy,
cli.StringSliceFlag{
Name: "annotation, a",
Usage: "add `annotation` e.g. annotation=value, for the target image",
},
cli.StringFlag{
Name: "arch",
Usage: "`architecture` of the target image",
Usage: "set `architecture` of the target image",
},
cli.StringFlag{
Name: "os",
Usage: "`operating system` of the target image",
},
cli.StringFlag{
Name: "user, u",
Usage: "`user` to run containers based on image as",
},
cli.StringSliceFlag{
Name: "port, p",
Usage: "`port` to expose when running containers based on image",
},
cli.StringSliceFlag{
Name: "env, e",
Usage: "`environment variable` to set when running containers based on image",
},
cli.StringFlag{
Name: "entrypoint",
Usage: "`entry point` for containers based on image",
Name: "author",
Usage: "set image author contact `information`",
},
cli.StringFlag{
Name: "cmd",
Usage: "`command` for containers based on image",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "`volume` to create for containers based on image",
Usage: "sets the default `command` to run for containers based on the image",
},
cli.StringFlag{
Name: "workingdir",
Usage: "working `directory` for containers based on image",
Name: "created-by",
Usage: "add `description` of how the image was created",
Value: DefaultCreatedBy,
},
cli.StringFlag{
Name: "entrypoint",
Usage: "set `entry point` for containers based on image",
},
cli.StringSliceFlag{
Name: "env, e",
Usage: "add `environment variable` to be set when running containers based on image",
},
cli.StringSliceFlag{
Name: "label, l",
Usage: "image configuration `label` e.g. label=value",
Usage: "add image configuration `label` e.g. label=value",
},
cli.StringFlag{
Name: "os",
Usage: "set `operating system` of the target image",
},
cli.StringSliceFlag{
Name: "annotation, a",
Usage: "`annotation` e.g. annotation=value, for the target image",
Name: "port, p",
Usage: "add `port` to expose when running containers based on image",
},
cli.StringFlag{
Name: "user, u",
Usage: "set default `user` to run inside containers based on image",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "add default `volume` path to be created for containers based on image",
},
cli.StringFlag{
Name: "workingdir",
Usage: "set working `directory` for containers based on image",
},
}
configDescription = "Modifies the configuration values which will be saved to the image"
configCommand = cli.Command{
Name: "config",
Usage: "Update image configuration settings",
Description: configDescription,
Flags: configFlags,
Action: configCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Name: "config",
Usage: "Update image configuration settings",
Description: configDescription,
Flags: configFlags,
Action: configCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
SkipArgReorder: true,
}
)
@@ -171,6 +172,9 @@ func configCmd(c *cli.Context) error {
return errors.Errorf("too many arguments specified")
}
name := args[0]
if err := validateFlags(c, configFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {


@@ -1,18 +1,66 @@
package main
import (
"encoding/json"
"fmt"
"os"
"strings"
"text/template"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
type jsonContainer struct {
ID string `json:"id"`
Builder bool `json:"builder"`
ImageID string `json:"imageid"`
ImageName string `json:"imagename"`
ContainerName string `json:"containername"`
}
type containerOutputParams struct {
ContainerID string
Builder string
ImageID string
ImageName string
ContainerName string
}
type containerOptions struct {
all bool
format string
json bool
noHeading bool
noTruncate bool
quiet bool
}
type containerFilterParams struct {
id string
name string
ancestor string
}
var (
containersFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only container IDs",
Name: "all, a",
Usage: "also list non-buildah containers",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print containers using a Go template",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "noheading, n",
@@ -22,62 +70,249 @@ var (
Name: "notruncate",
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only container IDs",
},
}
containersDescription = "Lists containers which appear to be " + buildah.Package + " working containers, their\n names and IDs, and the names and IDs of the images from which they were\n initialized"
containersCommand = cli.Command{
Name: "containers",
Usage: "List working containers and their base images",
Description: containersDescription,
Flags: containersFlags,
Action: containersCmd,
ArgsUsage: " ",
Name: "containers",
Usage: "List working containers and their base images",
Description: containersDescription,
Flags: containersFlags,
Action: containersCmd,
ArgsUsage: " ",
SkipArgReorder: true,
}
)
func containersCmd(c *cli.Context) error {
if err := validateFlags(c, containersFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
if c.IsSet("quiet") && c.IsSet("format") {
return errors.Errorf("quiet and format are mutually exclusive")
}
noheading := false
if c.IsSet("noheading") {
noheading = c.Bool("noheading")
opts := containerOptions{
all: c.Bool("all"),
format: c.String("format"),
json: c.Bool("json"),
noHeading: c.Bool("noheading"),
noTruncate: c.Bool("notruncate"),
quiet: c.Bool("quiet"),
}
truncate := true
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
var params *containerFilterParams
if c.IsSet("filter") {
params, err = parseCtrFilter(c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
}
if !opts.noHeading && !opts.quiet && opts.format == "" && !opts.json {
containerOutputHeader(!opts.noTruncate)
}
return outputContainers(store, opts, params)
}
func outputContainers(store storage.Store, opts containerOptions, params *containerFilterParams) error {
seenImages := make(map[string]string)
imageNameForID := func(id string) string {
if id == "" {
return buildah.BaseImageFakeName
}
imageName, ok := seenImages[id]
if ok {
return imageName
}
img, err2 := store.Image(id)
if err2 == nil && len(img.Names) > 0 {
seenImages[id] = img.Names[0]
}
return seenImages[id]
}
builders, err := openBuilders(store)
if err != nil {
return errors.Wrapf(err, "error reading build containers")
}
if len(builders) > 0 && !noheading && !quiet {
if truncate {
fmt.Printf("%-12s %-12s %-10s %s\n", "CONTAINER ID", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-64s %-10s %s\n", "CONTAINER ID", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
var (
containerOutput []containerOutputParams
JSONContainers []jsonContainer
)
if !opts.all {
// only output containers created by buildah
for _, builder := range builders {
image := imageNameForID(builder.FromImageID)
if !matchesCtrFilter(builder.ContainerID, builder.Container, builder.FromImageID, image, params) {
continue
}
if opts.json {
JSONContainers = append(JSONContainers, jsonContainer{ID: builder.ContainerID,
Builder: true,
ImageID: builder.FromImageID,
ImageName: image,
ContainerName: builder.Container})
continue
}
output := containerOutputParams{
ContainerID: builder.ContainerID,
Builder: " *",
ImageID: builder.FromImageID,
ImageName: image,
ContainerName: builder.Container,
}
containerOutput = append(containerOutput, output)
}
} else {
// output all containers currently in storage
builderMap := make(map[string]struct{})
for _, builder := range builders {
builderMap[builder.ContainerID] = struct{}{}
}
containers, err2 := store.Containers()
if err2 != nil {
return errors.Wrapf(err2, "error reading list of all containers")
}
for _, container := range containers {
name := ""
if len(container.Names) > 0 {
name = container.Names[0]
}
_, ours := builderMap[container.ID]
builder := ""
if ours {
builder = " *"
}
if !matchesCtrFilter(container.ID, name, container.ImageID, imageNameForID(container.ImageID), params) {
continue
}
if opts.json {
JSONContainers = append(JSONContainers, jsonContainer{ID: container.ID,
Builder: ours,
ImageID: container.ImageID,
ImageName: imageNameForID(container.ImageID),
ContainerName: name})
}
output := containerOutputParams{
ContainerID: container.ID,
Builder: builder,
ImageID: container.ImageID,
ImageName: imageNameForID(container.ImageID),
ContainerName: name,
}
containerOutput = append(containerOutput, output)
}
}
for _, builder := range builders {
if builder.FromImage == "" {
builder.FromImage = buildah.BaseImageFakeName
}
if quiet {
fmt.Printf("%s\n", builder.ContainerID)
} else {
if truncate {
fmt.Printf("%-12.12s %-12.12s %-10s %s\n", builder.ContainerID, builder.FromImageID, builder.FromImage, builder.Container)
} else {
fmt.Printf("%-64s %-64s %-10s %s\n", builder.ContainerID, builder.FromImageID, builder.FromImage, builder.Container)
}
if opts.json {
data, err := json.MarshalIndent(JSONContainers, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
return nil
}
for _, ctr := range containerOutput {
if opts.quiet {
fmt.Printf("%-64s\n", ctr.ContainerID)
continue
}
if opts.format != "" {
if err := containerOutputUsingTemplate(opts.format, ctr); err != nil {
return err
}
continue
}
containerOutputUsingFormatString(!opts.noTruncate, ctr)
}
return nil
}
func containerOutputUsingTemplate(format string, params containerOutputParams) error {
tmpl, err := template.New("container").Parse(format)
if err != nil {
return errors.Wrapf(err, "template parsing error")
}
err = tmpl.Execute(os.Stdout, params)
if err != nil {
return err
}
fmt.Println()
return nil
}
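The --format handling above is plain Go text/template: the user-supplied format string is parsed and executed against one row of output parameters. A self-contained sketch of the same mechanism (renderRow is a hypothetical helper; the sample IDs and names are invented):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// containerOutputParams mirrors the struct the --format template runs against.
type containerOutputParams struct {
	ContainerID   string
	ImageName     string
	ContainerName string
}

// renderRow parses the user-supplied format as a text/template and executes
// it against a single row, returning the rendered string.
func renderRow(format string, p containerOutputParams) (string, error) {
	tmpl, err := template.New("container").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, p); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderRow("{{.ContainerID}} {{.ContainerName}}", containerOutputParams{
		ContainerID:   "0123456789ab",
		ContainerName: "alpine-working-container",
	})
	fmt.Println(out) // 0123456789ab alpine-working-container
}
```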
func containerOutputUsingFormatString(truncate bool, params containerOutputParams) {
if truncate {
fmt.Printf("%-12.12s %-8s %-12.12s %-32s %s\n", params.ContainerID, params.Builder, params.ImageID, params.ImageName, params.ContainerName)
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", params.ContainerID, params.Builder, params.ImageID, params.ImageName, params.ContainerName)
}
}
func containerOutputHeader(truncate bool) {
if truncate {
fmt.Printf("%-12s %-8s %-12s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
}
}
func parseCtrFilter(filter string) (*containerFilterParams, error) {
params := new(containerFilterParams)
filters := strings.Split(filter, ",")
for _, param := range filters {
pair := strings.SplitN(param, "=", 2)
if len(pair) != 2 {
return nil, errors.Errorf("incorrect filter value %q, should be of form filter=value", param)
}
switch strings.TrimSpace(pair[0]) {
case "id":
params.id = pair[1]
case "name":
params.name = pair[1]
case "ancestor":
params.ancestor = pair[1]
default:
return nil, errors.Errorf("invalid filter %q", pair[0])
}
}
return params, nil
}
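The --filter grammar above is comma-separated key=value pairs. The splitting can be shown without the containerFilterParams struct; parseFilterPairs is a hypothetical stand-in that returns the raw pairs instead:

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilterPairs mirrors the comma-then-equals splitting in parseCtrFilter,
// returning the raw key/value pairs as a map.
func parseFilterPairs(filter string) (map[string]string, error) {
	out := map[string]string{}
	for _, param := range strings.Split(filter, ",") {
		pair := strings.SplitN(param, "=", 2)
		if len(pair) != 2 {
			return nil, fmt.Errorf("incorrect filter value %q, should be of form filter=value", param)
		}
		out[strings.TrimSpace(pair[0])] = pair[1]
	}
	return out, nil
}

func main() {
	pairs, _ := parseFilterPairs("name=mycontainer,ancestor=alpine")
	fmt.Println(pairs["name"], pairs["ancestor"]) // mycontainer alpine
}
```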
func matchesCtrName(ctrName, argName string) bool {
return strings.Contains(ctrName, argName)
}
func matchesAncestor(imgName, imgID, argName string) bool {
if matchesID(imgID, argName) {
return true
}
return matchesReference(imgName, argName)
}
func matchesCtrFilter(ctrID, ctrName, imgID, imgName string, params *containerFilterParams) bool {
if params == nil {
return true
}
if params.id != "" && !matchesID(ctrID, params.id) {
return false
}
if params.name != "" && !matchesCtrName(ctrName, params.name) {
return false
}
if params.ancestor != "" && !matchesAncestor(imgName, imgID, params.ancestor) {
return false
}
return true
}


@@ -9,15 +9,22 @@ import (
"github.com/urfave/cli"
)
const (
// DefaultRegistry is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultRegistry = "docker://"
)
var (
fromFlags = []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.StringFlag{
Name: "name",
Usage: "`name` for the working container",
@@ -28,36 +35,35 @@ var (
},
cli.BoolFlag{
Name: "pull-always",
Usage: "pull the image even if one with the same name is already present",
},
cli.StringFlag{
Name: "registry",
Usage: "`prefix` to prepend to the image name in order to pull the image",
Value: DefaultRegistry,
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
Usage: "pull the image even if named image is present in store (supersedes pull option)",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pulling images",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when accessing the registry",
},
}
fromDescription = "Creates a new working container, either from scratch or using a specified\n image as a starting point"
fromCommand = cli.Command{
Name: "from",
Usage: "Create a working container based on an image",
Description: fromDescription,
Flags: fromFlags,
Action: fromCmd,
ArgsUsage: "IMAGE",
Name: "from",
Usage: "Create a working container based on an image",
Description: fromDescription,
Flags: append(fromFlags, fromAndBudFlags...),
Action: fromCmd,
ArgsUsage: "IMAGE",
SkipArgReorder: true,
}
)
func fromCmd(c *cli.Context) error {
args := c.Args()
if len(args) == 0 {
return errors.Errorf("an image name (or \"scratch\") must be specified")
@@ -65,56 +71,46 @@ func fromCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
image := args[0]
if err := validateFlags(c, fromFlags); err != nil {
return err
}
registry := DefaultRegistry
if c.IsSet("registry") {
registry = c.String("registry")
}
pull := true
if c.IsSet("pull") {
pull = c.BoolT("pull")
}
pullAlways := false
if c.IsSet("pull-always") {
pull = c.Bool("pull-always")
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
pullPolicy := buildah.PullNever
if pull {
if c.BoolT("pull") {
pullPolicy = buildah.PullIfMissing
}
if pullAlways {
if c.Bool("pull-always") {
pullPolicy = buildah.PullAlways
}
name := ""
if c.IsSet("name") {
name = c.String("name")
}
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
}
signaturePolicy := c.String("signature-policy")
store, err := getStore(c)
if err != nil {
return err
}
options := buildah.BuilderOptions{
FromImage: image,
Container: name,
PullPolicy: pullPolicy,
Registry: registry,
SignaturePolicyPath: signaturePolicy,
commonOpts, err := parseCommonBuildOptions(c)
if err != nil {
return err
}
if !quiet {
options := buildah.BuilderOptions{
FromImage: args[0],
Container: c.String("name"),
PullPolicy: pullPolicy,
SignaturePolicyPath: signaturePolicy,
SystemContext: systemContext,
DefaultMountsFilePath: c.GlobalString("default-mounts-file"),
CommonBuildOpts: commonOpts,
}
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}


@@ -2,84 +2,393 @@ package main
import (
"fmt"
"os"
"strings"
"text/template"
"time"
"encoding/json"
is "github.com/containers/image/storage"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
type jsonImage struct {
ID string `json:"id"`
Names []string `json:"names"`
}
type imageOutputParams struct {
ID string
Name string
Digest string
CreatedAt string
Size string
}
type filterParams struct {
dangling string
label string
beforeImage string // Images are sorted by date, so we can just output until we see the image
sinceImage string // Images are sorted by date, so we can just output until we don't see the image
beforeDate time.Time
sinceDate time.Time
referencePattern string
}
var (
imagesFlags = []cli.Flag{
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only image IDs",
Name: "digests",
Usage: "show digests",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print images using a Go template",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
},
cli.BoolFlag{
Name: "noheading, n",
Usage: "do not print column headings",
},
cli.BoolFlag{
Name: "notruncate",
Name: "no-trunc, notruncate",
Usage: "do not truncate output",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "display only image IDs",
},
}
imagesDescription = "Lists locally stored images."
imagesCommand = cli.Command{
Name: "images",
Usage: "List images in local storage",
Description: imagesDescription,
Flags: imagesFlags,
Action: imagesCmd,
ArgsUsage: " ",
Name: "images",
Usage: "List images in local storage",
Description: imagesDescription,
Flags: imagesFlags,
Action: imagesCmd,
ArgsUsage: " ",
SkipArgReorder: true,
}
)
func imagesCmd(c *cli.Context) error {
if err := validateFlags(c, imagesFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
images, err := store.Images()
if err != nil {
return errors.Wrapf(err, "error reading images")
}
if c.IsSet("quiet") && c.IsSet("format") {
return errors.Errorf("quiet and format are mutually exclusive")
}
quiet := c.Bool("quiet")
truncate := !c.Bool("no-trunc")
digests := c.Bool("digests")
hasTemplate := c.IsSet("format")
name := ""
if len(c.Args()) == 1 {
name = c.Args().Get(0)
} else if len(c.Args()) > 1 {
return errors.New("'buildah images' requires at most 1 argument")
}
if c.IsSet("json") {
JSONImages := []jsonImage{}
for _, image := range images {
JSONImages = append(JSONImages, jsonImage{ID: image.ID, Names: image.Names})
}
data, err2 := json.MarshalIndent(JSONImages, "", "    ")
if err2 != nil {
return err2
}
fmt.Printf("%s\n", data)
return nil
}
var params *filterParams
if c.IsSet("filter") {
params, err = parseFilter(store, images, c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
}
if len(images) > 0 && !c.Bool("noheading") && !quiet && !hasTemplate {
outputHeader(truncate, digests)
}
return outputImages(images, c.String("format"), store, params, name, hasTemplate, truncate, digests, quiet)
}
func parseFilter(store storage.Store, images []storage.Image, filter string) (*filterParams, error) {
params := new(filterParams)
filterStrings := strings.Split(filter, ",")
for _, param := range filterStrings {
pair := strings.SplitN(param, "=", 2)
switch strings.TrimSpace(pair[0]) {
case "dangling":
if pair[1] == "true" || pair[1] == "false" {
params.dangling = pair[1]
} else {
return nil, fmt.Errorf("invalid filter: '%s=[%s]'", pair[0], pair[1])
}
case "label":
params.label = pair[1]
case "before":
beforeDate, err := setFilterDate(store, images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[1])
}
params.beforeDate = beforeDate
params.beforeImage = pair[1]
case "since":
sinceDate, err := setFilterDate(store, images, pair[1])
if err != nil {
return nil, fmt.Errorf("no such id: %s", pair[1])
}
params.sinceDate = sinceDate
params.sinceImage = pair[1]
case "reference":
params.referencePattern = pair[1]
default:
return nil, fmt.Errorf("invalid filter: '%s'", pair[0])
}
}
return params, nil
}
func setFilterDate(store storage.Store, images []storage.Image, imgName string) (time.Time, error) {
for _, image := range images {
for _, name := range image.Names {
if matchesReference(name, imgName) {
// Set the date to this image
ref, err := is.Transport.ParseStoreReference(store, image.ID)
if err != nil {
return time.Time{}, fmt.Errorf("error parsing reference to image %q: %v", image.ID, err)
}
img, err := ref.NewImage(nil)
if err != nil {
return time.Time{}, fmt.Errorf("error reading image %q: %v", image.ID, err)
}
defer img.Close()
inspect, err := img.Inspect()
if err != nil {
return time.Time{}, fmt.Errorf("error inspecting image %q: %v", image.ID, err)
}
date := inspect.Created
return date, nil
}
}
}
return time.Time{}, fmt.Errorf("could not locate image %q", imgName)
}
func outputHeader(truncate, digests bool) {
if truncate {
fmt.Printf("%-20s %-56s ", "IMAGE ID", "IMAGE NAME")
} else {
fmt.Printf("%-64s %-56s ", "IMAGE ID", "IMAGE NAME")
}
if digests {
fmt.Printf("%-71s ", "DIGEST")
}
fmt.Printf("%-22s %s\n", "CREATED AT", "SIZE")
}
func outputImages(images []storage.Image, format string, store storage.Store, filters *filterParams, argName string, hasTemplate, truncate, digests, quiet bool) error {
for _, image := range images {
createdTime := image.Created
inspectedTime, digest, size, _ := getDateAndDigestAndSize(image, store)
if !inspectedTime.IsZero() {
if createdTime != inspectedTime {
logrus.Debugf("image record and configuration disagree on the image's creation time for %q, using the one from the configuration", image.ID)
createdTime = inspectedTime
}
}
names := []string{}
if len(image.Names) > 0 {
names = image.Names
} else {
// images without names should be printed with "<none>" as the image name
names = append(names, "<none>")
}
for _, name := range names {
if !matchesFilter(image, store, name, filters) || !matchesReference(name, argName) {
continue
}
if quiet {
fmt.Printf("%-64s\n", image.ID)
// We only want to print each id once
break
}
params := imageOutputParams{
ID: image.ID,
Name: name,
Digest: digest,
CreatedAt: createdTime.Format("Jan 2, 2006 15:04"),
Size: formattedSize(size),
}
if hasTemplate {
if err := outputUsingTemplate(format, params); err != nil {
return err
}
continue
}
outputUsingFormatString(truncate, digests, params)
}
}
return nil
}
func matchesFilter(image storage.Image, store storage.Store, name string, params *filterParams) bool {
if params == nil {
return true
}
if params.dangling != "" && !matchesDangling(name, params.dangling) {
return false
} else if params.label != "" && !matchesLabel(image, store, params.label) {
return false
} else if params.beforeImage != "" && !matchesBeforeImage(image, name, params) {
return false
} else if params.sinceImage != "" && !matchesSinceImage(image, name, params) {
return false
} else if params.referencePattern != "" && !matchesReference(name, params.referencePattern) {
return false
}
return true
}
func matchesDangling(name string, dangling string) bool {
if dangling == "false" && name != "<none>" {
return true
} else if dangling == "true" && name == "<none>" {
return true
}
return false
}
func matchesLabel(image storage.Image, store storage.Store, label string) bool {
storeRef, err := is.Transport.ParseStoreReference(store, image.ID)
if err != nil {
return false
}
img, err := storeRef.NewImage(nil)
if err != nil {
return false
}
defer img.Close()
info, err := img.Inspect()
if err != nil {
return false
}
pair := strings.SplitN(label, "=", 2)
for key, value := range info.Labels {
if key == pair[0] {
// A bare "label=key" filter matches any image carrying that key;
// "label=key=value" additionally requires the values to match
if len(pair) == 1 || value == pair[1] {
return true
}
}
}
return false
}
// Returns true if the image was created before the filter image. Returns
// false otherwise
func matchesBeforeImage(image storage.Image, name string, params *filterParams) bool {
return image.Created.IsZero() || image.Created.Before(params.beforeDate)
}
// Returns true if the image was created since the filter image. Returns
// false otherwise
func matchesSinceImage(image storage.Image, name string, params *filterParams) bool {
return image.Created.IsZero() || image.Created.After(params.sinceDate)
}
func matchesID(imageID, argID string) bool {
return strings.HasPrefix(imageID, argID)
}
func matchesReference(name, argName string) bool {
if argName == "" {
return true
}
splitName := strings.Split(name, ":")
// If the arg contains a tag, we handle it differently than if it does not
if strings.Contains(argName, ":") {
splitArg := strings.Split(argName, ":")
// Guard against names without a tag to avoid an index-out-of-range panic
if len(splitName) < 2 {
return false
}
return strings.HasSuffix(splitName[0], splitArg[0]) && (splitName[1] == splitArg[1])
}
}
return strings.HasSuffix(splitName[0], argName)
}
func formattedSize(size int64) string {
suffixes := [5]string{"B", "KB", "MB", "GB", "TB"}
count := 0
formattedSize := float64(size)
for formattedSize >= 1024 && count < 4 {
formattedSize /= 1024
count++
}
return fmt.Sprintf("%.4g %s", formattedSize, suffixes[count])
}
func outputUsingTemplate(format string, params imageOutputParams) error {
tmpl, err := template.New("image").Parse(format)
if err != nil {
return errors.Wrapf(err, "Template parsing error")
}
err = tmpl.Execute(os.Stdout, params)
if err != nil {
return err
}
if terminal.IsTerminal(int(os.Stdout.Fd())) {
fmt.Println()
}
return nil
}
func outputUsingFormatString(truncate, digests bool, params imageOutputParams) {
if truncate {
fmt.Printf("%-20.12s %-56s", params.ID, params.Name)
} else {
fmt.Printf("%-64s %-56s", params.ID, params.Name)
}
if digests {
fmt.Printf(" %-64s", params.Digest)
}
fmt.Printf(" %-22s %s\n", params.CreatedAt, params.Size)
}
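The size-formatting helper above divides by 1024 until the value drops below 1024 (capped at the TB suffix) and prints it with up to four significant digits. A minimal self-contained sketch of that logic, with the function body copied from images.go:

```go
package main

import "fmt"

// formattedSize reproduces the helper from images.go: scale the byte count
// down by powers of 1024, then format with up to four significant digits.
func formattedSize(size int64) string {
	suffixes := [5]string{"B", "KB", "MB", "GB", "TB"}
	count := 0
	formattedSize := float64(size)
	for formattedSize >= 1024 && count < 4 {
		formattedSize /= 1024
		count++
	}
	return fmt.Sprintf("%.4g %s", formattedSize, suffixes[count])
}

func main() {
	for _, n := range []int64{0, 1024, 1536, 97 * 1024, 1 << 40} {
		fmt.Println(formattedSize(n))
	}
}
```

This prints `0 B`, `1 KB`, `1.5 KB`, `97 KB`, and `1 TB`, matching the expectations in the size-formatting test below.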

cmd/buildah/images_test.go
@@ -0,0 +1,713 @@
package main
import (
"bytes"
"fmt"
"io"
"os"
"strings"
"testing"
"time"
is "github.com/containers/image/storage"
"github.com/containers/storage"
)
func TestTemplateOutputBlankTemplate(t *testing.T) {
params := imageOutputParams{
ID: "0123456789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
err := outputUsingTemplate("", params)
if err != nil {
t.Error(err)
}
}
func TestTemplateOutputValidTemplate(t *testing.T) {
params := imageOutputParams{
ID: "0123456789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
templateString := "{{.ID}}"
output, err := captureOutputWithError(func() error {
return outputUsingTemplate(templateString, params)
})
if err != nil {
t.Error(err)
} else if strings.TrimSpace(output) != strings.TrimSpace(params.ID) {
t.Errorf("Error with template output:\nExpected: %s\nReceived: %s\n", params.ID, output)
}
}
func TestFormatStringOutput(t *testing.T) {
params := imageOutputParams{
ID: "012345789abcdef",
Name: "test/image:latest",
Digest: "sha256:012345789abcdef012345789abcdef012345789abcdef012345789abcdef",
CreatedAt: "Jan 01 2016 10:45",
Size: "97 KB",
}
output := captureOutput(func() {
outputUsingFormatString(true, true, params)
})
expectedOutput := fmt.Sprintf("%-20.12s %-56s %-64s %-22s %s\n", params.ID, params.Name, params.Digest, params.CreatedAt, params.Size)
if output != expectedOutput {
t.Errorf("Error outputting using format string:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
}
func TestSizeFormatting(t *testing.T) {
size := formattedSize(0)
if size != "0 B" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "0 B", size)
}
size = formattedSize(1024)
if size != "1 KB" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "1 KB", size)
}
size = formattedSize(1024 * 1024 * 1024 * 1024 * 1024)
if size != "1024 TB" {
t.Errorf("Error formatting size: expected '%s' got '%s'", "1024 TB", size)
}
}
func TestOutputHeader(t *testing.T) {
output := captureOutput(func() {
outputHeader(true, false)
})
expectedOutput := fmt.Sprintf("%-20s %-56s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
output = captureOutput(func() {
outputHeader(true, true)
})
expectedOutput = fmt.Sprintf("%-20s %-56s %-71s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "DIGEST", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
output = captureOutput(func() {
outputHeader(false, false)
})
expectedOutput = fmt.Sprintf("%-64s %-56s %-22s %s\n", "IMAGE ID", "IMAGE NAME", "CREATED AT", "SIZE")
if output != expectedOutput {
t.Errorf("Error outputting header:\n\texpected: %s\n\treceived: %s\n", expectedOutput, output)
}
}
func TestMatchesReferenceWithTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "pause:latest")
if !isMatch {
t.Error("expected match, got no match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/pause:latest")
if !isMatch {
t.Error("expected match, got no match")
}
}
func TestNoMatchesReferenceWithTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "redis:latest")
if isMatch {
t.Error("expected no match, got match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/redis:latest")
if isMatch {
t.Error("expected no match, got match")
}
}
func TestMatchesReferenceWithoutTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "pause")
if !isMatch {
t.Error("expected match, got no match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/pause")
if !isMatch {
t.Error("expected match, got no match")
}
}
func TestNoMatchesReferenceWithoutTag(t *testing.T) {
isMatch := matchesReference("docker.io/kubernetes/pause:latest", "redis")
if isMatch {
t.Error("expected no match, got match")
}
isMatch = matchesReference("docker.io/kubernetes/pause:latest", "kubernetes/redis")
if isMatch {
t.Error("expected no match, got match")
}
}
func TestOutputImagesQuietTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
// Tests quiet and truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, true, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("quiet/truncated output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("quiet/truncated output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesQuietNotTruncated(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and non-truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "", false, false, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("quiet/non-truncated output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("quiet/non-truncated output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesFormatString(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests output with format template
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "{{.ID}}", store, nil, "", true, true, false, false)
})
expectedOutput := images[0].ID
if err != nil {
t.Error("format string output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("format string output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesFormatTemplate(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests output with a format template
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "{{.ID}}", store, nil, "", true, false, false, false)
})
expectedOutput := fmt.Sprintf("%-64s\n", images[0].ID)
if err != nil {
t.Error("format template output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("format template output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestOutputImagesArgNoMatch(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests output with an arg name that does not match. Args ending in ":" cannot match
// because all images in the repository must have a tag, and here the tag is an
// empty string
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "", store, nil, "foo:", false, true, false, false)
})
expectedOutput := ""
if err != nil {
t.Error("arg no match output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Error("arg no match output should be empty")
}
}
func TestOutputMultipleImages(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull two images so that we know we have at least two
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
_, err = pullTestImage(t, "alpine:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Tests quiet and truncated output
output, err := captureOutputWithError(func() error {
return outputImages(images[:2], "", store, nil, "", false, true, false, true)
})
expectedOutput := fmt.Sprintf("%-64s\n%-64s\n", images[0].ID, images[1].ID)
if err != nil {
t.Error("multi-image output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {
t.Errorf("multi-image output does not match expected value\nExpected: %s\nReceived: %s\n", expectedOutput, output)
}
}
func TestParseFilterAllParams(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=true,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
params, err := parseFilter(store, images, label)
if err != nil {
t.Fatalf("error parsing filter: %v", err)
}
ref, err := is.Transport.ParseStoreReference(store, "busybox:latest")
if err != nil {
t.Fatalf("error parsing store reference: %v", err)
}
img, err := ref.NewImage(nil)
if err != nil {
t.Fatalf("error reading image from store: %v", err)
}
defer img.Close()
inspect, err := img.Inspect()
if err != nil {
t.Fatalf("error inspecting image in store: %v", err)
}
expectedParams := &filterParams{
dangling: "true",
label: "a=b",
beforeImage: "busybox:latest",
beforeDate: inspect.Created,
sinceImage: "busybox:latest",
sinceDate: inspect.Created,
referencePattern: "abcdef",
}
if *params != *expectedParams {
t.Errorf("filter did not return expected result\n\tExpected: %v\n\tReceived: %v", expectedParams, params)
}
}
func TestParseFilterInvalidDangling(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=NO,label=a=b,before=busybox:latest,since=busybox:latest,reference=abcdef"
_, err = parseFilter(store, images, label)
if err == nil || err.Error() != "invalid filter: 'dangling=[NO]'" {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidBefore(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=:,since=busybox:latest,reference=abcdef"
_, err = parseFilter(store, images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidSince(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "dangling=false,label=a=b,before=busybox:latest,since=:,reference=abcdef"
_, err = parseFilter(store, images, label)
if err == nil || !strings.Contains(err.Error(), "no such id") {
t.Fatalf("expected error parsing filter")
}
}
func TestParseFilterInvalidFilter(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
label := "foo=bar"
_, err = parseFilter(store, images, label)
if err == nil || err.Error() != "invalid filter: 'foo'" {
t.Fatalf("expected error parsing filter")
}
}
func TestMatchesDanglingTrue(t *testing.T) {
if !matchesDangling("<none>", "true") {
t.Error("matchesDangling() should return true with dangling=true and name=<none>")
}
if !matchesDangling("hello", "false") {
t.Error("matchesDangling() should return true with dangling=false and name='hello'")
}
}
func TestMatchesDanglingFalse(t *testing.T) {
if matchesDangling("hello", "true") {
t.Error("matchesDangling() should return false with dangling=true and name=hello")
}
if matchesDangling("<none>", "false") {
t.Error("matchesDangling() should return false with dangling=false and name=<none>")
}
}
func TestMatchesLabelTrue(t *testing.T) {
//TODO: How do I implement this?
}
func TestMatchesLabelFalse(t *testing.T) {
// TODO: How do I implement this?
}
func TestMatchesBeforeImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// beforeDate is set to now, so any image already in the store was created before it
params := new(filterParams)
params.beforeDate = time.Now()
params.beforeImage = "foo:bar"
if !matchesBeforeImage(images[0], ":", params) {
t.Error("should have matched beforeImage")
}
}
func TestMatchesBeforeImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// beforeDate is the zero time, so no existing image was created before it
params := new(filterParams)
params.beforeDate = time.Time{}
params.beforeImage = "foo:bar"
// Should return false because the image was created after the zero time
if matchesBeforeImage(images[0], ":", params) {
t.Error("should not have matched beforeImage")
}
}
func TestMatchesSinceImageTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// sinceDate is the zero time, so every existing image was created after it
params := new(filterParams)
params.sinceDate = time.Time{}
params.sinceImage = "foo:bar"
if !matchesSinceImage(images[0], ":", params) {
t.Error("should have matched SinceImage")
}
}
func TestMatchesSinceImageFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if err != nil {
t.Fatal(err)
} else if store != nil {
is.Transport.SetStore(store)
}
// Pull an image so that we know we have at least one
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
// sinceDate is set to now, so no existing image was created after it
params := new(filterParams)
params.sinceDate = time.Now()
params.sinceImage = "foo:bar"
// Should return false because the image was created before time.Now()
if matchesSinceImage(images[0], ":", params) {
t.Error("should not have matched sinceImage")
}
if matchesSinceImage(images[0], "foo:bar", params) {
t.Error("image should have been filtered out")
}
}
func captureOutputWithError(f func() error) (string, error) {
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
err := f()
w.Close()
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
return buf.String(), err
}
// Captures output so that it can be compared to expected values
func captureOutput(f func()) string {
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
f()
w.Close()
os.Stdout = old
var buf bytes.Buffer
io.Copy(&buf, r)
return buf.String()
}


@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
const (
@@ -21,23 +22,25 @@ ID: {{.ContainerID}}
var (
inspectFlags = []cli.Flag{
cli.StringFlag{
Name: "format, f",
Usage: "use `format` as a Go template to format the output",
},
cli.StringFlag{
Name: "type, t",
Usage: "look at the item of the specified `type` (container or image) and name",
Value: inspectTypeContainer,
},
}
inspectDescription = "Inspects a build container's or built image's configuration."
inspectCommand = cli.Command{
Name: "inspect",
Usage: "Inspects the configuration of a container or image",
Description: inspectDescription,
Flags: inspectFlags,
Action: inspectCmd,
ArgsUsage: "CONTAINER-OR-IMAGE",
Name: "inspect",
Usage: "Inspects the configuration of a container or image",
Description: inspectDescription,
Flags: inspectFlags,
Action: inspectCmd,
ArgsUsage: "CONTAINER-OR-IMAGE",
SkipArgReorder: true,
}
)
@@ -51,23 +54,18 @@ func inspectCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
if err := validateFlags(c, inspectFlags); err != nil {
return err
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
format := defaultFormat
if c.String("format") != "" {
format = c.String("format")
}
t := template.Must(template.New("format").Parse(format))
@@ -78,27 +76,41 @@ func inspectCmd(c *cli.Context) error {
return err
}
switch c.String("type") {
case inspectTypeContainer:
builder, err = openBuilder(store, name)
if err != nil {
if c.IsSet("type") {
return errors.Wrapf(err, "error reading build container %q", name)
}
builder, err = openImage(systemContext, store, name)
if err != nil {
return errors.Wrapf(err, "error reading build object %q", name)
}
}
case inspectTypeImage:
builder, err = openImage(systemContext, store, name)
if err != nil {
return errors.Wrapf(err, "error reading image %q", name)
}
default:
return errors.Errorf("the only recognized types are %q and %q", inspectTypeContainer, inspectTypeImage)
}
if c.IsSet("format") {
if err := t.Execute(os.Stdout, buildah.GetBuildInfo(builder)); err != nil {
return err
}
if terminal.IsTerminal(int(os.Stdout.Fd())) {
fmt.Println()
}
return nil
}
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", "    ")
if terminal.IsTerminal(int(os.Stdout.Fd())) {
enc.SetEscapeHTML(false)
}
return enc.Encode(builder)
}


@@ -4,15 +4,17 @@ import (
"fmt"
"os"
"github.com/containers/storage"
ispecs "github.com/opencontainers/image-spec/specs-go"
rspecs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
func main() {
debug := false
var defaultStoreDriverOptions *cli.StringSlice
if buildah.InitReexec() {
return
@@ -27,6 +29,18 @@ func main() {
defaultStoreDriverOptions = &optionSlice
}
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug",
Usage: "print debugging information",
},
cli.StringFlag{
Name: "registries-conf",
Usage: "path to registries.conf file (not usually used)",
},
cli.StringFlag{
Name: "registries-conf-dir",
Usage: "path to registries.conf.d directory (not usually used)",
},
cli.StringFlag{
Name: "root",
Usage: "storage root dir",
@@ -47,17 +61,17 @@ func main() {
Usage: "storage driver option",
Value: defaultStoreDriverOptions,
},
cli.StringFlag{
Name: "default-mounts-file",
Usage: "path to default mounts file",
Value: buildah.DefaultMountsFile,
},
}
app.Before = func(c *cli.Context) error {
logrus.SetLevel(logrus.ErrorLevel)
if c.GlobalBool("debug") {
debug = true
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
@@ -88,10 +102,15 @@ func main() {
runCommand,
tagCommand,
umountCommand,
versionCommand,
}
err := app.Run(os.Args)
if err != nil {
logrus.Errorf("%v", err)
os.Exit(1)
if debug {
logrus.Errorf(err.Error())
} else {
fmt.Fprintln(os.Stderr, err.Error())
}
cli.OsExiter(1)
}
}


@@ -16,12 +16,13 @@ var (
},
}
mountCommand = cli.Command{
Name: "mount",
Usage: "Mount a working container's root filesystem",
Description: mountDescription,
Action: mountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Flags: mountFlags,
Name: "mount",
Usage: "Mount a working container's root filesystem",
Description: mountDescription,
Action: mountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Flags: mountFlags,
SkipArgReorder: true,
}
)
@@ -30,15 +31,15 @@ func mountCmd(c *cli.Context) error {
if len(args) > 1 {
return errors.Errorf("too many arguments specified")
}
if err := validateFlags(c, mountFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
truncate := true
if c.IsSet("notruncate") {
truncate = !c.Bool("notruncate")
}
truncate := !c.Bool("notruncate")
if len(args) == 1 {
name := args[0]
@@ -46,7 +47,7 @@ func mountCmd(c *cli.Context) error {
if err != nil {
return errors.Wrapf(err, "error reading build container %q", name)
}
mountPoint, err := builder.Mount("")
mountPoint, err := builder.Mount(builder.MountLabel)
if err != nil {
return errors.Wrapf(err, "error mounting %q container %q", name, builder.Container)
}


@@ -1,38 +1,77 @@
package main
import (
"fmt"
"os"
"strings"
"github.com/containers/image/manifest"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/storage/pkg/archive"
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/projectatomic/buildah/util"
"github.com/urfave/cli"
)
var (
pushFlags = []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
Usage: "don't compress layers",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
Name: "format, f",
Usage: "manifest type (oci, v2s1, or v2s2) to use when saving an image using the 'dir:' transport (default is manifest type of source)",
},
cli.BoolFlag{
Name: "quiet, q",
Usage: "don't output progress information when pushing images",
},
cli.StringFlag{
Name: "signature-policy",
Usage: "`pathname` of signature policy file (not usually used)",
},
cli.BoolTFlag{
Name: "tls-verify",
Usage: "require HTTPS and verify certificates when accessing the registry",
},
}
pushDescription = "Pushes an image to a specified location."
pushCommand = cli.Command{
Name: "push",
Usage: "Push an image to a specified location",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE [TRANSPORT:]IMAGE",
pushDescription = fmt.Sprintf(`
Pushes an image to a specified location.
The Image "DESTINATION" uses a "transport":"details" format.
Supported transports:
%s
See buildah-push(1) section "DESTINATION" for the expected format
`, strings.Join(transports.ListNames(), ", "))
pushCommand = cli.Command{
Name: "push",
Usage: "Push an image to a specified destination",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE DESTINATION",
SkipArgReorder: true,
}
)
@@ -41,43 +80,73 @@ func pushCmd(c *cli.Context) error {
if len(args) < 2 {
return errors.New("source and destination image IDs must be specified")
}
if err := validateFlags(c, pushFlags); err != nil {
return err
}
src := args[0]
destSpec := args[1]
signaturePolicy := ""
if c.IsSet("signature-policy") {
signaturePolicy = c.String("signature-policy")
}
compress := archive.Uncompressed
if !c.IsSet("disable-compression") || !c.Bool("disable-compression") {
compress = archive.Gzip
}
quiet := false
if c.IsSet("quiet") {
quiet = c.Bool("quiet")
compress := archive.Gzip
if c.Bool("disable-compression") {
compress = archive.Uncompressed
}
store, err := getStore(c)
if err != nil {
return err
}
dest, err := alltransports.ParseImageName(destSpec)
// add the docker:// transport to see if they neglected it.
if err != nil {
return err
if strings.Contains(destSpec, "://") {
return err
}
destSpec = "docker://" + destSpec
dest2, err2 := alltransports.ParseImageName(destSpec)
if err2 != nil {
return err
}
dest = dest2
}
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
var manifestType string
if c.IsSet("format") {
switch c.String("format") {
case "oci":
manifestType = imgspecv1.MediaTypeImageManifest
case "v2s1":
manifestType = manifest.DockerV2Schema1SignedMediaType
case "v2s2", "docker":
manifestType = manifest.DockerV2Schema2MediaType
default:
return fmt.Errorf("unknown format %q. Choose one of the supported formats: 'oci', 'v2s1', or 'v2s2'", c.String("format"))
}
}
options := buildah.PushOptions{
Compression: compress,
SignaturePolicyPath: signaturePolicy,
ManifestType: manifestType,
SignaturePolicyPath: c.String("signature-policy"),
Store: store,
SystemContext: systemContext,
}
if !quiet {
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}
err = buildah.Push(src, dest, options)
if err != nil {
return errors.Wrapf(err, "error pushing image %q to %q", src, destSpec)
return util.GetFailureCause(
err,
errors.Wrapf(err, "error pushing image %q to %q", src, destSpec),
)
}
return nil
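The new `--format` handling above maps a short format name to a manifest media type before pushing. A minimal standalone sketch of that mapping; the media-type strings are written out literally here as assumed values, whereas the real code takes them from the image-spec and containers/image `manifest` packages:

```go
package main

import "fmt"

// mediaTypeForFormat mirrors the switch in pushCmd: a short format name
// selects a manifest media type, and anything else is an error.
// The media-type strings below are assumptions standing in for
// imgspecv1.MediaTypeImageManifest and the manifest package constants.
func mediaTypeForFormat(format string) (string, error) {
	switch format {
	case "oci":
		return "application/vnd.oci.image.manifest.v1+json", nil
	case "v2s1":
		return "application/vnd.docker.distribution.manifest.v1+prettyjws", nil
	case "v2s2", "docker":
		return "application/vnd.docker.distribution.manifest.v2+json", nil
	default:
		return "", fmt.Errorf("unknown format %q. Choose one of the supported formats: 'oci', 'v2s1', or 'v2s2'", format)
	}
}

func main() {
	mt, err := mediaTypeForFormat("oci")
	fmt.Println(mt, err)
}
```

Note that `v2s2` and `docker` are aliases for the same schema-2 media type, so scripts written against either name behave identically.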


@@ -2,6 +2,7 @@ package main
import (
"fmt"
"io"
"os"
"github.com/pkg/errors"
@@ -10,19 +11,36 @@ import (
var (
rmDescription = "Removes one or more working containers, unmounting them if necessary"
rmCommand = cli.Command{
Name: "rm",
Aliases: []string{"delete"},
Usage: "Remove one or more working containers",
Description: rmDescription,
Action: rmCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [...]",
rmFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "remove all containers",
},
}
rmCommand = cli.Command{
Name: "rm",
Aliases: []string{"delete"},
Usage: "Remove one or more working containers",
Description: rmDescription,
Action: rmCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [...]",
Flags: rmFlags,
SkipArgReorder: true,
}
)
// writeError writes `lastError` into `w` if it is not nil and returns the next error `err`
func writeError(w io.Writer, err error, lastError error) error {
if lastError != nil {
fmt.Fprintln(w, lastError)
}
return err
}
func rmCmd(c *cli.Context) error {
delContainerErrStr := "error removing container"
args := c.Args()
if len(args) == 0 {
if len(args) == 0 && !c.Bool("all") {
return errors.Errorf("container ID must be specified")
}
store, err := getStore(c)
@@ -30,28 +48,36 @@ func rmCmd(c *cli.Context) error {
return err
}
var e error
for _, name := range args {
builder, err := openBuilder(store, name)
if e == nil {
e = err
}
var lastError error
if c.Bool("all") {
builders, err := openBuilders(store)
if err != nil {
fmt.Fprintf(os.Stderr, "error reading build container %q: %v\n", name, err)
continue
return errors.Wrapf(err, "error reading build containers")
}
id := builder.ContainerID
err = builder.Delete()
if e == nil {
e = err
for _, builder := range builders {
id := builder.ContainerID
if err = builder.Delete(); err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, builder.Container), lastError)
continue
}
fmt.Printf("%s\n", id)
}
if err != nil {
fmt.Fprintf(os.Stderr, "error removing container %q: %v\n", builder.Container, err)
continue
} else {
for _, name := range args {
builder, err := openBuilder(store, name)
if err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, name), lastError)
continue
}
id := builder.ContainerID
if err = builder.Delete(); err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, name), lastError)
continue
}
fmt.Printf("%s\n", id)
}
fmt.Printf("%s\n", id)
}
return e
return lastError
}


@@ -4,106 +4,313 @@ import (
"fmt"
"os"
"github.com/Sirupsen/logrus"
"github.com/containers/image/storage"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
rmiDescription = "Removes one or more locally stored images."
rmiCommand = cli.Command{
Name: "rmi",
Usage: "Removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
rmiDescription = "removes one or more locally stored images."
rmiFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "remove all images",
},
cli.BoolFlag{
Name: "prune, p",
Usage: "prune dangling images",
},
cli.BoolFlag{
Name: "force, f",
Usage: "force removal of the image and any containers using the image",
},
}
rmiCommand = cli.Command{
Name: "rmi",
Usage: "removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
Flags: rmiFlags,
SkipArgReorder: true,
}
)
func rmiCmd(c *cli.Context) error {
force := c.Bool("force")
removeAll := c.Bool("all")
pruneDangling := c.Bool("prune")
args := c.Args()
if len(args) == 0 {
if len(args) == 0 && !removeAll && !pruneDangling {
return errors.Errorf("image name or ID must be specified")
}
if len(args) > 0 && removeAll {
return errors.Errorf("when using the --all switch, you may not pass any images names or IDs")
}
if removeAll && pruneDangling {
return errors.Errorf("when using the --all switch, you may not use --prune switch")
}
if err := validateFlags(c, rmiFlags); err != nil {
return err
}
store, err := getStore(c)
if err != nil {
return err
}
var e error
for _, id := range args {
// If it's an exact name or ID match with the underlying
// storage library's information about the image, then it's
// enough.
_, err = store.DeleteImage(id, true)
imagesToDelete := args[:]
var lastError error
if removeAll {
imagesToDelete, err = findAllImages(store)
if err != nil {
var ref types.ImageReference
// If it looks like a proper image reference, parse
// it and check if it corresponds to an image that
// actually exists.
if ref2, err2 := alltransports.ParseImageName(id); err2 == nil {
if img, err3 := ref2.NewImage(nil); err3 == nil {
img.Close()
ref = ref2
} else {
logrus.Debugf("error confirming presence of image %q: %v", transports.ImageName(ref2), err3)
}
} else {
logrus.Debugf("error parsing %q as an image reference: %v", id, err2)
}
if ref == nil {
// If it looks like an image reference that's
// relative to our storage, parse it and check
// if it corresponds to an image that actually
// exists.
if ref2, err2 := storage.Transport.ParseStoreReference(store, id); err2 == nil {
if img, err3 := ref2.NewImage(nil); err3 == nil {
img.Close()
ref = ref2
} else {
logrus.Debugf("error confirming presence of image %q: %v", transports.ImageName(ref2), err3)
}
} else {
logrus.Debugf("error parsing %q as a store reference: %v", id, err2)
}
}
if ref == nil {
// If it might be an ID that's relative to our
// storage, parse it and check if it
// corresponds to an image that actually
// exists. This _should_ be redundant, since
// we already tried deleting the image using
// the ID directly above, but it can't hurt,
// either.
if ref2, err2 := storage.Transport.ParseStoreReference(store, "@"+id); err2 == nil {
if img, err3 := ref2.NewImage(nil); err3 == nil {
img.Close()
ref = ref2
} else {
logrus.Debugf("error confirming presence of image %q: %v", transports.ImageName(ref2), err3)
}
} else {
logrus.Debugf("error parsing %q as an image reference: %v", "@"+id, err2)
}
}
if ref != nil {
err = ref.DeleteImage(nil)
}
return err
}
if e == nil {
e = err
}
if err != nil {
fmt.Fprintf(os.Stderr, "error removing image %q: %v\n", id, err)
continue
}
fmt.Printf("%s\n", id)
}
return e
if pruneDangling {
imagesToDelete, err = findDanglingImages(store)
if err != nil {
return err
}
}
for _, id := range imagesToDelete {
image, err := getImage(id, store)
if err != nil || image == nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
if err == nil {
err = storage.ErrNotAnImage
}
lastError = errors.Wrapf(err, "could not get image %q", id)
continue
}
if image != nil {
ctrIDs, err := runningContainers(image, store)
if err != nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error getting running containers for image %q", id)
continue
}
if len(ctrIDs) > 0 && len(image.Names) <= 1 {
if force {
err = removeContainers(ctrIDs, store)
if err != nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error removing containers %v for image %q", ctrIDs, id)
continue
}
} else {
for _, ctrID := range ctrIDs {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(storage.ErrImageUsedByContainer, "Could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
}
continue
}
}
// If the user supplied an ID, we cannot delete the image if it is referred to by multiple tags
if matchesID(image.ID, id) {
if len(image.Names) > 1 && !force {
return fmt.Errorf("unable to delete %s (must force) - image is referred to in multiple tags", image.ID)
}
// If it is forced, we have to untag the image so that it can be deleted
image.Names = image.Names[:0]
} else {
name, err2 := untagImage(id, image, store)
if err2 != nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err2, "error removing tag %q from image %q", id, image.ID)
continue
}
fmt.Printf("untagged: %s\n", name)
}
if len(image.Names) > 0 {
continue
}
id, err := removeImage(image, store)
if err != nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error removing image %q", image.ID)
continue
}
fmt.Printf("%s\n", id)
}
}
return lastError
}
func getImage(id string, store storage.Store) (*storage.Image, error) {
var ref types.ImageReference
ref, err := properImageRef(id)
if err != nil {
logrus.Debug(err)
}
if ref == nil {
if ref, err = storageImageRef(store, id); err != nil {
logrus.Debug(err)
}
}
if ref == nil {
if ref, err = storageImageID(store, id); err != nil {
logrus.Debug(err)
}
}
if ref != nil {
image, err2 := is.Transport.GetStoreImage(store, ref)
if err2 != nil {
return nil, errors.Wrapf(err2, "error reading image using reference %q", transports.ImageName(ref))
}
return image, nil
}
return nil, err
}
func untagImage(imgArg string, image *storage.Image, store storage.Store) (string, error) {
newNames := []string{}
removedName := ""
for _, name := range image.Names {
if matchesReference(name, imgArg) {
removedName = name
continue
}
newNames = append(newNames, name)
}
if removedName != "" {
if err := store.SetNames(image.ID, newNames); err != nil {
return "", errors.Wrapf(err, "error removing name %q from image %q", removedName, image.ID)
}
}
return removedName, nil
}
func removeImage(image *storage.Image, store storage.Store) (string, error) {
if _, err := store.DeleteImage(image.ID, true); err != nil {
return "", errors.Wrapf(err, "could not remove image %q", image.ID)
}
return image.ID, nil
}
// Returns a list of running containers associated with the given ImageReference
func runningContainers(image *storage.Image, store storage.Store) ([]string, error) {
ctrIDs := []string{}
containers, err := store.Containers()
if err != nil {
return nil, err
}
for _, ctr := range containers {
if ctr.ImageID == image.ID {
ctrIDs = append(ctrIDs, ctr.ID)
}
}
return ctrIDs, nil
}
func removeContainers(ctrIDs []string, store storage.Store) error {
for _, ctrID := range ctrIDs {
if err := store.DeleteContainer(ctrID); err != nil {
return errors.Wrapf(err, "could not remove container %q", ctrID)
}
}
return nil
}
// If it looks like a proper image reference, parse it and check if it
// corresponds to an image that actually exists.
func properImageRef(id string) (types.ImageReference, error) {
var err error
if ref, err := alltransports.ParseImageName(id); err == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, errors.Wrapf(err, "error confirming presence of image reference %q", transports.ImageName(ref))
}
return nil, errors.Wrapf(err, "error parsing %q as an image reference", id)
}
// If it looks like an image reference that's relative to our storage, parse
// it and check if it corresponds to an image that actually exists.
func storageImageRef(store storage.Store, id string) (types.ImageReference, error) {
var err error
if ref, err := is.Transport.ParseStoreReference(store, id); err == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference", id)
}
// It might be an ID that's relative to our storage, truncated or not, so
// parse it and check if it corresponds to an image that we have stored
// locally.
func storageImageID(store storage.Store, id string) (types.ImageReference, error) {
var err error
imageID := id
if img, err := store.Image(id); err == nil && img != nil {
imageID = img.ID
}
if ref, err := is.Transport.ParseStoreReference(store, imageID); err == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
return nil, errors.Wrapf(err, "error confirming presence of storage image reference %q", transports.ImageName(ref))
}
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference", id)
}
// Returns a list of all existing images
func findAllImages(store storage.Store) ([]string, error) {
imagesToDelete := []string{}
images, err := store.Images()
if err != nil {
return nil, errors.Wrapf(err, "error reading images")
}
for _, image := range images {
imagesToDelete = append(imagesToDelete, image.ID)
}
return imagesToDelete, nil
}
// Returns a list of all dangling images
func findDanglingImages(store storage.Store) ([]string, error) {
imagesToDelete := []string{}
images, err := store.Images()
if err != nil {
return nil, errors.Wrapf(err, "error reading images")
}
for _, image := range images {
if len(image.Names) == 0 {
imagesToDelete = append(imagesToDelete, image.ID)
}
}
return imagesToDelete, nil
}
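`findDanglingImages` above treats any image with an empty name list as dangling. A sketch of that filter over a minimal stand-in type (the `image` struct here is an assumption modeling only the fields of `storage.Image` that the filter reads):

```go
package main

import "fmt"

// image is a stand-in for storage.Image; only the fields the
// dangling-image filter needs are modeled here.
type image struct {
	ID    string
	Names []string
}

// danglingImageIDs mirrors the findDanglingImages loop: an image with
// no names is dangling and is selected for deletion by rmi --prune.
func danglingImageIDs(images []image) []string {
	var ids []string
	for _, img := range images {
		if len(img.Names) == 0 {
			ids = append(ids, img.ID)
		}
	}
	return ids
}

func main() {
	imgs := []image{
		{ID: "aaa", Names: []string{"busybox:latest"}},
		{ID: "bbb"}, // untagged, so dangling
	}
	fmt.Println(danglingImageIDs(imgs))
}
```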

cmd/buildah/rmi_test.go Normal file

@@ -0,0 +1,141 @@
package main
import (
"strings"
"testing"
is "github.com/containers/image/storage"
"github.com/containers/storage"
)
func TestProperImageRefTrue(t *testing.T) {
// Pull an image so we know we have it
_, err := pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove")
}
// This should match a url path
imgRef, err := properImageRef("docker://busybox:latest")
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestProperImageRefFalse(t *testing.T) {
// Pull an image so we know we have it
_, err := pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatal("could not pull image to remove")
}
// This should match a url path
imgRef, _ := properImageRef("docker://:")
if imgRef != nil {
t.Error("should not have found an Image Reference")
}
}
func TestStorageImageRefTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
imgRef, err := storageImageRef(store, "busybox")
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestStorageImageRefFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
imgRef, _ := storageImageRef(store, "")
if imgRef != nil {
t.Error("should not have found an Image Reference")
}
}
func TestStorageImageIDTrue(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
_, err = pullTestImage(t, "busybox:latest")
if err != nil {
t.Fatalf("could not pull image to remove: %v", err)
}
// Get the ID of the image we just pulled
images, err := store.Images()
if err != nil {
t.Fatalf("Error reading images: %v", err)
}
id, err := captureOutputWithError(func() error {
return outputImages(images, "", store, nil, "busybox:latest", false, false, false, true)
})
if err != nil {
t.Fatalf("Error getting id of image: %v", err)
}
id = strings.TrimSpace(id)
imgRef, err := storageImageID(store, id)
if err != nil {
t.Errorf("could not match image: %v", err)
} else if imgRef == nil {
t.Error("Returned nil Image Reference")
}
}
func TestStorageImageIDFalse(t *testing.T) {
// Make sure the tests are running as root
failTestIfNotRoot(t)
store, err := storage.GetStore(storeOptions)
if store != nil {
is.Transport.SetStore(store)
}
if err != nil {
t.Fatalf("could not get store: %v", err)
}
// Pull an image so we know we have it
id := ""
imgRef, _ := storageImageID(store, id)
if imgRef != nil {
t.Error("should not have returned Image Reference")
}
}


@@ -6,15 +6,19 @@ import (
"strings"
"syscall"
"github.com/Sirupsen/logrus"
specs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
var (
runFlags = []cli.Flag{
cli.StringFlag{
Name: "hostname",
Usage: "set the hostname inside of the container",
},
cli.StringFlag{
Name: "runtime",
Usage: "`path` to an alternate runtime",
@@ -24,6 +28,14 @@ var (
Name: "runtime-flag",
Usage: "add global flags for the container runtime",
},
cli.StringSliceFlag{
Name: "security-opt",
Usage: "security options (default [])",
},
cli.BoolFlag{
Name: "tty",
Usage: "allocate a pseudo-TTY in the container",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "bind mount a host location into the container while running the command",
@@ -31,12 +43,13 @@ var (
}
runDescription = "Runs a specified command using the container's root filesystem as a root\n filesystem, using configuration settings inherited from the container's\n image or as specified using previous calls to the config command"
runCommand = cli.Command{
Name: "run",
Usage: "Run a command inside of the container",
Description: runDescription,
Flags: runFlags,
Action: runCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID COMMAND [ARGS [...]]",
Name: "run",
Usage: "Run a command inside of the container",
Description: runDescription,
Flags: runFlags,
Action: runCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID COMMAND [ARGS [...]]",
SkipArgReorder: true,
}
)
@@ -46,19 +59,13 @@ func runCmd(c *cli.Context) error {
return errors.Errorf("container ID must be specified")
}
name := args[0]
args = args.Tail()
if err := validateFlags(c, runFlags); err != nil {
return err
}
runtime := ""
if c.IsSet("runtime") {
runtime = c.String("runtime")
}
flags := []string{}
if c.IsSet("runtime-flag") {
flags = c.StringSlice("runtime-flag")
}
volumes := []string{}
if c.IsSet("v") || c.IsSet("volume") {
volumes = c.StringSlice("volume")
args = args.Tail()
if len(args) > 0 && args[0] == "--" {
args = args[1:]
}
store, err := getStore(c)
@@ -71,16 +78,26 @@ func runCmd(c *cli.Context) error {
return errors.Wrapf(err, "error reading build container %q", name)
}
hostname := ""
if c.IsSet("hostname") {
hostname = c.String("hostname")
runtimeFlags := []string{}
for _, arg := range c.StringSlice("runtime-flag") {
runtimeFlags = append(runtimeFlags, "--"+arg)
}
options := buildah.RunOptions{
Hostname: hostname,
Runtime: runtime,
Args: flags,
Hostname: c.String("hostname"),
Runtime: c.String("runtime"),
Args: runtimeFlags,
}
for _, volumeSpec := range volumes {
if c.IsSet("tty") {
if c.Bool("tty") {
options.Terminal = buildah.WithTerminal
} else {
options.Terminal = buildah.WithoutTerminal
}
}
for _, volumeSpec := range c.StringSlice("volume") {
volSpec := strings.Split(volumeSpec, ":")
if len(volSpec) >= 2 {
mountOptions := "bind"
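The runtime-flag loop earlier in `runCmd` turns each bare `--runtime-flag` value into a `--`-prefixed argument for the container runtime. A minimal sketch of that transformation:

```go
package main

import "fmt"

// prefixRuntimeFlags mirrors the loop in runCmd: every value given via
// --runtime-flag is passed to the runtime with a leading "--", so users
// write `--runtime-flag log-format=json` rather than the full flag.
func prefixRuntimeFlags(flags []string) []string {
	out := make([]string, 0, len(flags))
	for _, f := range flags {
		out = append(out, "--"+f)
	}
	return out
}

func main() {
	fmt.Println(prefixRuntimeFlags([]string{"log-format=json", "debug"}))
}
```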


@@ -9,11 +9,12 @@ import (
var (
tagDescription = "Adds one or more additional names to locally-stored image"
tagCommand = cli.Command{
Name: "tag",
Usage: "Add an additional name to a local image",
Description: tagDescription,
Action: tagCmd,
ArgsUsage: "IMAGE-NAME [IMAGE-NAME ...]",
Name: "tag",
Usage: "Add an additional name to a local image",
Description: tagDescription,
Action: tagCmd,
ArgsUsage: "IMAGE-NAME [IMAGE-NAME ...]",
SkipArgReorder: true,
}
)


@@ -7,12 +7,13 @@ import (
var (
umountCommand = cli.Command{
Name: "umount",
Aliases: []string{"unmount"},
Usage: "Unmount a working container's root filesystem",
Description: "Unmounts a working container's root filesystem",
Action: umountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Name: "umount",
Aliases: []string{"unmount"},
Usage: "Unmount a working container's root filesystem",
Description: "Unmounts a working container's root filesystem",
Action: umountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
SkipArgReorder: true,
}
)

cmd/buildah/version.go Normal file

@@ -0,0 +1,49 @@
package main
import (
"fmt"
"runtime"
"strconv"
"time"
ispecs "github.com/opencontainers/image-spec/specs-go"
rspecs "github.com/opencontainers/runtime-spec/specs-go"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
)
// Overwritten at build time
var (
gitCommit string
buildInfo string
)
// Function to get and print info for the version command
func versionCmd(c *cli.Context) error {
// convert the Unix time from string to int64
buildTime, err := strconv.ParseInt(buildInfo, 10, 64)
if err != nil {
return err
}
fmt.Println("Version: ", buildah.Version)
fmt.Println("Go Version: ", runtime.Version())
fmt.Println("Image Spec: ", ispecs.Version)
fmt.Println("Runtime Spec: ", rspecs.Version)
fmt.Println("Git Commit: ", gitCommit)
// Print the build time in a readable format
fmt.Println("Built: ", time.Unix(buildTime, 0).Format(time.ANSIC))
fmt.Println("OS/Arch: ", runtime.GOOS+"/"+runtime.GOARCH)
return nil
}
// cli command to print out the version info of buildah
var versionCommand = cli.Command{
Name: "version",
Usage: "Display the Buildah Version Information",
Action: versionCmd,
SkipArgReorder: true,
}
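`versionCmd` above parses `buildInfo`, a Unix timestamp (in seconds) injected at build time as a string, and prints it via `time.Unix(...).Format(time.ANSIC)`. A sketch of that conversion in isolation:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// formatBuildTime mirrors versionCmd's handling of buildInfo: parse the
// string as a Unix timestamp in seconds, then render it in ANSIC form.
// An unparseable string surfaces as an error, as in versionCmd.
func formatBuildTime(buildInfo string) (string, error) {
	secs, err := strconv.ParseInt(buildInfo, 10, 64)
	if err != nil {
		return "", err
	}
	return time.Unix(secs, 0).Format(time.ANSIC), nil
}

func main() {
	s, err := formatBuildTime("1519736069")
	fmt.Println(s, err)
}
```

The rendered string depends on the local timezone, since `time.Unix` returns a local time.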

commit.go

@@ -2,10 +2,11 @@ package buildah
import (
"bytes"
"fmt"
"io"
"syscall"
"time"
"github.com/Sirupsen/logrus"
cp "github.com/containers/image/copy"
"github.com/containers/image/signature"
is "github.com/containers/image/storage"
@@ -13,21 +14,10 @@ import (
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/containers/storage/pkg/archive"
"github.com/containers/storage/pkg/stringid"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/util"
)
var (
// gzippedEmptyLayer is a gzip-compressed version of an empty tar file (just 1024 zero bytes). This
// comes from github.com/docker/distribution/manifest/schema1/config_builder.go by way of
// github.com/containers/image/image/docker_schema2.go; there is a non-zero embedded timestamp; we could
// zero that, but that would just waste storage space in registries, so lets use the same values.
gzippedEmptyLayer = []byte{
31, 139, 8, 0, 0, 9, 110, 136, 0, 255, 98, 24, 5, 163, 96, 20, 140, 88,
0, 8, 0, 0, 255, 255, 46, 175, 181, 239, 0, 4, 0, 0,
}
"github.com/sirupsen/logrus"
)
// CommitOptions can be used to alter how an image is committed.
@@ -55,6 +45,9 @@ type CommitOptions struct {
// HistoryTimestamp is the timestamp used when creating new items in the
// image's history. If unset, the current time will be used.
HistoryTimestamp *time.Time
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
}
// PushOptions can be used to alter how an image is copied somewhere.
@@ -74,6 +67,12 @@ type PushOptions struct {
ReportWriter io.Writer
// Store is the local storage store which holds the source image.
Store storage.Store
// github.com/containers/image/types SystemContext to hold credentials
// and other authentication/authorization information.
SystemContext *types.SystemContext
// ManifestType is the format to use when saving the image using the 'dir' transport
// possible options are oci, v2s1, and v2s2
ManifestType string
}
// shallowCopy copies the most recent layer, the configuration, and the manifest from one image to another.
@@ -81,41 +80,50 @@ type PushOptions struct {
// almost any other destination has higher expectations.
// We assume that "dest" is a reference to a local image (specifically, a containers/image/storage.storageReference),
// and will fail if it isn't.
func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReference, systemContext *types.SystemContext) error {
func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReference, systemContext *types.SystemContext, compression archive.Compression) error {
var names []string
// Read the target image name.
if dest.DockerReference() == nil {
return errors.New("can't write to an unnamed image")
if dest.DockerReference() != nil {
names = []string{dest.DockerReference().String()}
}
names, err := util.ExpandTags([]string{dest.DockerReference().String()})
if err != nil {
return err
}
// Make a temporary image reference.
tmpName := stringid.GenerateRandomID() + "-tmp-" + Package + "-commit"
tmpRef, err := is.Transport.ParseStoreReference(b.store, tmpName)
if err != nil {
return err
}
defer func() {
if err2 := tmpRef.DeleteImage(systemContext); err2 != nil {
logrus.Debugf("error deleting temporary image %q: %v", tmpName, err2)
}
}()
// Open the source for reading and a temporary image for writing.
// Open the source for reading and the new image for writing.
srcImage, err := src.NewImage(systemContext)
if err != nil {
return errors.Wrapf(err, "error reading configuration to write to image %q", transports.ImageName(dest))
}
defer srcImage.Close()
tmpImage, err := tmpRef.NewImageDestination(systemContext)
destImage, err := dest.NewImageDestination(systemContext)
if err != nil {
return errors.Wrapf(err, "error opening temporary copy of image %q for writing", transports.ImageName(dest))
return errors.Wrapf(err, "error opening image %q for writing", transports.ImageName(dest))
}
defer tmpImage.Close()
// Write an empty filesystem layer, because the image layer requires at least one.
_, err = tmpImage.PutBlob(bytes.NewReader(gzippedEmptyLayer), types.BlobInfo{Size: int64(len(gzippedEmptyLayer))})
// Look up the container's read-write layer.
container, err := b.store.Container(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error writing dummy layer for image %q", transports.ImageName(dest))
return errors.Wrapf(err, "error reading information about working container %q", b.ContainerID)
}
// Extract the read-write layer's contents, using whatever compression the container image used to
// calculate the blob sum in the manifest.
switch compression {
case archive.Gzip:
logrus.Debugf("extracting layer %q with gzip", container.LayerID)
case archive.Bzip2:
// Until the image specs define a media type for bzip2-compressed layers, even if we know
// how to decompress them, we can't try to compress layers with bzip2.
return errors.Wrapf(syscall.ENOTSUP, "media type for bzip2-compressed layers is not defined")
default:
logrus.Debugf("extracting layer %q with unknown compressor(?)", container.LayerID)
}
diffOptions := &storage.DiffOptions{
Compression: &compression,
}
layerDiff, err := b.store.Diff("", container.LayerID, diffOptions)
if err != nil {
return errors.Wrapf(err, "error reading layer %q from source image %q", container.LayerID, transports.ImageName(src))
}
defer layerDiff.Close()
// Write a copy of the layer as a blob, for the new image to reference.
if _, err = destImage.PutBlob(layerDiff, types.BlobInfo{Digest: "", Size: -1}); err != nil {
return errors.Wrapf(err, "error creating new read-only layer from container %q", b.ContainerID)
}
// Read the newly-generated configuration blob.
config, err := srcImage.ConfigBlob()
@@ -126,106 +134,45 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
return errors.Errorf("error reading new configuration for image %q: it's empty", transports.ImageName(dest))
}
logrus.Debugf("read configuration blob %q", string(config))
// Write the configuration to the temporary image.
// Write the configuration to the new image.
configBlobInfo := types.BlobInfo{
Digest: digest.Canonical.FromBytes(config),
Size: int64(len(config)),
}
_, err = tmpImage.PutBlob(bytes.NewReader(config), configBlobInfo)
if err != nil && len(config) > 0 {
if _, err = destImage.PutBlob(bytes.NewReader(config), configBlobInfo); err != nil {
return errors.Wrapf(err, "error writing image configuration for temporary copy of %q", transports.ImageName(dest))
}
// Read the newly-generated, mostly fake, manifest.
// Read the newly-generated manifest, which already contains a layer entry for the read-write layer.
manifest, _, err := srcImage.Manifest()
if err != nil {
return errors.Wrapf(err, "error reading new manifest for image %q", transports.ImageName(dest))
}
// Write the manifest to the temporary image.
err = tmpImage.PutManifest(manifest)
// Write the manifest to the new image.
err = destImage.PutManifest(manifest)
if err != nil {
return errors.Wrapf(err, "error writing new manifest to temporary copy of image %q", transports.ImageName(dest))
return errors.Wrapf(err, "error writing new manifest to image %q", transports.ImageName(dest))
}
// Save the temporary image.
err = tmpImage.Commit()
// Save the new image.
err = destImage.Commit()
if err != nil {
return errors.Wrapf(err, "error committing new image %q", transports.ImageName(dest))
}
// Locate the temporary image in the lower-level API. Read its item names.
tmpImg, err := is.Transport.GetStoreImage(b.store, tmpRef)
err = destImage.Close()
if err != nil {
return errors.Wrapf(err, "error locating temporary image %q", transports.ImageName(dest))
return errors.Wrapf(err, "error closing new image %q", transports.ImageName(dest))
}
items, err := b.store.ListImageBigData(tmpImg.ID)
image, err := is.Transport.GetStoreImage(b.store, dest)
if err != nil {
return errors.Wrapf(err, "error reading list of named data for image %q", tmpImg.ID)
return errors.Wrapf(err, "error locating just-written image %q", transports.ImageName(dest))
}
// Look up the container's read-write layer.
container, err := b.store.Container(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error reading information about working container %q", b.ContainerID)
}
parentLayer := ""
// Look up the container's source image's layer, if there is a source image.
if container.ImageID != "" {
img, err2 := b.store.Image(container.ImageID)
if err2 != nil {
return errors.Wrapf(err2, "error reading information about working container %q's source image", b.ContainerID)
}
parentLayer = img.TopLayer
}
// Extract the read-write layer's contents.
layerDiff, err := b.store.Diff(parentLayer, container.LayerID)
if err != nil {
return errors.Wrapf(err, "error reading layer from source image %q", transports.ImageName(src))
}
defer layerDiff.Close()
// Write a copy of the layer for the new image to reference.
layer, _, err := b.store.PutLayer("", parentLayer, []string{}, "", false, layerDiff)
if err != nil {
return errors.Wrapf(err, "error creating new read-only layer from container %q", b.ContainerID)
}
// Create a low-level image record that uses the new layer.
image, err := b.store.CreateImage("", []string{}, layer.ID, "", nil)
if err != nil {
err2 := b.store.DeleteLayer(layer.ID)
if err2 != nil {
logrus.Debugf("error removing layer %q: %v", layer, err2)
}
return errors.Wrapf(err, "error creating new low-level image %q", transports.ImageName(dest))
}
logrus.Debugf("created image ID %q", image.ID)
defer func() {
// Add the target name(s) to the new image.
if len(names) > 0 {
err = util.AddImageNames(b.store, image, names)
if err != nil {
_, err2 := b.store.DeleteImage(image.ID, true)
if err2 != nil {
logrus.Debugf("error removing image %q: %v", image.ID, err2)
}
return errors.Wrapf(err, "error assigning names %v to new image", names)
}
}()
// Copy the configuration and manifest, which are big data items, along with whatever else is there.
for _, item := range items {
var data []byte
data, err = b.store.ImageBigData(tmpImg.ID, item)
if err != nil {
return errors.Wrapf(err, "error copying data item %q", item)
}
err = b.store.SetImageBigData(image.ID, item, data)
if err != nil {
return errors.Wrapf(err, "error copying data item %q", item)
}
logrus.Debugf("copied data item %q to %q", item, image.ID)
logrus.Debugf("assigned names %v to image %q", names, image.ID)
}
// Set low-level metadata in the new image so that the image library will accept it as a real image.
err = b.store.SetMetadata(image.ID, "{}")
if err != nil {
return errors.Wrapf(err, "error assigning metadata to new image %q", transports.ImageName(dest))
}
// Move the target name(s) from the temporary image to the new image.
err = util.AddImageNames(b.store, image, names)
if err != nil {
return errors.Wrapf(err, "error assigning names %v to new image", names)
}
logrus.Debugf("assigned names %v to image %q", names, image.ID)
return nil
}
@@ -233,30 +180,35 @@ func (b *Builder) shallowCopy(dest types.ImageReference, src types.ImageReferenc
// configuration, to a new image in the specified location, and if we know how,
// add any additional tags that were specified.
func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error {
policy, err := signature.DefaultPolicy(getSystemContext(options.SignaturePolicyPath))
policy, err := signature.DefaultPolicy(getSystemContext(options.SystemContext, options.SignaturePolicyPath))
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature policy context: %v", err2)
}
}()
// Check if we're keeping everything in local storage. If so, we can take certain shortcuts.
_, destIsStorage := dest.Transport().(is.StoreTransport)
exporting := !destIsStorage
src, err := b.makeContainerImageRef(options.PreferredManifestType, exporting, options.Compression, options.HistoryTimestamp)
src, err := b.makeImageRef(options.PreferredManifestType, exporting, options.Compression, options.HistoryTimestamp)
if err != nil {
return errors.Wrapf(err, "error computing layer digests and building metadata")
}
if exporting {
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext, ""))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}
} else {
// Copy only the most recent layer, the configuration, and the manifest.
err = b.shallowCopy(dest, src, getSystemContext(options.SignaturePolicyPath))
err = b.shallowCopy(dest, src, getSystemContext(options.SystemContext, options.SignaturePolicyPath), options.Compression)
if err != nil {
return errors.Wrapf(err, "error copying layer and metadata")
}
@@ -282,44 +234,27 @@ func (b *Builder) Commit(dest types.ImageReference, options CommitOptions) error
// Push copies the contents of the image to a new location.
func Push(image string, dest types.ImageReference, options PushOptions) error {
systemContext := getSystemContext(options.SignaturePolicyPath)
systemContext := getSystemContext(options.SystemContext, options.SignaturePolicyPath)
policy, err := signature.DefaultPolicy(systemContext)
if err != nil {
return err
return errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return errors.Wrapf(err, "error creating new signature policy context")
}
importOptions := ImportFromImageOptions{
Image: image,
SignaturePolicyPath: options.SignaturePolicyPath,
}
builder, err := importBuilderFromImage(options.Store, importOptions)
if err != nil {
return errors.Wrap(err, "error importing builder information from image")
}
// Look up the image name and its layer.
ref, err := is.Transport.ParseStoreReference(options.Store, image)
// Look up the image.
src, err := is.Transport.ParseStoreReference(options.Store, image)
if err != nil {
return errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err := is.Transport.GetStoreImage(options.Store, ref)
if err != nil {
return errors.Wrapf(err, "error locating image %q", image)
}
// Give the image we're producing the same ancestors as its source image.
builder.FromImage = builder.Docker.ContainerConfig.Image
builder.FromImageID = string(builder.Docker.Parent)
// Prep the layers and manifest for export.
src, err := builder.makeImageImageRef(options.Compression, img.Names, img.TopLayer, nil)
if err != nil {
return errors.Wrapf(err, "error recomputing layer digests and building metadata")
}
// Copy everything.
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter))
err = cp.Image(policyContext, dest, src, getCopyOptions(options.ReportWriter, nil, options.SystemContext, options.ManifestType))
if err != nil {
return errors.Wrapf(err, "error copying layers and metadata")
}
if options.ReportWriter != nil {
fmt.Fprintf(options.ReportWriter, "\n")
}
return nil
}


@@ -7,14 +7,20 @@ import (
"github.com/containers/image/types"
)
func getCopyOptions(reportWriter io.Writer) *cp.Options {
func getCopyOptions(reportWriter io.Writer, sourceSystemContext *types.SystemContext, destinationSystemContext *types.SystemContext, manifestType string) *cp.Options {
return &cp.Options{
ReportWriter: reportWriter,
ReportWriter: reportWriter,
SourceCtx: sourceSystemContext,
DestinationCtx: destinationSystemContext,
ForceManifestMIMEType: manifestType,
}
}
func getSystemContext(signaturePolicyPath string) *types.SystemContext {
func getSystemContext(defaults *types.SystemContext, signaturePolicyPath string) *types.SystemContext {
sc := &types.SystemContext{}
if defaults != nil {
*sc = *defaults
}
if signaturePolicyPath != "" {
sc.SignaturePolicyPath = signaturePolicyPath
}
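The revised getSystemContext above copies a caller-supplied defaults struct before overriding a single field, so the caller's struct is never mutated. A minimal standalone sketch of that copy-then-override pattern, using a hypothetical Ctx type in place of containers/image's types.SystemContext:

```go
package main

import "fmt"

// Ctx is a stand-in for types.SystemContext.
type Ctx struct {
	SignaturePolicyPath string
	DockerCertPath      string
}

// newCtx shallow-copies the optional defaults, then applies the
// override, leaving the caller's struct untouched.
func newCtx(defaults *Ctx, policyPath string) *Ctx {
	sc := &Ctx{}
	if defaults != nil {
		*sc = *defaults // copy all default fields at once
	}
	if policyPath != "" {
		sc.SignaturePolicyPath = policyPath
	}
	return sc
}

func main() {
	d := &Ctx{DockerCertPath: "/etc/containers/certs.d"}
	sc := newCtx(d, "/etc/containers/policy.json")
	fmt.Println(sc.DockerCertPath, sc.SignaturePolicyPath)
	fmt.Println(d.SignaturePolicyPath == "") // defaults remain unmodified
}
```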


@@ -2,7 +2,6 @@ package buildah
import (
"encoding/json"
"fmt"
"path/filepath"
"runtime"
"strings"
@@ -21,8 +20,9 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
if config == nil {
config = &dimage.ContainerConfig
}
dcreated := dimage.Created.UTC()
image := ociv1.Image{
Created: dimage.Created.UTC(),
Created: &dcreated,
Author: dimage.Author,
Architecture: dimage.Architecture,
OS: dimage.OS,
@@ -38,7 +38,7 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
},
RootFS: ociv1.RootFS{
Type: "",
DiffIDs: []string{},
DiffIDs: []digest.Digest{},
},
History: []ociv1.History{},
}
@@ -51,13 +51,12 @@ func makeOCIv1Image(dimage *docker.V2Image) (ociv1.Image, error) {
}
if RootFS.Type == docker.TypeLayers {
image.RootFS.Type = docker.TypeLayers
for _, id := range RootFS.DiffIDs {
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, id.String())
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, RootFS.DiffIDs...)
}
for _, history := range dimage.History {
hcreated := history.Created.UTC()
ohistory := ociv1.History{
Created: history.Created.UTC(),
Created: &hcreated,
CreatedBy: history.CreatedBy,
Author: history.Author,
Comment: history.Comment,
@@ -98,13 +97,7 @@ func makeDockerV2S2Image(oimage *ociv1.Image) (docker.V2Image, error) {
}
if oimage.RootFS.Type == docker.TypeLayers {
image.RootFS.Type = docker.TypeLayers
for _, id := range oimage.RootFS.DiffIDs {
d, err := digest.Parse(id)
if err != nil {
return docker.V2Image{}, err
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, d)
}
image.RootFS.DiffIDs = append(image.RootFS.DiffIDs, oimage.RootFS.DiffIDs...)
}
for _, history := range oimage.History {
dhistory := docker.V2S2History{
@@ -145,23 +138,30 @@ func makeDockerV2S1Image(manifest docker.V2S1Manifest) (docker.V2Image, error) {
}
// Build a filesystem history.
history := []docker.V2S2History{}
lastID := ""
for i := range manifest.History {
h := docker.V2S2History{
Created: time.Now().UTC(),
Author: "",
CreatedBy: "",
Comment: "",
EmptyLayer: false,
}
// Decode the compatibility field.
dcompat := docker.V1Compatibility{}
if err2 := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), &dcompat); err2 == nil {
h.Created = dcompat.Created.UTC()
h.Author = dcompat.Author
h.Comment = dcompat.Comment
if len(dcompat.ContainerConfig.Cmd) > 0 {
h.CreatedBy = fmt.Sprintf("%v", dcompat.ContainerConfig.Cmd)
}
h.EmptyLayer = dcompat.ThrowAway
if err = json.Unmarshal([]byte(manifest.History[i].V1Compatibility), &dcompat); err != nil {
return docker.V2Image{}, errors.Errorf("error parsing image compatibility data (%q) from history", manifest.History[i].V1Compatibility)
}
// Skip this history item if it shares the ID of the last one
// that we saw, since the image library will do the same.
if i > 0 && dcompat.ID == lastID {
continue
}
lastID = dcompat.ID
// Construct a new history item using the recovered information.
createdBy := ""
if len(dcompat.ContainerConfig.Cmd) > 0 {
createdBy = strings.Join(dcompat.ContainerConfig.Cmd, " ")
}
h := docker.V2S2History{
Created: dcompat.Created.UTC(),
Author: dcompat.Author,
CreatedBy: createdBy,
Comment: dcompat.Comment,
EmptyLayer: dcompat.ThrowAway,
}
// Prepend this layer to the list, because a v2s1 format manifest's list is in reverse order
// compared to v2s2, which lists earlier layers before later ones.
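The replacement loop above both drops sequential duplicate history entries and reverses the v2s1 newest-first ordering by prepending. A standalone sketch of the same idea, with a hypothetical entry type standing in for the decoded V1Compatibility record:

```go
package main

import "fmt"

// entry is a stand-in for a decoded V1Compatibility record.
type entry struct {
	ID        string
	CreatedBy string
}

// collapse skips consecutive entries sharing an ID (as the image
// library does) and prepends survivors, turning v2s1's newest-first
// list into the v2s2 oldest-first ordering.
func collapse(in []entry) []entry {
	out := []entry{}
	lastID := ""
	for i := range in {
		if i > 0 && in[i].ID == lastID {
			continue
		}
		lastID = in[i].ID
		out = append([]entry{in[i]}, out...)
	}
	return out
}

func main() {
	h := []entry{{ID: "c"}, {ID: "b"}, {ID: "b"}, {ID: "a"}}
	for _, e := range collapse(h) {
		fmt.Println(e.ID) // oldest first, duplicate "b" collapsed
	}
}
```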
@@ -224,8 +224,8 @@ func (b *Builder) fixupConfig() {
if b.Docker.Created.IsZero() {
b.Docker.Created = now
}
if b.OCIv1.Created.IsZero() {
b.OCIv1.Created = now
if b.OCIv1.Created == nil || b.OCIv1.Created.IsZero() {
b.OCIv1.Created = &now
}
if b.OS() == "" {
b.SetOS(runtime.GOOS)
@@ -559,3 +559,8 @@ func (b *Builder) Domainname() string {
func (b *Builder) SetDomainname(name string) {
b.Docker.Config.Domainname = name
}
// SetDefaultMountsFilePath sets the mounts file path for testing purposes
func (b *Builder) SetDefaultMountsFilePath(path string) {
b.DefaultMountsFilePath = path
}


@@ -168,6 +168,7 @@ return 1
--runroot
--storage-driver
--storage-opt
--default-mounts-file
"
case "$prev" in
@@ -209,6 +210,10 @@ return 1
_buildah_rmi() {
local boolean_options="
--all
-a
--force
-f
--help
-h
"
@@ -225,6 +230,8 @@ return 1
_buildah_rm() {
local boolean_options="
--all
-a
--help
-h
"
@@ -290,9 +297,14 @@ return 1
-D
--quiet
-q
--rm
--tls-verify
"
local options_with_args="
--authfile
--cert-dir
--creds
--signature-policy
--format
-f
@@ -336,19 +348,35 @@ return 1
--pull-always
--quiet
-q
--tls-verify
"
local options_with_args="
--registry
--signature-policy
--add-host
--authfile
--build-arg
--cert-dir
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--creds
-f
--file
--format
--label
-m
--memory
--memory-swap
--runtime
--runtime-flag
--tag
--security-opt
--signature-policy
-t
--file
-f
--build-arg
--format
--tag
--ulimit
"
local all_options="$options_with_args $boolean_options"
@@ -376,12 +404,15 @@ return 1
_buildah_run() {
local boolean_options="
--help
--tty
-h
"
local options_with_args="
--hostname
--runtime
--runtime-flag
--security-opt
--volume
-v
"
@@ -470,9 +501,15 @@ return 1
-D
--quiet
-q
--tls-verify
"
local options_with_args="
--authfile
--cert-dir
--creds
--format
-f
--signature-policy
"
@@ -529,14 +566,20 @@ return 1
local boolean_options="
--help
-h
--json
--quiet
-q
--noheading
-n
--notruncate
-a
--all
"
local options_with_args="
--filter
-f
--format
"
local all_options="$options_with_args $boolean_options"
@@ -552,6 +595,7 @@ return 1
local boolean_options="
--help
-h
--json
--quiet
-q
--noheading
@@ -609,12 +653,27 @@ return 1
--pull-always
--quiet
-q
--tls-verify
"
local options_with_args="
--add-host
--authfile
--cert-dir
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--creds
-m
--memory
--memory-swap
--name
--registry
--signature-policy
--security-opt
--ulimit
"
@@ -628,6 +687,16 @@ return 1
esac
}
_buildah_version() {
local boolean_options="
--help
-h
"
local options_with_args="
"
}
_buildah() {
local previous_extglob_setting=$(shopt -p extglob)
shopt -s extglob
@@ -651,6 +720,7 @@ return 1
tag
umount
unmount
version
)
# These options are valid as global options for all client commands


@@ -21,11 +21,12 @@
# https://github.com/projectatomic/buildah
%global provider_prefix %{provider}.%{provider_tld}/%{project}/%{repo}
%global import_path %{provider_prefix}
%global commit a0a5333b94264d1fb1e072d63bcb98f9e2981b49
%global commit REPLACEWITHCOMMITID
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: buildah
Version: 0.1
# Bump version in buildah.go too
Version: 0.15
Release: 1.git%{shortcommit}%{?dist}
Summary: A command line tool for creating OCI Images
License: ASL 2.0
@@ -41,7 +42,12 @@ BuildRequires: gpgme-devel
BuildRequires: device-mapper-devel
BuildRequires: btrfs-progs-devel
BuildRequires: libassuan-devel
BuildRequires: libseccomp-devel
BuildRequires: glib2-devel
BuildRequires: ostree-devel
BuildRequires: make
Requires: runc >= 1.0.0-6
Requires: container-selinux
Requires: skopeo-containers
Provides: %{repo} = %{version}-%{release}
@@ -67,7 +73,7 @@ popd
mv vendor src
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}
make all
make all GIT_COMMIT=%{shortcommit}
%install
export GOPATH=$(pwd)/_build:$(pwd):%{gopath}
@@ -85,5 +91,157 @@ make DESTDIR=%{buildroot} PREFIX=%{_prefix} install install.completions
%{_datadir}/bash-completion/completions/*
%changelog
* Tue Feb 27 2018 Dan Walsh <dwalsh@redhat.com> 0.15-1
- Fix handling of buildah run command options
* Mon Feb 26 2018 Dan Walsh <dwalsh@redhat.com> 0.14-1
- If commonOpts do not exist, we should return rather than segfault
- Display full error string instead of just status
- Implement --volume and --shm-size for bud and from
- Fix secrets patch for buildah bud
- Fixes the naming issue of blobs and config for the dir transport by removing the .tar extension
* Thu Feb 22 2018 Dan Walsh <dwalsh@redhat.com> 0.13-1
- Vendor in latest containers/storage
- This fixes a large SELinux bug.
- run: do not open /etc/hosts if not needed
- Add the following flags to buildah bud and from
--add-host
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--memory
--memory-swap
--security-opt
--ulimit
* Mon Feb 12 2018 Dan Walsh <dwalsh@redhat.com> 0.12-1
- Added handling for simpler error messages for unknown Dockerfile instructions.
- Change default certs directory to /etc/containers/certs.d
- Vendor in latest containers/image
- Vendor in latest containers/storage
- build-using-dockerfile: set the 'author' field for MAINTAINER
- Return exit code 1 when buildah-rmi fails
- Trim the image reference to just its name before calling getImageName
- Touch up rmi -f usage statement
- Add --format and --filter to buildah containers
- Add --prune,-p option to rmi command
- Add authfile param to commit
- Fix --runtime-flag for buildah run and bud
- format should override quiet for images
- Allow all auth params to work with bud
- Do not overwrite directory permissions on --chown
- Unescape HTML characters output into the terminal
- Fix: setting the container name to the image
- Prompt for un/pwd if not supplied with --creds
- Make bud be really quiet
- Return a better error message when failed to resolve an image
- Update auth tests and fix bud man page
* Tue Jan 16 2018 Dan Walsh <dwalsh@redhat.com> 0.11-1
- Add --all to remove containers
- Add --all functionality to rmi
- Show ctrid when doing rm -all
- Ignore sequential duplicate layers when reading v2s1
- Lots of minor bug fixes
- Vendor in latest containers/image and containers/storage
* Sat Dec 23 2017 Dan Walsh <dwalsh@redhat.com> 0.10-1
- Display Config and Manifest as strings
- Bump containers/image
- Use configured registries to resolve image names
- Update to work with newer image library
- Add --chown option to add/copy commands
* Sat Dec 2 2017 Dan Walsh <dwalsh@redhat.com> 0.9-1
- Allow push to use the image id
- Make sure builtin volumes have the correct label
* Thu Nov 16 2017 Dan Walsh <dwalsh@redhat.com> 0.8-1
- Buildah bud was failing on SELinux machines, this fixes this
- Block access to certain kernel file systems inside of the container
* Thu Nov 16 2017 Dan Walsh <dwalsh@redhat.com> 0.7-1
- Ignore errors when trying to read containers buildah.json for loading SELinux reservations
- Use credentials from kpod login for buildah
* Wed Nov 15 2017 Dan Walsh <dwalsh@redhat.com> 0.6-1
- Adds support for converting manifest types when using the dir transport
- Rework how we do UID resolution in images
- Bump github.com/vbatts/tar-split
- Set option.terminal appropriately in run
* Wed Nov 08 2017 Dan Walsh <dwalsh@redhat.com> 0.5-2
- Bump github.com/vbatts/tar-split
- Fixes CVE that could allow a container image to cause a DoS
* Tue Nov 07 2017 Dan Walsh <dwalsh@redhat.com> 0.5-1
- Add secrets patch to buildah
- Add proper SELinux labeling to buildah run
- Add tls-verify to bud command
- Make filtering by date use the image's date
- images: don't list unnamed images twice
- Fix timeout issue
- Add further tty verbiage to buildah run
- Make inspect try an image on failure if type not specified
- Add support for `buildah run --hostname`
- Tons of bug fixes and code cleanup
* Fri Sep 22 2017 Dan Walsh <dwalsh@redhat.com> 0.4-1.git9cbccf88c
- Add default transport to push if not provided
- Avoid trying to print a nil ImageReference
- Add authentication to commit and push
- Add information on buildah from man page on transports
- Remove --transport flag
- Run: do not complain about missing volume locations
- Add credentials to buildah from
- Remove export command
- Run(): create the right working directory
- Improve "from" behavior with unnamed references
- Avoid parsing image metadata for dates and layers
- Read the image's creation date from public API
- Bump containers/storage and containers/image
- Don't panic if an image's ID can't be parsed
- Turn on --enable-gc when running gometalinter
- rmi: handle truncated image IDs
* Tue Aug 15 2017 Josh Boyer <jwboyer@redhat.com> - 0.3-5.gitb9b2a8a
- Build for s390x as well
* Wed Aug 02 2017 Fedora Release Engineering <releng@fedoraproject.org> - 0.3-4.gitb9b2a8a
- Rebuilt for https://fedoraproject.org/wiki/Fedora_27_Binutils_Mass_Rebuild
* Wed Jul 26 2017 Fedora Release Engineering <releng@fedoraproject.org> - 0.3-3.gitb9b2a8a
- Rebuilt for https://fedoraproject.org/wiki/Fedora_27_Mass_Rebuild
* Thu Jul 20 2017 Dan Walsh <dwalsh@redhat.com> 0.3-2.gitb9b2a8a7e
- Bump for inclusion of OCI 1.0 Runtime and Image Spec
* Tue Jul 18 2017 Dan Walsh <dwalsh@redhat.com> 0.2.0-1.gitac2aad6
- buildah run: Add support for -- ending options parsing
- buildah Add/Copy support for glob syntax
- buildah commit: Add flag to remove containers on commit
- buildah push: Improve man page and help information
- buildah run: add a way to disable PTY allocation
- Buildah docs: clarify --runtime-flag of run command
- Update to match newer storage and image-spec APIs
- Update containers/storage and containers/image versions
- buildah export: add support
- buildah images: update commands
- buildah images: Add JSON output option
- buildah rmi: update commands
- buildah containers: Add JSON output option
- buildah version: add command
- buildah run: Handle run without an explicit command correctly
- Ensure volume points get created, and with perms
- buildah containers: Add a -a/--all option
* Wed Jun 14 2017 Dan Walsh <dwalsh@redhat.com> 0.1.0-2.git597d2ab9
- Release Candidate 1
- All features have now been implemented.
* Fri Apr 14 2017 Dan Walsh <dwalsh@redhat.com> 0.0.1-1.git7a0a5333
- First package for Fedora


@@ -1,6 +1,7 @@
package buildah
import (
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
@@ -13,5 +14,5 @@ func (b *Builder) Delete() error {
b.MountPoint = ""
b.Container = ""
b.ContainerID = ""
return nil
return label.ReleaseLabel(b.ProcessLabel)
}


@@ -13,10 +13,18 @@ appears to be an archive, its contents are extracted and added instead of the
archive file itself. If a local directory is specified as a source, its
*contents* are copied to the destination.
## OPTIONS
**--chown** *owner*:*group*
Sets the user and group ownership of the destination content.
## EXAMPLE
buildah add containerID '/myapp/app.conf' '/myapp/app.conf'
buildah add --chown myuser:mygroup containerID '/myapp/app.conf' '/myapp/app.conf'
buildah add containerID '/home/myuser/myproject.go'
buildah add containerID '/home/myuser/myfiles.tar' '/tmp'


@@ -14,6 +14,102 @@ to a temporary location.
## OPTIONS
**--add-host**=[]
Add a custom host-to-IP mapping (host:ip)
Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option can be set multiple times.
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--build-arg** *arg=value*
Specifies a build argument and its value, which will be interpolated in
instructions read from the Dockerfiles in the same way that environment
variables are, but which will not be added to environment variable list in the
resulting image's configuration.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--cgroup-parent**=""
Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
**--cpu-period**=*0*
Limit the CPU CFS (Completely Fair Scheduler) period
Limit the container's CPU usage. This flag tells the kernel to restrict the container's CPU usage to the period you specify.
**--cpu-quota**=*0*
Limit the CPU CFS (Completely Fair Scheduler) quota
Limit the container's CPU usage. By default, containers run with the full
CPU resource. This flag tells the kernel to restrict the container's CPU usage
to the quota you specify.
**--cpu-shares**=*0*
CPU shares (relative weight)
By default, all containers get the same proportion of CPU cycles. This proportion
can be modified by changing the container's CPU share weighting relative
to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **--cpu-shares**
flag to set the weighting to 2 or higher.
The proportion will only apply when CPU-intensive processes are running.
When tasks in one container are idle, other containers can use the
left-over CPU time. The actual amount of CPU time will vary depending on
the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and
two others have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.5%, 16.5% and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed over all CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
**{C1}** with **-c=1024** running two processes, this can result in the following
division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2
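The share arithmetic in the examples above can be sketched as a small helper. This is illustrative only: it assumes every container is fully CPU-bound, and in reality the kernel's CFS scheduler does the accounting:

```go
package main

import "fmt"

// proportions returns each container's percentage of total CPU time
// when all containers contend for the CPU, given their --cpu-shares
// weights.
func proportions(shares []int) []float64 {
	total := 0
	for _, s := range shares {
		total += s
	}
	out := make([]float64, len(shares))
	for i, s := range shares {
		out[i] = 100 * float64(s) / float64(total)
	}
	return out
}

func main() {
	fmt.Println(proportions([]int{1024, 512, 512})) // prints [50 25 25]
	// Adding a fourth container with 1024 shares dilutes everyone:
	fmt.Println(proportions([]int{1024, 512, 512, 1024}))
}
```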
**--cpuset-cpus**=""
CPUs in which to allow execution (0-3, 0,1)
**--cpuset-mems**=""
Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
If you have four memory nodes on your system (0-3), use `--cpuset-mems=0,1`
then processes in your container will only use memory from the first
two memory nodes.
**--creds** *creds*
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**-f, --file** *Dockerfile*
Specifies a Dockerfile which contains instructions for building the image,
@@ -25,6 +121,33 @@ If a build context is not specified, and at least one Dockerfile is a
local file, the directory in which it resides will be used as the build
context.
**--format**
Control the format for the built image's manifest and configuration data.
Recognized formats include *oci* (OCI image-spec v1.0, the default) and
*docker* (version 2, using schema format 2 for the manifest).
**-m**, **--memory**=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Allows you to constrain the memory available to a container. If the host
supports swap memory, then the **-m** memory setting can be larger than physical
RAM. If a limit of 0 is specified (not using **-m**), the container's memory is
not limited. The actual limit may be rounded up to a multiple of the operating
system's page size (when unlimited, the effective value is extremely large: millions of trillions of bytes).
**--memory-swap**="LIMIT"
A limit value equal to memory plus swap. Must be used with the **-m**
(**--memory**) flag. The swap `LIMIT` should always be larger than **-m**
(**--memory**) value. By default, the swap `LIMIT` will be set to double
the value of --memory.
The format of `LIMIT` is `<number>[<unit>]`. Unit can be `b` (bytes),
`k` (kilobytes), `m` (megabytes), or `g` (gigabytes). If you don't specify a
unit, `b` is used. Set LIMIT to `-1` to enable unlimited swap.
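A sketch of how a `LIMIT` string of this form might be parsed. The parseLimit helper is hypothetical, not buildah's actual parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLimit converts a "<number>[<unit>]" string, as accepted by
// --memory and --memory-swap, into a byte count. "-1" passes
// through, meaning unlimited swap.
func parseLimit(s string) (int64, error) {
	if s == "-1" {
		return -1, nil
	}
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "k"):
		mult, s = 1024, strings.TrimSuffix(s, "k")
	case strings.HasSuffix(s, "m"):
		mult, s = 1024*1024, strings.TrimSuffix(s, "m")
	case strings.HasSuffix(s, "g"):
		mult, s = 1024*1024*1024, strings.TrimSuffix(s, "g")
	case strings.HasSuffix(s, "b"):
		s = strings.TrimSuffix(s, "b") // bytes, the default
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid limit %q: %v", s, err)
	}
	return n * mult, nil
}

func main() {
	for _, in := range []string{"512m", "2g", "-1"} {
		n, _ := parseLimit(in)
		fmt.Println(in, "=", n, "bytes")
	}
}
```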
**--pull**
Pull the image if it is not present. If this flag is disabled (with
@@ -35,23 +158,11 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**-q, --quiet**
Suppress output messages which indicate which instruction is being processed,
and of progress when pulling images from a registry, and when writing the
output image.
**--registry** *registry*
A prefix to prepend to the image name in order to pull the image. Default
value is "docker://".
**--build-arg** *arg=value*
Specifies a build argument and its value, which will be interpolated in
instructions read from the Dockerfiles in the same way that environment
variables are, but which will not be added to the environment variable list in the
resulting image's configuration.
**--runtime** *path*
@@ -60,24 +171,118 @@ commands specified by the **RUN** instruction.
**--runtime-flag** *flag*
Adds global flags for the container runtime. To list the supported flags, please
consult the manpages of the selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`).
Note: Do not pass the leading `--` to the flag. To pass the runc flag `--log-format json`
to buildah bud, the option given would be `--runtime-flag log-format=json`.
**--security-opt**=[]
Security Options
"label=user:USER" : Set the label user for the container
"label=role:ROLE" : Set the label role for the container
"label=type:TYPE" : Set the label type for the container
"label=level:LEVEL" : Set the label level for the container
"label=disable" : Turn off label confinement for the container
"no-new-privileges" : Not supported
"seccomp=unconfined" : Turn off seccomp confinement for the container
"seccomp=profile.json : White listed syscalls seccomp Json file to be used as a seccomp filter
"apparmor=unconfined" : Turn off apparmor confinement for the container
"apparmor=your-profile" : Set the apparmor confinement profile for the container
**--shm-size**=""
Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`.
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**-t, --tag** *imageName*
Specifies the name which will be assigned to the resulting image if the build
process completes successfully.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true).
**--ulimit**=[]
Ulimit options
**-v**|**--volume**[=*[HOST-DIR:CONTAINER-DIR[:OPTIONS]]*]
Create a bind mount. If you specify ` -v /HOST-DIR:/CONTAINER-DIR`, buildah
bind mounts `/HOST-DIR` from the host into `/CONTAINER-DIR` in the buildah
container. The `OPTIONS` are a comma-delimited list and can be:
* [rw|ro]
* [z|Z]
* [`[r]shared`|`[r]slave`|`[r]private`]
The `CONTAINER-DIR` must be an absolute path such as `/src/docs`. The `HOST-DIR`
must be an absolute path as well. Buildah bind-mounts the `HOST-DIR` to the
path you specify. For example, if you supply the `/foo` value, buildah creates a bind mount.
You can specify multiple **-v** options to mount one or more mounts to a
container.
You can add the `:ro` or `:rw` suffix to a volume to mount it in read-only or
read-write mode, respectively. By default, volumes are mounted read-write.
See examples.
Labeling systems like SELinux require that proper labels are placed on volume
content mounted into a container. Without a label, the security system might
prevent the processes running inside the container from using the content. By
default, buildah does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes
`:z` or `:Z` to the volume mount. These suffixes tell buildah to relabel file
objects on the shared volumes. The `z` option tells buildah that two containers
share the volume content. As a result, buildah labels the content with a shared
content label. Shared volume labels allow all containers to read/write content.
The `Z` option tells buildah to label the content with a private unshared label.
Only the current container can use a private volume.
By default, bind-mounted volumes are `private`. That means any mounts done
inside the container will not be visible on the host, and vice versa. This behavior can
be changed by specifying a volume mount propagation property.
When the mount propagation policy is set to `shared`, any mounts completed inside
the container on that volume will be visible to both the host and the container. When
the mount propagation policy is set to `slave`, one-way mount propagation is enabled,
and any mounts completed on the host for that volume will be visible only inside of the container.
To control the mount propagation property of a volume, use the `:[r]shared`,
`:[r]slave` or `:[r]private` propagation flag. The propagation property can
be specified only for bind-mounted volumes and not for internal volumes or
named volumes. For mount propagation to work, the source mount point (the mount
point where the source directory is mounted) has to have the right propagation
properties. For shared volumes, the source mount point has to be shared. And for
slave volumes, the source mount has to be either shared or slave.
Use `df <source-dir>` to determine the source mount, and then use
`findmnt -o TARGET,PROPAGATION <source-mount-dir>` to determine the propagation
properties of the source mount. If the `findmnt` utility is not available, the
source mount point can be determined by looking at the mount entry in
`/proc/self/mountinfo`. Look at the `optional fields` and see if any propagation
properties are specified. `shared:X` means the mount is `shared`, `master:X`
means the mount is `slave`, and if nothing is there the mount is `private`.
To change the propagation properties of a mount point, use the `mount` command.
For example, to bind mount the source directory `/foo`, do
`mount --bind /foo /foo` and `mount --make-private --make-shared /foo`. This
will convert /foo into a `shared` mount point. The propagation properties of the
source mount can be changed directly. For instance, if `/` is the source mount
for `/foo`, then use `mount --make-shared /` to convert `/` into a `shared` mount.
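The mountinfo lookup described above can be sketched as a small shell helper (not part of buildah; the function name `propagation_of` is hypothetical) that reads the optional fields of `/proc/self/mountinfo` for a given mount point:

```shell
# Hypothetical helper: print the propagation mode of a given mount point by
# scanning the optional fields (7th field onward, terminated by "-") of
# /proc/self/mountinfo. No optional field means the mount is private.
propagation_of() {
    awk -v mp="$1" '$5 == mp {
        mode = "private"
        for (i = 7; $i != "-"; i++) {
            if ($i ~ /^shared:/) mode = "shared"
            else if ($i ~ /^master:/) mode = "slave"
        }
        print mode; exit
    }' /proc/self/mountinfo
}

propagation_of /      # prints shared, slave, or private for the root mount
```

For an arbitrary directory, first resolve its source mount point with `df <dir>` or `findmnt -T <dir>` as the text describes, then pass that mount point to the helper.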
## EXAMPLE
@@ -89,7 +294,21 @@ buildah bud -f Dockerfile.simple -f Dockerfile.notsosimple
buildah bud -t imageName .
buildah bud -t imageName -f Dockerfile.simple
buildah bud --tls-verify=true -t imageName -f Dockerfile.simple
buildah bud --tls-verify=false -t imageName .
buildah bud --runtime-flag log-format=json .
buildah bud --runtime-flag debug .
buildah bud --authfile /tmp/auths/myauths.json --cert-dir ~/auth --tls-verify=true --creds=username:password -t imageName -f Dockerfile.simple
buildah bud --memory 40m --cpu-period 10000 --cpu-quota 50000 --ulimit nofile=1024:1028 -t imageName .
buildah bud --security-opt label=level:s0:c100,c200 --cgroup-parent /path/to/cgroup/parent -t imageName .
buildah bud --volume /home/test:/myvol:ro,Z -t imageName .
## SEE ALSO
buildah(1), podman-login(1), docker-login(1)


@@ -13,19 +13,26 @@ specified, an ID is assigned, but no name is assigned to the image.
## OPTIONS
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--creds** *creds*
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**--disable-compression, -D**
Don't compress filesystem layers when building the image.
**--format**
@@ -33,15 +40,47 @@ Control the format for the image manifest and configuration data. Recognized
formats include *oci* (OCI image-spec v1.0, the default) and *docker* (version
2, using schema format 2 for the manifest).
**--quiet**
When writing the output image, suppress progress output.
**--rm**
Remove the container and its content after committing it to an image.
Default leaves the container and its content in place.
**--signature-policy**
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
This example saves an image based on the container.
`buildah commit containerID`
This example saves an image named newImageName based on the container.
`buildah commit --rm containerID newImageName`
This example saves an image based on the container disabling compression.
`buildah commit --disable-compression containerID`
This example saves an image named newImageName based on the container disabling compression.
`buildah commit --disable-compression containerID newImageName`
This example commits the container to the image on the local registry while turning off tls verification.
`buildah commit --tls-verify=false containerID docker://localhost:5000/imageId`
This example commits the container to the image on the local registry using credentials and certificates for authentication.
`buildah commit --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
This example commits the container to the image on the local registry using credentials from the /tmp/auths/myauths.json file and certificates for authentication.
`buildah commit --authfile /tmp/auths/myauths.json --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
## SEE ALSO
buildah(1)


@@ -7,11 +7,47 @@ buildah containers - List the working containers and their base images.
**buildah** **containers** [*options* [...]]
## DESCRIPTION
Lists containers which appear to be Buildah working containers, their names and
IDs, and the names and IDs of the images from which they were initialized.
## OPTIONS
**--all, -a**
List information about all containers, including those which were not created
by and are not being used by Buildah. Containers created by Buildah are
denoted with an '*' in the 'BUILDER' column.
**--filter, -f**
Filter output based on conditions provided.
Valid filters are listed below:
| **Filter** | **Description** |
| --------------- | ------------------------------------------------------------------- |
| id | [ID] Container's ID |
| name | [Name] Container's name |
| ancestor | [ImageName] Image or descendant used to create container |
**--format**
Pretty-print containers using a Go template.
Valid placeholders for the Go template are listed below:
| **Placeholder** | **Description** |
| --------------- | -----------------------------------------|
| .ContainerID | Container ID |
| .Builder | Whether container was created by buildah |
| .ImageID | Image ID |
| .ImageName | Image name |
| .ContainerName | Container name |
**--json**
Output in JSON format.
**--noheading, -n**
Omit the table headings from the listing of containers.
@@ -27,10 +63,55 @@ Displays only the container IDs.
## EXAMPLE
buildah containers
```
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
29bdb522fc62 * 3fd9065eaf02 docker.io/library/alpine:latest alpine-working-container
c6b04237ac8e * f9b6f7f7b9d3 docker.io/library/busybox:latest busybox-working-container
```
buildah containers --quiet
```
29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0
c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb
```
buildah containers -q --noheading --notruncate
```
29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0
c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb
```
buildah containers --json
```
[
{
"id": "29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0",
"builder": true,
"imageid": "3fd9065eaf02feaf94d68376da52541925650b81698c53c6824d92ff63f98353",
"imagename": "docker.io/library/alpine:latest",
"containername": "alpine-working-container"
},
{
"id": "c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb",
"builder": true,
"imageid": "f9b6f7f7b9d34113f66e16a9da3e921a580937aec98da344b852ca540aaa2242",
"imagename": "docker.io/library/busybox:latest",
"containername": "busybox-working-container"
}
]
```
buildah containers --format "{{.ContainerID}} {{.ContainerName}}"
```
3fbeaa87e583ee7a3e6787b2d3af961ef21946a0c01a08938e4f52d53cce4c04 myalpine-working-container
fbfd3505376ee639c3ed50f9d32b78445cd59198a1dfcacf2e7958cda2516d5c ubuntu-working-container
```
buildah containers --filter ancestor=ubuntu
```
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
fbfd3505376e * 0ff04b2e7b63 docker.io/library/ubuntu:latest ubuntu-working-container
```
## SEE ALSO
buildah(1)


@@ -11,10 +11,18 @@ Copies the contents of a file, URL, or a directory to a container's working
directory or a specified location in the container. If a local directory is
specified as a source, its *contents* are copied to the destination.
## OPTIONS
**--chown** *owner*:*group*
Sets the user and group ownership of the destination content.
## EXAMPLE
buildah copy containerID '/myapp/app.conf' '/myapp/app.conf'
buildah copy --chown myuser:mygroup containerID '/myapp/app.conf' '/myapp/app.conf'
buildah copy containerID '/home/myuser/myproject.go'
buildah copy containerID '/home/myuser/myfiles.tar' '/tmp'


@@ -8,13 +8,144 @@ buildah from - Creates a new working container, either from scratch or using a s
## DESCRIPTION
Creates a working container based upon the specified image name. If the
supplied image name is "scratch" a new empty container is created. Image names
use a "transport":"details" format.
Multiple transports are supported:
**dir:**_path_
An existing local directory _path_ retrieving the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_ (Default)
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_
An image is retrieved as a `docker load` formatted file.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
## RETURN VALUE
The container ID of the container that was created. On error, -1 is returned and errno is set.
## OPTIONS
**--add-host**=[]
Add a custom host-to-IP mapping (host:ip)
Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option can be set multiple times.
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--cgroup-parent**=""
Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
**--cpu-period**=*0*
Limit the CPU CFS (Completely Fair Scheduler) period
Limit the container's CPU usage. This flag tells the kernel to restrict the container's CPU usage to the period you specify.
**--cpu-quota**=*0*
Limit the CPU CFS (Completely Fair Scheduler) quota
Limit the container's CPU usage. By default, containers run with the full
CPU resource. This flag tells the kernel to restrict the container's CPU usage
to the quota you specify.
**--cpu-shares**=*0*
CPU shares (relative weight)
By default, all containers get the same proportion of CPU cycles. This proportion
can be modified by changing the container's CPU share weighting relative
to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **--cpu-shares**
flag to set the weighting to 2 or higher.
The proportion will only apply when CPU-intensive processes are running.
When tasks in one container are idle, other containers can use the
left-over CPU time. The actual amount of CPU time will vary depending on
the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and
two others have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.5%, 16.5% and 33% of the CPU.
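The arithmetic in the example above can be checked with plain shell (not part of buildah; `share_pct` is a hypothetical helper): a container's slice of contended CPU time is its share weight divided by the sum of all weights.

```shell
# Hypothetical helper: integer percentage of contended CPU time for a
# container, given its cpu-shares weight and the total of all weights.
share_pct() {   # share_pct MY_SHARES TOTAL_SHARES
    echo $(( 100 * $1 / $2 ))
}

share_pct 1024 2048   # three containers (1024+512+512): first gets 50
share_pct 512  2048   # each 512-share container gets 25
share_pct 1024 3072   # add a fourth 1024-share container: first drops to 33
```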
On a multi-core system, the shares of CPU time are distributed over all CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
**{C1}** with **-c=1024** running two processes, this can result in the following
division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2
**--cpuset-cpus**=""
CPUs in which to allow execution (0-3, 0,1)
**--cpuset-mems**=""
Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
If you have four memory nodes on your system (0-3), use `--cpuset-mems=0,1`;
then processes in your container will only use memory from the first
two memory nodes.
**--creds** *creds*
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**-m**, **--memory**=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Allows you to constrain the memory available to a container. If the host
supports swap memory, then the **-m** memory setting can be larger than physical
RAM. If a limit of 0 is specified (not using **-m**), the container's memory is
not limited. The actual limit may be rounded up to a multiple of the operating
system's page size (in the unlimited case the reported value can be very large,
on the order of millions of trillions of bytes).
**--memory-swap**="LIMIT"
A limit value equal to memory plus swap. Must be used with the **-m**
(**--memory**) flag. The swap `LIMIT` should always be larger than **-m**
(**--memory**) value. By default, the swap `LIMIT` will be set to double
the value of --memory.
The format of `LIMIT` is `<number>[<unit>]`. Unit can be `b` (bytes),
`k` (kilobytes), `m` (megabytes), or `g` (gigabytes). If you don't specify a
unit, `b` is used. Set LIMIT to `-1` to enable unlimited swap.
**--name** *name*
A *name* for the working container
@@ -29,10 +160,32 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--quiet**
If an image needs to be pulled from the registry, suppress progress output.
**--security-opt**=[]
Security Options
"label=user:USER" : Set the label user for the container
"label=role:ROLE" : Set the label role for the container
"label=type:TYPE" : Set the label type for the container
"label=level:LEVEL" : Set the label level for the container
"label=disable" : Turn off label confinement for the container
"no-new-privileges" : Not supported
"seccomp=unconfined" : Turn off seccomp confinement for the container
"seccomp=profile.json : White listed syscalls seccomp Json file to be used as a seccomp filter
"apparmor=unconfined" : Turn off apparmor confinement for the container
"apparmor=your-profile" : Set the apparmor confinement profile for the container
**--shm-size**=""
Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`.
**--signature-policy** *signaturepolicy*
@@ -40,19 +193,100 @@ Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--ulimit**=[]
Ulimit options
**-v**|**--volume**[=*[HOST-DIR:CONTAINER-DIR[:OPTIONS]]*]
Create a bind mount. If you specify ` -v /HOST-DIR:/CONTAINER-DIR`, buildah
bind mounts `/HOST-DIR` from the host into `/CONTAINER-DIR` in the buildah
container. The `OPTIONS` are a comma-delimited list and can be:
* [rw|ro]
* [z|Z]
* [`[r]shared`|`[r]slave`|`[r]private`]
The `CONTAINER-DIR` must be an absolute path such as `/src/docs`. The `HOST-DIR`
must be an absolute path as well. Buildah bind-mounts the `HOST-DIR` to the
path you specify. For example, if you supply the `/foo` value, buildah creates a bind mount.
You can specify multiple **-v** options to mount one or more mounts to a
container.
You can add the `:ro` or `:rw` suffix to a volume to mount it in read-only or
read-write mode, respectively. By default, volumes are mounted read-write.
See examples.
Labeling systems like SELinux require that proper labels are placed on volume
content mounted into a container. Without a label, the security system might
prevent the processes running inside the container from using the content. By
default, buildah does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes
`:z` or `:Z` to the volume mount. These suffixes tell buildah to relabel file
objects on the shared volumes. The `z` option tells buildah that two containers
share the volume content. As a result, buildah labels the content with a shared
content label. Shared volume labels allow all containers to read/write content.
The `Z` option tells buildah to label the content with a private unshared label.
Only the current container can use a private volume.
By default, bind-mounted volumes are `private`. That means any mounts done
inside the container will not be visible on the host, and vice versa. This behavior can
be changed by specifying a volume mount propagation property.
When the mount propagation policy is set to `shared`, any mounts completed inside
the container on that volume will be visible to both the host and the container. When
the mount propagation policy is set to `slave`, one-way mount propagation is enabled,
and any mounts completed on the host for that volume will be visible only inside of the container.
To control the mount propagation property of a volume, use the `:[r]shared`,
`:[r]slave` or `:[r]private` propagation flag. The propagation property can
be specified only for bind-mounted volumes and not for internal volumes or
named volumes. For mount propagation to work, the source mount point (the mount
point where the source directory is mounted) has to have the right propagation
properties. For shared volumes, the source mount point has to be shared. And for
slave volumes, the source mount has to be either shared or slave.
Use `df <source-dir>` to determine the source mount, and then use
`findmnt -o TARGET,PROPAGATION <source-mount-dir>` to determine the propagation
properties of the source mount. If the `findmnt` utility is not available, the
source mount point can be determined by looking at the mount entry in
`/proc/self/mountinfo`. Look at the `optional fields` and see if any propagation
properties are specified. `shared:X` means the mount is `shared`, `master:X`
means the mount is `slave`, and if nothing is there the mount is `private`.
To change the propagation properties of a mount point, use the `mount` command.
For example, to bind mount the source directory `/foo`, do
`mount --bind /foo /foo` and `mount --make-private --make-shared /foo`. This
will convert /foo into a `shared` mount point. The propagation properties of the
source mount can be changed directly. For instance, if `/` is the source mount
for `/foo`, then use `mount --make-shared /` to convert `/` into a `shared` mount.
## EXAMPLE
buildah from imagename --pull
buildah from docker://myregistry.example.com/imagename --pull
buildah from imagename --signature-policy /etc/containers/policy.json
buildah from docker://myregistry.example.com/imagename --pull-always --name "mycontainer"
buildah from myregistry/myrepository/imagename:imagetag --tls-verify=false
buildah from myregistry/myrepository/imagename:imagetag --creds=myusername:mypassword --cert-dir ~/auth
buildah from myregistry/myrepository/imagename:imagetag --authfile=/tmp/auths/myauths.json
buildah from --memory 40m --cpu-shares 2 --cpuset-cpus 0,2 --security-opt label=level:s0:c100,c200 myregistry/myrepository/imagename:imagetag
buildah from --ulimit nofile=1024:1028 --cgroup-parent /path/to/cgroup/parent myregistry/myrepository/imagename:imagetag
buildah from --volume /home/test:/myvol:ro,Z myregistry/myrepository/imagename:imagetag
## SEE ALSO
buildah(1), podman-login(1), docker-login(1)


@@ -11,25 +11,46 @@ Displays locally stored images, their names, and their IDs.
## OPTIONS
**--digests**
Show the image digests.
**--filter, -f=[]**
Filter output based on conditions provided (default []). Valid
keywords are 'dangling', 'label', 'before' and 'since'.
**--format="TEMPLATE"**
Pretty-print images using a Go template.
**--json**
Display the output in JSON format.
**--noheading, -n**
Omit the table headings from the listing of images.
**--notruncate**
Do not truncate output.
**--quiet, -q**
Displays only the image IDs.
## EXAMPLE
buildah images
buildah images --json
buildah images --quiet
buildah images -q --noheading --notruncate
buildah images --filter dangling=true
## SEE ALSO
buildah(1)


@@ -7,7 +7,7 @@ buildah inspect - Display information about working containers or images.
**buildah** **inspect** [*options* [...] --] **ID**
## DESCRIPTION
Prints the low-level information on Buildah object(s) (e.g. container, images) identified by name or ID. By default, this will render all results in a
JSON array. If the container and image have the same name, this will return container JSON for unspecified type. If a format is specified,
the given template will be executed for each result.
@@ -19,7 +19,7 @@ Use *template* as a Go template when formatting the output.
Users of this option should be familiar with the [*text/template*
package](https://golang.org/pkg/text/template/) in the Go standard library, and
the internals of Buildah's implementation.
**--type** *container* | *image*


@@ -12,7 +12,7 @@ buildah mount - Mount a working container's root filesystem.
Mounts the specified container's root file system in a location which can be
accessed from the host, and returns its location.
If the mount command is invoked without any arguments, the tool will list all of the
currently mounted containers.
## RETURN VALUE


@@ -10,29 +10,101 @@ buildah push - Push an image from local storage to elsewhere.
Pushes an image from local storage to a specified destination, decompressing
and recompressing layers as needed.
## imageID
Image stored in local container/storage
## DESTINATION
The DESTINATION is a location to store container images.
The image "DESTINATION" uses a "transport":"details" format.
Multiple transports are supported:
**dir:**_path_
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
**docker-daemon:**_docker-reference_
An image _docker-reference_ stored in the docker daemon internal storage. _docker-reference_ must contain either a tag or a digest. Alternatively, when reading images, the format can also be docker-daemon:algo:digest (an image ID).
**oci:**_path_**:**_tag_
An image _tag_ in a directory compliant with "Open Container Image Layout Specification" at _path_.
**ostree:**_image_[**@**_/absolute/repo/path_]
An image in local OSTree repository. _/absolute/repo/path_ defaults to _/ostree/repo_.
## OPTIONS
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--creds** *creds*
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
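A sketch of how a value in this form splits into its two parts (`split_creds` is a hypothetical helper, shown only to make the [username[:password]] syntax concrete):

```shell
# Split "username[:password]" at the first colon; the password part
# is optional and may itself be empty.
split_creds() {
  local creds=$1
  local username=${creds%%:*}
  local password=
  case $creds in
    *:*) password=${creds#*:} ;;
  esac
  printf 'user=%s pass=%s\n' "$username" "$password"
}

split_creds "myuser:secret"   # user=myuser pass=secret
split_creds "myuser"          # user=myuser pass=
```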
**--disable-compression, -D**
Don't compress copies of filesystem layers which will be pushed.
**--format, -f**
Manifest Type (oci, v2s1, or v2s2) to use when saving image to directory using the 'dir:' transport (default is manifest type of source)
**--quiet**
When writing the output image, suppress progress output.
**--signature-policy**
Pathname of a signature policy file to use. It is not recommended that this
option be used, as the default behavior of using the system-wide default policy
(frequently */etc/containers/policy.json*) is most often preferred.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
## EXAMPLE
This example pushes the imageID image to a local directory in docker format.
`# buildah push imageID dir:/path/to/image`
This example pushes the imageID image to a local directory in oci format.
`# buildah push imageID oci:/path/to/layout`
This example pushes the imageID image to a container registry named registry.example.com.
`# buildah push imageID docker://registry.example.com/repository:tag`
This example pushes the imageID image to a private container registry named registry.example.com, with authentication from /tmp/auths/myauths.json.
`# buildah push --authfile /tmp/auths/myauths.json imageID docker://registry.example.com/repository:tag`
This example pushes the imageID image into the local docker container store.
`# buildah push imageID docker-daemon:image:tag`
This example pushes the imageID image to the registry on the localhost while turning off TLS verification.
`# buildah push --tls-verify=false imageID docker://localhost:5000/my-imageID`
This example pushes the imageID image to the registry on the localhost using credentials and certificates for authentication.
`# buildah push --cert-dir ~/auth --tls-verify=true --creds=username:password imageID docker://localhost:5000/my-imageID`
## SEE ALSO
buildah(1), podman-login(1), docker-login(1)


@@ -9,11 +9,19 @@ buildah rm - Removes one or more working containers.
## DESCRIPTION
Removes one or more working containers, unmounting them if necessary.
## OPTIONS
**--all, -a**
All Buildah containers will be removed. Buildah containers are denoted with an '*' in the 'BUILDER' column listed by the command 'buildah containers'.
## EXAMPLE
buildah rm containerID
buildah rm containerID1 containerID2 containerID3
buildah rm --all
## SEE ALSO
buildah(1)


@@ -9,10 +9,37 @@ buildah rmi - Removes one or more images.
## DESCRIPTION
Removes one or more locally stored images.
## LIMITATIONS
If the image was pushed to a directory path using the 'dir:' transport
the rmi command can not remove the image. Instead standard file system
commands should be used.
## OPTIONS
**--all, -a**
All local images that do not have containers using them will be removed from the system.
**--prune, -p**
All local images that do not have a tag and do not have a child image pointing to them will be removed from the system.
**--force, -f**
This option will cause Buildah to remove all containers that are using the image before removing the image from the system.
## EXAMPLE
buildah rmi imageID
buildah rmi --all
buildah rmi --all --force
buildah rmi --prune
buildah rmi --force imageID
buildah rmi imageID1 imageID2 imageID3
## SEE ALSO


@@ -10,9 +10,12 @@ buildah run - Run a command inside of the container.
Launches a container and runs the specified command in that container using the
container's root filesystem as a root filesystem, using configuration settings
inherited from the container's image or as specified using previous calls to
the *buildah config* command. To execute *buildah run* within an
interactive shell, specify the --tty option.
## OPTIONS
**--hostname**
Set the hostname inside of the running container.
**--runtime** *path*
@@ -20,17 +23,40 @@ The *path* to an alternate OCI-compatible runtime.
**--runtime-flag** *flag*
Adds global flags for the container runtime. To list the supported flags, please
consult the manpages of the selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`).
Note: Do not pass the leading `--` to the flag. To pass the runc flag `--log-format json`
to buildah run, the option given would be `--runtime-flag log-format=json`.
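The translation rule above can be sketched as a small helper (illustrative only; `to_runtime_flag` is not part of buildah):

```shell
# Convert a runtime flag such as "--log-format json" into the value
# --runtime-flag expects: leading "--" stripped, value joined by "=".
to_runtime_flag() {
  local flag=${1#--}            # drop the leading --
  if [ $# -gt 1 ]; then
    printf '%s=%s\n' "$flag" "$2"
  else
    printf '%s\n' "$flag"
  fi
}

to_runtime_flag --log-format json   # log-format=json
to_runtime_flag --debug             # debug
```

So, for example, `buildah run --runtime-flag "$(to_runtime_flag --log-format json)" containerID /bin/bash` is equivalent to writing the flag out by hand.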
**--tty**
By default a pseudo-TTY is allocated only when buildah's standard input is
attached to a pseudo-TTY. Setting the `--tty` option to `true` will cause a
pseudo-TTY to be allocated inside the container connecting the user's "terminal"
with the stdin and stdout stream of the container. Setting the `--tty` option to
`false` will prevent the pseudo-TTY from being allocated.
**--volume, -v** *source*:*destination*:*flags*
Bind mount a location from the host into the container for its lifetime.
NOTE: End parsing of options with the `--` option, so that other
options can be passed to the command inside of the container.
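A rough sketch of how a *source*:*destination*:*flags* value breaks into fields (illustrative only; real volume specs can be more involved, and paths containing colons are not handled here):

```shell
# Split a --volume argument on colons into its three fields.
parse_volume() {
  local spec=$1 IFS=:
  set -- $spec                  # word-split on ":" only
  printf 'source=%s destination=%s flags=%s\n' "$1" "$2" "${3:-}"
}

parse_volume "/home/user/data:/data:ro"
# source=/home/user/data destination=/data flags=ro
```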
## EXAMPLE
buildah run containerID -- ps -auxw
buildah run --hostname myhost containerID -- ps -auxw
buildah run --runtime-flag log-format=json containerID /bin/bash
buildah run --runtime-flag debug containerID /bin/bash
buildah run --tty containerID /bin/bash
buildah run --tty=false containerID ls /
## SEE ALSO
buildah(1)

docs/buildah-version.md Normal file

@@ -0,0 +1,27 @@
## buildah-version "1" "June 2017" "Buildah"
## NAME
buildah version - Display the Buildah Version Information.
## SYNOPSIS
**buildah version**
[**--help**|**-h**]
## DESCRIPTION
Shows the following information: Version, Go Version, Image Spec, Runtime Spec, Git Commit, Build Time, OS, and Architecture.
## OPTIONS
**--help, -h**
Print usage statement
## EXAMPLE
buildah version
buildah version --help
buildah version -h
## SEE ALSO
buildah(1)


@@ -1,14 +1,14 @@
## buildah "1" "March 2017" "buildah"
## NAME
Buildah - A command line tool to facilitate working with containers and using them to build images.
## SYNOPSIS
buildah [OPTIONS] COMMAND [ARG...]
## DESCRIPTION
The Buildah package provides a command line tool which can be used to:
* Create a working container, either from scratch or using an image as a starting point.
* Mount a working container's root filesystem for manipulation.
@@ -16,8 +16,38 @@ The buildah package provides a command line tool which can be used to:
* Use the updated contents of a container's root filesystem as a filesystem layer to create a new image.
* Delete a working container or an image.
This tool needs to be run as the root user.
## OPTIONS
**--debug**
Print debugging information
**--default-mounts-file**
Path to default mounts file (default path: "/usr/share/containers/mounts.conf")
**--help, -h**
Show help
**--registries-conf** *path*
Pathname of the configuration file which specifies which registries should be
consulted when completing image names which do not include a registry or domain
portion. It is not recommended that this option be used, as the default
behavior of using the system-wide configuration
(*/etc/containers/registries.conf*) is most often preferred.
**--registries-conf-dir** *path*
Pathname of the directory which contains configuration snippets which specify
registries which should be consulted when completing image names which do not
include a registry or domain portion. It is not recommended that this option
be used, as the default behavior of using the system-wide configuration
(*/etc/containers/registries.d*) is most often preferred.
**--root** *value*
Storage root dir (default: "/var/lib/containers/storage")
@@ -34,14 +64,6 @@ Storage driver
Storage driver option
**--version, -v**
Print the version
@@ -67,3 +89,5 @@ Print the version
| buildah-run(1) | Run a command inside of the container. |
| buildah-tag(1) | Add an additional name to a local image. |
| buildah-umount(1) | Unmount a working container's root file system. |
| buildah-version(1) | Display the Buildah Version Information. |

docs/tutorials/01-intro.md Normal file

@@ -0,0 +1,238 @@
![buildah logo](https://cdn.rawgit.com/projectatomic/buildah/master/logos/buildah-logo_large.png)
# Buildah Tutorial 1
## Building OCI container images
The purpose of this tutorial is to demonstrate how Buildah can be used to build container images compliant with the [Open Container Initiative](https://www.opencontainers.org/) (OCI) [image specification](https://github.com/opencontainers/image-spec). Images can be built from existing images, from scratch, and using Dockerfiles. OCI images built using the Buildah command line tool (CLI) and the underlying OCI based technologies (e.g. [containers/image](https://github.com/containers/image) and [containers/storage](https://github.com/containers/storage)) are portable and can therefore run in a Docker environment.
In brief the `containers/image` project provides mechanisms to copy, push, pull, inspect and sign container images. The `containers/storage` project provides mechanisms for storing filesystem layers, container images, and containers. Buildah is a CLI that takes advantage of these underlying projects and therefore allows you to build, move, and manage container images and containers.
The first step is to install Buildah. Run as root, because you will need to be root to run Buildah commands:
# dnf -y install buildah
After installing Buildah we can see there are no images installed. The `buildah images` command will list all the images:
# buildah images
We can also see that there are no containers by running:
# buildah containers
When you build a working container from an existing image, Buildah defaults to appending '-working-container' to the image's name to construct a name for the container. The Buildah CLI conveniently returns the name of the new container. You can take advantage of this by assigning the returned value to a shell variable using standard shell assignment:
# container=$(buildah from fedora)
It is not required to assign a shell variable. Running `buildah from fedora` is sufficient. It just helps simplify commands later. To see the name of the container that we stored in the shell variable:
# echo $container
What can we do with this new container? Let's try running bash:
# buildah run $container bash
Notice we get a new shell prompt because we are running a bash shell inside of the container. It should be noted that `buildah run` is primarily intended for helping debug during the build process. A runtime like runc or a container interface like [CRI-O](https://github.com/kubernetes-incubator/cri-o) is more suited for starting containers in production.
Be sure to `exit` out of the container and let's try running something else:
# buildah run $container java
Oops. Java is not installed. A message containing something like the following was returned.
container_linux.go:274: starting container process caused "exec: \"java\": executable file not found in $PATH"
Let's try installing it using:
# buildah run $container -- dnf -y install java
The `--` syntax basically tells Buildah: there are no more `buildah run` command options after this point. The options after this point are for the shell inside the container. It is required if the command we specify includes command line options which are not meant for Buildah.
Now running `buildah run $container java` will show that Java has been installed. It will return the standard Java `Usage` output.
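The way `--` splits the argument list can be shown with a toy parser (a sketch only; `split_at_dashdash` is a made-up name, not how buildah is implemented):

```shell
# Toy parser: everything before "--" is treated as options for the
# tool itself, everything after as the command to run in the container.
split_at_dashdash() {
  local opts=""
  while [ $# -gt 0 ] && [ "$1" != "--" ]; do
    opts="$opts $1"
    shift
  done
  [ "${1:-}" = "--" ] && shift
  printf 'options:%s\ncommand: %s\n' "$opts" "$*"
}

split_at_dashdash --hostname myhost -- dnf -y install java
# options: --hostname myhost
# command: dnf -y install java
```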
## Building a container from scratch
One of the advantages of using `buildah` to build OCI compliant container images is that you can easily build a container image from scratch and therefore exclude unnecessary packages from your image. E.g. most final container images for production probably don't need a package manager like `dnf`.
Let's build a container from scratch. The special "image" name "scratch" tells Buildah to create an empty container. The container has a small amount of metadata but no real Linux content.
# newcontainer=$(buildah from scratch)
You can see this new empty container by running:
# buildah containers
You should see output similar to the following:
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
82af3b9a9488 * 3d85fcda5754 docker.io/library/fedora:latest fedora-working-container
ac8fa6be0f0a * scratch working-container
Its container name is working-container by default and it's stored in the `$newcontainer` variable. Notice the image name (IMAGE NAME) is "scratch". This just indicates that there is no real image yet; the container exists in containers/storage, but there is no representation of it in containers/image. So when we run:
# buildah images
We don't see the image listed. There is no corresponding scratch image. It is an empty container.
So does this container actually do anything? Let's see.
# buildah run $newcontainer bash
Nope. This really is empty. The package installer `dnf` is not even inside this container. It's essentially an empty layer on top of the kernel. So what can be done with that? Thankfully there is a `buildah mount` command.
# scratchmnt=$(buildah mount $newcontainer)
By echoing `$scratchmnt` we can see the path for the [overlay image](https://wiki.archlinux.org/index.php/Overlay_filesystem), which gives you a link directly to the root file system of the container.
# echo $scratchmnt
/var/lib/containers/storage/overlay/b78d0e11957d15b5d1fe776293bd40a36c28825fb6cf76f407b4d0a95b2a200d/diff
Notice that the overlay image is under `/var/lib/containers/storage` as one would expect. (See above on `containers/storage` or for more information see [containers/storage](https://github.com/containers/storage).)
Now that we have a new empty container we can install or remove software packages or simply copy content into that container. So let's install `bash` and `coreutils` so that we can run bash scripts. This could easily be `nginx` or other packages needed for your container.
# dnf install --installroot $scratchmnt --releasever 26 bash coreutils --setopt install_weak_deps=false -y
Let's try it out (showing the prompt in this example to demonstrate the difference):
# buildah run $newcontainer bash
bash-4.4# cd /usr/bin
bash-4.4# ls
bash-4.4# exit
Notice we have a `/usr/bin` directory in the newcontainer's image layer. Let's first copy a simple file from our host into the container. Create a file called runecho.sh which contains the following:
#!/bin/bash
for i in `seq 0 9`;
do
echo "This is a new container from ipbabble [" $i "]"
done
Change the permissions on the file so that it can be run:
# chmod +x runecho.sh
With `buildah` files can be copied into the new image and we can also configure the image to run commands. Let's copy this new command into the container's `/usr/bin` directory and configure the container to run the command when the container is run:
# buildah copy $newcontainer ./runecho.sh /usr/bin
# buildah config --cmd /usr/bin/runecho.sh $newcontainer
Now run the container:
# buildah run $newcontainer
This is a new container from ipbabble [ 0 ]
This is a new container from ipbabble [ 1 ]
This is a new container from ipbabble [ 2 ]
This is a new container from ipbabble [ 3 ]
This is a new container from ipbabble [ 4 ]
This is a new container from ipbabble [ 5 ]
This is a new container from ipbabble [ 6 ]
This is a new container from ipbabble [ 7 ]
This is a new container from ipbabble [ 8 ]
This is a new container from ipbabble [ 9 ]
It works! Congratulations, you have built a new OCI container from scratch that uses bash scripting. Let's add some more configuration information.
# buildah config --created-by "ipbabble" $newcontainer
# buildah config --author "wgh at redhat.com @ipbabble" --label name=fedora26-bashecho $newcontainer
We can inspect the container's metadata using the `inspect` command:
# buildah inspect $newcontainer
We should probably unmount and commit the image:
# buildah unmount $newcontainer
# buildah commit $newcontainer fedora-bashecho
# buildah images
And you can see there is a new image called `fedora-bashecho:latest`. You can inspect the new image using:
# buildah inspect --type=image fedora-bashecho
Later when you want to create a new container or containers from this image, you simply need to do `buildah from fedora-bashecho`. This will create a new container based on this image for you.
Now that you have the new image you can remove the scratch container called working-container:
# buildah rm $newcontainer
or
# buildah rm working-container
## OCI images built using Buildah are portable
Let's test if this new OCI image is really portable to another OCI technology like Docker. First you should install Docker and start it. Notice that Docker requires a daemon process (that's quite big) in order to run any client commands. Buildah has no daemon requirement.
# dnf -y install docker
# systemctl start docker
Let's copy that image from where containers/storage stores it to where the Docker daemon stores its images, so that we can run it using Docker. We can achieve this using `buildah push`. This copies the image to Docker's repository area which is located under `/var/lib/docker`. Docker's repository is managed by the Docker daemon. This needs to be explicitly stated by telling Buildah to push to the Docker repository protocol using `docker-daemon:`.
# buildah push fedora-bashecho docker-daemon:fedora-bashecho:latest
Under the covers, the containers/image library calls into the containers/storage library to read the image's contents, and sends them to the local Docker daemon. This can take a little while. And usually you won't need to do this. If you're using `buildah` you are probably not using Docker. This is just for demo purposes. Let's try it:
# docker run fedora-bashecho
This is a new container from ipbabble [ 0 ]
This is a new container from ipbabble [ 1 ]
This is a new container from ipbabble [ 2 ]
This is a new container from ipbabble [ 3 ]
This is a new container from ipbabble [ 4 ]
This is a new container from ipbabble [ 5 ]
This is a new container from ipbabble [ 6 ]
This is a new container from ipbabble [ 7 ]
This is a new container from ipbabble [ 8 ]
This is a new container from ipbabble [ 9 ]
OCI container images built with `buildah` are completely standard as expected. So now it might be time to run:
# dnf -y remove docker
## Using Dockerfiles with Buildah
What if you have been using Docker for a while and have some existing Dockerfiles? Not a problem. Buildah can build images using a Dockerfile. The `build-using-dockerfile` command, or `bud` for short, takes a Dockerfile as input and produces an OCI image.
Find one of your Dockerfiles or create a file called Dockerfile. Use the following example or some variation if you'd like:
# Base on the Fedora
FROM fedora:latest
MAINTAINER ipbabble email buildahboy@redhat.com # not a real email
# Update image and install httpd
RUN echo "Updating all fedora packages"; dnf -y update; dnf -y clean all
RUN echo "Installing httpd"; dnf -y install httpd
# Expose the default httpd port 80
EXPOSE 80
# Run the httpd
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
Now run `buildah bud` with the name of the Dockerfile and the name to be given to the created image (e.g. fedora-httpd):
# buildah bud -f Dockerfile -t fedora-httpd
or, because `buildah bud` defaults to Dockerfile (note the period at the end of the example):
# buildah bud -t fedora-httpd .
You will see all the steps of the Dockerfile executing. Afterwards `buildah images` will show you the new image. Now we need to create the container using `buildah from` and test it with `buildah run`:
# httpcontainer=$(buildah from fedora-httpd)
# buildah run $httpcontainer
While that container is running, in another shell run:
# curl localhost
You will see the standard Apache webpage.
Why not try and modify the Dockerfile. Do not install httpd, but instead ADD the runecho.sh file and have it run as the CMD.
## Congratulations
Well done. You have learned a lot about Buildah using this short tutorial. Hopefully you followed along with the examples and found them to be sufficient. Be sure to look at Buildah's man pages to see the other useful commands you can use. Have fun playing.
If you have any suggestions or issues please post them at the [ProjectAtomic Buildah Issues page](https://github.com/projectatomic/buildah/issues).
For more information on Buildah and how you might contribute please visit the [Buildah home page on Github](https://github.com/projectatomic/buildah).


@@ -0,0 +1,134 @@
![buildah logo](https://cdn.rawgit.com/projectatomic/buildah/master/logos/buildah-logo_large.png)
# Buildah Tutorial 2
## Using Buildah with container registries
The purpose of this tutorial is to demonstrate how Buildah can be used to move OCI compliant images in and out of private or public registries.
In the [first tutorial](https://github.com/projectatomic/buildah/blob/master/docs/tutorials/01-intro.md) we built an image from scratch that we called `fedora-bashecho` and we pushed it to a local Docker repository using the `docker-daemon` protocol. We are going to use the same image to push to a private Docker registry.
First we must pull down a registry. As a shortcut we will save the container name that is returned from the `buildah from` command, into a bash variable called `registry`. This is just like we did in Tutorial 1:
# registry=$(buildah from registry)
It is worth pointing out that the `from` command can also use other protocols beyond the default (and implicitly assumed) order that first looks in local containers-storage (containers-storage:) and then looks in the Docker hub (docker:). For example, if you already had a registry container image in a local Docker registry then you could use the following:
# registry=$(buildah from docker-daemon:registry:latest)
Then we need to start the registry. You should start the registry in a separate shell and leave it running there:
# buildah run $registry
If you would like to see more details as to what is going on inside the registry, especially if you are having problems with the registry, you can run the registry container in debug mode as follows:
# buildah --debug run $registry
You can use `--debug` on any Buildah command.
The registry is running and is waiting for requests to process. Notice that this registry is a Docker registry that we pulled from Docker hub and we are running it for this example using `buildah run`. There is no Docker daemon running at this time.
Let's push our image to the private registry. By default, Buildah is set up to expect secure connections to a registry. Therefore we will need to turn the TLS verification off using the `--tls-verify` flag. We also need to tell Buildah that the registry is on this local host (i.e. localhost) and listening on port 5000. Similar to what you'd expect to do on multi-tenant Docker hub, we will explicitly specify that the registry is to store the image under the `ipbabble` repository, so as not to clash with other users' similarly named images.
# buildah push --tls-verify=false fedora-bashecho docker://localhost:5000/ipbabble/fedora-bashecho:latest
[Skopeo](https://github.com/projectatomic/skopeo) is a ProjectAtomic tool that was created to inspect images in registries without having to pull the image from the registry. It has grown to have many other uses. We will verify that the image has been stored by using Skopeo to inspect the image in the registry:
# skopeo inspect --tls-verify=false docker://localhost:5000/ipbabble/fedora-bashecho:latest
{
"Name": "localhost:5000/ipbabble/fedora-bashecho",
"Digest": "sha256:6806f9385f97bc09f54b5c0ef583e58c3bc906c8c0b3e693d8782d0a0acf2137",
"RepoTags": [
"latest"
],
"Created": "2017-12-05T21:38:12.311901938Z",
"DockerVersion": "",
"Labels": {
"name": "fedora-bashecho"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:0cb7556c714767b8da6e0299cbeab765abaddede84769475c023785ae66d10ca"
]
}
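If you only need a single field from output like this, you can grab it with grep and sed (a rough sketch; a JSON-aware tool such as jq is more robust):

```shell
# Pull the "Digest" value out of skopeo-style JSON with plain text tools.
json='{
  "Name": "localhost:5000/ipbabble/fedora-bashecho",
  "Digest": "sha256:6806f9385f97bc09f54b5c0ef583e58c3bc906c8c0b3e693d8782d0a0acf2137"
}'
digest=$(printf '%s\n' "$json" | grep '"Digest"' | sed 's/.*"Digest": "\([^"]*\)".*/\1/')
echo "$digest"
```

In practice you would pipe `skopeo inspect ...` straight into the grep/sed pair instead of using a shell variable.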
We can verify that it is still portable with Docker by starting Docker again, as we did in the first tutorial. Then we can pull down the image and start the container using Docker:
# systemctl start docker
# docker pull localhost:5000/ipbabble/fedora-bashecho
Using default tag: latest
Trying to pull repository localhost:5000/ipbabble/fedora-bashecho ...
sha256:6806f9385f97bc09f54b5c0ef583e58c3bc906c8c0b3e693d8782d0a0acf2137: Pulling from localhost:5000/ipbabble/fedora-bashecho
0cb7556c7147: Pull complete
Digest: sha256:6806f9385f97bc09f54b5c0ef583e58c3bc906c8c0b3e693d8782d0a0acf2137
Status: Downloaded newer image for localhost:5000/ipbabble/fedora-bashecho:latest
# docker run localhost:5000/ipbabble/fedora-bashecho
This is a new container from ipbabble [ 0 ]
This is a new container from ipbabble [ 1 ]
This is a new container from ipbabble [ 2 ]
This is a new container from ipbabble [ 3 ]
This is a new container from ipbabble [ 4 ]
This is a new container from ipbabble [ 5 ]
This is a new container from ipbabble [ 6 ]
This is a new container from ipbabble [ 7 ]
This is a new container from ipbabble [ 8 ]
This is a new container from ipbabble [ 9 ]
# systemctl stop docker
Pushing to Docker hub is just as easy. Of course you must have an account with credentials. In this example I'm using a Docker hub API key, which has the form "username:password" (example password has been edited for privacy), that I created with my Docker hub account. I use the `--creds` flag to use my API key. I also specify my local image name `fedora-bashecho` as my image source and I use the `docker` protocol with no host or port so that it will look at the default Docker hub registry:
# buildah push --creds ipbabble:5bbb9990-6eeb-1234-af1a-aaa80066887c fedora-bashecho docker://ipbabble/fedora-bashecho:latest
And let's inspect that with Skopeo:
# skopeo inspect --creds ipbabble:5bbb9990-6eeb-1234-af1a-aaa80066887c docker://ipbabble/fedora-bashecho:latest
{
"Name": "docker.io/ipbabble/fedora-bashecho",
"Digest": "sha256:6806f9385f97bc09f54b5c0ef583e58c3bc906c8c0b3e693d8782d0a0acf2137",
"RepoTags": [
"latest"
],
"Created": "2017-12-05T21:38:12.311901938Z",
"DockerVersion": "",
"Labels": {
"name": "fedora-bashecho"
},
"Architecture": "amd64",
"Os": "linux",
"Layers": [
"sha256:0cb7556c714767b8da6e0299cbeab765abaddede84769475c023785ae66d10ca"
]
}
We can use Buildah to pull down the image using the `buildah from` command. But before we do, let's clean up our local containers-storage so that we don't have an existing fedora-bashecho; otherwise Buildah will know it already exists and not bother pulling it down.
# buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
d4cd7d73ee42 docker.io/library/registry:latest Dec 1, 2017 22:15 31.74 MB
e31b0f0b0a63 docker.io/library/fedora-bashecho:latest Dec 5, 2017 21:38 772 B
# buildah rmi fedora-bashecho
untagged: docker.io/library/fedora-bashecho:latest
e31b0f0b0a63e94c5a558d438d7490fab930a282a4736364360ab9b92cb25f3a
# buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
d4cd7d73ee42 docker.io/library/registry:latest Dec 1, 2017 22:15 31.74 MB
Okay, so we don't have a fedora-bashecho anymore. Let's pull the image from Docker hub:
# buildah from ipbabble/fedora-bashecho
If you don't want to bother doing the remove image step (`rmi`) you can use the flag `--pull-always` to force the image to be pulled again and overwrite any corresponding local image.
Now check that image is in the local containers-storage:
# buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
d4cd7d73ee42 docker.io/library/registry:latest Dec 1, 2017 22:15 31.74 MB
864871ac1c45 docker.io/ipbabble/fedora-bashecho:latest Dec 5, 2017 21:38 315.4 MB
Success!
If you have any suggestions or issues please post them at the [ProjectAtomic Buildah Issues page](https://github.com/projectatomic/buildah/issues).
For more information on Buildah and how you might contribute please visit the [Buildah home page on Github](https://github.com/projectatomic/buildah).

docs/tutorials/README.md Normal file

@@ -0,0 +1,16 @@
![buildah logo](https://cdn.rawgit.com/projectatomic/buildah/master/logos/buildah-logo_large.png)
# Buildah Tutorials
## Links to a number of useful tutorials for the Buildah project.
**[Introduction Tutorial](https://github.com/projectatomic/buildah/tree/master/docs/tutorials/01-intro.md)**
Learn how to build container images compliant with the [Open Container Initiative](https://www.opencontainers.org/) (OCI) [image specification](https://github.com/opencontainers/image-spec) using Buildah.
**[Buildah and Registries Tutorial](https://github.com/projectatomic/buildah/tree/master/docs/tutorials/02-registries-repositories.md)**
Learn how Buildah can be used to move OCI compliant images in and out of private or public registries.


@@ -24,7 +24,7 @@ echo yay > $mountpoint1/file-in-root
read
: " Produce an image from the container "
read
buildah commit "$container1" ${2:-first-new-image}
read
: " Verify that our new image is there "
read

examples/lighttpd.sh Executable file

@@ -0,0 +1,17 @@
#!/bin/bash -x
ctr1=`buildah from ${1:-fedora}`
## Get all updates and install our minimal httpd server
buildah run $ctr1 -- dnf update -y
buildah run $ctr1 -- dnf install -y lighttpd
## Include some buildtime annotations
buildah config --annotation "com.example.build.host=$(uname -n)" $ctr1
## Run our server and expose the port
buildah config $ctr1 --cmd "/usr/sbin/lighttpd -D -f /etc/lighttpd/lighttpd.conf"
buildah config $ctr1 --port 80
## Commit this container to an image name
buildah commit $ctr1 ${2:-$USER/lighttpd}

image.go

@@ -2,6 +2,7 @@ package buildah
import (
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
@@ -9,10 +10,8 @@ import (
"path/filepath"
"time"
"github.com/Sirupsen/logrus"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/manifest"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
@@ -23,6 +22,7 @@ import (
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/docker"
"github.com/sirupsen/logrus"
)
const (
@@ -42,7 +42,6 @@ type containerImageRef struct {
name reference.Named
names []string
layerID string
addHistory bool
oconfig []byte
dconfig []byte
created time.Time
@@ -58,7 +57,6 @@ type containerImageSource struct {
store storage.Store
layerID string
names []string
addHistory bool
compression archive.Compression
config []byte
configDigest digest.Digest
@@ -67,33 +65,37 @@ type containerImageSource struct {
exporting bool
}
func (i *containerImageRef) NewImage(sc *types.SystemContext) (types.Image, error) {
src, err := i.NewImageSource(sc, nil)
func (i *containerImageRef) NewImage(sc *types.SystemContext) (types.ImageCloser, error) {
src, err := i.NewImageSource(sc)
if err != nil {
return nil, err
}
return image.FromSource(src)
return image.FromSource(sc, src)
}
func selectManifestType(preferred string, acceptable, supported []string) string {
selected := preferred
for _, accept := range acceptable {
if preferred == accept {
return preferred
}
for _, support := range supported {
if accept == support {
selected = accept
}
func expectedOCIDiffIDs(image v1.Image) int {
expected := 0
for _, history := range image.History {
if !history.EmptyLayer {
expected = expected + 1
}
}
return selected
return expected
}
func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestTypes []string) (src types.ImageSource, err error) {
func expectedDockerDiffIDs(image docker.V2Image) int {
expected := 0
for _, history := range image.History {
if !history.EmptyLayer {
expected = expected + 1
}
}
return expected
}
func (i *containerImageRef) NewImageSource(sc *types.SystemContext) (src types.ImageSource, err error) {
// Decide which type of manifest and configuration output we're going to provide.
supportedManifestTypes := []string{v1.MediaTypeImageManifest, docker.V2S2MediaTypeManifest}
manifestType := selectManifestType(i.preferredManifestType, manifestTypes, supportedManifestTypes)
manifestType := i.preferredManifestType
// If it's not a format we support, return an error.
if manifestType != v1.MediaTypeImageManifest && manifestType != docker.V2S2MediaTypeManifest {
return nil, errors.Errorf("no supported manifest types (attempted to use %q, only know %q and %q)",
@@ -143,11 +145,14 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
if err != nil {
return nil, err
}
created := i.created
oimage.Created = &created
dimage := docker.V2Image{}
err = json.Unmarshal(i.dconfig, &dimage)
if err != nil {
return nil, err
}
dimage.Created = created
// Start building manifests.
omanifest := v1.Manifest{
@@ -172,16 +177,46 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
}
oimage.RootFS.Type = docker.TypeLayers
oimage.RootFS.DiffIDs = []string{}
oimage.RootFS.DiffIDs = []digest.Digest{}
dimage.RootFS = &docker.V2S2RootFS{}
dimage.RootFS.Type = docker.TypeLayers
dimage.RootFS.DiffIDs = []digest.Digest{}
// Extract each layer and compute its digests, both compressed (if requested) and uncompressed.
for _, layerID := range layers {
// The default layer media type assumes no compression.
omediaType := v1.MediaTypeImageLayer
dmediaType := docker.V2S2MediaTypeUncompressedLayer
// Figure out which media type we want to call this. Assume no compression.
// If we're not re-exporting the data, reuse the blobsum and diff IDs.
if !i.exporting && layerID != i.layerID {
layer, err2 := i.store.Layer(layerID)
if err2 != nil {
return nil, errors.Wrapf(err, "unable to locate layer %q", layerID)
}
if layer.UncompressedDigest == "" {
return nil, errors.Errorf("unable to look up size of layer %q", layerID)
}
layerBlobSum := layer.UncompressedDigest
layerBlobSize := layer.UncompressedSize
// Note this layer in the manifest, using the uncompressed blobsum.
olayerDescriptor := v1.Descriptor{
MediaType: omediaType,
Digest: layerBlobSum,
Size: layerBlobSize,
}
omanifest.Layers = append(omanifest.Layers, olayerDescriptor)
dlayerDescriptor := docker.V2S2Descriptor{
MediaType: dmediaType,
Digest: layerBlobSum,
Size: layerBlobSize,
}
dmanifest.Layers = append(dmanifest.Layers, dlayerDescriptor)
// Note this layer in the list of diffIDs, again using the uncompressed blobsum.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, layerBlobSum)
dimage.RootFS.DiffIDs = append(dimage.RootFS.DiffIDs, layerBlobSum)
continue
}
// Figure out if we need to change the media type, in case we're using compression.
if i.compression != archive.Uncompressed {
switch i.compression {
case archive.Gzip:
@@ -192,50 +227,26 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
// Until the image specs define a media type for bzip2-compressed layers, even if we know
// how to decompress them, we can't try to compress layers with bzip2.
return nil, errors.New("media type for bzip2-compressed layers is not defined")
case archive.Xz:
// Until the image specs define a media type for xz-compressed layers, even if we know
// how to decompress them, we can't try to compress layers with xz.
return nil, errors.New("media type for xz-compressed layers is not defined")
default:
logrus.Debugf("compressing layer %q with unknown compressor(?)", layerID)
}
}
// If we're not re-exporting the data, just fake up layer and diff IDs for the manifest.
if !i.exporting {
fakeLayerDigest := digest.NewDigestFromHex(digest.Canonical.String(), layerID)
// Add a note in the manifest about the layer. The blobs should be identified by their
// possibly-compressed blob digests, but just use the layer IDs here.
olayerDescriptor := v1.Descriptor{
MediaType: omediaType,
Digest: fakeLayerDigest,
Size: -1,
}
omanifest.Layers = append(omanifest.Layers, olayerDescriptor)
dlayerDescriptor := docker.V2S2Descriptor{
MediaType: dmediaType,
Digest: fakeLayerDigest,
Size: -1,
}
dmanifest.Layers = append(dmanifest.Layers, dlayerDescriptor)
// Add a note about the diffID, which should be uncompressed digest of the blob, but
// just use the layer ID here.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, fakeLayerDigest.String())
dimage.RootFS.DiffIDs = append(dimage.RootFS.DiffIDs, fakeLayerDigest)
continue
}
// Start reading the layer.
rc, err := i.store.Diff("", layerID)
noCompression := archive.Uncompressed
diffOptions := &storage.DiffOptions{
Compression: &noCompression,
}
rc, err := i.store.Diff("", layerID, diffOptions)
if err != nil {
return nil, errors.Wrapf(err, "error extracting layer %q", layerID)
}
defer rc.Close()
// Set up to decompress the layer, in case it's coming out compressed. Due to implementation
// differences, the result may not match the digest the blob had when it was originally imported,
// so we have to recompute all of this anyway if we want to be sure the digests we use will be
// correct.
uncompressed, err := archive.DecompressStream(rc)
if err != nil {
return nil, errors.Wrapf(err, "error decompressing layer %q", layerID)
}
defer uncompressed.Close()
srcHasher := digest.Canonical.Digester()
reader := io.TeeReader(uncompressed, srcHasher.Hash())
reader := io.TeeReader(rc, srcHasher.Hash())
// Set up to write the possibly-recompressed blob.
layerFile, err := os.OpenFile(filepath.Join(path, "layer"), os.O_CREATE|os.O_WRONLY, 0600)
if err != nil {
@@ -244,7 +255,7 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
destHasher := digest.Canonical.Digester()
counter := ioutils.NewWriteCounter(layerFile)
multiWriter := io.MultiWriter(counter, destHasher.Hash())
// Compress the layer, if we're compressing it.
// Compress the layer, if we're recompressing it.
writer, err := archive.CompressStream(multiWriter, i.compression)
if err != nil {
return nil, errors.Wrapf(err, "error compressing layer %q", layerID)
@@ -282,27 +293,36 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
Size: size,
}
dmanifest.Layers = append(dmanifest.Layers, dlayerDescriptor)
// Add a note about the diffID, which is always an uncompressed value.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, srcHasher.Digest().String())
// Add a note about the diffID, which is always the layer's uncompressed digest.
oimage.RootFS.DiffIDs = append(oimage.RootFS.DiffIDs, srcHasher.Digest())
dimage.RootFS.DiffIDs = append(dimage.RootFS.DiffIDs, srcHasher.Digest())
}
if i.addHistory {
// Build history notes in the image configurations.
onews := v1.History{
Created: i.created,
CreatedBy: i.createdBy,
Author: oimage.Author,
EmptyLayer: false,
}
oimage.History = append(oimage.History, onews)
dnews := docker.V2S2History{
Created: i.created,
CreatedBy: i.createdBy,
Author: dimage.Author,
EmptyLayer: false,
}
dimage.History = append(dimage.History, dnews)
// Build history notes in the image configurations.
onews := v1.History{
Created: &i.created,
CreatedBy: i.createdBy,
Author: oimage.Author,
EmptyLayer: false,
}
oimage.History = append(oimage.History, onews)
dnews := docker.V2S2History{
Created: i.created,
CreatedBy: i.createdBy,
Author: dimage.Author,
EmptyLayer: false,
}
dimage.History = append(dimage.History, dnews)
// Sanity check that we didn't just create a mismatch between non-empty layers in the
// history and the number of diffIDs.
expectedDiffIDs := expectedOCIDiffIDs(oimage)
if len(oimage.RootFS.DiffIDs) != expectedDiffIDs {
return nil, errors.Errorf("internal error: history lists %d non-empty layers, but we have %d layers on disk", expectedDiffIDs, len(oimage.RootFS.DiffIDs))
}
expectedDiffIDs = expectedDockerDiffIDs(dimage)
if len(dimage.RootFS.DiffIDs) != expectedDiffIDs {
return nil, errors.Errorf("internal error: history lists %d non-empty layers, but we have %d layers on disk", expectedDiffIDs, len(dimage.RootFS.DiffIDs))
}
// Encode the image configuration blob.
@@ -362,7 +382,6 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext, manifestType
store: i.store,
layerID: i.layerID,
names: i.names,
addHistory: i.addHistory,
compression: i.compression,
config: config,
configDigest: digest.Canonical.FromBytes(config),
@@ -417,16 +436,22 @@ func (i *containerImageSource) Reference() types.ImageReference {
return i.ref
}
func (i *containerImageSource) GetSignatures() ([][]byte, error) {
func (i *containerImageSource) GetSignatures(ctx context.Context, instanceDigest *digest.Digest) ([][]byte, error) {
if instanceDigest != nil && *instanceDigest != digest.FromBytes(i.manifest) {
return nil, errors.Errorf("TODO")
}
return nil, nil
}
func (i *containerImageSource) GetTargetManifest(digest digest.Digest) ([]byte, string, error) {
return []byte{}, "", errors.Errorf("TODO")
func (i *containerImageSource) GetManifest(instanceDigest *digest.Digest) ([]byte, string, error) {
if instanceDigest != nil && *instanceDigest != digest.FromBytes(i.manifest) {
return nil, "", errors.Errorf("TODO")
}
return i.manifest, i.manifestType, nil
}
func (i *containerImageSource) GetManifest() ([]byte, string, error) {
return i.manifest, i.manifestType, nil
func (i *containerImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
func (i *containerImageSource) GetBlob(blob types.BlobInfo) (reader io.ReadCloser, size int64, err error) {
@@ -460,10 +485,14 @@ func (i *containerImageSource) GetBlob(blob types.BlobInfo) (reader io.ReadClose
return ioutils.NewReadCloserWrapper(layerFile, closer), size, nil
}
func (b *Builder) makeImageRef(manifestType string, exporting, addHistory bool, compress archive.Compression, names []string, layerID string, historyTimestamp *time.Time) (types.ImageReference, error) {
func (b *Builder) makeImageRef(manifestType string, exporting bool, compress archive.Compression, historyTimestamp *time.Time) (types.ImageReference, error) {
var name reference.Named
if len(names) > 0 {
if parsed, err := reference.ParseNamed(names[0]); err == nil {
container, err := b.store.Container(b.ContainerID)
if err != nil {
return nil, errors.Wrapf(err, "error locating container %q", b.ContainerID)
}
if len(container.Names) > 0 {
if parsed, err2 := reference.ParseNamed(container.Names[0]); err2 == nil {
name = parsed
}
}
@@ -486,9 +515,8 @@ func (b *Builder) makeImageRef(manifestType string, exporting, addHistory bool,
store: b.store,
compression: compress,
name: name,
names: names,
layerID: layerID,
addHistory: addHistory,
names: container.Names,
layerID: container.LayerID,
oconfig: oconfig,
dconfig: dconfig,
created: created,
@@ -499,18 +527,3 @@ func (b *Builder) makeImageRef(manifestType string, exporting, addHistory bool,
}
return ref, nil
}
func (b *Builder) makeContainerImageRef(manifestType string, exporting bool, compress archive.Compression, historyTimestamp *time.Time) (types.ImageReference, error) {
if manifestType == "" {
manifestType = OCIv1ImageManifest
}
container, err := b.store.Container(b.ContainerID)
if err != nil {
return nil, errors.Wrapf(err, "error locating container %q", b.ContainerID)
}
return b.makeImageRef(manifestType, exporting, true, compress, container.Names, container.LayerID, historyTimestamp)
}
func (b *Builder) makeImageImageRef(compress archive.Compression, names []string, layerID string, historyTimestamp *time.Time) (types.ImageReference, error) {
return b.makeImageRef(manifest.GuessMIMEType(b.Manifest), true, false, compress, names, layerID, historyTimestamp)
}


@@ -8,7 +8,6 @@ import (
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
@@ -22,6 +21,7 @@ import (
"github.com/openshift/imagebuilder"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
)
const (
@@ -51,8 +51,13 @@ type BuildOptions struct {
PullPolicy int
// Registry is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone can not be resolved to a
// reference to a source image.
// reference to a source image. No separator is implicitly added.
Registry string
// Transport is a value which is prepended to the image's name, if it
// needs to be pulled and the image name alone, or the image name and
// the registry together, can not be resolved to a reference to a
// source image. No separator is implicitly added.
Transport string
// IgnoreUnrecognizedInstructions tells us to just log instructions we
// don't recognize, and try to keep going.
IgnoreUnrecognizedInstructions bool
@@ -98,6 +103,11 @@ type BuildOptions struct {
// configuration data.
// Accepted values are OCIv1ImageFormat and Dockerv2ImageFormat.
OutputFormat string
// SystemContext holds parameters used for authentication.
SystemContext *types.SystemContext
CommonBuildOpts *buildah.CommonBuildOptions
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string
}
// Executor is a buildah-based implementation of the imagebuilder.Executor
@@ -108,6 +118,7 @@ type Executor struct {
builder *buildah.Builder
pullPolicy int
registry string
transport string
ignoreUnrecognizedInstructions bool
quiet bool
runtime string
@@ -128,14 +139,8 @@ type Executor struct {
volumeCache map[string]string
volumeCacheInfo map[string]os.FileInfo
reportWriter io.Writer
}
func makeSystemContext(signaturePolicyPath string) *types.SystemContext {
sc := &types.SystemContext{}
if signaturePolicyPath != "" {
sc.SignaturePolicyPath = signaturePolicyPath
}
return sc
commonBuildOptions *buildah.CommonBuildOptions
defaultMountsFilePath string
}
// Preserve informs the executor that from this point on, it needs to ensure
@@ -153,7 +158,15 @@ func (b *Executor) Preserve(path string) error {
logrus.Debugf("PRESERVE %q", path)
if b.volumes.Covers(path) {
// This path is already a subdirectory of a volume path that
// we're already preserving, so there's nothing new to be done.
// we're already preserving, so there's nothing new to be done
// except ensure that it exists.
archivedPath := filepath.Join(b.mountPoint, path)
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
if err := b.volumeCacheInvalidate(path); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q is preserved", archivedPath)
}
return nil
}
// Figure out where the cache for this volume would be stored.
@@ -166,9 +179,15 @@ func (b *Executor) Preserve(path string) error {
// Save info about the top level of the location that we'll be archiving.
archivedPath := filepath.Join(b.mountPoint, path)
st, err := os.Stat(archivedPath)
if os.IsNotExist(err) {
if err = os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
st, err = os.Stat(archivedPath)
}
if err != nil {
logrus.Debugf("error reading info about %q: %v", archivedPath, err)
return err
return errors.Wrapf(err, "error reading info about volume path %q", archivedPath)
}
b.volumeCacheInfo[path] = st
if !b.volumes.Add(path) {
@@ -241,6 +260,9 @@ func (b *Executor) volumeCacheSave() error {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "error checking for cache of %q in %q", archivedPath, cacheFile)
}
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error ensuring volume path %q exists", archivedPath)
}
logrus.Debugf("caching contents of volume %q in %q", archivedPath, cacheFile)
cache, err := os.Create(cacheFile)
if err != nil {
@@ -273,7 +295,7 @@ func (b *Executor) volumeCacheRestore() error {
if err := os.RemoveAll(archivedPath); err != nil {
return errors.Wrapf(err, "error clearing volume path %q", archivedPath)
}
if err := os.MkdirAll(archivedPath, 0700); err != nil {
if err := os.MkdirAll(archivedPath, 0755); err != nil {
return errors.Wrapf(err, "error recreating volume path %q", archivedPath)
}
err = archive.Untar(cache, archivedPath, nil)
@@ -311,7 +333,7 @@ func (b *Executor) Copy(excludes []string, copies ...imagebuilder.Copy) error {
sources = append(sources, filepath.Join(b.contextDir, src))
}
}
if err := b.builder.Add(copy.Dest, copy.Download, sources...); err != nil {
if err := b.builder.Add(copy.Dest, copy.Download, buildah.AddAndCopyOptions{}, sources...); err != nil {
return err
}
}
@@ -350,6 +372,7 @@ func (b *Executor) Run(run imagebuilder.Run, config docker.Config) error {
Entrypoint: config.Entrypoint,
Cmd: config.Cmd,
NetworkDisabled: config.NetworkDisabled,
Quiet: b.quiet,
}
args := run.Args
@@ -371,12 +394,23 @@ func (b *Executor) Run(run imagebuilder.Run, config docker.Config) error {
// UnrecognizedInstruction is called when we encounter an instruction that the
// imagebuilder parser didn't understand.
func (b *Executor) UnrecognizedInstruction(step *imagebuilder.Step) error {
if !b.ignoreUnrecognizedInstructions {
logrus.Debugf("+(UNIMPLEMENTED?) %#v", step)
err_str := fmt.Sprintf("Build error: Unknown instruction: %q ", step.Command)
err := fmt.Sprintf(err_str+"%#v", step)
if b.ignoreUnrecognizedInstructions {
logrus.Debugf(err)
return nil
}
logrus.Errorf("+(UNIMPLEMENTED?) %#v", step)
return errors.Errorf("Unrecognized instruction: %#v", step)
switch logrus.GetLevel() {
case logrus.ErrorLevel:
logrus.Errorf(err_str)
case logrus.DebugLevel:
logrus.Debugf(err)
default:
logrus.Errorf("+(UNHANDLED LOGLEVEL) %#v", step)
}
return errors.Errorf(err)
}
// NewExecutor creates a new instance of the imagebuilder.Executor interface.
@@ -386,23 +420,26 @@ func NewExecutor(store storage.Store, options BuildOptions) (*Executor, error) {
contextDir: options.ContextDirectory,
pullPolicy: options.PullPolicy,
registry: options.Registry,
transport: options.Transport,
ignoreUnrecognizedInstructions: options.IgnoreUnrecognizedInstructions,
quiet: options.Quiet,
runtime: options.Runtime,
runtimeArgs: options.RuntimeArgs,
transientMounts: options.TransientMounts,
compression: options.Compression,
output: options.Output,
outputFormat: options.OutputFormat,
additionalTags: options.AdditionalTags,
signaturePolicyPath: options.SignaturePolicyPath,
systemContext: makeSystemContext(options.SignaturePolicyPath),
volumeCache: make(map[string]string),
volumeCacheInfo: make(map[string]os.FileInfo),
log: options.Log,
out: options.Out,
err: options.Err,
reportWriter: options.ReportWriter,
quiet: options.Quiet,
runtime: options.Runtime,
runtimeArgs: options.RuntimeArgs,
transientMounts: options.TransientMounts,
compression: options.Compression,
output: options.Output,
outputFormat: options.OutputFormat,
additionalTags: options.AdditionalTags,
signaturePolicyPath: options.SignaturePolicyPath,
systemContext: options.SystemContext,
volumeCache: make(map[string]string),
volumeCacheInfo: make(map[string]os.FileInfo),
log: options.Log,
out: options.Out,
err: options.Err,
reportWriter: options.ReportWriter,
commonBuildOptions: options.CommonBuildOpts,
defaultMountsFilePath: options.DefaultMountsFilePath,
}
if exec.err == nil {
exec.err = os.Stderr
@@ -438,11 +475,15 @@ func (b *Executor) Prepare(ib *imagebuilder.Builder, node *parser.Node, from str
b.log("FROM %s", from)
}
builderOptions := buildah.BuilderOptions{
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
Transport: b.transport,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
SystemContext: b.systemContext,
CommonBuildOpts: b.commonBuildOptions,
DefaultMountsFilePath: b.defaultMountsFilePath,
}
builder, err := buildah.NewBuilder(b.store, builderOptions)
if err != nil {
@@ -489,7 +530,7 @@ func (b *Executor) Prepare(ib *imagebuilder.Builder, node *parser.Node, from str
}
return errors.Wrapf(err, "error updating build context")
}
mountPoint, err := builder.Mount("")
mountPoint, err := builder.Mount(builder.MountLabel)
if err != nil {
if err2 := builder.Delete(); err2 != nil {
logrus.Debugf("error deleting container which we failed to mount: %v", err2)
@@ -546,6 +587,8 @@ func (b *Executor) Commit(ib *imagebuilder.Builder) (err error) {
if err2 == nil {
imageRef = imageRef2
err = nil
} else {
err = err2
}
}
} else {
@@ -554,6 +597,9 @@ func (b *Executor) Commit(ib *imagebuilder.Builder) (err error) {
if err != nil {
return errors.Wrapf(err, "error parsing reference for image to be written")
}
if ib.Author != "" {
b.builder.SetMaintainer(ib.Author)
}
config := ib.Config()
b.builder.SetHostname(config.Hostname)
b.builder.SetDomainname(config.Domainname)


@@ -9,10 +9,10 @@ import (
"path"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/chrootarchive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/sirupsen/logrus"
)
func cloneToDirectory(url, dir string) error {


@@ -16,9 +16,9 @@ func importBuilderDataFromImage(store storage.Store, systemContext *types.System
imageName := ""
if imageID != "" {
ref, err := is.Transport.ParseStoreReference(store, "@"+imageID)
ref, err := is.Transport.ParseStoreReference(store, imageID)
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", "@"+imageID)
return nil, errors.Wrapf(err, "no such image %q", imageID)
}
src, err2 := ref.NewImage(systemContext)
if err2 != nil {
@@ -68,7 +68,7 @@ func importBuilder(store storage.Store, options ImportOptions) (*Builder, error)
return nil, err
}
systemContext := getSystemContext(options.SignaturePolicyPath)
systemContext := getSystemContext(&types.SystemContext{}, options.SignaturePolicyPath)
builder, err := importBuilderDataFromImage(store, systemContext, c.ImageID, options.Container, c.ID)
if err != nil {
@@ -95,21 +95,27 @@ func importBuilder(store storage.Store, options ImportOptions) (*Builder, error)
}
func importBuilderFromImage(store storage.Store, options ImportFromImageOptions) (*Builder, error) {
var img *storage.Image
var err error
if options.Image == "" {
return nil, errors.Errorf("image name must be specified")
}
img, err := util.FindImage(store, options.Image)
if err != nil {
return nil, errors.Wrapf(err, "error locating image %q for importing settings", options.Image)
systemContext := getSystemContext(options.SystemContext, options.SignaturePolicyPath)
for _, image := range util.ResolveName(options.Image, "", systemContext, store) {
img, err = util.FindImage(store, image)
if err != nil {
continue
}
builder, err2 := importBuilderDataFromImage(store, systemContext, img.ID, "", "")
if err2 != nil {
return nil, errors.Wrapf(err2, "error importing build settings from image %q", options.Image)
}
return builder, nil
}
systemContext := getSystemContext(options.SignaturePolicyPath)
builder, err := importBuilderDataFromImage(store, systemContext, img.ID, "", "")
if err != nil {
return nil, errors.Wrapf(err, "error importing build settings from image %q", options.Image)
}
return builder, nil
return nil, errors.Wrapf(err, "error locating image %q for importing settings", options.Image)
}

install.md (new file)

@@ -0,0 +1,167 @@
# Installation Instructions
## System Requirements
### Kernel Version Requirements
To run Buildah on Red Hat Enterprise Linux or CentOS, version 7.4 or higher is required.
On other Linux distributions, Buildah requires kernel version 4.0 or
higher in order to support the OverlayFS filesystem. The kernel version can be checked
with the `uname -a` command.
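As a quick sanity check, the running kernel release can be compared against the 4.0 minimum with `sort -V` (a sketch; the exact release string format varies by distribution):

```shell
# Print the running kernel release and check it against Buildah's 4.0 minimum.
# `sort -V` sorts versions ascending, so if the minimum sorts first, the check passes.
required=4.0
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current is new enough"
else
    echo "kernel $current is older than $required"
fi
```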
### runc Requirement
Buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for Buildah to call for those cases. If Buildah is installed
via a package manager such as yum, dnf or apt-get, runc will be installed as part of that process.
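To confirm a `runc` binary is already present before building, a simple check such as the following can be used (a sketch; the `runc --version` output format may differ between builds):

```shell
# Report whether runc is on PATH, and print its version if so.
if command -v runc >/dev/null 2>&1; then
    runc --version
else
    echo "runc not found; install it with your package manager or build it from source"
fi
```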
## Package Installation
Buildah is available in the software repositories of several Linux distributions and can be installed
with a package manager such as yum, dnf or apt-get.
## Installation from GitHub
Prior to installing Buildah, install the following packages on your Linux distro:
* make
* golang (Requires version 1.8.1 or higher.)
* bats
* btrfs-progs-devel
* bzip2
* device-mapper-devel
* git
* go-md2man
* gpgme-devel
* glib2-devel
* libassuan-devel
* libseccomp-devel
* ostree-devel
* runc (Requires version 1.0 RC4 or higher.)
* skopeo-containers
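Since several distributions ship older Go toolchains, it can be worth confirming the 1.8.1 minimum before building (a sketch assuming `go` is on `$PATH`; skip this if you install Go through your package manager):

```shell
# Extract the toolchain version from `go version` and compare it to the 1.8.1 minimum.
required=1.8.1
current=$(go version 2>/dev/null | sed 's/.*go\([0-9][0-9.]*\).*/\1/')
if [ -n "$current" ] && [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "go $current satisfies the $required minimum"
else
    echo "go missing or older than $required"
fi
```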
### Fedora
In Fedora, you can use this command:
```
dnf -y install \
make \
golang \
bats \
btrfs-progs-devel \
device-mapper-devel \
glib2-devel \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
ostree-devel \
git \
bzip2 \
go-md2man \
runc \
skopeo-containers
```
Then to install Buildah on Fedora follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
make
sudo make install
buildah --help
```
### RHEL, CentOS
In RHEL and CentOS 7, ensure that you are subscribed to `rhel-7-server-rpms`,
`rhel-7-server-extras-rpms`, and `rhel-7-server-optional-rpms`, then
run this command:
```
yum -y install \
make \
golang \
bats \
btrfs-progs-devel \
device-mapper-devel \
glib2-devel \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
ostree-devel \
git \
bzip2 \
go-md2man \
runc \
skopeo-containers
```
The build steps for Buildah on RHEL or CentOS are the same as Fedora, above.
### openSUSE
Currently openSUSE Leap 15 offers `go1.8`, while openSUSE Tumbleweed has `go1.9`.
`zypper in go1.X` (with the appropriate version) should do the job; then run this command:
```
zypper in make \
git \
golang \
runc \
bzip2 \
libgpgme-devel \
libseccomp-devel \
device-mapper-devel \
libbtrfs-devel \
go-md2man
```
The build steps for Buildah on SUSE / openSUSE are the same as Fedora, above.
### Ubuntu
In Ubuntu zesty and xenial, you can use these commands:
```
apt-get -y install software-properties-common
add-apt-repository -y ppa:alexlarsson/flatpak
add-apt-repository -y ppa:gophers/archive
apt-add-repository -y ppa:projectatomic/ppa
apt-get -y -qq update
apt-get -y install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libostree-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man
apt-get -y install golang-1.8
```
Then to install Buildah on Ubuntu follow the steps in this example:
```
mkdir ~/buildah
cd ~/buildah
export GOPATH=`pwd`
git clone https://github.com/projectatomic/buildah ./src/github.com/projectatomic/buildah
cd ./src/github.com/projectatomic/buildah
PATH=/usr/lib/go-1.8/bin:$PATH make runc all TAGS="apparmor seccomp"
sudo make install install.runc
buildah --help
```
### Debian
To install the required dependencies, you can use these commands, tested under Debian GNU/Linux amd64 9.3 (stretch):
```
gpg --recv-keys 0x018BA5AD9DF57A4448F0E6CF8BECF1637AD8C79D
gpg --export 0x018BA5AD9DF57A4448F0E6CF8BECF1637AD8C79D >> /usr/share/keyrings/projectatomic-ppa.gpg
echo 'deb [signed-by=/usr/share/keyrings/projectatomic-ppa.gpg] http://ppa.launchpad.net/projectatomic/ppa/ubuntu zesty main' > /etc/apt/sources.list.d/projectatomic-ppa.list
apt update
apt -y install -t stretch-backports libostree-dev golang
apt -y install bats btrfs-tools git libapparmor-dev libdevmapper-dev libglib2.0-dev libgpgme11-dev libseccomp-dev libselinux1-dev skopeo-containers go-md2man
```
The build steps on Debian are otherwise the same as Ubuntu, above.

File diff suppressed because it is too large.

(16 binary image files added; contents not shown)

new.go

@@ -2,58 +2,249 @@ package buildah
import (
"fmt"
"os"
"strings"
"github.com/Sirupsen/logrus"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/opencontainers/selinux/go-selinux"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/openshift/imagebuilder"
"github.com/pkg/errors"
"github.com/projectatomic/buildah/util"
"github.com/sirupsen/logrus"
)
const (
// BaseImageFakeName is the "name" of a source image which we interpret
// as "no image".
BaseImageFakeName = imagebuilder.NoBaseImageSpecifier
// DefaultTransport is a prefix that we apply to an image name if we
// can't find one in the local Store, in order to generate a source
// reference for the image that we can then copy to the local Store.
DefaultTransport = "docker://"
// minimumTruncatedIDLength is the minimum length of an identifier that
// we'll accept as possibly being a truncated image ID.
minimumTruncatedIDLength = 3
)
func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
var img *storage.Image
manifest := []byte{}
config := []byte{}
func reserveSELinuxLabels(store storage.Store, id string) error {
if selinux.GetEnabled() {
containers, err := store.Containers()
if err != nil {
return err
}
for _, c := range containers {
if id == c.ID {
continue
} else {
b, err := OpenBuilder(store, c.ID)
if err != nil {
if os.IsNotExist(err) {
// Ignore not exist errors since containers probably created by other tool
// TODO, we need to read other containers json data to reserve their SELinux labels
continue
}
return err
}
// Prevent containers from using same MCS Label
if err := label.ReserveLabel(b.ProcessLabel); err != nil {
return err
}
}
}
}
return nil
}
func pullAndFindImage(store storage.Store, imageName string, options BuilderOptions, sc *types.SystemContext) (*storage.Image, types.ImageReference, error) {
ref, err := pullImage(store, imageName, options, sc)
if err != nil {
logrus.Debugf("error pulling image %q: %v", imageName, err)
return nil, nil, err
}
img, err := is.Transport.GetStoreImage(store, ref)
if err != nil {
logrus.Debugf("error reading pulled image %q: %v", imageName, err)
return nil, nil, err
}
return img, ref, nil
}
func getImageName(name string, img *storage.Image) string {
imageName := name
if len(img.Names) > 0 {
imageName = img.Names[0]
// When the image used by the container is a tagged image
// the container name might be set to the original image instead of
// the image given in the "from" command line.
// This loop is supposed to fix this.
for _, n := range img.Names {
if strings.Contains(n, name) {
imageName = n
break
}
}
}
return imageName
}
func imageNamePrefix(imageName string) string {
prefix := imageName
s := strings.Split(imageName, "/")
if len(s) > 0 {
prefix = s[len(s)-1]
}
s = strings.Split(prefix, ":")
if len(s) > 0 {
prefix = s[0]
}
s = strings.Split(prefix, "@")
if len(s) > 0 {
prefix = s[0]
}
return prefix
}
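The three-way split above (path component, then tag, then digest) can be exercised as a standalone program. This is a minimal sketch reconstructing `imageNamePrefix` from the diff, not the vendored helper itself:

```go
package main

import (
	"fmt"
	"strings"
)

// imageNamePrefix keeps only the last path component of an image name,
// then drops any ":tag" or "@digest" suffix, mirroring the helper above.
func imageNamePrefix(imageName string) string {
	prefix := imageName
	if s := strings.Split(prefix, "/"); len(s) > 0 {
		prefix = s[len(s)-1]
	}
	if s := strings.Split(prefix, ":"); len(s) > 0 {
		prefix = s[0]
	}
	if s := strings.Split(prefix, "@"); len(s) > 0 {
		prefix = s[0]
	}
	return prefix
}

func main() {
	fmt.Println(imageNamePrefix("docker.io/library/busybox:latest")) // busybox
	fmt.Println(imageNamePrefix("busybox@sha256:abcd"))              // busybox
}
```

Note the split order matters: `"busybox@sha256:abcd"` first loses `:abcd` (the `:` split), then `@sha256` (the `@` split), so a digest reference still reduces to the bare repository name.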
func imageManifestAndConfig(ref types.ImageReference, systemContext *types.SystemContext) (manifest, config []byte, err error) {
if ref != nil {
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, nil, errors.Wrapf(err, "error instantiating image for %q", transports.ImageName(ref))
}
defer src.Close()
config, err := src.ConfigBlob()
if err != nil {
return nil, nil, errors.Wrapf(err, "error reading image configuration for %q", transports.ImageName(ref))
}
manifest, _, err := src.Manifest()
if err != nil {
return nil, nil, errors.Wrapf(err, "error reading image manifest for %q", transports.ImageName(ref))
}
return manifest, config, nil
}
return nil, nil, nil
}
func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
var ref types.ImageReference
var img *storage.Image
var err error
var manifest []byte
var config []byte
name := "working-container"
if options.FromImage == BaseImageFakeName {
options.FromImage = ""
}
if options.Transport == "" {
options.Transport = DefaultTransport
}
systemContext := getSystemContext(options.SystemContext, options.SignaturePolicyPath)
for _, image := range util.ResolveName(options.FromImage, options.Registry, systemContext, store) {
if len(image) >= minimumTruncatedIDLength {
if img, err = store.Image(image); err == nil && img != nil && strings.HasPrefix(img.ID, image) {
if ref, err = is.Transport.ParseStoreReference(store, img.ID); err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", img.ID)
}
break
}
}
if options.PullPolicy == PullAlways {
pulledImg, pulledReference, err2 := pullAndFindImage(store, image, options, systemContext)
if err2 != nil {
logrus.Debugf("error pulling and reading image %q: %v", image, err2)
err = err2
continue
}
ref = pulledReference
img = pulledImg
break
}
srcRef, err2 := alltransports.ParseImageName(image)
if err2 != nil {
if options.Transport == "" {
logrus.Debugf("error parsing image name %q: %v", image, err2)
err = err2
continue
}
srcRef2, err3 := alltransports.ParseImageName(options.Transport + image)
if err3 != nil {
logrus.Debugf("error parsing image name %q: %v", image, err3)
err = err3
continue
}
srcRef = srcRef2
}
destImage, err2 := localImageNameForReference(store, srcRef)
if err2 != nil {
return nil, errors.Wrapf(err2, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destImage == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
ref, err = is.Transport.ParseStoreReference(store, destImage)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", destImage)
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if errors.Cause(err) == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
logrus.Debugf("no such image %q: %v", transports.ImageName(ref), err)
continue
}
pulledImg, pulledReference, err2 := pullAndFindImage(store, image, options, systemContext)
if err2 != nil {
logrus.Debugf("error pulling and reading image %q: %v", image, err2)
err = err2
continue
}
ref = pulledReference
img = pulledImg
}
break
}
if options.FromImage != "" && (ref == nil || img == nil) {
// If options.FromImage is set but we ended up
// with nil in ref or in img then there was an error that
// we should return.
return nil, util.GetFailureCause(err, errors.Wrapf(storage.ErrImageUnknown, "no such image %q in registry", options.FromImage))
}
image := options.FromImage
imageID := ""
if img != nil {
image = getImageName(imageNamePrefix(image), img)
imageID = img.ID
}
if manifest, config, err = imageManifestAndConfig(ref, systemContext); err != nil {
return nil, errors.Wrapf(err, "error reading data from image %q", transports.ImageName(ref))
}
name := "working-container"
if options.Container != "" {
name = options.Container
} else {
var err2 error
if image != "" {
prefix := image
s := strings.Split(prefix, "/")
if len(s) > 0 {
prefix = s[len(s)-1]
}
s = strings.Split(prefix, ":")
if len(s) > 0 {
prefix = s[0]
}
s = strings.Split(prefix, "@")
if len(s) > 0 {
prefix = s[0]
}
name = prefix + "-" + name
name = imageNamePrefix(image) + "-" + name
}
}
if name != "" {
var err error
suffix := 1
tmpName := name
for err != storage.ErrContainerUnknown {
_, err = store.Container(tmpName)
if err == nil {
for errors.Cause(err2) != storage.ErrContainerUnknown {
_, err2 = store.Container(tmpName)
if err2 == nil {
suffix++
tmpName = fmt.Sprintf("%s-%d", name, suffix)
}
@@ -61,54 +252,6 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
name = tmpName
}
systemContext := getSystemContext(options.SignaturePolicyPath)
imageID := ""
if image != "" {
if options.PullPolicy == PullAlways {
err := pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
}
ref, err := is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
if err != nil {
if err == storage.ErrImageUnknown && options.PullPolicy != PullIfMissing {
return nil, errors.Wrapf(err, "no such image %q", image)
}
err = pullImage(store, options, systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error pulling image %q", image)
}
ref, err = is.Transport.ParseStoreReference(store, image)
if err != nil {
return nil, errors.Wrapf(err, "error parsing reference to image %q", image)
}
img, err = is.Transport.GetStoreImage(store, ref)
}
if err != nil {
return nil, errors.Wrapf(err, "no such image %q", image)
}
imageID = img.ID
src, err := ref.NewImage(systemContext)
if err != nil {
return nil, errors.Wrapf(err, "error instantiating image")
}
defer src.Close()
config, err = src.ConfigBlob()
if err != nil {
return nil, errors.Wrapf(err, "error reading image configuration")
}
manifest, _, err = src.Manifest()
if err != nil {
return nil, errors.Wrapf(err, "error reading image manifest")
}
}
coptions := storage.ContainerOptions{}
container, err := store.CreateContainer("", []string{name}, imageID, "", "", &coptions)
if err != nil {
@@ -123,21 +266,33 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
}
}()
if err = reserveSELinuxLabels(store, container.ID); err != nil {
return nil, err
}
processLabel, mountLabel, err := label.InitLabels(options.CommonBuildOpts.LabelOpts)
if err != nil {
return nil, err
}
builder := &Builder{
store: store,
Type: containerType,
FromImage: image,
FromImageID: imageID,
Config: config,
Manifest: manifest,
Container: name,
ContainerID: container.ID,
ImageAnnotations: map[string]string{},
ImageCreatedBy: "",
store: store,
Type: containerType,
FromImage: image,
FromImageID: imageID,
Config: config,
Manifest: manifest,
Container: name,
ContainerID: container.ID,
ImageAnnotations: map[string]string{},
ImageCreatedBy: "",
ProcessLabel: processLabel,
MountLabel: mountLabel,
DefaultMountsFilePath: options.DefaultMountsFilePath,
CommonBuildOpts: options.CommonBuildOpts,
}
if options.Mount {
_, err = builder.Mount("")
_, err = builder.Mount(mountLabel)
if err != nil {
return nil, errors.Wrapf(err, "error mounting build container")
}

new_test.go (new file, 28 lines)

@@ -0,0 +1,28 @@
package buildah
import (
"testing"
"github.com/containers/storage"
)
func TestGetImageName(t *testing.T) {
tt := []struct {
caseName string
name string
names []string
expected string
}{
{"tagged image", "busybox1", []string{"docker.io/library/busybox:latest", "docker.io/library/busybox1:latest"}, "docker.io/library/busybox1:latest"},
{"image name not in the resolved image names", "image1", []string{"docker.io/library/busybox:latest", "docker.io/library/busybox1:latest"}, "docker.io/library/busybox:latest"},
{"resolved image with empty name list", "image1", []string{}, "image1"},
}
for _, tc := range tt {
img := &storage.Image{Names: tc.names}
res := getImageName(tc.name, img)
if res != tc.expected {
t.Errorf("test case '%s' failed: expected %#v but got %#v", tc.caseName, tc.expected, res)
}
}
}

ostree_tag.sh (new executable file, 4 lines)

@@ -0,0 +1,4 @@
#!/bin/bash
if ! pkg-config ostree-1 2> /dev/null ; then
echo containers_image_ostree_stub
fi

pull.go (97 lines changed)

@@ -1,58 +1,113 @@
package buildah
import (
"github.com/Sirupsen/logrus"
"strings"
cp "github.com/containers/image/copy"
"github.com/containers/image/docker/reference"
"github.com/containers/image/signature"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func pullImage(store storage.Store, options BuilderOptions, sc *types.SystemContext) error {
name := options.FromImage
spec := name
if options.Registry != "" {
spec = options.Registry + spec
func localImageNameForReference(store storage.Store, srcRef types.ImageReference) (string, error) {
if srcRef == nil {
return "", errors.Errorf("reference to image is empty")
}
ref := srcRef.DockerReference()
if ref == nil {
name := srcRef.StringWithinTransport()
_, err := is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
if strings.LastIndex(name, "/") != -1 {
name = name[strings.LastIndex(name, "/")+1:]
_, err = is.Transport.ParseStoreReference(store, name)
if err == nil {
return name, nil
}
}
return "", errors.Errorf("reference to image %q is not a named reference", transports.ImageName(srcRef))
}
srcRef, err := alltransports.ParseImageName(name)
name := ""
if named, ok := ref.(reference.Named); ok {
name = named.Name()
if namedTagged, ok := ref.(reference.NamedTagged); ok {
name = name + ":" + namedTagged.Tag()
}
if canonical, ok := ref.(reference.Canonical); ok {
name = name + "@" + canonical.Digest().String()
}
}
if _, err := is.Transport.ParseStoreReference(store, name); err != nil {
return "", errors.Wrapf(err, "error parsing computed local image name %q", name)
}
return name, nil
}
func pullImage(store storage.Store, imageName string, options BuilderOptions, sc *types.SystemContext) (types.ImageReference, error) {
spec := imageName
srcRef, err := alltransports.ParseImageName(spec)
if err != nil {
if options.Transport == "" {
return nil, errors.Wrapf(err, "error parsing image name %q", spec)
}
spec = options.Transport + spec
srcRef2, err2 := alltransports.ParseImageName(spec)
if err2 != nil {
return errors.Wrapf(err2, "error parsing image name %q", spec)
return nil, errors.Wrapf(err2, "error parsing image name %q", spec)
}
srcRef = srcRef2
}
if ref := srcRef.DockerReference(); ref != nil {
name = srcRef.DockerReference().Name()
if tagged, ok := srcRef.DockerReference().(reference.NamedTagged); ok {
name = name + ":" + tagged.Tag()
}
destName, err := localImageNameForReference(store, srcRef)
if err != nil {
return nil, errors.Wrapf(err, "error computing local image name for %q", transports.ImageName(srcRef))
}
if destName == "" {
return nil, errors.Errorf("error computing local image name for %q", transports.ImageName(srcRef))
}
destRef, err := is.Transport.ParseStoreReference(store, name)
destRef, err := is.Transport.ParseStoreReference(store, destName)
if err != nil {
return errors.Wrapf(err, "error parsing full image name %q", name)
return nil, errors.Wrapf(err, "error parsing image name %q", destName)
}
img, err := srcRef.NewImageSource(sc)
if err != nil {
return nil, errors.Wrapf(err, "error initializing %q as an image source", spec)
}
img.Close()
policy, err := signature.DefaultPolicy(sc)
if err != nil {
return err
return nil, errors.Wrapf(err, "error obtaining default signature policy")
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return err
return nil, errors.Wrapf(err, "error creating new signature policy context")
}
logrus.Debugf("copying %q to %q", spec, name)
defer func() {
if err2 := policyContext.Destroy(); err2 != nil {
logrus.Debugf("error destroying signature policy context: %v", err2)
}
}()
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter))
return err
logrus.Debugf("copying %q to %q", spec, destName)
err = cp.Image(policyContext, destRef, srcRef, getCopyOptions(options.ReportWriter, options.SystemContext, nil, ""))
if err == nil {
return destRef, nil
}
return nil, err
}

run.go (302 lines changed)

@@ -1,18 +1,25 @@
package buildah
import (
"bufio"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/Sirupsen/logrus"
"github.com/containers/storage/pkg/ioutils"
"github.com/docker/docker/profiles/seccomp"
units "github.com/docker/go-units"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/runtime-tools/generate"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/crypto/ssh/terminal"
)
const (
@@ -62,9 +69,85 @@ type RunOptions struct {
// decision can be overridden by specifying either WithTerminal or
// WithoutTerminal.
Terminal int
// Quiet tells the run to turn off output to stdout.
Quiet bool
}
func setupMounts(spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volumes []string) error {
func addRlimits(ulimit []string, g *generate.Generator) error {
var (
ul *units.Ulimit
err error
)
for _, u := range ulimit {
if ul, err = units.ParseUlimit(u); err != nil {
return errors.Wrapf(err, "ulimit option %q requires name=SOFT:HARD, failed to be parsed", u)
}
g.AddProcessRlimits("RLIMIT_"+strings.ToUpper(ul.Name), uint64(ul.Hard), uint64(ul.Soft))
}
return nil
}
func addHostsToFile(hosts []string) error {
if len(hosts) == 0 {
return nil
}
file, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, os.ModeAppend)
if err != nil {
return err
}
defer file.Close()
w := bufio.NewWriter(file)
for _, host := range hosts {
fmt.Fprintln(w, host)
}
return w.Flush()
}
func addCommonOptsToSpec(commonOpts *CommonBuildOptions, g *generate.Generator) error {
// RESOURCES - CPU
if commonOpts.CPUPeriod != 0 {
g.SetLinuxResourcesCPUPeriod(commonOpts.CPUPeriod)
}
if commonOpts.CPUQuota != 0 {
g.SetLinuxResourcesCPUQuota(commonOpts.CPUQuota)
}
if commonOpts.CPUShares != 0 {
g.SetLinuxResourcesCPUShares(commonOpts.CPUShares)
}
if commonOpts.CPUSetCPUs != "" {
g.SetLinuxResourcesCPUCpus(commonOpts.CPUSetCPUs)
}
if commonOpts.CPUSetMems != "" {
g.SetLinuxResourcesCPUMems(commonOpts.CPUSetMems)
}
// RESOURCES - MEMORY
if commonOpts.Memory != 0 {
g.SetLinuxResourcesMemoryLimit(commonOpts.Memory)
}
if commonOpts.MemorySwap != 0 {
g.SetLinuxResourcesMemorySwap(commonOpts.MemorySwap)
}
if commonOpts.CgroupParent != "" {
g.SetLinuxCgroupsPath(commonOpts.CgroupParent)
}
if err := addRlimits(commonOpts.Ulimit, g); err != nil {
return err
}
if err := addHostsToFile(commonOpts.AddHost); err != nil {
return err
}
logrus.Debugln("Resources:", commonOpts)
return nil
}
func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts []specs.Mount, bindFiles, builtinVolumes, volumeMounts []string, shmSize string) error {
// The passed-in mounts matter the most to us.
mounts := make([]specs.Mount, len(optionMounts))
copy(mounts, optionMounts)
@@ -79,6 +162,9 @@ func setupMounts(spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volume
}
// Add mounts from the generated list, unless they conflict.
for _, specMount := range spec.Mounts {
if specMount.Destination == "/dev/shm" {
specMount.Options = []string{"nosuid", "noexec", "nodev", "mode=1777", "size=" + shmSize}
}
if haveMount(specMount.Destination) {
// Already have something to mount there, so skip this one.
continue
@@ -98,17 +184,107 @@ func setupMounts(spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volume
Options: []string{"rbind", "ro"},
})
}
// Add tmpfs filesystems at volume locations, unless we already have something there.
for _, volume := range volumes {
if haveMount(volume) {
// Already mounting something there, no need for a tmpfs.
cdir, err := b.store.ContainerDirectory(b.ContainerID)
if err != nil {
return errors.Wrapf(err, "error determining work directory for container %q", b.ContainerID)
}
// Add secrets mounts
mountsFiles := []string{OverrideMountsFile, b.DefaultMountsFilePath}
for _, file := range mountsFiles {
secretMounts, err := secretMounts(file, b.MountLabel, cdir)
if err != nil {
logrus.Warnf("error mounting secrets from %q, skipping: %v", file, err)
continue
}
// Mount a tmpfs there.
for _, mount := range secretMounts {
if haveMount(mount.Destination) {
continue
}
mounts = append(mounts, mount)
}
}
// Add temporary copies of the contents of volume locations at the
// volume locations, unless we already have something there.
for _, volume := range builtinVolumes {
if haveMount(volume) {
// Already mounting something there, no need to bother.
continue
}
subdir := digest.Canonical.FromString(volume).Hex()
volumePath := filepath.Join(cdir, "buildah-volumes", subdir)
// If we need to, initialize the volume path's initial contents.
if _, err = os.Stat(volumePath); os.IsNotExist(err) {
if err = os.MkdirAll(volumePath, 0755); err != nil {
return errors.Wrapf(err, "error creating directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
if err = label.Relabel(volumePath, b.MountLabel, false); err != nil {
return errors.Wrapf(err, "error relabeling directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
srcPath := filepath.Join(mountPoint, volume)
if err = copyWithTar(srcPath, volumePath); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "error populating directory %q for volume %q in container %q using contents of %q", volumePath, volume, b.ContainerID, srcPath)
}
}
// Add the bind mount.
mounts = append(mounts, specs.Mount{
Source: "tmpfs",
Source: volumePath,
Destination: volume,
Type: "tmpfs",
Type: "bind",
Options: []string{"bind"},
})
}
// Bind mount volumes given by the user at execution
var options []string
for _, i := range volumeMounts {
spliti := strings.Split(i, ":")
if len(spliti) > 2 {
options = strings.Split(spliti[2], ",")
}
if haveMount(spliti[1]) {
continue
}
options = append(options, "rbind")
var foundrw, foundro, foundz, foundZ bool
var rootProp string
for _, opt := range options {
switch opt {
case "rw":
foundrw = true
case "ro":
foundro = true
case "z":
foundz = true
case "Z":
foundZ = true
case "private", "rprivate", "slave", "rslave", "shared", "rshared":
rootProp = opt
}
}
if !foundrw && !foundro {
options = append(options, "rw")
}
if foundz {
if err := label.Relabel(spliti[0], spec.Linux.MountLabel, true); err != nil {
return errors.Wrapf(err, "relabel failed %q", spliti[0])
}
}
if foundZ {
if err := label.Relabel(spliti[0], spec.Linux.MountLabel, false); err != nil {
return errors.Wrapf(err, "relabel failed %q", spliti[0])
}
}
if rootProp == "" {
options = append(options, "private")
}
mounts = append(mounts, specs.Mount{
Destination: spliti[1],
Type: "bind",
Source: spliti[0],
Options: options,
})
}
// Set the list in the spec.
@@ -131,28 +307,33 @@ func (b *Builder) Run(command []string, options RunOptions) error {
}()
g := generate.New()
if b.OS() != "" {
g.SetPlatformOS(b.OS())
}
if b.Architecture() != "" {
g.SetPlatformArch(b.Architecture())
}
for _, envSpec := range append(b.Env(), options.Env...) {
env := strings.SplitN(envSpec, "=", 2)
if len(env) > 1 {
g.AddProcessEnv(env[0], env[1])
}
}
if b.CommonBuildOpts == nil {
return errors.Errorf("Invalid format on container you must recreate the container")
}
if err := addCommonOptsToSpec(b.CommonBuildOpts, &g); err != nil {
return err
}
if len(command) > 0 {
g.SetProcessArgs(command)
} else if len(options.Cmd) != 0 {
g.SetProcessArgs(options.Cmd)
} else if len(b.Cmd()) != 0 {
g.SetProcessArgs(b.Cmd())
} else if len(options.Entrypoint) != 0 {
g.SetProcessArgs(options.Entrypoint)
} else if len(b.Entrypoint()) != 0 {
g.SetProcessArgs(b.Entrypoint())
} else {
cmd := b.Cmd()
if len(options.Cmd) > 0 {
cmd = options.Cmd
}
ep := b.Entrypoint()
if len(options.Entrypoint) > 0 {
ep = options.Entrypoint
}
g.SetProcessArgs(append(ep, cmd...))
}
if options.WorkingDir != "" {
g.SetProcessCwd(options.WorkingDir)
@@ -164,7 +345,9 @@ func (b *Builder) Run(command []string, options RunOptions) error {
} else if b.Hostname() != "" {
g.SetHostname(b.Hostname())
}
mountPoint, err := b.Mount("")
g.SetProcessSelinuxLabel(b.ProcessLabel)
g.SetLinuxMountLabel(b.MountLabel)
mountPoint, err := b.Mount(b.MountLabel)
if err != nil {
return err
}
@@ -173,10 +356,32 @@ func (b *Builder) Run(command []string, options RunOptions) error {
logrus.Errorf("error unmounting container: %v", err2)
}
}()
for _, mp := range []string{
"/proc/kcore",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware",
} {
g.AddLinuxMaskedPaths(mp)
}
for _, rp := range []string{
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger",
} {
g.AddLinuxReadonlyPaths(rp)
}
g.SetRootPath(mountPoint)
switch options.Terminal {
case DefaultTerminal:
g.SetProcessTerminal(logrus.IsTerminal(os.Stdout))
g.SetProcessTerminal(terminal.IsTerminal(int(os.Stdout.Fd())))
case WithTerminal:
g.SetProcessTerminal(true)
case WithoutTerminal:
@@ -187,11 +392,7 @@ func (b *Builder) Run(command []string, options RunOptions) error {
return errors.Wrapf(err, "error removing network namespace for run")
}
}
if options.User != "" {
user, err = getUser(mountPoint, options.User)
} else {
user, err = getUser(mountPoint, b.User())
}
user, err = b.user(mountPoint, options.User)
if err != nil {
return err
}
@@ -201,12 +402,44 @@ func (b *Builder) Run(command []string, options RunOptions) error {
if spec.Process.Cwd == "" {
spec.Process.Cwd = DefaultWorkingDir
}
if err = os.MkdirAll(filepath.Join(mountPoint, b.WorkDir()), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", b.WorkDir())
if err = os.MkdirAll(filepath.Join(mountPoint, spec.Process.Cwd), 0755); err != nil {
return errors.Wrapf(err, "error ensuring working directory %q exists", spec.Process.Cwd)
}
//Security Opts
g.SetProcessApparmorProfile(b.CommonBuildOpts.ApparmorProfile)
// HANDLE SECCOMP
if b.CommonBuildOpts.SeccompProfilePath != "unconfined" {
if b.CommonBuildOpts.SeccompProfilePath != "" {
seccompProfile, err := ioutil.ReadFile(b.CommonBuildOpts.SeccompProfilePath)
if err != nil {
return errors.Wrapf(err, "opening seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
seccompConfig, err := seccomp.LoadProfile(string(seccompProfile), spec)
if err != nil {
return errors.Wrapf(err, "loading seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
spec.Linux.Seccomp = seccompConfig
} else {
seccompConfig, err := seccomp.GetDefaultProfile(spec)
if err != nil {
return errors.Wrapf(err, "loading seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
spec.Linux.Seccomp = seccompConfig
}
}
cgroupMnt := specs.Mount{
Destination: "/sys/fs/cgroup",
Type: "cgroup",
Source: "cgroup",
Options: []string{"nosuid", "noexec", "nodev", "relatime", "ro"},
}
g.AddMount(cgroupMnt)
bindFiles := []string{"/etc/hosts", "/etc/resolv.conf"}
err = setupMounts(spec, options.Mounts, bindFiles, b.Volumes())
err = b.setupMounts(mountPoint, spec, options.Mounts, bindFiles, b.Volumes(), b.CommonBuildOpts.Volumes, b.CommonBuildOpts.ShmSize)
if err != nil {
return errors.Wrapf(err, "error resolving mountpoints for container")
}
@@ -228,6 +461,9 @@ func (b *Builder) Run(command []string, options RunOptions) error {
cmd.Dir = mountPoint
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
if options.Quiet {
cmd.Stdout = nil
}
cmd.Stderr = os.Stderr
err = cmd.Run()
if err != nil {

secrets.go (new file, 115 lines)

@@ -0,0 +1,115 @@
package buildah
import (
"bufio"
"fmt"
"os"
"path/filepath"
"strings"
rspec "github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var (
// DefaultMountsFile holds the default mount paths in the form
// "host_path:container_path"
DefaultMountsFile = "/usr/share/containers/mounts.conf"
// OverrideMountsFile holds the default mount paths in the form
// "host_path:container_path" overridden by the user
OverrideMountsFile = "/etc/containers/mounts.conf"
)
func getMounts(filePath string) []string {
file, err := os.Open(filePath)
if err != nil {
logrus.Warnf("file %q not found, skipping...", filePath)
return nil
}
defer file.Close()
scanner := bufio.NewScanner(file)
if err = scanner.Err(); err != nil {
logrus.Warnf("error reading file %q, skipping...", filePath)
return nil
}
var mounts []string
for scanner.Scan() {
mounts = append(mounts, scanner.Text())
}
return mounts
}
// getMountsMap splits a "host_path:container_path" entry into its host and container parts
func getMountsMap(path string) (string, string, error) {
arr := strings.SplitN(path, ":", 2)
if len(arr) == 2 {
return arr[0], arr[1], nil
}
return "", "", errors.Errorf("unable to get host and container dir")
}
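The `host:container` split relies on `strings.SplitN` with a limit of 2, so a container path containing further colons is kept intact. A standard-library-only sketch of `getMountsMap` (using `fmt.Errorf` in place of the vendored `errors` package):

```go
package main

import (
	"fmt"
	"strings"
)

// getMountsMap splits a "host_path:container_path" mounts.conf entry.
// SplitN with n=2 means only the first ":" separates the two parts.
func getMountsMap(path string) (string, string, error) {
	arr := strings.SplitN(path, ":", 2)
	if len(arr) == 2 {
		return arr[0], arr[1], nil
	}
	return "", "", fmt.Errorf("unable to get host and container dir from %q", path)
}

func main() {
	// A typical mounts.conf-style entry (illustrative paths).
	host, ctr, err := getMountsMap("/usr/share/rhel/secrets:/run/secrets")
	if err != nil {
		panic(err)
	}
	fmt.Println(host)
	fmt.Println(ctr)
	if _, _, err := getMountsMap("no-separator"); err != nil {
		fmt.Println("error on malformed entry")
	}
}
```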
// secretMounts copies the contents of host directories into container
// directories and returns a list of bind mounts for them
func secretMounts(filePath, mountLabel, containerWorkingDir string) ([]rspec.Mount, error) {
var mounts []rspec.Mount
defaultMountsPaths := getMounts(filePath)
for _, path := range defaultMountsPaths {
hostDir, ctrDir, err := getMountsMap(path)
if err != nil {
return nil, err
}
// skip if the hostDir path doesn't exist
if _, err = os.Stat(hostDir); os.IsNotExist(err) {
logrus.Warnf("%q doesn't exist, skipping", hostDir)
continue
}
ctrDirOnHost := filepath.Join(containerWorkingDir, ctrDir)
if err = os.RemoveAll(ctrDirOnHost); err != nil {
return nil, fmt.Errorf("remove container directory failed: %v", err)
}
if err = os.MkdirAll(ctrDirOnHost, 0755); err != nil {
return nil, fmt.Errorf("making container directory failed: %v", err)
}
hostDir, err = resolveSymbolicLink(hostDir)
if err != nil {
return nil, err
}
if err = copyWithTar(hostDir, ctrDirOnHost); err != nil && !os.IsNotExist(err) {
return nil, errors.Wrapf(err, "error getting host secret data")
}
err = label.Relabel(ctrDirOnHost, mountLabel, false)
if err != nil {
return nil, errors.Wrap(err, "error applying correct labels")
}
m := rspec.Mount{
Source: ctrDirOnHost,
Destination: ctrDir,
Type: "bind",
Options: []string{"bind"},
}
mounts = append(mounts, m)
}
return mounts, nil
}
// resolveSymbolicLink resolves a possible symlink path. If the path is a
// symlink, it returns the resolved path; if not, it returns the original path.
func resolveSymbolicLink(path string) (string, error) {
info, err := os.Lstat(path)
if err != nil {
return "", err
}
if info.Mode()&os.ModeSymlink != os.ModeSymlink {
return path, nil
}
return filepath.EvalSymlinks(path)
}

selinux_tag.sh (new executable file, 4 lines)

@@ -0,0 +1,4 @@
#!/bin/bash
if pkg-config libselinux 2> /dev/null ; then
echo selinux
fi

tests/authenticate.bats (new file, 48 lines)

@@ -0,0 +1,48 @@
#!/usr/bin/env bats
load helpers
@test "from-authenticate-cert-and-creds" {
buildah from --pull --name "alpine" --signature-policy ${TESTSDIR}/policy.json alpine
run buildah push --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds testuser:testpassword alpine localhost:5000/my-alpine
echo "$output"
[ "$status" -eq 0 ]
# This should fail
run buildah push --signature-policy ${TESTSDIR}/policy.json --tls-verify=true localhost:5000/my-alpine
[ "$status" -ne 0 ]
# This should fail
run buildah from --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds baduser:badpassword localhost:5000/my-alpine
[ "$status" -ne 0 ]
# This should work
run buildah from --name "my-alpine" --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds testuser:testpassword localhost:5000/my-alpine
[ "$status" -eq 0 ]
# Create Dockerfile for bud tests
FILE=./Dockerfile
/bin/cat <<EOM >$FILE
FROM localhost:5000/my-alpine
EOM
chmod +x $FILE
# Remove containers and images before bud tests
buildah rm --all
buildah rmi -f --all
# bud test bad password should fail
run buildah bud -f ./Dockerfile --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds=testuser:badpassword
[ "$status" -ne 0 ]
# bud test this should work
run buildah bud -f ./Dockerfile --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds=testuser:testpassword
echo $status
[ "$status" -eq 0 ]
# Clean up
rm -f ./Dockerfile
buildah rm -a
buildah rmi -f --all
}


@@ -7,7 +7,7 @@ load helpers
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah rm $cid
cid=$(buildah from alpine --pull --signature-policy ${TESTSDIR}/policy.json --name i-love-naming-things)
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json --name i-love-naming-things alpine)
buildah rm i-love-naming-things
}
@@ -95,6 +95,12 @@ load helpers
cmp ${TESTDIR}/other-randomfile $yetanothernewroot/other-randomfile
buildah delete $yetanothernewcid
newcid=$(buildah from new-image)
buildah commit --rm --signature-policy ${TESTSDIR}/policy.json $newcid containers-storage:remove-container-image
run buildah mount $newcid
[ "$status" -ne 0 ]
buildah rmi remove-container-image
buildah rmi containers-storage:other-new-image
buildah rmi another-new-image
run buildah --debug=false images -q
@@ -104,5 +110,6 @@ load helpers
buildah rmi $id
done
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" == "" ]
}


@@ -9,6 +9,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -24,6 +25,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
target=alpine-image
@@ -37,6 +39,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -52,6 +55,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
target=alpine-image
@@ -65,6 +69,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -81,11 +86,14 @@ load helpers
run test -s $root/vol/subvol/subvolfile
[ "$status" -ne 0 ]
test -s $root/vol/volfile
test -s $root/vol/Dockerfile
test -s $root/vol/Dockerfile2
run test -s $root/vol/anothervolfile
[ "$status" -ne 0 ]
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -98,6 +106,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -110,6 +119,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -122,18 +132,20 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-http-context-dir-with-Dockerfile-post" {
starthttpd ${TESTSDIR}/bud/http-context-subdir
target=scratch-image
buildah bud http://0.0.0.0:${HTTP_SERVER_PORT}/context.tar --signature-policy ${TESTSDIR}/policy.json -t ${target} -f context/Dockerfile
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -f context/Dockerfile http://0.0.0.0:${HTTP_SERVER_PORT}/context.tar
stophttpd
cid=$(buildah from ${target})
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -153,6 +165,7 @@ load helpers
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -166,6 +179,7 @@ load helpers
buildah --debug=false images -q
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@@ -175,13 +189,76 @@ load helpers
target3=so-many-scratch-images
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -t ${target2} -t ${target3} ${TESTSDIR}/bud/from-scratch
run buildah --debug=false images
[ "$status" -eq 0 ]
cid=$(buildah from ${target})
buildah rm ${cid}
cid=$(buildah from library/${target2})
buildah rm ${cid}
cid=$(buildah from ${target3}:latest)
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
buildah rmi -f $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-volume-perms" {
# This Dockerfile needs us to be able to handle a working RUN instruction.
if ! which runc ; then
skip
fi
target=volume-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/volume-perms
cid=$(buildah from ${target})
root=$(buildah mount ${cid})
run test -s $root/vol/subvol/subvolfile
[ "$status" -ne 0 ]
run stat -c %f $root/vol/subvol
[ "$status" -eq 0 ]
[ "$output" = 41ed ]
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
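The `stat -c %f` check above expects `41ed`: the raw mode in hex is the file-type bits ORed with the permission bits, so a directory (S_IFDIR, 0x4000) with default 0755 permissions (0x1ed) reports `41ed`. A minimal standalone illustration (not part of the test suite; assumes GNU `stat`):

```shell
# Create a directory with 0755 permissions and confirm its raw mode,
# as printed by GNU stat's %f format, is 41ed (0x4000 | 0x1ed).
dir=$(mktemp -d)
chmod 0755 "$dir"
stat -c %f "$dir"   # 41ed
rmdir "$dir"
```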
@test "bud-from-glob" {
target=alpine-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -f Dockerfile2.glob ${TESTSDIR}/bud/from-multiple-files
cid=$(buildah from ${target})
root=$(buildah mount ${cid})
cmp $root/Dockerfile1.alpine ${TESTSDIR}/bud/from-multiple-files/Dockerfile1.alpine
cmp $root/Dockerfile2.withfrom ${TESTSDIR}/bud/from-multiple-files/Dockerfile2.withfrom
buildah rm ${cid}
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-maintainer" {
target=alpine-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/maintainer
run buildah --debug=false inspect --type=image --format '{{.Docker.Author}}' ${target}
[ "$status" -eq 0 ]
[ "$output" = kilroy ]
run buildah --debug=false inspect --type=image --format '{{.OCIv1.Author}}' ${target}
[ "$status" -eq 0 ]
[ "$output" = kilroy ]
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-unrecognized-instruction" {
target=alpine-image
run buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/unrecognized
[ "$status" -ne 0 ]
[[ "$output" =~ BOGUS ]]
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}


@@ -0,0 +1,2 @@
FROM alpine
COPY Dockerfile* /


@@ -0,0 +1,2 @@
FROM alpine
MAINTAINER kilroy


@@ -14,3 +14,9 @@ VOLUME /vol/subvol
RUN dd if=/dev/zero bs=512 count=1 of=/vol/anothervolfile
# Which means that in the image we're about to commit, /vol/anothervolfile
# shouldn't exist, either.
# ADD files which should persist.
ADD Dockerfile /vol/Dockerfile
RUN stat /vol/Dockerfile
ADD Dockerfile /vol/Dockerfile2
RUN stat /vol/Dockerfile2


@@ -0,0 +1,2 @@
FROM alpine
BOGUS nope-nope-nope


@@ -0,0 +1,6 @@
FROM alpine
VOLUME /vol/subvol
# At this point, the directory should exist, with default permissions 0755, the
# contents below /vol/subvol should be frozen, and we shouldn't get an error
# from trying to write to it, because it was created automatically.
RUN dd if=/dev/zero bs=512 count=1 of=/vol/subvol/subvolfile

124
tests/byid.bats Normal file

@@ -0,0 +1,124 @@
#!/usr/bin/env bats
load helpers
@test "from-by-id" {
image=busybox
# Pull down the image, if we have to.
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
[ $? -eq 0 ]
[ $(wc -l <<< "$cid") -eq 1 ]
buildah rm $cid
# Get the image's ID.
run buildah --debug=false images -q $image
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
iid="$output"
# Use the image's ID to create a container.
run buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json ${iid}
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
cid="$output"
buildah rm $cid
# Use a truncated form of the image's ID to create a container.
run buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json ${iid:0:6}
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
cid="$output"
buildah rm $cid
buildah rmi $iid
}
@test "inspect-by-id" {
image=busybox
# Pull down the image, if we have to.
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
[ $? -eq 0 ]
[ $(wc -l <<< "$cid") -eq 1 ]
buildah rm $cid
# Get the image's ID.
run buildah --debug=false images -q $image
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
iid="$output"
# Use the image's ID to inspect it.
run buildah --debug=false inspect --type=image ${iid}
echo "$output"
[ $status -eq 0 ]
# Use a truncated copy of the image's ID to inspect it.
run buildah --debug=false inspect --type=image ${iid:0:6}
echo "$output"
[ $status -eq 0 ]
buildah rmi $iid
}
@test "push-by-id" {
for image in busybox kubernetes/pause ; do
echo pulling/pushing image $image
TARGET=${TESTDIR}/subdir-$(basename $image)
mkdir -p $TARGET $TARGET-truncated
# Pull down the image, if we have to.
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
[ $? -eq 0 ]
[ $(wc -l <<< "$cid") -eq 1 ]
buildah rm $cid
# Get the image's ID.
run buildah --debug=false images -q $image
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
iid="$output"
# Use the image's ID to push it.
run buildah push --signature-policy ${TESTSDIR}/policy.json $iid dir:$TARGET
echo "$output"
[ $status -eq 0 ]
# Use a truncated form of the image's ID to push it.
run buildah push --signature-policy ${TESTSDIR}/policy.json ${iid:0:6} dir:$TARGET-truncated
echo "$output"
[ $status -eq 0 ]
# Use the image's complete ID to remove it.
buildah rmi $iid
done
}
@test "rmi-by-id" {
image=busybox
# Pull down the image, if we have to.
cid=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json $image)
[ $? -eq 0 ]
[ $(wc -l <<< "$cid") -eq 1 ]
buildah rm $cid
# Get the image's ID.
run buildah --debug=false images -q $image
echo "$output"
[ $status -eq 0 ]
[ $(wc -l <<< "$output") -eq 1 ]
iid="$output"
# Use a truncated copy of the image's ID to remove it.
run buildah --debug=false rmi ${iid:0:6}
echo "$output"
[ $status -eq 0 ]
}
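The truncated-ID lookups in these tests rely on bash substring expansion, `${iid:0:6}`, to cut a full 64-character image ID down to a six-character prefix. A minimal sketch with a hypothetical example ID (not a real digest):

```shell
# bash substring expansion: ${var:offset:length}
# takes the first six characters of a (made-up) 64-char image ID.
iid=8c2e06607696bd4afb3d03b687e361cc43cf8ec1a4a725bc96e39f05ba97dd55
short=${iid:0:6}
echo "$short"   # 8c2e06
</shell
```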


@@ -21,6 +21,13 @@ load helpers
buildah copy $cid ${TESTDIR}/randomfile
buildah copy $cid ${TESTDIR}/other-randomfile ${TESTDIR}/third-randomfile ${TESTDIR}/randomfile /etc
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config --workingdir / $cid
buildah copy $cid "${TESTDIR}/*randomfile" /etc
(cd ${TESTDIR}; for i in *randomfile; do cmp $i ${root}/etc/$i; done)
buildah rm $cid
}
@test "copy-local-plain" {
@@ -104,3 +111,29 @@ load helpers
[ "$status" -ne 0 ]
buildah rm $cid
}
@test "copy --chown" {
mkdir -p ${TESTDIR}/subdir
mkdir -p ${TESTDIR}/other-subdir
createrandom ${TESTDIR}/subdir/randomfile
createrandom ${TESTDIR}/subdir/other-randomfile
createrandom ${TESTDIR}/randomfile
createrandom ${TESTDIR}/other-subdir/randomfile
createrandom ${TESTDIR}/other-subdir/other-randomfile
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config --workingdir / $cid
buildah copy --chown 1:1 $cid ${TESTDIR}/randomfile
buildah copy --chown root:1 $cid ${TESTDIR}/randomfile /randomfile2
buildah copy --chown nobody $cid ${TESTDIR}/randomfile /randomfile3
buildah copy --chown nobody:root $cid ${TESTDIR}/subdir /subdir
test $(stat -c "%u:%g" $root/randomfile) = "1:1"
test $(stat -c "%U:%g" $root/randomfile2) = "root:1"
test $(stat -c "%U" $root/randomfile3) = "nobody"
(cd $root/subdir/; for i in *; do test $(stat -c "%U:%G" $i) = "nobody:root"; done)
buildah copy --chown root:root $cid ${TESTDIR}/other-subdir /subdir
(cd $root/subdir/; for i in *randomfile; do test $(stat -c "%U:%G" $i) = "root:root"; done)
test $(stat -c "%U:%G" $root/subdir) = "nobody:root"
buildah rm $cid
}

44
tests/digest.bats Normal file

@@ -0,0 +1,44 @@
#!/usr/bin/env bats
load helpers
fromreftest() {
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json $1)
pushdir=${TESTDIR}/fromreftest
mkdir -p ${pushdir}/{1,2,3}
buildah push --signature-policy ${TESTSDIR}/policy.json $1 dir:${pushdir}/1
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid new-image
buildah push --signature-policy ${TESTSDIR}/policy.json new-image dir:${pushdir}/2
buildah rmi new-image
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid dir:${pushdir}/3
buildah rm $cid
rm -fr ${pushdir}
}
@test "from-by-digest-s1" {
fromreftest kubernetes/pause@sha256:f8cd50c5a287dd8c5f226cf69c60c737d34ed43726c14b8a746d9de2d23eda2b
}
@test "from-by-digest-s1-a-discarded-layer" {
fromreftest docker/whalesay@sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
}
@test "from-by-tag-s1" {
fromreftest kubernetes/pause:go
}
@test "from-by-repo-only-s1" {
fromreftest kubernetes/pause
}
@test "from-by-digest-s2" {
fromreftest alpine@sha256:e9cec9aec697d8b9d450edd32860ecd363f2f3174c8338beb5f809422d182c63
}
@test "from-by-tag-s2" {
fromreftest alpine:2.6
}
@test "from-by-repo-only-s2" {
fromreftest alpine
}


@@ -0,0 +1,333 @@
package integration
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"encoding/json"
"github.com/containers/image/copy"
"github.com/containers/image/signature"
"github.com/containers/image/storage"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
sstorage "github.com/containers/storage"
"github.com/containers/storage/pkg/reexec"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/onsi/gomega/gexec"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
)
var (
INTEGRATION_ROOT string
STORAGE_OPTIONS = "--storage-driver vfs"
ARTIFACT_DIR = "/tmp/.artifacts"
CACHE_IMAGES = []string{"alpine", "busybox", FEDORA_MINIMAL}
RESTORE_IMAGES = []string{"alpine", "busybox"}
ALPINE = "docker.io/library/alpine:latest"
BB_GLIBC = "docker.io/library/busybox:glibc"
FEDORA_MINIMAL = "registry.fedoraproject.org/fedora-minimal:latest"
defaultWaitTimeout = 90
)
// BuildAhSession wraps gexec.Session so we can extend it
type BuildAhSession struct {
*gexec.Session
}
// BuildAhTest struct for command line options
type BuildAhTest struct {
BuildAhBinary string
RunRoot string
StorageOptions string
ArtifactPath string
TempDir string
SignaturePath string
Root string
RegistriesConf string
}
// TestBuildAh is the ginkgo entry point for the suite
func TestBuildAh(t *testing.T) {
if reexec.Init() {
os.Exit(1)
}
RegisterFailHandler(Fail)
RunSpecs(t, "Buildah Suite")
}
var _ = BeforeSuite(func() {
//Cache images
cwd, _ := os.Getwd()
INTEGRATION_ROOT = filepath.Join(cwd, "../../")
buildah := BuildahCreate("/tmp")
buildah.ArtifactPath = ARTIFACT_DIR
if _, err := os.Stat(ARTIFACT_DIR); os.IsNotExist(err) {
if err = os.Mkdir(ARTIFACT_DIR, 0777); err != nil {
fmt.Printf("%q\n", err)
os.Exit(1)
}
}
for _, image := range CACHE_IMAGES {
fmt.Printf("Caching %s...\n", image)
if err := buildah.CreateArtifact(image); err != nil {
fmt.Printf("%q\n", err)
os.Exit(1)
}
}
})
// CreateTempDirInTempDir creates a temporary directory inside the system temp dir
func CreateTempDirInTempDir() (string, error) {
return ioutil.TempDir("", "buildah_test")
}
// BuildahCreate creates a BuildAhTest instance for the tests
func BuildahCreate(tempDir string) BuildAhTest {
cwd, _ := os.Getwd()
buildAhBinary := filepath.Join(cwd, "../../buildah")
if os.Getenv("BUILDAH_BINARY") != "" {
buildAhBinary = os.Getenv("BUILDAH_BINARY")
}
storageOptions := STORAGE_OPTIONS
if os.Getenv("STORAGE_OPTIONS") != "" {
storageOptions = os.Getenv("STORAGE_OPTIONS")
}
return BuildAhTest{
BuildAhBinary: buildAhBinary,
RunRoot: filepath.Join(tempDir, "runroot"),
Root: filepath.Join(tempDir, "root"),
StorageOptions: storageOptions,
ArtifactPath: ARTIFACT_DIR,
TempDir: tempDir,
SignaturePath: "../../tests/policy.json",
RegistriesConf: "../../registries.conf",
}
}
//MakeOptions assembles all the buildah main options
func (p *BuildAhTest) MakeOptions() []string {
return strings.Split(fmt.Sprintf("--root %s --runroot %s --registries-conf %s",
p.Root, p.RunRoot, p.RegistriesConf), " ")
}
// BuildAh is the exec call to buildah on the filesystem
func (p *BuildAhTest) BuildAh(args []string) *BuildAhSession {
buildAhOptions := p.MakeOptions()
buildAhOptions = append(buildAhOptions, strings.Split(p.StorageOptions, " ")...)
buildAhOptions = append(buildAhOptions, args...)
fmt.Printf("Running: %s %s\n", p.BuildAhBinary, strings.Join(buildAhOptions, " "))
command := exec.Command(p.BuildAhBinary, buildAhOptions...)
session, err := gexec.Start(command, GinkgoWriter, GinkgoWriter)
if err != nil {
Fail(fmt.Sprintf("unable to run buildah command: %s", strings.Join(buildAhOptions, " ")))
}
return &BuildAhSession{session}
}
// Cleanup cleans up the temporary store
func (p *BuildAhTest) Cleanup() {
// Nuke tempdir
if err := os.RemoveAll(p.TempDir); err != nil {
fmt.Printf("%q\n", err)
}
}
// GrepString takes session output and behaves like grep. It returns
// whether a match was found, and the matching lines on a positive match
func (s *BuildAhSession) GrepString(term string) (bool, []string) {
var (
greps []string
matches bool
)
for _, line := range strings.Split(s.OutputToString(), "\n") {
if strings.Contains(line, term) {
matches = true
greps = append(greps, line)
}
}
return matches, greps
}
// OutputToString formats session output to string
func (s *BuildAhSession) OutputToString() string {
fields := strings.Fields(fmt.Sprintf("%s", s.Out.Contents()))
return strings.Join(fields, " ")
}
// OutputToStringArray returns the output as a []string
// where each array item is a line split by newline
func (s *BuildAhSession) OutputToStringArray() []string {
output := fmt.Sprintf("%s", s.Out.Contents())
return strings.Split(output, "\n")
}
// IsJSONOutputValid attempts to unmarshal the session buffer
// and returns true if successful, else false
func (s *BuildAhSession) IsJSONOutputValid() bool {
var i interface{}
if err := json.Unmarshal(s.Out.Contents(), &i); err != nil {
fmt.Println(err)
return false
}
return true
}
func (s *BuildAhSession) WaitWithDefaultTimeout() {
s.Wait(defaultWaitTimeout)
}
// SystemExec is used to exec a system command to check its exit code or output
func (p *BuildAhTest) SystemExec(command string, args []string) *BuildAhSession {
c := exec.Command(command, args...)
session, err := gexec.Start(c, GinkgoWriter, GinkgoWriter)
if err != nil {
Fail(fmt.Sprintf("unable to run command: %s %s", command, strings.Join(args, " ")))
}
return &BuildAhSession{session}
}
// CreateArtifact creates a cached image in the artifact dir
func (p *BuildAhTest) CreateArtifact(image string) error {
imageName := fmt.Sprintf("docker://%s", image)
systemContext := types.SystemContext{
SignaturePolicyPath: p.SignaturePath,
}
policy, err := signature.DefaultPolicy(&systemContext)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
defer func() {
_ = policyContext.Destroy()
}()
options := &copy.Options{}
importRef, err := alltransports.ParseImageName(imageName)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", image, err)
}
imageDir := strings.Replace(image, "/", "_", -1)
exportTo := filepath.Join("dir:", p.ArtifactPath, imageDir)
exportRef, err := alltransports.ParseImageName(exportTo)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", exportTo, err)
}
return copy.Image(policyContext, exportRef, importRef, options)
}
// RestoreArtifact puts the cached image into our test store
func (p *BuildAhTest) RestoreArtifact(image string) error {
storeOptions := sstorage.DefaultStoreOptions
storeOptions.GraphDriverName = "vfs"
//storeOptions.GraphDriverOptions = storageOptions
storeOptions.GraphRoot = p.Root
storeOptions.RunRoot = p.RunRoot
store, err := sstorage.GetStore(storeOptions)
options := &copy.Options{}
if err != nil {
return errors.Errorf("error opening storage: %v", err)
}
defer func() {
_, _ = store.Shutdown(false)
}()
storage.Transport.SetStore(store)
ref, err := storage.Transport.ParseStoreReference(store, image)
if err != nil {
return errors.Errorf("error parsing image name: %v", err)
}
imageDir := strings.Replace(image, "/", "_", -1)
importFrom := fmt.Sprintf("dir:%s", filepath.Join(p.ArtifactPath, imageDir))
importRef, err := alltransports.ParseImageName(importFrom)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", image, err)
}
systemContext := types.SystemContext{
SignaturePolicyPath: p.SignaturePath,
}
policy, err := signature.DefaultPolicy(&systemContext)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
defer func() {
_ = policyContext.Destroy()
}()
err = copy.Image(policyContext, ref, importRef, options)
if err != nil {
return errors.Errorf("error importing %s: %v", importFrom, err)
}
return nil
}
// RestoreAllArtifacts unpacks all cached images
func (p *BuildAhTest) RestoreAllArtifacts() error {
for _, image := range RESTORE_IMAGES {
if err := p.RestoreArtifact(image); err != nil {
return err
}
}
return nil
}
// StringInSlice determines if a string is in a string slice, returns bool
func StringInSlice(s string, sl []string) bool {
for _, i := range sl {
if i == s {
return true
}
}
return false
}
// LineInOuputStartsWith returns true if any line in the
// session output starts with the supplied string
func (s *BuildAhSession) LineInOuputStartsWith(term string) bool {
for _, i := range s.OutputToStringArray() {
if strings.HasPrefix(i, term) {
return true
}
}
return false
}
// LineInOuputContains returns true if any line in the
// session output contains the supplied string
func (s *BuildAhSession) LineInOuputContains(term string) bool {
for _, i := range s.OutputToStringArray() {
if strings.Contains(i, term) {
return true
}
}
return false
}
// InspectImageJSON takes the session output of an image inspect
// and unmarshals it into a buildah.BuilderInfo
func (s *BuildAhSession) InspectImageJSON() buildah.BuilderInfo {
var i buildah.BuilderInfo
err := json.Unmarshal(s.Out.Contents(), &i)
Expect(err).To(BeNil())
return i
}

Some files were not shown because too many files have changed in this diff.