Compare commits

...

85 Commits
v0.10 ... v0.15

Author SHA1 Message Date
Daniel J Walsh
d1330a5c46 Bump to v0.15
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #503
Approved by: rhatdan
2018-02-27 12:54:29 +00:00
Daniel J Walsh
b75bf0a5b3 Currently buildah run is not handling command options correctly
This patch will allow commands like

buildah run $ctr ls -lZ /

To work correctly.

Need to update vendor of urfave cli.

Also changed all commands to no longer accept global options after the COMMAND.
Single boolean options can now be passed together.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #493
Approved by: rhatdan
2018-02-27 12:08:45 +00:00
TomSweeneyRedHat
cb42905a7f Add selinux test from #486
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #501
Approved by: rhatdan
2018-02-27 00:47:42 +00:00
Daniel J Walsh
ee11d75f3e Bump to v0.14
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #499
Approved by: rhatdan
2018-02-27 00:28:50 +00:00
Daniel J Walsh
9bf5a5e52a Breaking change on CommonBuildOpts
We just have to refuse to use previously created containers when doing a run.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #500
Approved by: rhatdan
2018-02-27 00:05:12 +00:00
Daniel J Walsh
873ecd8791 If commonOpts do not exist, we should return rather than segfault
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #498
Approved by: TomSweeneyRedHat
2018-02-26 22:49:44 +00:00
Shyukri Shyukriev
99066e0104 Add openSUSE in install section
Signed-off-by: Shyukri Shyukriev <shshyukriev@suse.com>

Closes: #492
Approved by: rhatdan
2018-02-26 00:20:33 +00:00
TomSweeneyRedHat
68a6c0a4c0 Display full error string instead of just status
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #485
Approved by: rhatdan
2018-02-24 09:12:53 +00:00
Daniel J Walsh
963c1d95c0 Merge pull request #494 from umohnani8/secrets
Fix secrets patch for buildah bud
2018-02-23 13:49:47 -05:00
umohnani8
4bbe6e7cc0 Implement --volume and --shm-size for bud and from
Add the remaining --volume and --shm-size flags to buildah bud and from.
--volume supports the following options: rw, ro, z, Z, private, slave, shared

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #491
Approved by: rhatdan
2018-02-23 17:53:00 +00:00
umohnani8
fb14850b50 Fix secrets patch for buildah bud
buildah bud was failing to get the secrets data.
The issue was that buildah bud was not being given the /usr/share/containers/mounts.conf file path,
so it had no secrets to mount.
Also reworked the way the secrets data was being copied from the host to the container.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-23 12:38:39 -05:00
umohnani8
669ffddd99 Vendor in latest containers/image
Fixes the naming issue of blobs and config for the dir transport
by removing the .tar extension

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #489
Approved by: rhatdan
2018-02-22 18:57:31 +00:00
TomSweeneyRedHat
ac093aecd1 Add libseccomp-devel to packages to install
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #490
Approved by: rhatdan
2018-02-22 18:01:09 +00:00
Daniel J Walsh
ef0ca9cd2d Bump to v0.13
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #488
Approved by: TomSweeneyRedHat
2018-02-22 14:25:16 +00:00
Daniel J Walsh
5ca91c0eb7 Vendor in latest containers/storage
This fixes a large SELinux bug.  Currently if you do the following
commands

ctr=$(buildah from scratch)
mnt=$(buildah mount $ctr)
dnf install --installroot=$mnt httpd
buildah run $ctr touch /test

The last command fails.  The reason is that SELinux labels were being applied
to the mount point, since it was not being mounted as an overlay file system.

Containers/storage was updated to always mount an overlay even if the lower layer is empty.
This causes the mount point to use a context mount, which keeps dnf from applying
labels.  This change then allows buildah run to create confined containers to run code.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #486
Approved by: TomSweeneyRedHat
2018-02-22 13:55:26 +00:00
baude
e623e5c004 Initial ginkgo framework
This is an initial attempt at bringing the ginkgo test framework into
buildah.  The inspect bats file was also imported.

Signed-off-by: baude <bbaude@redhat.com>

Closes: #472
Approved by: rhatdan
2018-02-22 13:06:08 +00:00
Giuseppe Scrivano
9d163a50d1 run: do not open /etc/hosts if not needed
Avoid opening the file in write mode if we are not going to write
anything.

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>

Closes: #487
Approved by: rhatdan
2018-02-22 13:04:38 +00:00
umohnani8
93a3c89943 Add the following flags to buildah bud and from
--add-host
	--cgroup-parent
	--cpu-period
	--cpu-quota
	--cpu-shares
	--cpuset-cpus
	--cpuset-mems
	--memory
	--memory-swap
	--security-opt
	--ulimit

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #477
Approved by: rhatdan
2018-02-19 17:00:29 +00:00
umohnani8
b23f145416 Vendor in packages
vendor in profiles from github.com/docker/docker/profiles to support seccomp.
vendor in latest runtime-tools to support bind mounting.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #477
Approved by: rhatdan
2018-02-19 17:00:29 +00:00
TomSweeneyRedHat
43d1102d02 Touchup doc in rpm spec
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #483
Approved by: rhatdan
2018-02-19 14:52:38 +00:00
TomSweeneyRedHat
95f16ab260 Remove you/your from manpages
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #484
Approved by: rhatdan
2018-02-19 14:30:16 +00:00
Michael Gugino
09e7ddf544 Fix README link to tutorials
Current link is broken.

This commit corrects the link to point at the
correct file.

Signed-off-by: Michael Gugino <mgugino@redhat.com>

Closes: #480
Approved by: nalind
2018-02-15 15:15:15 +00:00
TomSweeneyRedHat
ee383ec9cf Add redis test to baseline
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #476
Approved by: nalind
2018-02-14 14:18:08 +00:00
TomSweeneyRedHat
d59af12866 Fix versioning to 0.12
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #478
Approved by: rhatdan
2018-02-14 10:06:43 +00:00
Daniel J Walsh
e073df11aa Merge pull request #473 from rhatdan/master
Bump version to 0.12
2018-02-12 12:08:24 -05:00
Daniel J Walsh
8badcc2d02 Bump version to 0.12
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
2018-02-12 11:48:48 -05:00
Daniel J Walsh
a586779353 We are copying a directory not a single file
When populating a container from a container image with a
volume directory, we need to copy the content of the source
directory into the target.  The code was mistakenly looking
for a file not a directory.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #471
Approved by: nalind
2018-02-12 15:57:23 +00:00
umohnani8
4eb654f10c Removing docs and completions for run options
Figured that these options need to be in from and bud instead.
Removed the options from the documentation of run and bud for now.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #470
Approved by: rhatdan
2018-02-12 15:22:09 +00:00
Boaz Shuster
f29314579d Return multi errors in buildah-rm
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #458
Approved by: rhatdan
2018-02-12 12:02:37 +00:00
TomSweeneyRedHat
9e20c3d948 Don't drop error on mix case imagename
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #468
Approved by: rhatdan
2018-02-11 11:41:44 +00:00
TomSweeneyRedHat
46c1a54b15 Revert to using latest go-md2man
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #463
Approved by: rhatdan
2018-02-10 12:24:54 +00:00
Benjamin Kircher
67b565da7b Docs: note that buildah needs to run as root
You have to be root to run buildah. This commit adds a notice to the
buildah(1) man-page and improves the front-page README.md a bit so that
this is more obvious to the user.

Fixes issue #420.

Signed-off-by: Benjamin Kircher <benjamin.kircher@gmail.com>

Closes: #462
Approved by: rhatdan
2018-02-10 12:07:07 +00:00
baude
dd4a6aea97 COPR enablement
For COPR builds, we will use a slightly modified spec and the
makesrpm method over SCM builds so that we can have dynamic package
names.

Signed-off-by: baude <bbaude@redhat.com>

Closes: #460
Approved by: rhatdan
2018-02-10 11:49:46 +00:00
William Henry
9116598a2e Added handling for a simpler error message for unknown Dockerfile instructions.
Signed-off-by: William Henry <whenry@redhat.com>

Closes: #457
Approved by: rhatdan
2018-02-10 11:32:38 +00:00
TomSweeneyRedHat
df2a10d43f Add limitation to buildah rmi man page
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #464
Approved by: rhatdan
2018-02-10 11:15:34 +00:00
TomSweeneyRedHat
e9915937ac Rename tutorials.md to README.md in tutorial dir
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #459
Approved by: rhatdan
2018-02-10 11:14:52 +00:00
Daniel J Walsh
5a9c591abf Merge pull request #455 from umohnani8/certs
Make /etc/containers/certs.d the default certs directory
2018-02-07 08:44:32 -05:00
Daniel J Walsh
531ef9159d Merge pull request #446 from umohnani8/flags_docs
Add documentation and completions for the following flags
2018-02-06 17:21:41 -05:00
umohnani8
811cf927d7 Change default certs directory to /etc/containers/certs.dir
Made changes to the man pages to reflect this.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 17:04:34 -05:00
umohnani8
032b56ee8d Vendor in latest containers/image
Adds support for default certs directory to be /etc/containers/certs.d

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 17:04:34 -05:00
Daniel J Walsh
6af847dd2a Vendor in latest containers/storage
A patch got merged into containers/storage that makes sure SELinux labels
are applied when committing to storage.  This prevents a failure condition
which arises from leaked mount points between the time a container is mounted
and the time it is committed.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #453
Approved by: TomSweeneyRedHat
2018-02-06 20:26:45 +00:00
Nalin Dahyabhai
6b207f7b0c Fix unintended reversal of the ignoreUnrecognizedInstructions flag
We were interpreting the ignoreUnrecognizedInstructions incorrectly, so
fix that, and call out the unrecognized instruction keyword in the error
message (or debug message, if we're ignoring it).

Should fix #451.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #452
Approved by: rhatdan
2018-02-06 18:58:25 +00:00
umohnani8
c14697ebe4 Add documentation and completions for the following flags
--add-host
	--cgroup-parent
	--cpu-period
	--cpu-quota
	--cpu-shares
	--cpuset-mems
	--memory
	--memory-swap
	--security-opt
	--ulimit

These flags are going to be used by buildah run and bud.
The implementation will follow in another PR.

Signed-off-by: umohnani8 <umohnani@redhat.com>
2018-02-06 12:45:17 -05:00
Nalin Dahyabhai
c35493248e build-using-dockerfile: set the 'author' field for MAINTAINER
When we encounter the MAINTAINER keyword in a Dockerfile, imagebuilder
updates the Author field in the imagebuilder.Builder structure.  Pick up
that value when we go to commit the image.

Should fix #448.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #450
Approved by: rhatdan
2018-02-06 01:18:55 +00:00
umohnani8
d03a894969 Fix md2man issues
Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #447
Approved by: rhatdan
2018-02-05 21:52:41 +00:00
Boaz Shuster
fbb8b702bc Return exit code 1 when buildah-rmi fails
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #412
Approved by: rhatdan
2018-02-05 13:50:30 +00:00
Boaz Shuster
815cedfc71 Trim the image reference to just its name before calling getImageName
When setting a container name, the getImageName function goes through
all the names of the resolved image and finds the name that contains
the name given by the user.

However, if the user specifies "docker.io/tagged-image",
the Docker transport returns "docker.io/library/tagged-image", which
makes getImageName return the original image name because it does
not find a match.

To resolve this issue, the image reference given by the user is
trimmed to just its name before calling getImageName.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #422
Approved by: rhatdan
2018-02-04 11:26:43 +00:00
TomSweeneyRedHat
1c97f6ac2c Bump Fedora 26 to 27 in rpm test
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #444
Approved by: rhatdan
2018-02-04 11:25:25 +00:00
umohnani8
bc9d574c10 Add new line after executing template for buildah inspect
No newline was printed when using the --format flag for buildah
inspect.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #442
Approved by: rhatdan
2018-02-02 21:11:52 +00:00
TomSweeneyRedHat
c84db980ae Touch up rmi -f usage statement
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #441
Approved by: rhatdan
2018-02-02 20:15:08 +00:00
umohnani8
85a37b39e8 Add --format and --filter to buildah containers
buildah containers now supports pretty-printing using a Go template
with the --format flag, and output can be filtered based on id, name, or
ancestor.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #437
Approved by: rhatdan
2018-02-02 19:32:06 +00:00
Arthur Mello
49095a83f8 Add --prune,-p option to rmi command
Allows rmi to remove all dangling images (images without a tag and without a child image).
Adds a new test case.

Signed-off-by: Arthur Mello <amello@redhat.com>

Closes: #418
Approved by: rhatdan
2018-02-01 10:50:33 +00:00
TomSweeneyRedHat
6c05a352df Add authfile param to commit
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #433
Approved by: rhatdan
2018-02-01 05:48:09 +00:00
umohnani8
1849466827 Fix --runtime-flag for buildah run and bud
The --runtime-flag flag for buildah run and bud would fail
whenever the global flags of the runtime were passed to it.
Changed it to accept the format [global-flag]=[value] where
global-flag would be converted to --[global-flag] in the code.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #431
Approved by: rhatdan
2018-01-30 18:21:46 +00:00
umohnani8
4f38267342 format should override quiet for images
quiet was overriding format, but we want format to override quiet
if both flags are set for buildah images.

Changed it so that it errors out if both quiet and format are set.

Signed-off-by: umohnani8 <umohnani@redhat.com>

Closes: #426
Approved by: rhatdan
2018-01-30 17:06:51 +00:00
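The resulting behavior, rejecting the flag combination instead of silently letting one flag win, can be sketched as follows (the function name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

// validateImagesFlags mirrors the check described above: using --quiet and
// --format together for buildah images is rejected outright.
func validateImagesFlags(quiet bool, format string) error {
	if quiet && format != "" {
		return errors.New("quiet and format flags cannot be used together")
	}
	return nil
}

func main() {
	fmt.Println(validateImagesFlags(true, "{{.ID}}")) // error
	fmt.Println(validateImagesFlags(true, ""))        // <nil>
}
```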
TomSweeneyRedHat
9790b89771 Allow all auth params to work with bud
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #419
Approved by: rhatdan
2018-01-30 15:41:52 +00:00
Fabio Bertinatto
61f5319504 Don't overwrite directory permissions on --chown
Signed-off-by: Fabio Bertinatto <fbertina@redhat.com>

Closes: #389
Approved by: rhatdan
2018-01-30 05:09:06 +00:00
Boaz Shuster
947714fbd2 Unescape HTML characters output into the terminal
By default, the JSON encoder from the Go standard library
escapes HTML characters, which makes the maintainer output
look strange:

"maintainer": "NGINX Docker Maintainers \u003cdocker-maint@nginx.com\u003e"
Instead of:
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"

This patch fixes this issue in "buildah-inspect" only as this is
the only place that such characters are displayed.

Note: if the output of "buildah-inspect" is piped or redirected
then the HTML characters are not escaped.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #421
Approved by: rhatdan
2018-01-30 04:51:17 +00:00
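The escaping behavior above is reproducible with the standard library; the most plausible switch for a fix like this is `json.Encoder.SetEscapeHTML` (an assumption about the implementation, not confirmed by the commit):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

// encode marshals v, optionally disabling the encoder's default HTML
// escaping of <, > and &.
func encode(escapeHTML bool, v interface{}) string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(escapeHTML)
	_ = enc.Encode(v)
	return strings.TrimSpace(buf.String())
}

func main() {
	m := map[string]string{"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"}
	fmt.Println(encode(true, m))  // \u003c ... \u003e
	fmt.Println(encode(false, m)) // < ... >
}
```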
Boaz Shuster
c615c3e23d Fix typo then->than
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #423
Approved by: rhatdan
2018-01-29 05:50:24 +00:00
Daniel J Walsh
d0e1ad1a1a Merge pull request #415 from TomSweeneyRedHat/dev/tsweeney/pwdprompt
Prompt for un/pwd if not supplied with --creds
2018-01-27 09:34:51 +01:00
Boaz Shuster
b68f88c53d Fix: setting the container name to the image
In commit 47ac96155f the image name that is used
for setting the container name is taken from the resolved image
unless it is empty.

The image has the "Names" field and right now the first name is
taken. However, when the image is a tagged image, the container name
will end up using the original name instead of the given one.

For example:

$ buildah tag busybox busybox1
$ buildah from busybox1

Will set the name of the container as "busybox-working-container"
while it was expected to be "busybox1-working-container".

This patch fixes this particular issue.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #399
Approved by: rhatdan
2018-01-26 08:07:58 +00:00
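The naming convention the fix restores can be sketched like this; the helper below is illustrative and not buildah's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// workingContainerName derives the default container name from the image
// reference the user actually typed, following the
// "<base>-working-container" convention described above.
func workingContainerName(imageName string) string {
	base := imageName
	if i := strings.LastIndex(base, "/"); i >= 0 {
		base = base[i+1:] // drop the registry/repository prefix
	}
	if i := strings.LastIndex(base, ":"); i >= 0 {
		base = base[:i] // drop the tag
	}
	return base + "-working-container"
}

func main() {
	fmt.Println(workingContainerName("busybox1"))
	// prints: busybox1-working-container
	fmt.Println(workingContainerName("docker.io/library/busybox:latest"))
	// prints: busybox-working-container
}
```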
TomSweeneyRedHat
7dc787a9c7 Prompt for un/pwd if not supplied with --creds
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>
2018-01-25 15:03:59 -05:00
TomSweeneyRedHat
2dbb2a13ed Make bud be really quiet
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #408
Approved by: rhatdan
2018-01-24 15:11:30 +00:00
Boaz Shuster
ad49b24d0b Return a better error message when failed to resolve an image
During the creation of a new builder object there are errors
that are only logged into "logrus.Debugf".

If at the end of the process "ref" or "img" is nil and "options.FromImage"
is set, then it means that there was an issue.
By default, it was assumed that the image name is wrong. Yet,
this assumption isn't always correct. For example, it might fail due to
authorization or connection errors.

In this patch, I am attempting to fix this problem by checking the
last error stored in the "err" variable and returning the cause
of the failure.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #406
Approved by: rhatdan
2018-01-24 14:03:28 +00:00
Boaz Shuster
ba128004ca Fix "make validate" warnings
Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #405
Approved by: rhatdan
2018-01-22 14:46:54 +00:00
TomSweeneyRedHat
5179733c63 Update auth tests and fix bud man page
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #404
Approved by: rhatdan
2018-01-22 13:34:50 +00:00
TomSweeneyRedHat
40c3a57d5a Try to fix buildah-containers.md CI test issue
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #395
Approved by: rhatdan
2018-01-19 20:20:13 +00:00
Daniel J Walsh
de9e71dda7 Drop support for 1.7
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #400
Approved by: rhatdan
2018-01-18 23:00:15 +00:00
TomSweeneyRedHat
1052f3ba40 Create Buildah issue template
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #397
Approved by: rhatdan
2018-01-18 11:56:31 +00:00
Daniel J Walsh
6bad262ff1 Bump to version 0.11
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #394
Approved by: @ripcurld0
2018-01-17 13:42:19 +00:00
TomSweeneyRedHat
092591620b Show ctrid when doing rm -all
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #392
Approved by: rhatdan
2018-01-16 18:38:28 +00:00
Boaz Shuster
4d6c90e902 vendor containers/image to 386d6c33c9d622ed84baf14f4b1ff1be86800ccd
Signed-off-by: Boaz Shuster <bshuster@redhat.com>

Closes: #393
Approved by: rhatdan
2018-01-16 16:44:34 +00:00
TomSweeneyRedHat
17d9a73329 Touchup rm and container man pages
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #391
Approved by: rhatdan
2018-01-16 15:48:29 +00:00
Boaz Shuster
fe2de4f491 Handle commit error gracefully
This change gives a better error message when the commit fails
because of bad authentication.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>

Closes: #385
Approved by: rhatdan
2018-01-15 15:48:20 +00:00
TomSweeneyRedHat
adfb256a0f Remove new errors and use established ones in rmi
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #388
Approved by: rhatdan
2018-01-10 17:01:41 +00:00
Huamin Chen
029bdbcbd0 fix buildah push description
Signed-off-by: Huamin Chen <hchen@redhat.com>

Closes: #387
Approved by: rhatdan
2018-01-09 18:45:53 +00:00
TomSweeneyRedHat
fd995e6166 Add --all functionality to rmi
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #384
Approved by: rhatdan
2018-01-08 21:07:25 +00:00
Nalin Dahyabhai
ae7d2f3547 Ignore sequential duplicate layers when reading v2s1
When a v2s1 image is stored to disk, some of the layer blobs listed in
its manifest may be discarded as duplicates.  Account for this.

Start treating a failure to decode v1compat information as a fatal error
instead of trying to fake it.

Tweak how we build the created-by field in history when generating one
from v2s1 information to better match what we see in v2s2 images.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #383
Approved by: rhatdan
2018-01-08 21:06:35 +00:00
Nalin Dahyabhai
86fa0803e8 Sanity check the history/diffid list sizes
When building an image's config blob, add a sanity check that the number
of diffIDs that we're including matches the number of entries in the
history which don't claim to be empty layers.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #383
Approved by: rhatdan
2018-01-08 21:06:35 +00:00
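The sanity check described above can be sketched as counting the history entries that contribute a layer; the type and function names here are illustrative, not buildah's exact code:

```go
package main

import "fmt"

// historyEntry mirrors the relevant bits of a Docker/OCI image history item.
type historyEntry struct {
	CreatedBy  string
	EmptyLayer bool
}

// checkDiffIDs verifies that the number of diffIDs matches the number of
// history entries that do not claim to be empty layers.
func checkDiffIDs(diffIDs []string, history []historyEntry) error {
	nonEmpty := 0
	for _, h := range history {
		if !h.EmptyLayer {
			nonEmpty++
		}
	}
	if nonEmpty != len(diffIDs) {
		return fmt.Errorf("history has %d non-empty layers but config lists %d diffIDs", nonEmpty, len(diffIDs))
	}
	return nil
}

func main() {
	history := []historyEntry{
		{CreatedBy: "ADD file:abc in /", EmptyLayer: false},
		{CreatedBy: `CMD ["sh"]`, EmptyLayer: true},
	}
	fmt.Println(checkDiffIDs([]string{"sha256:aaa"}, history))                // <nil>
	fmt.Println(checkDiffIDs([]string{"sha256:aaa", "sha256:bbb"}, history)) // error
}
```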
Nalin Dahyabhai
81dfe0a964 When we say we skip a secrets config file, do so
When we warn about not processing a secrets configuration file, actually
skip anything we might have salvaged from it to make our behavior match
the warning.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #380
Approved by: rhatdan
2018-01-05 16:09:53 +00:00
Nalin Dahyabhai
9bff989832 Use NewImageSource() instead of NewImage()
Use NewImageSource() instead of NewImage() when checking if an image is
actually there, since it makes the image library do less work while
answering the same question for us.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

Closes: #381
Approved by: rhatdan
2018-01-05 15:54:34 +00:00
TomSweeneyRedHat
b8740e386e Add --all to remove containers
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #382
Approved by: rhatdan
2018-01-05 15:53:19 +00:00
Daniel J Walsh
9f5e1b3a77 Make lint was complaining about some vet-shadowed errs
We often use err as a variable inside of subblocks, and
we don't want golint to complain about it.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #379
Approved by: nalind
2018-01-03 21:10:28 +00:00
Daniel J Walsh
01f8c7afee Remove chrootuser handling and use libpod/pkg
I have made a subpackage of libpod to handle chrootuser,
using the user code from buildah.

This patch removes user handling from buildah and uses
projectatomic/libpod/pkg/chrootuser

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>

Closes: #377
Approved by: nalind
2018-01-03 15:36:10 +00:00
TomSweeneyRedHat
67e5341846 Add kernel version requirement to install.md and touchups
Signed-off-by: TomSweeneyRedHat <tsweeney@redhat.com>

Closes: #372
Approved by: rhatdan
2018-01-03 13:00:36 +00:00
160 changed files with 11376 additions and 1938 deletions

.copr/Makefile — new file, 28 lines changed

@@ -0,0 +1,28 @@
#!/usr/bin/make -f
spec := contrib/rpm/buildah_copr.spec
outdir := $(CURDIR)
tmpdir := build
gitdir := $(PWD)/.git
rev := $(shell sed 's/\(.......\).*/\1/' $(gitdir)/$$(sed -n '/^ref:/{s/.* //;p}' $(gitdir)/HEAD))
date := $(shell date +%Y%m%d.%H%M)
version := $(shell sed -n '/Version:/{s/.* //;p}' $(spec))
release := $(date).git.$(rev)
srpm: $(outdir)/buildah-$(version)-$(release).src.rpm
$(tmpdir)/buildah.spec: $(spec)
@mkdir -p $(tmpdir)
sed '/^Release:/s/\(: *\).*/\1$(release)%{?dist}/' $< >$@
$(tmpdir)/$(version).tar.gz: $(gitdir)/..
@mkdir -p $(tmpdir)
tar c --exclude-vcs --exclude-vcs-ignores -C $< --transform 's|^\.|buildah-$(version)|' . | gzip -9 >$@
$(outdir)/buildah-$(version)-$(release).src.rpm: $(tmpdir)/buildah.spec $(tmpdir)/$(version).tar.gz
@mkdir -p $(outdir)
rpmbuild -D'_srcrpmdir $(outdir)' -D'_sourcedir $(tmpdir)' -bs $(tmpdir)/buildah.spec
.PHONY: srpm

.github/ISSUE_TEMPLATE.md — new vendored file, 65 lines changed

@@ -0,0 +1,65 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Output of `rpm -q buildah` or `apt list buildah`:**
```
(paste your output here)
```
**Output of `buildah version`:**
```
(paste your output here)
```
**Output of `cat /etc/*release`:**
```
(paste your output here)
```
**Output of `uname -a`:**
```
(paste your output here)
```
**Output of `cat /etc/containers/storage.conf`:**
```
(paste your output here)
```

.gitignore — vendored, 1 line changed

@@ -1,3 +1,4 @@
docs/buildah*.1
/buildah
/imgtype
/build/


@@ -25,8 +25,13 @@ dnf install -y \
make \
openssl \
ostree-devel \
skopeo-containers \
which
# Install gomega
go get github.com/onsi/gomega/...
# PAPR adds a merge commit, for testing, which fails the
# short-commit-subject validation test, so tell git-validate.sh to only check
# up to, but not including, the merge commit.


@@ -2,7 +2,6 @@ language: go
dist: trusty
sudo: required
go:
- 1.7
- 1.8
- 1.9.x
- tip


@@ -24,7 +24,7 @@ imgtype: *.go docker/*.go util/*.go tests/imgtype.go
.PHONY: clean
clean:
$(RM) buildah imgtype
$(RM) buildah imgtype build
$(MAKE) -C docs clean
.PHONY: docs
@@ -53,6 +53,7 @@ validate:
install.tools:
$(GO) get -u $(BUILDFLAGS) github.com/cpuguy83/go-md2man
$(GO) get -u $(BUILDFLAGS) github.com/vbatts/git-validation
$(GO) get -u $(BUILDFLAGS) github.com/onsi/ginkgo/ginkgo
$(GO) get -u $(BUILDFLAGS) gopkg.in/alecthomas/gometalinter.v1
gometalinter.v1 -i
@@ -84,6 +85,7 @@ install.runc:
.PHONY: test-integration
test-integration:
ginkgo -v tests/e2e/.
cd tests; ./test_runner.sh
.PHONY: test-unit


@@ -21,20 +21,14 @@ The Buildah package provides a command line tool which can be used to
**[Installation notes](install.md)**
**[Tutorials](docs/tutorials/tutorials.md)**
## runc Requirement
Buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for Buildah to call for those cases.
**[Tutorials](docs/tutorials/README.md)**
## Example
From [`./examples/lighttpd.sh`](examples/lighttpd.sh):
```bash
cat > lighttpd.sh <<EOF
$ cat > lighttpd.sh <<EOF
#!/bin/bash -x
ctr1=`buildah from ${1:-fedora}`
@@ -54,8 +48,8 @@ buildah config $ctr1 --port 80
buildah commit $ctr1 ${2:-$USER/lighttpd}
EOF
chmod +x lighttpd.sh
./lighttpd.sh
$ chmod +x lighttpd.sh
$ sudo ./lighttpd.sh
```
## Commands
@@ -71,7 +65,7 @@ chmod +x lighttpd.sh
| [buildah-images(1)](/docs/buildah-images.md) | List images in local storage. |
| [buildah-inspect(1)](/docs/buildah-inspect.md) | Inspects the configuration of a container or image. |
| [buildah-mount(1)](/docs/buildah-mount.md) | Mount the working container's root filesystem. |
| [buildah-push(1)](/docs/buildah-push.md) | Copies an image from local storage. |
| [buildah-push(1)](/docs/buildah-push.md) | Push an image from local storage to elsewhere. |
| [buildah-rm(1)](/docs/buildah-rm.md) | Removes one or more working containers. |
| [buildah-rmi(1)](/docs/buildah-rmi.md) | Removes one or more images. |
| [buildah-run(1)](/docs/buildah-run.md) | Run a command inside of the container. |

add.go — 100 lines changed

@@ -14,6 +14,7 @@ import (
"github.com/containers/storage/pkg/archive"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/pkg/errors"
"github.com/projectatomic/libpod/pkg/chrootuser"
"github.com/sirupsen/logrus"
)
@@ -74,12 +75,17 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
logrus.Errorf("error unmounting container: %v", err2)
}
}()
// Find out which user (and group) the destination should belong to.
user, err := b.user(mountPoint, options.Chown)
if err != nil {
return err
}
dest := mountPoint
if destination != "" && filepath.IsAbs(destination) {
dest = filepath.Join(dest, destination)
} else {
if err = os.MkdirAll(filepath.Join(dest, b.WorkDir()), 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists)", filepath.Join(dest, b.WorkDir()))
if err = ensureDir(filepath.Join(dest, b.WorkDir()), user, 0755); err != nil {
return err
}
dest = filepath.Join(dest, b.WorkDir(), destination)
}
@@ -87,8 +93,8 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
// with a '/', create it so that we can be sure that it's a directory,
// and any files we're copying will be placed in the directory.
if len(destination) > 0 && destination[len(destination)-1] == os.PathSeparator {
if err = os.MkdirAll(dest, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", dest)
if err = ensureDir(dest, user, 0755); err != nil {
return err
}
}
// Make sure the destination's parent directory is usable.
@@ -106,11 +112,6 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
if len(source) > 1 && (destfi == nil || !destfi.IsDir()) {
return errors.Errorf("destination %q is not a directory", dest)
}
// Find out which user (and group) the destination should belong to.
user, err := b.user(mountPoint, options)
if err != nil {
return err
}
for _, src := range source {
if strings.HasPrefix(src, "http://") || strings.HasPrefix(src, "https://") {
// We assume that source is a file, and we're copying
@@ -129,7 +130,7 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
if err := addURL(d, src); err != nil {
return err
}
if err := setOwner(d, user); err != nil {
if err := setOwner("", d, user); err != nil {
return err
}
continue
@@ -152,15 +153,14 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
// the source directory into the target directory. Try
// to create it first, so that if there's a problem,
// we'll discover why that won't work.
d := dest
if err := os.MkdirAll(d, 0755); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", d)
if err = ensureDir(dest, user, 0755); err != nil {
return err
}
logrus.Debugf("copying %q to %q", gsrc+string(os.PathSeparator)+"*", d+string(os.PathSeparator)+"*")
if err := copyWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
logrus.Debugf("copying %q to %q", gsrc+string(os.PathSeparator)+"*", dest+string(os.PathSeparator)+"*")
if err := copyWithTar(gsrc, dest); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, dest)
}
if err := setOwner(d, user); err != nil {
if err := setOwner(gsrc, dest, user); err != nil {
return err
}
continue
@@ -178,7 +178,7 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
if err := copyFileWithTar(gsrc, d); err != nil {
return errors.Wrapf(err, "error copying %q to %q", gsrc, d)
}
if err := setOwner(d, user); err != nil {
if err := setOwner(gsrc, d, user); err != nil {
return err
}
continue
@@ -194,34 +194,60 @@ func (b *Builder) Add(destination string, extract bool, options AddAndCopyOption
}
// user returns the user (and group) information which the destination should belong to.
func (b *Builder) user(mountPoint string, options AddAndCopyOptions) (specs.User, error) {
if options.Chown != "" {
return getUser(mountPoint, options.Chown)
func (b *Builder) user(mountPoint string, userspec string) (specs.User, error) {
if userspec == "" {
userspec = b.User()
}
return getUser(mountPoint, b.User())
uid, gid, err := chrootuser.GetUser(mountPoint, userspec)
u := specs.User{
UID: uid,
GID: gid,
Username: userspec,
}
return u, err
}
// setOwner sets the uid and gid owners of a given path.
// If path is a directory, recursively changes the owner.
func setOwner(path string, user specs.User) error {
fi, err := os.Stat(path)
func setOwner(src, dest string, user specs.User) error {
fid, err := os.Stat(dest)
if err != nil {
return errors.Wrapf(err, "error reading %q", path)
return errors.Wrapf(err, "error reading %q", dest)
}
if fi.IsDir() {
err2 := filepath.Walk(path, func(p string, info os.FileInfo, we error) error {
if err3 := os.Lchown(p, int(user.UID), int(user.GID)); err3 != nil {
return errors.Wrapf(err3, "error setting ownership of %q", p)
}
return nil
})
if err2 != nil {
return errors.Wrapf(err2, "error walking dir %q to set ownership", path)
if !fid.IsDir() || src == "" {
if err := os.Lchown(dest, int(user.UID), int(user.GID)); err != nil {
return errors.Wrapf(err, "error setting ownership of %q", dest)
}
return nil
}
if err := os.Lchown(path, int(user.UID), int(user.GID)); err != nil {
return errors.Wrapf(err, "error setting ownership of %q", path)
err = filepath.Walk(src, func(p string, info os.FileInfo, we error) error {
relPath, err2 := filepath.Rel(src, p)
if err2 != nil {
return errors.Wrapf(err2, "error getting relative path of %q to set ownership on destination", p)
}
if relPath != "." {
absPath := filepath.Join(dest, relPath)
if err2 := os.Lchown(absPath, int(user.UID), int(user.GID)); err2 != nil {
return errors.Wrapf(err2, "error setting ownership of %q", absPath)
}
}
return nil
})
if err != nil {
return errors.Wrapf(err, "error walking dir %q to set ownership", src)
}
return nil
}
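The reworked setOwner hinges on translating each path found while walking src into its counterpart under dest via filepath.Rel and filepath.Join, skipping the walk root itself. A minimal standalone sketch of that mapping (destPathFor is a hypothetical helper for illustration, not part of the patch):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// destPathFor translates a path p, found while walking src, into the
// corresponding path under dest. It returns "" for the walk root itself
// (relative path "."), which the ownership walk skips.
func destPathFor(src, dest, p string) (string, error) {
	relPath, err := filepath.Rel(src, p)
	if err != nil {
		return "", err
	}
	if relPath == "." {
		return "", nil
	}
	return filepath.Join(dest, relPath), nil
}

func main() {
	d, _ := destPathFor("/tmp/src", "/ctr/root/dest", "/tmp/src/etc/passwd")
	fmt.Println(d) // /ctr/root/dest/etc/passwd
}
```

Chowning the translated destination path, rather than the source path, is what lets ownership from the copied tree land on the copied files.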
// ensureDir creates a directory if it doesn't exist, setting ownership and permissions as passed by user and perm.
func ensureDir(path string, user specs.User, perm os.FileMode) error {
if _, err := os.Stat(path); os.IsNotExist(err) {
if err := os.MkdirAll(path, perm); err != nil {
return errors.Wrapf(err, "error ensuring directory %q exists", path)
}
if err := os.Chown(path, int(user.UID), int(user.GID)); err != nil {
return errors.Wrapf(err, "error setting ownership of %q", path)
}
}
return nil
}


@@ -19,8 +19,9 @@ const (
// Package is the name of this package, used in help output and to
// identify working containers.
Package = "buildah"
// Version for the Package
Version = "0.10"
// Version for the Package. Bump version in contrib/rpm/buildah.spec
// too.
Version = "0.15"
// The value we use to identify what type of information, currently a
// serialized Builder structure, we are using as per-container state.
// This should only be changed when we make incompatible changes to
@@ -93,6 +94,7 @@ type Builder struct {
Docker docker.V2Image `json:"docker,omitempty"`
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string `json:"defaultMountsFilePath,omitempty"`
CommonBuildOpts *CommonBuildOptions
}
// BuilderInfo are used as objects to display container information
@@ -135,6 +137,38 @@ func GetBuildInfo(b *Builder) BuilderInfo {
}
}
// CommonBuildOptions are resources that can be defined by flags for both buildah from and bud
type CommonBuildOptions struct {
// AddHost is the list of hostnames to add to the resolv.conf
AddHost []string
//CgroupParent is the path to cgroups under which the cgroup for the container will be created.
CgroupParent string
//CPUPeriod limits the CPU CFS (Completely Fair Scheduler) period
CPUPeriod uint64
//CPUQuota limits the CPU CFS (Completely Fair Scheduler) quota
CPUQuota int64
//CPUShares (relative weight)
CPUShares uint64
//CPUSetCPUs in which to allow execution (0-3, 0,1)
CPUSetCPUs string
//CPUSetMems memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
CPUSetMems string
//Memory limit
Memory int64
//MemorySwap limit value equal to memory plus swap.
MemorySwap int64
//SecurityOpts modify the way container security is running
LabelOpts []string
SeccompProfilePath string
ApparmorProfile string
//ShmSize is the shared memory size
ShmSize string
//Ulimit options
Ulimit []string
//Volumes to bind mount into the container
Volumes []string
}
// BuilderOptions are used to initialize a new Builder.
type BuilderOptions struct {
// FromImage is the name of the image which should be used as the
@@ -174,6 +208,7 @@ type BuilderOptions struct {
SystemContext *types.SystemContext
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string
CommonBuildOpts *CommonBuildOptions
}
// ImportOptions are used to initialize a Builder from an existing container


@@ -17,21 +17,23 @@ var (
copyDescription = "Copies the contents of a file, URL, or directory into a container's working\n directory"
addCommand = cli.Command{
Name: "add",
Usage: "Add content to the container",
Description: addDescription,
Flags: addAndCopyFlags,
Action: addCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
Name: "add",
Usage: "Add content to the container",
Description: addDescription,
Flags: addAndCopyFlags,
Action: addCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
SkipArgReorder: true,
}
copyCommand = cli.Command{
Name: "copy",
Usage: "Copy content into the container",
Description: copyDescription,
Flags: addAndCopyFlags,
Action: copyCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
Name: "copy",
Usage: "Copy content into the container",
Description: copyDescription,
Flags: addAndCopyFlags,
Action: copyCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [[FILE | DIRECTORY | URL] ...] [DESTINATION]",
SkipArgReorder: true,
}
)
@@ -47,7 +49,7 @@ func addAndCopyCmd(c *cli.Context, extractLocalArchives bool) error {
return err
}
// If list is greater then one, the last item is the destination
// If list is greater than one, the last item is the destination
dest := ""
size := len(args)
if size > 1 {


@@ -21,6 +21,16 @@ var (
Name: "build-arg",
Usage: "`argument=value` to supply to the builder",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
Usage: "use certificates at the specified path to access the registry",
},
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.StringSliceFlag{
Name: "file, f",
Usage: "`pathname or URL` of a Dockerfile",
@@ -66,13 +76,14 @@ var (
budDescription = "Builds an OCI image using instructions in one or more Dockerfiles."
budCommand = cli.Command{
Name: "build-using-dockerfile",
Aliases: []string{"bud"},
Usage: "Build an image using instructions in a Dockerfile",
Description: budDescription,
Flags: budFlags,
Action: budCmd,
ArgsUsage: "CONTEXT-DIRECTORY | URL",
Name: "build-using-dockerfile",
Aliases: []string{"bud"},
Usage: "Build an image using instructions in a Dockerfile",
Description: budDescription,
Flags: append(budFlags, fromAndBudFlags...),
Action: budCmd,
ArgsUsage: "CONTEXT-DIRECTORY | URL",
SkipArgReorder: true,
}
)
@@ -181,21 +192,38 @@ func budCmd(c *cli.Context) error {
return err
}
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Compression: imagebuildah.Gzip,
Quiet: c.Bool("quiet"),
SignaturePolicyPath: c.String("signature-policy"),
SkipTLSVerify: !c.Bool("tls-verify"),
Args: args,
Output: output,
AdditionalTags: tags,
Runtime: c.String("runtime"),
RuntimeArgs: c.StringSlice("runtime-flag"),
OutputFormat: format,
AuthFilePath: c.String("authfile"),
systemContext, err := systemContextFromOptions(c)
if err != nil {
return errors.Wrapf(err, "error building system context")
}
runtimeFlags := []string{}
for _, arg := range c.StringSlice("runtime-flag") {
runtimeFlags = append(runtimeFlags, "--"+arg)
}
commonOpts, err := parseCommonBuildOptions(c)
if err != nil {
return err
}
options := imagebuildah.BuildOptions{
ContextDirectory: contextDir,
PullPolicy: pullPolicy,
Compression: imagebuildah.Gzip,
Quiet: c.Bool("quiet"),
SignaturePolicyPath: c.String("signature-policy"),
Args: args,
Output: output,
AdditionalTags: tags,
Runtime: c.String("runtime"),
RuntimeArgs: runtimeFlags,
OutputFormat: format,
SystemContext: systemContext,
CommonBuildOpts: commonOpts,
DefaultMountsFilePath: c.GlobalString("default-mounts-file"),
}
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}


@@ -10,11 +10,16 @@ import (
"github.com/containers/storage/pkg/archive"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/projectatomic/buildah/util"
"github.com/urfave/cli"
)
var (
commitFlags = []cli.Flag{
cli.StringFlag{
Name: "authfile",
Usage: "path of the authentication file. Default is ${XDG_RUNTIME_DIR}/containers/auth.json",
},
cli.StringFlag{
Name: "cert-dir",
Value: "",
@@ -23,7 +28,7 @@ var (
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
@@ -58,12 +63,13 @@ var (
}
commitDescription = "Writes a new image using the container's read-write layer and, if it is based\n on an image, the layers of that image"
commitCommand = cli.Command{
Name: "commit",
Usage: "Create an image from a working container",
Description: commitDescription,
Flags: commitFlags,
Action: commitCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID IMAGE",
Name: "commit",
Usage: "Create an image from a working container",
Description: commitDescription,
Flags: commitFlags,
Action: commitCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID IMAGE",
SkipArgReorder: true,
}
)
@@ -143,7 +149,10 @@ func commitCmd(c *cli.Context) error {
}
err = builder.Commit(dest, options)
if err != nil {
return errors.Wrapf(err, "error committing container %q to %q", builder.Container, image)
return util.GetFailureCause(
err,
errors.Wrapf(err, "error committing container %q to %q", builder.Container, image),
)
}
if c.Bool("rm") {


@@ -1,20 +1,30 @@
package main
import (
"fmt"
"net"
"os"
"reflect"
"regexp"
"strings"
"syscall"
"time"
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
units "github.com/docker/go-units"
digest "github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
const (
// SeccompDefaultPath defines the default seccomp path
SeccompDefaultPath = "/usr/share/containers/seccomp.json"
// SeccompOverridePath if this exists it overrides the default seccomp path
SeccompOverridePath = "/etc/crio/seccomp.json"
)
var needToShutdownStore = false
@@ -146,25 +156,35 @@ func systemContextFromOptions(c *cli.Context) (*types.SystemContext, error) {
return ctx, nil
}
func parseCreds(creds string) (string, string, error) {
func parseCreds(creds string) (string, string) {
if creds == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "credentials can't be empty")
return "", ""
}
up := strings.SplitN(creds, ":", 2)
if len(up) == 1 {
return up[0], "", nil
return up[0], ""
}
if up[0] == "" {
return "", "", errors.Wrapf(syscall.EINVAL, "username can't be empty")
return "", up[1]
}
return up[0], up[1], nil
return up[0], up[1]
}
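The new parseCreds no longer errors on incomplete credentials: it splits on the first colon only and returns empty strings for whatever half is missing, so getDockerAuth can prompt interactively for it. A self-contained sketch of that split (splitCreds is a hypothetical name for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// splitCreds mirrors the reworked parseCreds: split "user:pass" on the first
// colon only, returning empty strings for any missing half instead of an
// error, leaving the caller free to prompt for the absent part.
func splitCreds(creds string) (user, pass string) {
	if creds == "" {
		return "", ""
	}
	up := strings.SplitN(creds, ":", 2)
	if len(up) == 1 {
		return up[0], ""
	}
	return up[0], up[1]
}

func main() {
	u, p := splitCreds("alice:s3cr3t")
	fmt.Println(u, p) // alice s3cr3t
}
```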
func getDockerAuth(creds string) (*types.DockerAuthConfig, error) {
username, password, err := parseCreds(creds)
if err != nil {
return nil, err
username, password := parseCreds(creds)
if username == "" {
fmt.Print("Username: ")
fmt.Scanln(&username)
}
if password == "" {
fmt.Print("Password: ")
termPassword, err := terminal.ReadPassword(0)
if err != nil {
return nil, errors.Wrapf(err, "could not read password from terminal")
}
password = string(termPassword)
}
return &types.DockerAuthConfig{
Username: username,
Password: password,
@@ -206,3 +226,243 @@ func validateFlags(c *cli.Context, flags []cli.Flag) error {
}
return nil
}
var fromAndBudFlags = []cli.Flag{
cli.StringSliceFlag{
Name: "add-host",
Usage: "add a custom host-to-IP mapping (host:ip) (default [])",
},
cli.StringFlag{
Name: "cgroup-parent",
Usage: "optional parent cgroup for the container",
},
cli.Uint64Flag{
Name: "cpu-period",
Usage: "limit the CPU CFS (Completely Fair Scheduler) period",
},
cli.Int64Flag{
Name: "cpu-quota",
Usage: "limit the CPU CFS (Completely Fair Scheduler) quota",
},
cli.Uint64Flag{
Name: "cpu-shares",
Usage: "CPU shares (relative weight)",
},
cli.StringFlag{
Name: "cpuset-cpus",
Usage: "CPUs in which to allow execution (0-3, 0,1)",
},
cli.StringFlag{
Name: "cpuset-mems",
Usage: "memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.",
},
cli.StringFlag{
Name: "memory, m",
Usage: "memory limit (format: <number>[<unit>], where unit = b, k, m or g)",
},
cli.StringFlag{
Name: "memory-swap",
Usage: "swap limit equal to memory plus swap: '-1' to enable unlimited swap",
},
cli.StringSliceFlag{
Name: "security-opt",
Usage: "security Options (default [])",
},
cli.StringFlag{
Name: "shm-size",
Usage: "size of `/dev/shm`. The format is `<number><unit>`.",
Value: "65536k",
},
cli.StringSliceFlag{
Name: "ulimit",
Usage: "ulimit options (default [])",
},
cli.StringSliceFlag{
Name: "volume, v",
Usage: "bind mount a volume into the container (default [])",
},
}
func parseCommonBuildOptions(c *cli.Context) (*buildah.CommonBuildOptions, error) {
var (
memoryLimit int64
memorySwap int64
err error
)
if c.String("memory") != "" {
memoryLimit, err = units.RAMInBytes(c.String("memory"))
if err != nil {
return nil, errors.Wrapf(err, "invalid value for memory")
}
}
if c.String("memory-swap") != "" {
memorySwap, err = units.RAMInBytes(c.String("memory-swap"))
if err != nil {
return nil, errors.Wrapf(err, "invalid value for memory-swap")
}
}
if len(c.StringSlice("add-host")) > 0 {
for _, host := range c.StringSlice("add-host") {
if err := validateExtraHost(host); err != nil {
return nil, errors.Wrapf(err, "invalid value for add-host")
}
}
}
if _, err := units.FromHumanSize(c.String("shm-size")); err != nil {
return nil, errors.Wrapf(err, "invalid --shm-size")
}
if err := parseVolumes(c.StringSlice("volume")); err != nil {
return nil, err
}
commonOpts := &buildah.CommonBuildOptions{
AddHost: c.StringSlice("add-host"),
CgroupParent: c.String("cgroup-parent"),
CPUPeriod: c.Uint64("cpu-period"),
CPUQuota: c.Int64("cpu-quota"),
CPUSetCPUs: c.String("cpuset-cpus"),
CPUSetMems: c.String("cpuset-mems"),
CPUShares: c.Uint64("cpu-shares"),
Memory: memoryLimit,
MemorySwap: memorySwap,
ShmSize: c.String("shm-size"),
Ulimit: c.StringSlice("ulimit"),
Volumes: c.StringSlice("volume"),
}
if err := parseSecurityOpts(c.StringSlice("security-opt"), commonOpts); err != nil {
return nil, err
}
return commonOpts, nil
}
func parseSecurityOpts(securityOpts []string, commonOpts *buildah.CommonBuildOptions) error {
for _, opt := range securityOpts {
if opt == "no-new-privileges" {
return errors.Errorf("no-new-privileges is not supported")
}
con := strings.SplitN(opt, "=", 2)
if len(con) != 2 {
return errors.Errorf("Invalid --security-opt 1: %q", opt)
}
switch con[0] {
case "label":
commonOpts.LabelOpts = append(commonOpts.LabelOpts, con[1])
case "apparmor":
commonOpts.ApparmorProfile = con[1]
case "seccomp":
commonOpts.SeccompProfilePath = con[1]
default:
return errors.Errorf("Invalid --security-opt 2: %q", opt)
}
}
if commonOpts.SeccompProfilePath == "" {
if _, err := os.Stat(SeccompOverridePath); err == nil {
commonOpts.SeccompProfilePath = SeccompOverridePath
} else {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "can't check if %q exists", SeccompOverridePath)
}
if _, err := os.Stat(SeccompDefaultPath); err != nil {
if !os.IsNotExist(err) {
return errors.Wrapf(err, "can't check if %q exists", SeccompDefaultPath)
}
} else {
commonOpts.SeccompProfilePath = SeccompDefaultPath
}
}
}
return nil
}
func parseVolumes(volumes []string) error {
if len(volumes) == 0 {
return nil
}
for _, volume := range volumes {
arr := strings.SplitN(volume, ":", 3)
if len(arr) < 2 {
return errors.Errorf("incorrect volume format %q, should be host-dir:ctr-dir[:option]", volume)
}
if err := validateVolumeHostDir(arr[0]); err != nil {
return err
}
if err := validateVolumeCtrDir(arr[1]); err != nil {
return err
}
if len(arr) > 2 {
if err := validateVolumeOpts(arr[2]); err != nil {
return err
}
}
}
return nil
}
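parseVolumes above validates each --volume argument by splitting it on at most two colons, so the trailing options segment survives intact. A sketch of just that splitting step (splitVolumeSpec is a hypothetical helper; the real code validates each part in place):

```go
package main

import (
	"fmt"
	"strings"
)

// splitVolumeSpec mirrors the --volume parsing: host-dir:ctr-dir[:options],
// split on at most two colons so a comma-separated options list such as
// "ro,Z" is kept as one segment.
func splitVolumeSpec(volume string) (host, ctr, opts string, err error) {
	arr := strings.SplitN(volume, ":", 3)
	if len(arr) < 2 {
		return "", "", "", fmt.Errorf("incorrect volume format %q, should be host-dir:ctr-dir[:option]", volume)
	}
	host, ctr = arr[0], arr[1]
	if len(arr) > 2 {
		opts = arr[2]
	}
	return host, ctr, opts, nil
}

func main() {
	h, c, o, _ := splitVolumeSpec("/srv/data:/data:ro,Z")
	fmt.Println(h, c, o) // /srv/data /data ro,Z
}
```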
func validateVolumeHostDir(hostDir string) error {
if _, err := os.Stat(hostDir); err != nil {
return errors.Wrapf(err, "error checking path %q", hostDir)
}
return nil
}
func validateVolumeCtrDir(ctrDir string) error {
if ctrDir[0] != '/' {
return errors.Errorf("invalid container directory path %q", ctrDir)
}
return nil
}
func validateVolumeOpts(option string) error {
var foundRootPropagation, foundRWRO, foundLabelChange int
options := strings.Split(option, ",")
for _, opt := range options {
switch opt {
case "rw", "ro":
if foundRWRO > 1 {
return errors.Errorf("invalid options %q, can only specify 1 'rw' or 'ro' option", option)
}
foundRWRO++
case "z", "Z":
if foundLabelChange > 1 {
return errors.Errorf("invalid options %q, can only specify 1 'z' or 'Z' option", option)
}
foundLabelChange++
case "private", "rprivate", "shared", "rshared", "slave", "rslave":
if foundRootPropagation > 1 {
return errors.Errorf("invalid options %q, can only specify 1 '[r]shared', '[r]private' or '[r]slave' option", option)
}
foundRootPropagation++
default:
return errors.Errorf("invalid option type %q", option)
}
}
return nil
}
// validateExtraHost validates that the specified string is a valid extrahost and returns it.
// ExtraHost is in the form of name:ip where the ip has to be a valid ip (ipv4 or ipv6).
// for add-host flag
func validateExtraHost(val string) error {
// allow for IPv6 addresses in extra hosts by only splitting on first ":"
arr := strings.SplitN(val, ":", 2)
if len(arr) != 2 || len(arr[0]) == 0 {
return fmt.Errorf("bad format for add-host: %q", val)
}
if _, err := validateIPAddress(arr[1]); err != nil {
return fmt.Errorf("invalid IP address in add-host: %q", arr[1])
}
return nil
}
// validateIPAddress validates an IP address.
// for dns, ip, and ip6 flags also
func validateIPAddress(val string) (string, error) {
var ip = net.ParseIP(strings.TrimSpace(val))
if ip != nil {
return ip.String(), nil
}
return "", fmt.Errorf("%s is not an ip address", val)
}
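The extra-host validation above splits "name:ip" on the first colon only, which is what lets an IPv6 value like "::1" keep its own colons. A self-contained sketch combining the two helpers (checkExtraHost is a hypothetical name for illustration):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// checkExtraHost mirrors validateExtraHost: split "name:ip" on the first
// colon only (preserving colons inside an IPv6 address) and require the
// remainder to parse as an IP.
func checkExtraHost(val string) error {
	arr := strings.SplitN(val, ":", 2)
	if len(arr) != 2 || len(arr[0]) == 0 {
		return fmt.Errorf("bad format for add-host: %q", val)
	}
	if net.ParseIP(strings.TrimSpace(arr[1])) == nil {
		return fmt.Errorf("invalid IP address in add-host: %q", arr[1])
	}
	return nil
}

func main() {
	fmt.Println(checkExtraHost("db:10.0.0.5") == nil) // true
	fmt.Println(checkExtraHost("db:::1") == nil)      // true (IPv6 ::1)
}
```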


@@ -105,9 +105,13 @@ func pullTestImage(t *testing.T, imageName string) (string, error) {
if err != nil {
t.Fatal(err)
}
commonOpts := &buildah.CommonBuildOptions{
LabelOpts: nil,
}
options := buildah.BuilderOptions{
FromImage: imageName,
SignaturePolicyPath: signaturePolicyPath,
CommonBuildOpts: commonOpts,
}
b, err := buildah.NewBuilder(store, options)


@@ -74,12 +74,13 @@ var (
}
configDescription = "Modifies the configuration values which will be saved to the image"
configCommand = cli.Command{
Name: "config",
Usage: "Update image configuration settings",
Description: configDescription,
Flags: configFlags,
Action: configCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Name: "config",
Usage: "Update image configuration settings",
Description: configDescription,
Flags: configFlags,
Action: configCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
SkipArgReorder: true,
}
)


@@ -3,7 +3,11 @@ package main
import (
"encoding/json"
"fmt"
"os"
"strings"
"text/template"
"github.com/containers/storage"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
@@ -17,12 +21,43 @@ type jsonContainer struct {
ContainerName string `json:"containername"`
}
type containerOutputParams struct {
ContainerID string
Builder string
ImageID string
ImageName string
ContainerName string
}
type containerOptions struct {
all bool
format string
json bool
noHeading bool
noTruncate bool
quiet bool
}
type containerFilterParams struct {
id string
name string
ancestor string
}
var (
containersFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "also list non-buildah containers",
},
cli.StringFlag{
Name: "filter, f",
Usage: "filter output based on conditions provided",
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print containers using a Go template",
},
cli.BoolFlag{
Name: "json",
Usage: "output in JSON format",
@@ -42,12 +77,13 @@ var (
}
containersDescription = "Lists containers which appear to be " + buildah.Package + " working containers, their\n names and IDs, and the names and IDs of the images from which they were\n initialized"
containersCommand = cli.Command{
Name: "containers",
Usage: "List working containers and their base images",
Description: containersDescription,
Flags: containersFlags,
Action: containersCmd,
ArgsUsage: " ",
Name: "containers",
Usage: "List working containers and their base images",
Description: containersDescription,
Flags: containersFlags,
Action: containersCmd,
ArgsUsage: " ",
SkipArgReorder: true,
}
)
@@ -60,38 +96,35 @@ func containersCmd(c *cli.Context) error {
return err
}
quiet := c.Bool("quiet")
truncate := !c.Bool("notruncate")
JSONContainers := []jsonContainer{}
jsonOut := c.Bool("json")
if c.IsSet("quiet") && c.IsSet("format") {
return errors.Errorf("quiet and format are mutually exclusive")
}
list := func(n int, containerID, imageID, image, container string, isBuilder bool) {
if jsonOut {
JSONContainers = append(JSONContainers, jsonContainer{ID: containerID, Builder: isBuilder, ImageID: imageID, ImageName: image, ContainerName: container})
return
}
opts := containerOptions{
all: c.Bool("all"),
format: c.String("format"),
json: c.Bool("json"),
noHeading: c.Bool("noheading"),
noTruncate: c.Bool("notruncate"),
quiet: c.Bool("quiet"),
}
if n == 0 && !c.Bool("noheading") && !quiet {
if truncate {
fmt.Printf("%-12s %-8s %-12s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
}
}
if quiet {
fmt.Printf("%s\n", containerID)
} else {
isBuilderValue := ""
if isBuilder {
isBuilderValue = " *"
}
if truncate {
fmt.Printf("%-12.12s %-8s %-12.12s %-32s %s\n", containerID, isBuilderValue, imageID, image, container)
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", containerID, isBuilderValue, imageID, image, container)
}
var params *containerFilterParams
if c.IsSet("filter") {
params, err = parseCtrFilter(c.String("filter"))
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
}
if !opts.noHeading && !opts.quiet && opts.format == "" && !opts.json {
containerOutputHeader(!opts.noTruncate)
}
return outputContainers(store, opts, params)
}
func outputContainers(store storage.Store, opts containerOptions, params *containerFilterParams) error {
seenImages := make(map[string]string)
imageNameForID := func(id string) string {
if id == "" {
@@ -112,12 +145,36 @@ func containersCmd(c *cli.Context) error {
if err != nil {
return errors.Wrapf(err, "error reading build containers")
}
if !c.Bool("all") {
for i, builder := range builders {
var (
containerOutput []containerOutputParams
JSONContainers []jsonContainer
)
if !opts.all {
// only output containers created by buildah
for _, builder := range builders {
image := imageNameForID(builder.FromImageID)
list(i, builder.ContainerID, builder.FromImageID, image, builder.Container, true)
if !matchesCtrFilter(builder.ContainerID, builder.Container, builder.FromImageID, image, params) {
continue
}
if opts.json {
JSONContainers = append(JSONContainers, jsonContainer{ID: builder.ContainerID,
Builder: true,
ImageID: builder.FromImageID,
ImageName: image,
ContainerName: builder.Container})
continue
}
output := containerOutputParams{
ContainerID: builder.ContainerID,
Builder: " *",
ImageID: builder.FromImageID,
ImageName: image,
ContainerName: builder.Container,
}
containerOutput = append(containerOutput, output)
}
} else {
// output all containers currently in storage
builderMap := make(map[string]struct{})
for _, builder := range builders {
builderMap[builder.ContainerID] = struct{}{}
@@ -126,22 +183,136 @@ func containersCmd(c *cli.Context) error {
if err2 != nil {
return errors.Wrapf(err2, "error reading list of all containers")
}
for i, container := range containers {
for _, container := range containers {
name := ""
if len(container.Names) > 0 {
name = container.Names[0]
}
_, ours := builderMap[container.ID]
list(i, container.ID, container.ImageID, imageNameForID(container.ImageID), name, ours)
builder := ""
if ours {
builder = " *"
}
if !matchesCtrFilter(container.ID, name, container.ImageID, imageNameForID(container.ImageID), params) {
continue
}
if opts.json {
JSONContainers = append(JSONContainers, jsonContainer{ID: container.ID,
Builder: ours,
ImageID: container.ImageID,
ImageName: imageNameForID(container.ImageID),
ContainerName: name})
}
output := containerOutputParams{
ContainerID: container.ID,
Builder: builder,
ImageID: container.ImageID,
ImageName: imageNameForID(container.ImageID),
ContainerName: name,
}
containerOutput = append(containerOutput, output)
}
}
if jsonOut {
if opts.json {
data, err := json.MarshalIndent(JSONContainers, "", " ")
if err != nil {
return err
}
fmt.Printf("%s\n", data)
return nil
}
for _, ctr := range containerOutput {
if opts.quiet {
fmt.Printf("%-64s\n", ctr.ContainerID)
continue
}
if opts.format != "" {
if err := containerOutputUsingTemplate(opts.format, ctr); err != nil {
return err
}
continue
}
containerOutputUsingFormatString(!opts.noTruncate, ctr)
}
return nil
}
func containerOutputUsingTemplate(format string, params containerOutputParams) error {
tmpl, err := template.New("container").Parse(format)
if err != nil {
return errors.Wrapf(err, "Template parsing error")
}
err = tmpl.Execute(os.Stdout, params)
if err != nil {
return err
}
fmt.Println()
return nil
}
func containerOutputUsingFormatString(truncate bool, params containerOutputParams) {
if truncate {
fmt.Printf("%-12.12s %-8s %-12.12s %-32s %s\n", params.ContainerID, params.Builder, params.ImageID, params.ImageName, params.ContainerName)
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", params.ContainerID, params.Builder, params.ImageID, params.ImageName, params.ContainerName)
}
}
func containerOutputHeader(truncate bool) {
if truncate {
fmt.Printf("%-12s %-8s %-12s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
} else {
fmt.Printf("%-64s %-8s %-64s %-32s %s\n", "CONTAINER ID", "BUILDER", "IMAGE ID", "IMAGE NAME", "CONTAINER NAME")
}
}
func parseCtrFilter(filter string) (*containerFilterParams, error) {
params := new(containerFilterParams)
filters := strings.Split(filter, ",")
for _, param := range filters {
pair := strings.SplitN(param, "=", 2)
if len(pair) != 2 {
return nil, errors.Errorf("incorrect filter value %q, should be of form filter=value", param)
}
switch strings.TrimSpace(pair[0]) {
case "id":
params.id = pair[1]
case "name":
params.name = pair[1]
case "ancestor":
params.ancestor = pair[1]
default:
return nil, errors.Errorf("invalid filter %q", pair[0])
}
}
return params, nil
}
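parseCtrFilter above walks a comma-separated list of key=value pairs and accepts only the id, name, and ancestor keys. The same parse can be sketched with a map instead of the containerFilterParams struct (parseFilter is a hypothetical stand-in for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilter mirrors parseCtrFilter: split the filter expression on commas,
// split each pair on the first "=", and reject unknown keys.
func parseFilter(filter string) (map[string]string, error) {
	params := map[string]string{}
	for _, param := range strings.Split(filter, ",") {
		pair := strings.SplitN(param, "=", 2)
		if len(pair) != 2 {
			return nil, fmt.Errorf("incorrect filter value %q, should be of form filter=value", param)
		}
		switch key := strings.TrimSpace(pair[0]); key {
		case "id", "name", "ancestor":
			params[key] = pair[1]
		default:
			return nil, fmt.Errorf("invalid filter %q", pair[0])
		}
	}
	return params, nil
}

func main() {
	p, _ := parseFilter("name=mycontainer,ancestor=alpine")
	fmt.Println(p["name"], p["ancestor"]) // mycontainer alpine
}
```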
func matchesCtrName(ctrName, argName string) bool {
return strings.Contains(ctrName, argName)
}
func matchesAncestor(imgName, imgID, argName string) bool {
if matchesID(imgID, argName) {
return true
}
return matchesReference(imgName, argName)
}
func matchesCtrFilter(ctrID, ctrName, imgID, imgName string, params *containerFilterParams) bool {
if params == nil {
return true
}
if params.id != "" && !matchesID(ctrID, params.id) {
return false
}
if params.name != "" && !matchesCtrName(ctrName, params.name) {
return false
}
if params.ancestor != "" && !matchesAncestor(imgName, imgID, params.ancestor) {
return false
}
return true
}


@@ -23,7 +23,7 @@ var (
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.StringFlag{
Name: "name",
@@ -53,12 +53,13 @@ var (
fromDescription = "Creates a new working container, either from scratch or using a specified\n image as a starting point"
fromCommand = cli.Command{
Name: "from",
Usage: "Create a working container based on an image",
Description: fromDescription,
Flags: fromFlags,
Action: fromCmd,
ArgsUsage: "IMAGE",
Name: "from",
Usage: "Create a working container based on an image",
Description: fromDescription,
Flags: append(fromFlags, fromAndBudFlags...),
Action: fromCmd,
ArgsUsage: "IMAGE",
SkipArgReorder: true,
}
)
@@ -94,6 +95,11 @@ func fromCmd(c *cli.Context) error {
return err
}
commonOpts, err := parseCommonBuildOptions(c)
if err != nil {
return err
}
options := buildah.BuilderOptions{
FromImage: args[0],
Container: c.String("name"),
@@ -101,7 +107,9 @@ func fromCmd(c *cli.Context) error {
SignaturePolicyPath: signaturePolicy,
SystemContext: systemContext,
DefaultMountsFilePath: c.GlobalString("default-mounts-file"),
CommonBuildOpts: commonOpts,
}
if !c.Bool("quiet") {
options.ReportWriter = os.Stderr
}


@@ -14,6 +14,7 @@ import (
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
type jsonImage struct {
@@ -51,7 +52,7 @@ var (
},
cli.StringFlag{
Name: "format",
Usage: "pretty-print images using a Go template. will override --quiet",
Usage: "pretty-print images using a Go template",
},
cli.BoolFlag{
Name: "json",
@@ -73,12 +74,13 @@ var (
imagesDescription = "Lists locally stored images."
imagesCommand = cli.Command{
Name: "images",
Usage: "List images in local storage",
Description: imagesDescription,
Flags: imagesFlags,
Action: imagesCmd,
ArgsUsage: " ",
Name: "images",
Usage: "List images in local storage",
Description: imagesDescription,
Flags: imagesFlags,
Action: imagesCmd,
ArgsUsage: " ",
SkipArgReorder: true,
}
)
@@ -96,6 +98,10 @@ func imagesCmd(c *cli.Context) error {
return errors.Wrapf(err, "error reading images")
}
if c.IsSet("quiet") && c.IsSet("format") {
return errors.Errorf("quiet and format are mutually exclusive")
}
quiet := c.Bool("quiet")
truncate := !c.Bool("no-trunc")
digests := c.Bool("digests")
@@ -125,8 +131,6 @@ func imagesCmd(c *cli.Context) error {
if err != nil {
return errors.Wrapf(err, "error parsing filter")
}
} else {
params = nil
}
if len(images) > 0 && !c.Bool("noheading") && !quiet && !hasTemplate {
@@ -370,7 +374,9 @@ func outputUsingTemplate(format string, params imageOutputParams) error {
if err != nil {
return err
}
fmt.Println()
if terminal.IsTerminal(int(os.Stdout.Fd())) {
fmt.Println()
}
return nil
}


@@ -253,7 +253,7 @@ func TestOutputImagesFormatString(t *testing.T) {
output, err := captureOutputWithError(func() error {
return outputImages(images[:1], "{{.ID}}", store, nil, "", true, true, false, false)
})
expectedOutput := fmt.Sprintf("%s", images[0].ID)
expectedOutput := images[0].ID
if err != nil {
t.Error("format string output produces error")
} else if strings.TrimSpace(output) != strings.TrimSpace(expectedOutput) {


@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/urfave/cli"
"golang.org/x/crypto/ssh/terminal"
)
const (
@@ -33,12 +34,13 @@ var (
}
inspectDescription = "Inspects a build container's or built image's configuration."
inspectCommand = cli.Command{
Name: "inspect",
Usage: "Inspects the configuration of a container or image",
Description: inspectDescription,
Flags: inspectFlags,
Action: inspectCmd,
ArgsUsage: "CONTAINER-OR-IMAGE",
Name: "inspect",
Usage: "Inspects the configuration of a container or image",
Description: inspectDescription,
Flags: inspectFlags,
Action: inspectCmd,
ArgsUsage: "CONTAINER-OR-IMAGE",
SkipArgReorder: true,
}
)
@@ -96,13 +98,19 @@ func inspectCmd(c *cli.Context) error {
}
if c.IsSet("format") {
return t.Execute(os.Stdout, buildah.GetBuildInfo(builder))
if err := t.Execute(os.Stdout, buildah.GetBuildInfo(builder)); err != nil {
return err
}
if terminal.IsTerminal(int(os.Stdout.Fd())) {
fmt.Println()
}
return nil
}
b, err := json.MarshalIndent(builder, "", " ")
if err != nil {
return errors.Wrapf(err, "error encoding build container as json")
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
if terminal.IsTerminal(int(os.Stdout.Fd())) {
enc.SetEscapeHTML(false)
}
_, err = fmt.Println(string(b))
return err
return enc.Encode(builder)
}
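The hunk above swaps `json.MarshalIndent` plus `fmt.Println` for a `json.Encoder` with `SetIndent`, and turns off HTML escaping when stdout is a terminal. A minimal stdlib sketch of that difference (the `encode` helper is illustrative, not buildah's code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// encode marshals v with four-space indentation, optionally leaving
// HTML-significant characters (<, >, &) unescaped, as inspect now does
// when writing to a terminal.
func encode(v interface{}, escapeHTML bool) string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetIndent("", "    ")
	enc.SetEscapeHTML(escapeHTML)
	if err := enc.Encode(v); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	v := map[string]string{"FromImage": "<none>"}
	fmt.Print(encode(v, true))  // value appears as \u003cnone\u003e
	fmt.Print(encode(v, false)) // value appears literally as <none>
}
```

With escaping disabled, fields such as `<none>` print readably in a terminal instead of as `\u003cnone\u003e`.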


@@ -16,12 +16,13 @@ var (
},
}
mountCommand = cli.Command{
Name: "mount",
Usage: "Mount a working container's root filesystem",
Description: mountDescription,
Action: mountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Flags: mountFlags,
Name: "mount",
Usage: "Mount a working container's root filesystem",
Description: mountDescription,
Action: mountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Flags: mountFlags,
SkipArgReorder: true,
}
)
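`SkipArgReorder`, added to each command above, stops urfave/cli from moving flags that appear after the first positional argument back in front of it; everything after the first positional token is passed through untouched, which is what lets `buildah run $ctr ls -lZ /` hand `-lZ` to `ls`. A stdlib sketch of that "stop at the first positional argument" rule (`splitArgs` is a hypothetical helper, not part of urfave/cli):

```go
package main

import (
	"fmt"
	"strings"
)

// splitArgs collects leading flag tokens and stops at the first
// non-flag token, leaving the remainder untouched -- the behavior
// that cli.Command{SkipArgReorder: true} enables for a subcommand.
func splitArgs(argv []string) (flags, positional []string) {
	for i, a := range argv {
		if !strings.HasPrefix(a, "-") {
			return flags, argv[i:]
		}
		flags = append(flags, a)
	}
	return flags, nil
}

func main() {
	flags, rest := splitArgs([]string{"--tty", "ctr1", "ls", "-lZ", "/"})
	fmt.Println(flags) // [--tty]
	fmt.Println(rest)  // [ctr1 ls -lZ /]
}
```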


@@ -12,6 +12,7 @@ import (
imgspecv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
"github.com/projectatomic/buildah/util"
"github.com/urfave/cli"
)
@@ -29,7 +30,7 @@ var (
cli.StringFlag{
Name: "creds",
Value: "",
Usage: "use `username[:password]` for accessing the registry",
Usage: "use `[username[:password]]` for accessing the registry",
},
cli.BoolFlag{
Name: "disable-compression, D",
@@ -64,12 +65,13 @@ var (
`, strings.Join(transports.ListNames(), ", "))
pushCommand = cli.Command{
Name: "push",
Usage: "Push an image to a specified destination",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE DESTINATION",
Name: "push",
Usage: "Push an image to a specified destination",
Description: pushDescription,
Flags: pushFlags,
Action: pushCmd,
ArgsUsage: "IMAGE DESTINATION",
SkipArgReorder: true,
}
)
@@ -141,7 +143,10 @@ func pushCmd(c *cli.Context) error {
err = buildah.Push(src, dest, options)
if err != nil {
return errors.Wrapf(err, "error pushing image %q to %q", src, destSpec)
return util.GetFailureCause(
err,
errors.Wrapf(err, "error pushing image %q to %q", src, destSpec),
)
}
return nil


@@ -2,6 +2,7 @@ package main
import (
"fmt"
"io"
"os"
"github.com/pkg/errors"
@@ -10,19 +11,36 @@ import (
var (
rmDescription = "Removes one or more working containers, unmounting them if necessary"
rmCommand = cli.Command{
Name: "rm",
Aliases: []string{"delete"},
Usage: "Remove one or more working containers",
Description: rmDescription,
Action: rmCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [...]",
rmFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "remove all containers",
},
}
rmCommand = cli.Command{
Name: "rm",
Aliases: []string{"delete"},
Usage: "Remove one or more working containers",
Description: rmDescription,
Action: rmCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID [...]",
Flags: rmFlags,
SkipArgReorder: true,
}
)
// writeError writes `lastError` into `w` if not nil and return the next error `err`
func writeError(w io.Writer, err error, lastError error) error {
if lastError != nil {
fmt.Fprintln(w, lastError)
}
return err
}
func rmCmd(c *cli.Context) error {
delContainerErrStr := "error removing container"
args := c.Args()
if len(args) == 0 {
if len(args) == 0 && !c.Bool("all") {
return errors.Errorf("container ID must be specified")
}
store, err := getStore(c)
@@ -30,28 +48,36 @@ func rmCmd(c *cli.Context) error {
return err
}
var e error
for _, name := range args {
builder, err := openBuilder(store, name)
if e == nil {
e = err
}
var lastError error
if c.Bool("all") {
builders, err := openBuilders(store)
if err != nil {
fmt.Fprintf(os.Stderr, "error reading build container %q: %v\n", name, err)
continue
return errors.Wrapf(err, "error reading build containers")
}
id := builder.ContainerID
err = builder.Delete()
if e == nil {
e = err
for _, builder := range builders {
id := builder.ContainerID
if err = builder.Delete(); err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, builder.Container), lastError)
continue
}
fmt.Printf("%s\n", id)
}
if err != nil {
fmt.Fprintf(os.Stderr, "error removing container %q: %v\n", builder.Container, err)
continue
} else {
for _, name := range args {
builder, err := openBuilder(store, name)
if err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, name), lastError)
continue
}
id := builder.ContainerID
if err = builder.Delete(); err != nil {
lastError = writeError(os.Stderr, errors.Wrapf(err, "%s %q", delContainerErrStr, name), lastError)
continue
}
fmt.Printf("%s\n", id)
}
fmt.Printf("%s\n", id)
}
return e
return lastError
}


@@ -2,6 +2,7 @@ package main
import (
"fmt"
"os"
is "github.com/containers/image/storage"
"github.com/containers/image/transports"
@@ -16,28 +17,46 @@ import (
var (
rmiDescription = "removes one or more locally stored images."
rmiFlags = []cli.Flag{
cli.BoolFlag{
Name: "all, a",
Usage: "remove all images",
},
cli.BoolFlag{
Name: "prune, p",
Usage: "prune dangling images",
},
cli.BoolFlag{
Name: "force, f",
Usage: "force removal of the image",
Usage: "force removal of the image and any containers using the image",
},
}
rmiCommand = cli.Command{
Name: "rmi",
Usage: "removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
Flags: rmiFlags,
Name: "rmi",
Usage: "removes one or more images from local storage",
Description: rmiDescription,
Action: rmiCmd,
ArgsUsage: "IMAGE-NAME-OR-ID [...]",
Flags: rmiFlags,
SkipArgReorder: true,
}
)
func rmiCmd(c *cli.Context) error {
force := c.Bool("force")
removeAll := c.Bool("all")
pruneDangling := c.Bool("prune")
args := c.Args()
if len(args) == 0 {
if len(args) == 0 && !removeAll && !pruneDangling {
return errors.Errorf("image name or ID must be specified")
}
if len(args) > 0 && removeAll {
return errors.Errorf("when using the --all switch, you may not pass any image names or IDs")
}
if removeAll && pruneDangling {
return errors.Errorf("when using the --all switch, you may not use the --prune switch")
}
if err := validateFlags(c, rmiFlags); err != nil {
return err
}
@@ -47,26 +66,62 @@ func rmiCmd(c *cli.Context) error {
return err
}
for _, id := range args {
image, err := getImage(id, store)
imagesToDelete := args[:]
var lastError error
if removeAll {
imagesToDelete, err = findAllImages(store)
if err != nil {
return errors.Wrapf(err, "could not get image %q", id)
return err
}
}
if pruneDangling {
imagesToDelete, err = findDanglingImages(store)
if err != nil {
return err
}
}
for _, id := range imagesToDelete {
image, err := getImage(id, store)
if err != nil || image == nil {
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
if err == nil {
err = storage.ErrNotAnImage
}
lastError = errors.Wrapf(err, "could not get image %q", id)
continue
}
if image != nil {
ctrIDs, err := runningContainers(image, store)
if err != nil {
return errors.Wrapf(err, "error getting running containers for image %q", id)
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error getting running containers for image %q", id)
continue
}
if len(ctrIDs) > 0 && len(image.Names) <= 1 {
if force {
err = removeContainers(ctrIDs, store)
if err != nil {
return errors.Wrapf(err, "error removing containers %v for image %q", ctrIDs, id)
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error removing containers %v for image %q", ctrIDs, id)
continue
}
} else {
for _, ctrID := range ctrIDs {
return fmt.Errorf("Could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(storage.ErrImageUsedByContainer, "Could not remove image %q (must force) - container %q is using its reference image", id, ctrID)
}
continue
}
}
// If the user supplied an ID, we cannot delete the image if it is referred to by multiple tags
@@ -79,7 +134,11 @@ func rmiCmd(c *cli.Context) error {
} else {
name, err2 := untagImage(id, image, store)
if err2 != nil {
return errors.Wrapf(err, "error removing tag %q from image %q", id, image.ID)
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err2, "error removing tag %q from image %q", id, image.ID)
continue
}
fmt.Printf("untagged: %s\n", name)
}
@@ -89,13 +148,17 @@ func rmiCmd(c *cli.Context) error {
}
id, err := removeImage(image, store)
if err != nil {
return errors.Wrapf(err, "error removing image %q", image.ID)
if lastError != nil {
fmt.Fprintln(os.Stderr, lastError)
}
lastError = errors.Wrapf(err, "error removing image %q", image.ID)
continue
}
fmt.Printf("%s\n", id)
}
}
return nil
return lastError
}
func getImage(id string, store storage.Store) (*storage.Image, error) {
@@ -178,7 +241,7 @@ func removeContainers(ctrIDs []string, store storage.Store) error {
func properImageRef(id string) (types.ImageReference, error) {
var err error
if ref, err := alltransports.ParseImageName(id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
@@ -192,7 +255,7 @@ func properImageRef(id string) (types.ImageReference, error) {
func storageImageRef(store storage.Store, id string) (types.ImageReference, error) {
var err error
if ref, err := is.Transport.ParseStoreReference(store, id); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
@@ -211,7 +274,7 @@ func storageImageID(store storage.Store, id string) (types.ImageReference, error
imageID = img.ID
}
if ref, err := is.Transport.ParseStoreReference(store, imageID); err == nil {
if img, err2 := ref.NewImage(nil); err2 == nil {
if img, err2 := ref.NewImageSource(nil); err2 == nil {
img.Close()
return ref, nil
}
@@ -219,3 +282,35 @@ func storageImageID(store storage.Store, id string) (types.ImageReference, error
}
return nil, errors.Wrapf(err, "error parsing %q as a storage image reference: %v", id)
}
// Returns a list of all existing images
func findAllImages(store storage.Store) ([]string, error) {
imagesToDelete := []string{}
images, err := store.Images()
if err != nil {
return nil, errors.Wrapf(err, "error reading images")
}
for _, image := range images {
imagesToDelete = append(imagesToDelete, image.ID)
}
return imagesToDelete, nil
}
// Returns a list of all dangling images
func findDanglingImages(store storage.Store) ([]string, error) {
imagesToDelete := []string{}
images, err := store.Images()
if err != nil {
return nil, errors.Wrapf(err, "error reading images")
}
for _, image := range images {
if len(image.Names) == 0 {
imagesToDelete = append(imagesToDelete, image.ID)
}
}
return imagesToDelete, nil
}
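`findDanglingImages` treats any image with no names as dangling, the rule behind `buildah rmi --prune`. A minimal sketch of the same filter, with a stand-in `image` type instead of `storage.Image`:

```go
package main

import "fmt"

// image is a stand-in for storage.Image; only the fields the filter
// needs are included here.
type image struct {
	ID    string
	Names []string
}

// danglingIDs returns the IDs of images that have no names, the same
// rule findDanglingImages applies for `buildah rmi --prune`.
func danglingIDs(images []image) []string {
	ids := []string{}
	for _, img := range images {
		if len(img.Names) == 0 {
			ids = append(ids, img.ID)
		}
	}
	return ids
}

func main() {
	imgs := []image{
		{ID: "aaa", Names: []string{"docker.io/library/alpine:latest"}},
		{ID: "bbb"}, // untagged: no names, so it is dangling
	}
	fmt.Println(danglingIDs(imgs)) // [bbb]
}
```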


@@ -17,7 +17,7 @@ var (
runFlags = []cli.Flag{
cli.StringFlag{
Name: "hostname",
Usage: "Set the hostname inside of the container",
Usage: "set the hostname inside of the container",
},
cli.StringFlag{
Name: "runtime",
@@ -28,6 +28,10 @@ var (
Name: "runtime-flag",
Usage: "add global flags for the container runtime",
},
cli.StringSliceFlag{
Name: "security-opt",
Usage: "security options (default [])",
},
cli.BoolFlag{
Name: "tty",
Usage: "allocate a pseudo-TTY in the container",
@@ -39,12 +43,13 @@ var (
}
runDescription = "Runs a specified command using the container's root filesystem as a root\n filesystem, using configuration settings inherited from the container's\n image or as specified using previous calls to the config command"
runCommand = cli.Command{
Name: "run",
Usage: "Run a command inside of the container",
Description: runDescription,
Flags: runFlags,
Action: runCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID COMMAND [ARGS [...]]",
Name: "run",
Usage: "Run a command inside of the container",
Description: runDescription,
Flags: runFlags,
Action: runCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID COMMAND [ARGS [...]]",
SkipArgReorder: true,
}
)
@@ -73,10 +78,15 @@ func runCmd(c *cli.Context) error {
return errors.Wrapf(err, "error reading build container %q", name)
}
runtimeFlags := []string{}
for _, arg := range c.StringSlice("runtime-flag") {
runtimeFlags = append(runtimeFlags, "--"+arg)
}
options := buildah.RunOptions{
Hostname: c.String("hostname"),
Runtime: c.String("runtime"),
Args: c.StringSlice("runtime-flag"),
Args: runtimeFlags,
}
if c.IsSet("tty") {
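The fix above prepends `--` to every `--runtime-flag` value before it is handed to the container runtime, so `--runtime-flag log-format=json` reaches runc as `--log-format=json`. A self-contained sketch of that loop (`prefixRuntimeFlags` is an illustrative name, not buildah's):

```go
package main

import "fmt"

// prefixRuntimeFlags mirrors the loop added to runCmd: each value of
// --runtime-flag is given a leading "--" before being passed to the
// container runtime (runc by default).
func prefixRuntimeFlags(values []string) []string {
	runtimeFlags := []string{}
	for _, arg := range values {
		runtimeFlags = append(runtimeFlags, "--"+arg)
	}
	return runtimeFlags
}

func main() {
	fmt.Println(prefixRuntimeFlags([]string{"log-format=json", "debug"}))
	// [--log-format=json --debug]
}
```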


@@ -9,11 +9,12 @@ import (
var (
tagDescription = "Adds one or more additional names to locally-stored image"
tagCommand = cli.Command{
Name: "tag",
Usage: "Add an additional name to a local image",
Description: tagDescription,
Action: tagCmd,
ArgsUsage: "IMAGE-NAME [IMAGE-NAME ...]",
Name: "tag",
Usage: "Add an additional name to a local image",
Description: tagDescription,
Action: tagCmd,
ArgsUsage: "IMAGE-NAME [IMAGE-NAME ...]",
SkipArgReorder: true,
}
)


@@ -7,12 +7,13 @@ import (
var (
umountCommand = cli.Command{
Name: "umount",
Aliases: []string{"unmount"},
Usage: "Unmount a working container's root filesystem",
Description: "Unmounts a working container's root filesystem",
Action: umountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
Name: "umount",
Aliases: []string{"unmount"},
Usage: "Unmount a working container's root filesystem",
Description: "Unmounts a working container's root filesystem",
Action: umountCmd,
ArgsUsage: "CONTAINER-NAME-OR-ID",
SkipArgReorder: true,
}
)


@@ -42,7 +42,8 @@ func versionCmd(c *cli.Context) error {
//cli command to print out the version info of buildah
var versionCommand = cli.Command{
Name: "version",
Usage: "Display the Buildah Version Information",
Action: versionCmd,
Name: "version",
Usage: "Display the Buildah Version Information",
Action: versionCmd,
SkipArgReorder: true,
}


@@ -2,7 +2,6 @@ package buildah
import (
"encoding/json"
"fmt"
"path/filepath"
"runtime"
"strings"
@@ -139,23 +138,30 @@ func makeDockerV2S1Image(manifest docker.V2S1Manifest) (docker.V2Image, error) {
}
// Build a filesystem history.
history := []docker.V2S2History{}
lastID := ""
for i := range manifest.History {
h := docker.V2S2History{
Created: time.Now().UTC(),
Author: "",
CreatedBy: "",
Comment: "",
EmptyLayer: false,
}
// Decode the compatibility field.
dcompat := docker.V1Compatibility{}
if err2 := json.Unmarshal([]byte(manifest.History[i].V1Compatibility), &dcompat); err2 == nil {
h.Created = dcompat.Created.UTC()
h.Author = dcompat.Author
h.Comment = dcompat.Comment
if len(dcompat.ContainerConfig.Cmd) > 0 {
h.CreatedBy = fmt.Sprintf("%v", dcompat.ContainerConfig.Cmd)
}
h.EmptyLayer = dcompat.ThrowAway
if err = json.Unmarshal([]byte(manifest.History[i].V1Compatibility), &dcompat); err != nil {
return docker.V2Image{}, errors.Errorf("error parsing image compatibility data (%q) from history", manifest.History[i].V1Compatibility)
}
// Skip this history item if it shares the ID of the last one
// that we saw, since the image library will do the same.
if i > 0 && dcompat.ID == lastID {
continue
}
lastID = dcompat.ID
// Construct a new history item using the recovered information.
createdBy := ""
if len(dcompat.ContainerConfig.Cmd) > 0 {
createdBy = strings.Join(dcompat.ContainerConfig.Cmd, " ")
}
h := docker.V2S2History{
Created: dcompat.Created.UTC(),
Author: dcompat.Author,
CreatedBy: createdBy,
Comment: dcompat.Comment,
EmptyLayer: dcompat.ThrowAway,
}
// Prepend this layer to the list, because a v2s1 format manifest's list is in reverse order
// compared to v2s2, which lists earlier layers before later ones.


@@ -210,6 +210,10 @@ return 1
_buildah_rmi() {
local boolean_options="
--all
-a
--force
-f
--help
-h
"
@@ -226,6 +230,8 @@ return 1
_buildah_rm() {
local boolean_options="
--all
-a
--help
-h
"
@@ -296,6 +302,7 @@ return 1
"
local options_with_args="
--authfile
--cert-dir
--creds
--signature-policy
@@ -345,16 +352,31 @@ return 1
"
local options_with_args="
--add-host
--authfile
--signature-policy
--build-arg
--cert-dir
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--creds
-f
--file
--format
--label
-m
--memory
--memory-swap
--runtime
--runtime-flag
--tag
--security-opt
--signature-policy
-t
--file
-f
--build-arg
--format
--tag
--ulimit
"
local all_options="$options_with_args $boolean_options"
@@ -390,6 +412,7 @@ return 1
--hostname
--runtime
--runtime-flag
--security-opt
--volume
-v
"
@@ -554,6 +577,9 @@ return 1
"
local options_with_args="
--filter
-f
--format
"
local all_options="$options_with_args $boolean_options"
@@ -631,11 +657,23 @@ return 1
"
local options_with_args="
--add-host
--authfile
--cert-dir
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--creds
-m
--memory
--memory-swap
--name
--signature-policy
--security-opt
--ulimit
"


@@ -25,7 +25,8 @@
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: buildah
Version: 0.10
# Bump version in buildah.go too
Version: 0.15
Release: 1.git%{shortcommit}%{?dist}
Summary: A command line tool used for creating OCI Images
License: ASL 2.0
@@ -41,6 +42,7 @@ BuildRequires: gpgme-devel
BuildRequires: device-mapper-devel
BuildRequires: btrfs-progs-devel
BuildRequires: libassuan-devel
BuildRequires: libseccomp-devel
BuildRequires: glib2-devel
BuildRequires: ostree-devel
BuildRequires: make
@@ -89,6 +91,64 @@ make DESTDIR=%{buildroot} PREFIX=%{_prefix} install install.completions
%{_datadir}/bash-completion/completions/*
%changelog
* Tue Feb 27 2018 Dan Walsh <dwalsh@redhat.com> 0.15-1
- Fix handling of buildah run command options
* Mon Feb 26 2018 Dan Walsh <dwalsh@redhat.com> 0.14-1
- If commonOpts do not exist, we should return rather than segfault
- Display full error string instead of just status
- Implement --volume and --shm-size for bud and from
- Fix secrets patch for buildah bud
- Fixes the naming issue of blobs and config for the dir transport by removing the .tar extension
* Thu Feb 22 2018 Dan Walsh <dwalsh@redhat.com> 0.13-1
- Vendor in latest containers/storage
- This fixes a large SELinux bug.
- run: do not open /etc/hosts if not needed
- Add the following flags to buildah bud and from
--add-host
--cgroup-parent
--cpu-period
--cpu-quota
--cpu-shares
--cpuset-cpus
--cpuset-mems
--memory
--memory-swap
--security-opt
--ulimit
* Mon Feb 12 2018 Dan Walsh <dwalsh@redhat.com> 0.12-1
- Added handling for simpler error messages for unknown Dockerfile instructions.
- Change default certs directory to /etc/containers/certs.d
- Vendor in latest containers/image
- Vendor in latest containers/storage
- build-using-dockerfile: set the 'author' field for MAINTAINER
- Return exit code 1 when buildah-rmi fails
- Trim the image reference to just its name before calling getImageName
- Touch up rmi -f usage statement
- Add --format and --filter to buildah containers
- Add --prune,-p option to rmi command
- Add authfile param to commit
- Fix --runtime-flag for buildah run and bud
- format should override quiet for images
- Allow all auth params to work with bud
- Do not overwrite directory permissions on --chown
- Unescape HTML characters output into the terminal
- Fix: setting the container name to the image
- Prompt for un/pwd if not supplied with --creds
- Make bud be really quiet
- Return a better error message when failed to resolve an image
- Update auth tests and fix bud man page
* Tue Jan 16 2018 Dan Walsh <dwalsh@redhat.com> 0.11-1
- Add --all to remove containers
- Add --all functionality to rmi
- Show ctrid when doing rm -all
- Ignore sequential duplicate layers when reading v2s1
- Lots of minor bug fixes
- Vendor in latest containers/image and containers/storage
* Sat Dec 23 2017 Dan Walsh <dwalsh@redhat.com> 0.10-1
- Display Config and Manifest as strings
- Bump containers/image


@@ -14,9 +14,15 @@ to a temporary location.
## OPTIONS
**--add-host**=[]
Add a custom host-to-IP mapping (host:ip)
Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option can be set multiple times.
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--build-arg** *arg=value*
@@ -26,6 +32,84 @@ instructions read from the Dockerfiles in the same way that environment
variables are, but which will not be added to environment variable list in the
resulting image's configuration.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--cgroup-parent**=""
Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
**--cpu-period**=*0*
Limit the CPU CFS (Completely Fair Scheduler) period
Limit the container's CPU usage. This flag tells the kernel to restrict the container's CPU usage to the period you specify.
**--cpu-quota**=*0*
Limit the CPU CFS (Completely Fair Scheduler) quota
Limit the container's CPU usage. By default, containers run with the full
CPU resource. This flag tells the kernel to restrict the container's CPU usage
to the quota you specify.
**--cpu-shares**=*0*
CPU shares (relative weight)
By default, all containers get the same proportion of CPU cycles. This proportion
can be modified by changing the container's CPU share weighting relative
to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **--cpu-shares**
flag to set the weighting to 2 or higher.
The proportion will only apply when CPU-intensive processes are running.
When tasks in one container are idle, other containers can use the
left-over CPU time. The actual amount of CPU time will vary depending on
the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and
two others have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.5%, 16.5% and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed over all CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
**{C1}** with **-c=1024** running two processes, this can result in the following
division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2
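The proportions above are just each container's weight divided by the sum of all weights while every container is busy. A quick check of the three-container example (the helper name is illustrative):

```go
package main

import "fmt"

// shareOfCPU computes the fraction of total CPU time a container with
// the given --cpu-shares weight receives when every container is
// contending for CPU.
func shareOfCPU(weight int, allWeights []int) float64 {
	total := 0
	for _, w := range allWeights {
		total += w
	}
	return float64(weight) / float64(total)
}

func main() {
	// One container at 1024 shares, two at 512: 1024/2048 = 50%.
	weights := []int{1024, 512, 512}
	fmt.Printf("%.0f%%\n", 100*shareOfCPU(1024, weights)) // 50%
}
```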
**--cpuset-cpus**=""
CPUs in which to allow execution (0-3, 0,1)
**--cpuset-mems**=""
Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
If you have four memory nodes on your system (0-3) and use `--cpuset-mems=0,1`,
then processes in your container will only use memory from the first
two memory nodes.
**--creds** *creds*
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**-f, --file** *Dockerfile*
Specifies a Dockerfile which contains instructions for building the image,
@@ -43,6 +127,27 @@ Control the format for the built image's manifest and configuration data.
Recognized formats include *oci* (OCI image-spec v1.0, the default) and
*docker* (version 2, using schema format 2 for the manifest).
**-m**, **--memory**=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Allows you to constrain the memory available to a container. If the host
supports swap memory, then the **-m** memory setting can be larger than physical
RAM. If a limit of 0 is specified (not using **-m**), the container's memory is
not limited. The actual limit may be rounded up to a multiple of the operating
system's page size (in the unlimited case, the effective value is extremely large, on the order of millions of trillions of bytes).
**--memory-swap**="LIMIT"
A limit value equal to memory plus swap. Must be used with the **-m**
(**--memory**) flag. The swap `LIMIT` should always be larger than **-m**
(**--memory**) value. By default, the swap `LIMIT` will be set to double
the value of --memory.
The format of `LIMIT` is `<number>[<unit>]`. Unit can be `b` (bytes),
`k` (kilobytes), `m` (megabytes), or `g` (gigabytes). If you don't specify a
unit, `b` is used. Set LIMIT to `-1` to enable unlimited swap.
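The `LIMIT` format described here, `<number>[<unit>]` with `b`/`k`/`m`/`g` suffixes and bytes as the default, can be sketched with a small parser. This is an illustrative stdlib parser, not the one buildah uses:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLimit converts a LIMIT string of the form <number>[<unit>]
// (unit b, k, m, or g; bytes when the unit is omitted) into a byte
// count. A bare "-1" passes through, matching the unlimited-swap case.
func parseLimit(s string) (int64, error) {
	multiplier := int64(1)
	switch {
	case strings.HasSuffix(s, "k"):
		multiplier, s = 1024, strings.TrimSuffix(s, "k")
	case strings.HasSuffix(s, "m"):
		multiplier, s = 1024*1024, strings.TrimSuffix(s, "m")
	case strings.HasSuffix(s, "g"):
		multiplier, s = 1024*1024*1024, strings.TrimSuffix(s, "g")
	case strings.HasSuffix(s, "b"):
		s = strings.TrimSuffix(s, "b")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid limit %q: %w", s, err)
	}
	return n * multiplier, nil
}

func main() {
	n, _ := parseLimit("40m")
	fmt.Println(n) // 41943040
}
```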
**--pull**
Pull the image if it is not present. If this flag is disabled (with
@@ -53,7 +158,7 @@ Defaults to *true*.
Pull the image even if a version of the image is already present.
**--quiet**
**-q, --quiet**
Suppress output messages which indicate which instruction is being processed,
and of progress when pulling images from a registry, and when writing the
@@ -66,7 +171,34 @@ commands specified by the **RUN** instruction.
**--runtime-flag** *flag*
Adds global flags for the container runtime.
Adds global flags for the container runtime. To list the supported flags, please
consult the manpages of the selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`).
Note: Do not pass the leading `--` to the flag. To pass the runc flag `--log-format json`
to buildah bud, the option given would be `--runtime-flag log-format=json`.
**--security-opt**=[]
Security Options
"label=user:USER" : Set the label user for the container
"label=role:ROLE" : Set the label role for the container
"label=type:TYPE" : Set the label type for the container
"label=level:LEVEL" : Set the label level for the container
"label=disable" : Turn off label confinement for the container
"no-new-privileges" : Not supported
"seccomp=unconfined" : Turn off seccomp confinement for the container
"seccomp=profile.json" : JSON file of whitelisted syscalls to be used as a seccomp filter
"apparmor=unconfined" : Turn off apparmor confinement for the container
"apparmor=your-profile" : Set the apparmor confinement profile for the container
**--shm-size**=""
Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m`(megabytes), or `g` (gigabytes).
If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`.
**--signature-policy** *signaturepolicy*
@@ -81,7 +213,76 @@ process completes successfully.
**--tls-verify** *bool-value*
Require HTTPS and verify certificates when talking to container registries (defaults to true)
Require HTTPS and verify certificates when talking to container registries (defaults to true).
**--ulimit**=[]
Ulimit options
**-v**|**--volume**[=*[HOST-DIR:CONTAINER-DIR[:OPTIONS]]*]
Create a bind mount. If you specify ` -v /HOST-DIR:/CONTAINER-DIR`, buildah
bind mounts `/HOST-DIR` on the host to `/CONTAINER-DIR` in the buildah
container. The `OPTIONS` are a comma-delimited list and can be:
* [rw|ro]
* [z|Z]
* [`[r]shared`|`[r]slave`|`[r]private`]
The `CONTAINER-DIR` must be an absolute path such as `/src/docs`. The `HOST-DIR`
must be an absolute path as well. buildah bind-mounts the `HOST-DIR` to the
path you specify. For example, if you supply the `/foo` value, buildah creates a bind mount.
You can specify multiple **-v** options to mount one or more mounts to a
container.
You can add `:ro` or `:rw` suffix to a volume to mount it read-only or
read-write mode, respectively. By default, the volumes are mounted read-write.
See examples.
Labeling systems like SELinux require that proper labels are placed on volume
content mounted into a container. Without a label, the security system might
prevent the processes running inside the container from using the content. By
default, buildah does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes
`:z` or `:Z` to the volume mount. These suffixes tell buildah to relabel file
objects on the shared volumes. The `z` option tells buildah that two containers
share the volume content. As a result, buildah labels the content with a shared
content label. Shared volume labels allow all containers to read/write content.
The `Z` option tells buildah to label the content with a private unshared label.
Only the current container can use a private volume.
By default, bind-mounted volumes are `private`. That means any mounts done
inside the container will not be visible on the host, and vice versa. This behavior can
be changed by specifying a volume mount propagation property.
When the mount propagation policy is set to `shared`, any mounts completed inside
the container on that volume will be visible to both the host and container. When
the mount propagation policy is set to `slave`, one way mount propagation is enabled
and any mounts completed on the host for that volume will be visible only inside of the container.
To control the mount propagation property of volume use the `:[r]shared`,
`:[r]slave` or `:[r]private` propagation flag. The propagation property can
be specified only for bind mounted volumes and not for internal volumes or
named volumes. For mount propagation to work, the source mount point (the mount
point where the source directory is mounted) has to have the right propagation
properties. For shared volumes, the source mount point has to be shared, and for
slave volumes, the source mount has to be either shared or slave.
Use `df <source-dir>` to determine the source mount and then use
`findmnt -o TARGET,PROPAGATION <source-mount-dir>` to determine the propagation
properties of the source mount. If the `findmnt` utility is not available, the source
mount point can be determined by looking at the mount entry in `/proc/self/mountinfo`.
Look at the `optional fields` and see if any propagation properties are specified.
`shared:X` means the mount is `shared`, `master:X` means the mount is `slave`, and if
neither is present, the mount is `private`.
To change propagation properties of a mount point use the `mount` command. For
example, to bind mount the source directory `/foo` do
`mount --bind /foo /foo` and `mount --make-private --make-shared /foo`. This
will convert /foo into a `shared` mount point. The propagation properties of the source
mount can be changed directly. For instance if `/` is the source mount for
`/foo`, then use `mount --make-shared /` to convert `/` into a `shared` mount.
## EXAMPLE
@@ -97,5 +298,17 @@ buildah bud --tls-verify=true -t imageName -f Dockerfile.simple
buildah bud --tls-verify=false -t imageName .
buildah bud --runtime-flag log-format=json .
buildah bud --runtime-flag debug .
buildah bud --authfile /tmp/auths/myauths.json --cert-dir ~/auth --tls-verify=true --creds=username:password -t imageName -f Dockerfile.simple
buildah bud --memory 40m --cpu-period 10000 --cpu-quota 50000 --ulimit nofile=1024:1028 -t imageName .
buildah bud --security-opt label=level:s0:c100,c200 --cgroup-parent /path/to/cgroup/parent -t imageName .
buildah bud --volume /home/test:/myvol:ro,Z -t imageName .
## SEE ALSO
buildah(1), kpod-login(1), docker-login(1)
buildah(1), podman-login(1), docker-login(1)


@@ -13,13 +13,21 @@ specified, an ID is assigned, but no name is assigned to the image.
## OPTIONS
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**--disable-compression, -D**
@@ -71,5 +79,8 @@ This example commits the container to the image on the local registry while turn
This example commits the container to the image on the local registry using credentials and certificates for authentication.
`buildah commit --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
This example commits the container to the image on the local registry using credentials from the /tmp/auths/myauths.json file and certificates for authentication.
`buildah commit --authfile /tmp/auths/myauths.json --cert-dir ~/auth --tls-verify=true --creds=username:password containerID docker://localhost:5000/imageId`
## SEE ALSO
buildah(1)

View File

@@ -15,7 +15,34 @@ IDs, and the names and IDs of the images from which they were initialized.
**--all, -a**
List information about all containers, including those which were not created
by and are not being used by Buildah.
by and are not being used by Buildah. Containers created by Buildah are
denoted with an '*' in the 'BUILDER' column.
**--filter, -f**
Filter output based on conditions provided.
Valid filters are listed below:
| **Filter** | **Description** |
| --------------- | ------------------------------------------------------------------- |
| id | [ID] Container's ID |
| name | [Name] Container's name |
| ancestor | [ImageName] Image or descendant used to create container |
**--format**
Pretty-print containers using a Go template.
Valid placeholders for the Go template are listed below:
| **Placeholder** | **Description** |
| --------------- | -----------------------------------------|
| .ContainerID | Container ID |
| .Builder | Whether container was created by buildah |
| .ImageID | Image ID |
| .ImageName | Image name |
| .ContainerName | Container name |
**--json**
@@ -36,12 +63,55 @@ Displays only the container IDs.
## EXAMPLE
buildah containers
```
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
29bdb522fc62 * 3fd9065eaf02 docker.io/library/alpine:latest alpine-working-container
c6b04237ac8e * f9b6f7f7b9d3 docker.io/library/busybox:latest busybox-working-container
```
buildah containers --quiet
```
29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0
c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb
```
buildah containers -q --noheading --notruncate
```
29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0
c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb
```
buildah containers --json
```
[
{
"id": "29bdb522fc62d43fca0c1a0f11cfc6dfcfed169cf6cf25f928ebca1a612ff5b0",
"builder": true,
"imageid": "3fd9065eaf02feaf94d68376da52541925650b81698c53c6824d92ff63f98353",
"imagename": "docker.io/library/alpine:latest",
"containername": "alpine-working-container"
},
{
"id": "c6b04237ac8e9d435ec9cf0e7eda91e302f2db9ef908418522c2d666352281eb",
"builder": true,
"imageid": "f9b6f7f7b9d34113f66e16a9da3e921a580937aec98da344b852ca540aaa2242",
"imagename": "docker.io/library/busybox:latest",
"containername": "busybox-working-container"
}
]
```
buildah containers --format "{{.ContainerID}} {{.ContainerName}}"
```
3fbeaa87e583ee7a3e6787b2d3af961ef21946a0c01a08938e4f52d53cce4c04 myalpine-working-container
fbfd3505376ee639c3ed50f9d32b78445cd59198a1dfcacf2e7958cda2516d5c ubuntu-working-container
```
buildah containers --filter ancestor=ubuntu
```
CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME
fbfd3505376e * 0ff04b2e7b63 docker.io/library/ubuntu:latest ubuntu-working-container
```
## SEE ALSO
buildah(1)

View File

@@ -17,7 +17,7 @@ Multiple transports are supported:
An existing local directory _path_ retrieving the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_ (Default)
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_
An image is retrieved as a `docker load` formatted file.
@@ -36,18 +36,115 @@ The container ID of the container that was created. On error, -1 is returned an
## OPTIONS
**--add-host**=[]
Add a custom host-to-IP mapping (host:ip)
Add a line to /etc/hosts. The format is hostname:ip. The **--add-host** option can be set multiple times.
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--cgroup-parent**=""
Path to cgroups under which the cgroup for the container will be created. If the path is not absolute, the path is considered to be relative to the cgroups path of the init process. Cgroups will be created if they do not already exist.
**--cpu-period**=*0*
Limit the CPU CFS (Completely Fair Scheduler) period
Limit the container's CPU usage. This flag tells the kernel to restrict the container's CPU usage to the period you specify.
**--cpu-quota**=*0*
Limit the CPU CFS (Completely Fair Scheduler) quota
Limit the container's CPU usage. By default, containers run with the full
CPU resource. This flag tells the kernel to restrict the container's CPU usage
to the quota you specify.
**--cpu-shares**=*0*
CPU shares (relative weight)
By default, all containers get the same proportion of CPU cycles. This proportion
can be modified by changing the container's CPU share weighting relative
to the weighting of all other running containers.
To modify the proportion from the default of 1024, use the **--cpu-shares**
flag to set the weighting to 2 or higher.
The proportion will only apply when CPU-intensive processes are running.
When tasks in one container are idle, other containers can use the
left-over CPU time. The actual amount of CPU time will vary depending on
the number of containers running on the system.
For example, consider three containers, one has a cpu-share of 1024 and
two others have a cpu-share setting of 512. When processes in all three
containers attempt to use 100% of CPU, the first container would receive
50% of the total CPU time. If you add a fourth container with a cpu-share
of 1024, the first container only gets 33% of the CPU. The remaining containers
receive 16.7%, 16.7% and 33% of the CPU.
On a multi-core system, the shares of CPU time are distributed over all CPU
cores. Even if a container is limited to less than 100% of CPU time, it can
use 100% of each individual CPU core.
For example, consider a system with more than three cores. If you start one
container **{C0}** with **-c=512** running one process, and another container
**{C1}** with **-c=1024** running two processes, this can result in the following
division of CPU shares:
PID container CPU CPU share
100 {C0} 0 100% of CPU0
101 {C1} 1 100% of CPU1
102 {C1} 2 100% of CPU2
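The proportion arithmetic in the earlier three-container example (shares 1024, 512, 512) can be checked with plain shell arithmetic; this is only an illustration of the weighting, not anything buildah itself runs:

```shell
# Each container's slice of a fully loaded CPU is its --cpu-shares weight
# divided by the sum of all weights.
total=0
for s in 1024 512 512; do total=$((total + s)); done   # total = 2048
for s in 1024 512 512; do
    echo "cpu-shares=$s -> $((100 * s / total))% of CPU time"
done
# prints 50%, 25%, 25%
```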
**--cpuset-cpus**=""
CPUs in which to allow execution (0-3, 0,1)
**--cpuset-mems**=""
Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
If you have four memory nodes on your system (0-3) and use `--cpuset-mems=0,1`,
then processes in your container will only use memory from the first
two memory nodes.
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**-m**, **--memory**=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Allows you to constrain the memory available to a container. If the host
supports swap memory, then the **-m** memory setting can be larger than physical
RAM. If a limit of 0 is specified (not using **-m**), the container's memory is
not limited. The actual limit may be rounded up to a multiple of the operating
system's page size (in the unlimited case the effective value is very large, on the order of millions of trillions of bytes).
**--memory-swap**="LIMIT"
A limit value equal to memory plus swap. Must be used with the **-m**
(**--memory**) flag. The swap `LIMIT` should always be larger than the **-m**
(**--memory**) value. By default, the swap `LIMIT` will be set to double
the value of --memory.
The format of `LIMIT` is `<number>[<unit>]`. Unit can be `b` (bytes),
`k` (kilobytes), `m` (megabytes), or `g` (gigabytes). If you don't specify a
unit, `b` is used. Set LIMIT to `-1` to enable unlimited swap.
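A minimal shell sketch of the `LIMIT` grammar above (`to_bytes` is an invented helper name for illustration; buildah parses this syntax internally):

```shell
# Convert "<number>[<unit>]" to bytes: b/k/m/g suffixes, no suffix = bytes,
# and -1 means unlimited swap.
to_bytes() {
    case "$1" in
        -1) echo "-1" ;;
        *b) echo "${1%b}" ;;
        *k) echo $(( ${1%k} * 1024 )) ;;
        *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
        *g) echo $(( ${1%g} * 1024 * 1024 * 1024 )) ;;
        *)  echo "$1" ;;
    esac
}

to_bytes 512m   # 536870912
```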
**--name** *name*
@@ -67,6 +164,29 @@ Pull the image even if a version of the image is already present.
If an image needs to be pulled from the registry, suppress progress output.
**--security-opt**=[]
Security Options
"label=user:USER" : Set the label user for the container
"label=role:ROLE" : Set the label role for the container
"label=type:TYPE" : Set the label type for the container
"label=level:LEVEL" : Set the label level for the container
"label=disable" : Turn off label confinement for the container
"no-new-privileges" : Not supported
"seccomp=unconfined" : Turn off seccomp confinement for the container
"seccomp=profile.json" : White-listed syscalls seccomp JSON file to be used as a seccomp filter
"apparmor=unconfined" : Turn off apparmor confinement for the container
"apparmor=your-profile" : Set the apparmor confinement profile for the container
**--shm-size**=""
Size of `/dev/shm`. The format is `<number><unit>`. `number` must be greater than `0`.
Unit is optional and can be `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes).
If you omit the unit, the system uses bytes. If you omit the size entirely, the system uses `64m`.
**--signature-policy** *signaturepolicy*
Pathname of a signature policy file to use. It is not recommended that this
@@ -77,6 +197,75 @@ option be used, as the default behavior of using the system-wide default policy
Require HTTPS and verify certificates when talking to container registries (defaults to true)
**--ulimit**=[]
Ulimit options
**-v**|**--volume**[=*[HOST-DIR:CONTAINER-DIR[:OPTIONS]]*]
Create a bind mount. If you specify `-v /HOST-DIR:/CONTAINER-DIR`, podman
bind mounts `/HOST-DIR` on the host to `/CONTAINER-DIR` in the podman
container. The `OPTIONS` are a comma delimited list and can be:
* [rw|ro]
* [z|Z]
* [`[r]shared`|`[r]slave`|`[r]private`]
The `CONTAINER-DIR` must be an absolute path such as `/src/docs`. The `HOST-DIR`
must be an absolute path as well. podman bind-mounts the `HOST-DIR` to the
path you specify. For example, if you supply the `/foo` value, podman creates a bind-mount.
You can specify multiple **-v** options to mount one or more mounts to a
container.
You can add a `:ro` or `:rw` suffix to a volume to mount it in read-only or
read-write mode, respectively. By default, the volumes are mounted read-write.
See examples.
Labeling systems like SELinux require that proper labels are placed on volume
content mounted into a container. Without a label, the security system might
prevent the processes running inside the container from using the content. By
default, podman does not change the labels set by the OS.
To change a label in the container context, you can add either of two suffixes
`:z` or `:Z` to the volume mount. These suffixes tell podman to relabel file
objects on the shared volumes. The `z` option tells podman that two containers
share the volume content. As a result, podman labels the content with a shared
content label. Shared volume labels allow all containers to read/write content.
The `Z` option tells podman to label the content with a private unshared label.
Only the current container can use a private volume.
By default, bind mounted volumes are `private`. That means any mounts done
inside the container will not be visible on the host and vice versa. This behavior can
be changed by specifying a volume mount propagation property.
When the mount propagation policy is set to `shared`, any mounts completed inside
the container on that volume will be visible to both the host and container. When
the mount propagation policy is set to `slave`, one-way mount propagation is enabled,
and any mounts completed on the host for that volume will be visible only inside of the container.
To control the mount propagation property of a volume, use the `:[r]shared`,
`:[r]slave` or `:[r]private` propagation flag. The propagation property can
be specified only for bind mounted volumes and not for internal volumes or
named volumes. For mount propagation to work, the source mount point (the mount
point where the source dir is mounted) has to have the right propagation properties. For
shared volumes, the source mount point has to be shared. And for slave volumes,
the source mount has to be either shared or slave.
Use `df <source-dir>` to determine the source mount, and then use
`findmnt -o TARGET,PROPAGATION <source-mount-dir>` to determine the propagation
properties of the source mount. If the `findmnt` utility is not available, the source mount point
can be determined by looking at the mount entry in `/proc/self/mountinfo`. Look
at the `optional fields` and see if any propagation properties are specified.
`shared:X` means the mount is `shared`, `master:X` means the mount is `slave`, and if
nothing is there, the mount is `private`.
To change propagation properties of a mount point use the `mount` command. For
example, to bind mount the source directory `/foo` do
`mount --bind /foo /foo` and `mount --make-private --make-shared /foo`. This
will convert `/foo` into a `shared` mount point. The propagation properties of the source
mount can be changed directly. For instance if `/` is the source mount for
`/foo`, then use `mount --make-shared /` to convert `/` into a `shared` mount.
## EXAMPLE
buildah from imagename --pull
@@ -93,5 +282,11 @@ buildah from myregistry/myrepository/imagename:imagetag --creds=myusername:mypas
buildah from myregistry/myrepository/imagename:imagetag --authfile=/tmp/auths/myauths.json
buildah from --memory 40m --cpu-shares 2 --cpuset-cpus 0,2 --security-opt label=level:s0:c100,c200 myregistry/myrepository/imagename:imagetag
buildah from --ulimit nofile=1024:1028 --cgroup-parent /path/to/cgroup/parent myregistry/myrepository/imagename:imagetag
buildah from --volume /home/test:/myvol:ro,Z myregistry/myrepository/imagename:imagetag
## SEE ALSO
buildah(1), kpod-login(1), docker-login(1)
buildah(1), podman-login(1), docker-login(1)

View File

@@ -22,7 +22,7 @@ keywords are 'dangling', 'label', 'before' and 'since'.
**--format="TEMPLATE"**
Pretty-print images using a Go template. Will override --quiet
Pretty-print images using a Go template.
**--json**

View File

@@ -12,7 +12,7 @@ buildah mount - Mount a working container's root filesystem.
Mounts the specified container's root file system in a location which can be
accessed from the host, and returns its location.
If you execute the command without any arguments, the tool will list all of the
If the mount command is invoked without any arguments, the tool will list all of the
currently mounted containers.
## RETURN VALUE

View File

@@ -24,7 +24,7 @@ Image stored in local container/storage
An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. This is a non-standardized format, primarily useful for debugging or noninvasive container inspection.
**docker://**_docker-reference_
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(kpod login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
An image in a registry implementing the "Docker Registry HTTP API V2". By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using `(podman login)`. If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using `(docker login)`.
**docker-archive:**_path_[**:**_docker-reference_]
An image is stored in the `docker save` formatted file. _docker-reference_ is only used when creating such a file, and it must not contain a digest.
@@ -42,16 +42,19 @@ Image stored in local container/storage
**--authfile** *path*
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `kpod login`.
Path of the authentication file. Default is ${XDG_RUNTIME\_DIR}/containers/auth.json, which is set using `podman login`.
If the authorization state is not found there, $HOME/.docker/config.json is checked, which is set using `docker login`.
**--cert-dir** *path*
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry
Use certificates at *path* (*.crt, *.cert, *.key) to connect to the registry.
Default certificates directory is _/etc/containers/certs.d_.
**--creds** *creds*
The username[:password] to use to authenticate with the registry if required.
The [username[:password]] to use to authenticate with the registry if required.
If one or both values are not supplied, a command line prompt will appear and the
value can be entered. The password is entered without echo.
**--disable-compression, -D**
@@ -104,4 +107,4 @@ This example extracts the imageID image and puts it into the registry on the loc
`# buildah push --cert-dir ~/auth --tls-verify=true --creds=username:password imageID docker://localhost:5000/my-imageID`
## SEE ALSO
buildah(1), kpod-login(1), docker-login(1)
buildah(1), podman-login(1), docker-login(1)

View File

@@ -9,11 +9,19 @@ buildah rm - Removes one or more working containers.
## DESCRIPTION
Removes one or more working containers, unmounting them if necessary.
## OPTIONS
**--all, -a**
All Buildah containers will be removed. Buildah containers are denoted with an '*' in the 'BUILDER' column listed by the command 'buildah containers'.
## EXAMPLE
buildah rm containerID
buildah rm containerID1 containerID2 containerID3
buildah rm --all
## SEE ALSO
buildah(1)

View File

@@ -9,16 +9,35 @@ buildah rmi - Removes one or more images.
## DESCRIPTION
Removes one or more locally stored images.
## LIMITATIONS
If the image was pushed to a directory path using the 'dir:' transport,
the rmi command cannot remove the image. Instead, standard file system
commands should be used.
## OPTIONS
**--all, -a**
All local images that do not have containers using the image as a reference image will be removed from the system.
**--prune, -p**
All local images that do not have a tag and do not have a child image pointing to them will be removed from the system.
**--force, -f**
Executing this command will stop all containers that are using the image and remove them from the system
This option will cause Buildah to remove all containers that are using the image before removing the image from the system.
## EXAMPLE
buildah rmi imageID
buildah rmi --all
buildah rmi --all --force
buildah rmi --prune
buildah rmi --force imageID
buildah rmi imageID1 imageID2 imageID3

View File

@@ -10,11 +10,10 @@ buildah run - Run a command inside of the container.
Launches a container and runs the specified command in that container using the
container's root filesystem as a root filesystem, using configuration settings
inherited from the container's image or as specified using previous calls to
the *buildah config* command. If you execute *buildah run* and expect an
interactive shell, you need to specify the --tty flag.
the *buildah config* command. To execute *buildah run* within an
interactive shell, specify the --tty option.
## OPTIONS
**--hostname**
Set the hostname inside of the running container.
@@ -25,8 +24,10 @@ The *path* to an alternate OCI-compatible runtime.
**--runtime-flag** *flag*
Adds global flags for the container runtime. To list the supported flags, please
consult manpages of your selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`)
consult the manpages of the selected container runtime (`runc` is the default
runtime, the manpage to consult is `runc(8)`).
Note: Do not pass the leading `--` to the flag. To pass the runc flag `--log-format json`
to buildah run, the option given would be `--runtime-flag log-format=json`.
**--tty**
@@ -40,8 +41,8 @@ with the stdin and stdout stream of the container. Setting the `--tty` option t
Bind mount a location from the host into the container for its lifetime.
NOTE: End parsing of options with the `--` option, so that you can pass other
options to the command inside of the container
NOTE: End parsing of options with the `--` option, so that other
options can be passed to the command inside of the container.
## EXAMPLE
@@ -49,7 +50,9 @@ buildah run containerID -- ps -auxw
buildah run containerID --hostname myhost -- ps -auxw
buildah run containerID --runtime-flag --no-new-keyring -- ps -auxw
buildah run --runtime-flag log-format=json containerID /bin/bash
buildah run --runtime-flag debug containerID /bin/bash
buildah run --tty containerID /bin/bash

View File

@@ -16,6 +16,8 @@ The Buildah package provides a command line tool which can be used to:
* Use the updated contents of a container's root filesystem as a filesystem layer to create a new image.
* Delete a working container or an image.
This tool needs to be run as the root user.
## OPTIONS
**--debug**

View File

@@ -73,6 +73,26 @@ func (i *containerImageRef) NewImage(sc *types.SystemContext) (types.ImageCloser
return image.FromSource(sc, src)
}
func expectedOCIDiffIDs(image v1.Image) int {
expected := 0
for _, history := range image.History {
if !history.EmptyLayer {
expected = expected + 1
}
}
return expected
}
func expectedDockerDiffIDs(image docker.V2Image) int {
expected := 0
for _, history := range image.History {
if !history.EmptyLayer {
expected = expected + 1
}
}
return expected
}
func (i *containerImageRef) NewImageSource(sc *types.SystemContext) (src types.ImageSource, err error) {
// Decide which type of manifest and configuration output we're going to provide.
manifestType := i.preferredManifestType
@@ -207,6 +227,10 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext) (src types.I
// Until the image specs define a media type for bzip2-compressed layers, even if we know
// how to decompress them, we can't try to compress layers with bzip2.
return nil, errors.New("media type for bzip2-compressed layers is not defined")
case archive.Xz:
// Until the image specs define a media type for xz-compressed layers, even if we know
// how to decompress them, we can't try to compress layers with xz.
return nil, errors.New("media type for xz-compressed layers is not defined")
default:
logrus.Debugf("compressing layer %q with unknown compressor(?)", layerID)
}
@@ -290,6 +314,17 @@ func (i *containerImageRef) NewImageSource(sc *types.SystemContext) (src types.I
}
dimage.History = append(dimage.History, dnews)
// Sanity check that we didn't just create a mismatch between non-empty layers in the
// history and the number of diffIDs.
expectedDiffIDs := expectedOCIDiffIDs(oimage)
if len(oimage.RootFS.DiffIDs) != expectedDiffIDs {
return nil, errors.Errorf("internal error: history lists %d non-empty layers, but we have %d layers on disk", expectedDiffIDs, len(oimage.RootFS.DiffIDs))
}
expectedDiffIDs = expectedDockerDiffIDs(dimage)
if len(dimage.RootFS.DiffIDs) != expectedDiffIDs {
return nil, errors.Errorf("internal error: history lists %d non-empty layers, but we have %d layers on disk", expectedDiffIDs, len(dimage.RootFS.DiffIDs))
}
// Encode the image configuration blob.
oconfig, err := json.Marshal(&oimage)
if err != nil {
@@ -415,8 +450,8 @@ func (i *containerImageSource) GetManifest(instanceDigest *digest.Digest) ([]byt
return i.manifest, i.manifestType, nil
}
func (i *containerImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (i *containerImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
func (i *containerImageSource) GetBlob(blob types.BlobInfo) (reader io.ReadCloser, size int64, err error) {

View File

@@ -95,8 +95,6 @@ type BuildOptions struct {
// specified, indicating that the shared, system-wide default policy
// should be used.
SignaturePolicyPath string
// SkipTLSVerify denotes whether TLS verification should not be used.
SkipTLSVerify bool
// ReportWriter is an io.Writer which will be used to report the
// progress of the (possible) pulling of the source image and the
// writing of the new image.
@@ -105,7 +103,11 @@ type BuildOptions struct {
// configuration data.
// Accepted values are OCIv1ImageFormat and Dockerv2ImageFormat.
OutputFormat string
AuthFilePath string
// SystemContext holds parameters used for authentication.
SystemContext *types.SystemContext
CommonBuildOpts *buildah.CommonBuildOptions
// DefaultMountsFilePath is the file path holding the mounts to be mounted in "host-path:container-path" format
DefaultMountsFilePath string
}
// Executor is a buildah-based implementation of the imagebuilder.Executor
@@ -137,18 +139,8 @@ type Executor struct {
volumeCache map[string]string
volumeCacheInfo map[string]os.FileInfo
reportWriter io.Writer
}
func makeSystemContext(signaturePolicyPath, authFilePath string, skipTLSVerify bool) *types.SystemContext {
sc := &types.SystemContext{}
if signaturePolicyPath != "" {
sc.SignaturePolicyPath = signaturePolicyPath
}
if authFilePath != "" {
sc.AuthFilePath = authFilePath
}
sc.DockerInsecureSkipTLSVerify = skipTLSVerify
return sc
commonBuildOptions *buildah.CommonBuildOptions
defaultMountsFilePath string
}
// Preserve informs the executor that from this point on, it needs to ensure
@@ -380,6 +372,7 @@ func (b *Executor) Run(run imagebuilder.Run, config docker.Config) error {
Entrypoint: config.Entrypoint,
Cmd: config.Cmd,
NetworkDisabled: config.NetworkDisabled,
Quiet: b.quiet,
}
args := run.Args
@@ -401,12 +394,23 @@ func (b *Executor) Run(run imagebuilder.Run, config docker.Config) error {
// UnrecognizedInstruction is called when we encounter an instruction that the
// imagebuilder parser didn't understand.
func (b *Executor) UnrecognizedInstruction(step *imagebuilder.Step) error {
if !b.ignoreUnrecognizedInstructions {
logrus.Debugf("+(UNIMPLEMENTED?) %#v", step)
err_str := fmt.Sprintf("Build error: Unknown instruction: %q ", step.Command)
err := fmt.Sprintf("%s%#v", err_str, step)
if b.ignoreUnrecognizedInstructions {
logrus.Debugf("%s", err)
return nil
}
logrus.Errorf("+(UNIMPLEMENTED?) %#v", step)
return errors.Errorf("Unrecognized instruction: %#v", step)
switch logrus.GetLevel() {
case logrus.ErrorLevel:
logrus.Errorf("%s", err_str)
case logrus.DebugLevel:
logrus.Debugf("%s", err)
default:
logrus.Errorf("+(UNHANDLED LOGLEVEL) %#v", step)
}
return errors.Errorf("%s", err)
}
// NewExecutor creates a new instance of the imagebuilder.Executor interface.
@@ -418,22 +422,24 @@ func NewExecutor(store storage.Store, options BuildOptions) (*Executor, error) {
registry: options.Registry,
transport: options.Transport,
ignoreUnrecognizedInstructions: options.IgnoreUnrecognizedInstructions,
quiet: options.Quiet,
runtime: options.Runtime,
runtimeArgs: options.RuntimeArgs,
transientMounts: options.TransientMounts,
compression: options.Compression,
output: options.Output,
outputFormat: options.OutputFormat,
additionalTags: options.AdditionalTags,
signaturePolicyPath: options.SignaturePolicyPath,
systemContext: makeSystemContext(options.SignaturePolicyPath, options.AuthFilePath, options.SkipTLSVerify),
volumeCache: make(map[string]string),
volumeCacheInfo: make(map[string]os.FileInfo),
log: options.Log,
out: options.Out,
err: options.Err,
reportWriter: options.ReportWriter,
quiet: options.Quiet,
runtime: options.Runtime,
runtimeArgs: options.RuntimeArgs,
transientMounts: options.TransientMounts,
compression: options.Compression,
output: options.Output,
outputFormat: options.OutputFormat,
additionalTags: options.AdditionalTags,
signaturePolicyPath: options.SignaturePolicyPath,
systemContext: options.SystemContext,
volumeCache: make(map[string]string),
volumeCacheInfo: make(map[string]os.FileInfo),
log: options.Log,
out: options.Out,
err: options.Err,
reportWriter: options.ReportWriter,
commonBuildOptions: options.CommonBuildOpts,
defaultMountsFilePath: options.DefaultMountsFilePath,
}
if exec.err == nil {
exec.err = os.Stderr
@@ -469,12 +475,15 @@ func (b *Executor) Prepare(ib *imagebuilder.Builder, node *parser.Node, from str
b.log("FROM %s", from)
}
builderOptions := buildah.BuilderOptions{
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
Transport: b.transport,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
FromImage: from,
PullPolicy: b.pullPolicy,
Registry: b.registry,
Transport: b.transport,
SignaturePolicyPath: b.signaturePolicyPath,
ReportWriter: b.reportWriter,
SystemContext: b.systemContext,
CommonBuildOpts: b.commonBuildOptions,
DefaultMountsFilePath: b.defaultMountsFilePath,
}
builder, err := buildah.NewBuilder(b.store, builderOptions)
if err != nil {
@@ -578,6 +587,8 @@ func (b *Executor) Commit(ib *imagebuilder.Builder) (err error) {
if err2 == nil {
imageRef = imageRef2
err = nil
} else {
err = err2
}
}
} else {
@@ -586,6 +597,9 @@ func (b *Executor) Commit(ib *imagebuilder.Builder) (err error) {
if err != nil {
return errors.Wrapf(err, "error parsing reference for image to be written")
}
if ib.Author != "" {
b.builder.SetMaintainer(ib.Author)
}
config := ib.Config()
b.builder.SetHostname(config.Hostname)
b.builder.SetDomainname(config.Domainname)

View File

@@ -1,6 +1,28 @@
# Installation Instructions
Prior to installing Buildah, install the following packages on your linux distro:
## System Requirements
### Kernel Version Requirements
To run Buildah on Red Hat Enterprise Linux or CentOS, version 7.4 or higher is required.
On other Linux distributions Buildah requires a kernel version of 4.0 or
higher in order to support the OverlayFS filesystem. The kernel version can be checked
with the 'uname -a' command.
### runc Requirement
Buildah uses `runc` to run commands when `buildah run` is used, or when `buildah build-using-dockerfile`
encounters a `RUN` instruction, so you'll also need to build and install a compatible version of
[runc](https://github.com/opencontainers/runc) for Buildah to call for those cases. If Buildah is installed
via a package manager such as yum, dnf or apt-get, runc will be installed as part of that process.
## Package Installation
Buildah is available on several software repositories and can be installed via a package manager such
as yum, dnf or apt-get on a number of Linux distributions.
## Installation from GitHub
Prior to installing Buildah, install the following packages on your Linux distro:
* make
* golang (Requires version 1.8.1 or higher.)
* bats
@@ -12,11 +34,12 @@ Prior to installing Buildah, install the following packages on your linux distro
* gpgme-devel
* glib2-devel
* libassuan-devel
* libseccomp-devel
* ostree-devel
* runc (Requires version 1.0 RC4 or higher.)
* skopeo-containers
## Fedora
### Fedora
In Fedora, you can use this command:
@@ -30,6 +53,7 @@ In Fedora, you can use this command:
glib2-devel \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
ostree-devel \
git \
bzip2 \
@@ -52,7 +76,7 @@ Then to install Buildah on Fedora follow the steps in this example:
buildah --help
```
## RHEL, CentOS
### RHEL, CentOS
In RHEL and CentOS 7, ensure that you are subscribed to `rhel-7-server-rpms`,
`rhel-7-server-extras-rpms`, and `rhel-7-server-optional-rpms`, then
@@ -68,6 +92,7 @@ run this command:
glib2-devel \
gpgme-devel \
libassuan-devel \
libseccomp-devel \
ostree-devel \
git \
bzip2 \
@@ -78,9 +103,31 @@ run this command:
The build steps for Buildah on RHEL or CentOS are the same as Fedora, above.
## Ubuntu
In Ubuntu zesty and xenial, you can use this command:
### openSUSE
Currently openSUSE Leap 15 offers `go1.8`, while openSUSE Tumbleweed has `go1.9`.
`zypper in go1.X` should do the job; then run this command:
```
zypper in make \
git \
golang \
runc \
bzip2 \
libgpgme-devel \
libseccomp-devel \
device-mapper-devel \
libbtrfs-devel \
go-md2man
```
The build steps for Buildah on SUSE / openSUSE are the same as Fedora, above.
### Ubuntu
In Ubuntu zesty and xenial, you can use these commands:
```
apt-get -y install software-properties-common
@@ -104,7 +151,7 @@ Then to install Buildah on Ubuntu follow the steps in this example:
buildah --help
```
## Debian
### Debian
To install the required dependencies, you can use these commands, tested on Debian GNU/Linux amd64 9.3 (stretch):

new.go

@@ -77,6 +77,24 @@ func pullAndFindImage(store storage.Store, imageName string, options BuilderOpti
return img, ref, nil
}
func getImageName(name string, img *storage.Image) string {
imageName := name
if len(img.Names) > 0 {
imageName = img.Names[0]
// When the image used by the container is a tagged image,
// the container name might be set to the original image instead of
// the image given on the "from" command line.
// This loop fixes that.
for _, n := range img.Names {
if strings.Contains(n, name) {
imageName = n
break
}
}
}
return imageName
}
func imageNamePrefix(imageName string) string {
prefix := imageName
s := strings.Split(imageName, "/")
@@ -144,6 +162,7 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
pulledImg, pulledReference, err2 := pullAndFindImage(store, image, options, systemContext)
if err2 != nil {
logrus.Debugf("error pulling and reading image %q: %v", image, err2)
err = err2
continue
}
ref = pulledReference
@@ -155,11 +174,13 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
if err2 != nil {
if options.Transport == "" {
logrus.Debugf("error parsing image name %q: %v", image, err2)
err = err2
continue
}
srcRef2, err3 := alltransports.ParseImageName(options.Transport + image)
if err3 != nil {
logrus.Debugf("error parsing image name %q: %v", image, err2)
err = err3
continue
}
srcRef = srcRef2
@@ -186,6 +207,7 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
pulledImg, pulledReference, err2 := pullAndFindImage(store, image, options, systemContext)
if err2 != nil {
logrus.Debugf("error pulling and reading image %q: %v", image, err2)
err = err2
continue
}
ref = pulledReference
@@ -195,14 +217,15 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
}
if options.FromImage != "" && (ref == nil || img == nil) {
return nil, errors.Wrapf(storage.ErrImageUnknown, "no such image %q", options.FromImage)
// If options.FromImage is set but we ended up
// with nil in ref or in img then there was an error that
// we should return.
return nil, util.GetFailureCause(err, errors.Wrapf(storage.ErrImageUnknown, "no such image %q in registry", options.FromImage))
}
image := options.FromImage
imageID := ""
if img != nil {
if len(img.Names) > 0 {
image = img.Names[0]
}
image = getImageName(imageNamePrefix(image), img)
imageID = img.ID
}
if manifest, config, err = imageManifestAndConfig(ref, systemContext); err != nil {
@@ -246,7 +269,7 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
if err = reserveSELinuxLabels(store, container.ID); err != nil {
return nil, err
}
processLabel, mountLabel, err := label.InitLabels(nil)
processLabel, mountLabel, err := label.InitLabels(options.CommonBuildOpts.LabelOpts)
if err != nil {
return nil, err
}
@@ -265,6 +288,7 @@ func newBuilder(store storage.Store, options BuilderOptions) (*Builder, error) {
ProcessLabel: processLabel,
MountLabel: mountLabel,
DefaultMountsFilePath: options.DefaultMountsFilePath,
CommonBuildOpts: options.CommonBuildOpts,
}
if options.Mount {

new_test.go

@@ -0,0 +1,28 @@
package buildah
import (
"testing"
"github.com/containers/storage"
)
func TestGetImageName(t *testing.T) {
tt := []struct {
caseName string
name string
names []string
expected string
}{
{"tagged image", "busybox1", []string{"docker.io/library/busybox:latest", "docker.io/library/busybox1:latest"}, "docker.io/library/busybox1:latest"},
{"image name not in the resolved image names", "image1", []string{"docker.io/library/busybox:latest", "docker.io/library/busybox1:latest"}, "docker.io/library/busybox:latest"},
{"resolved image with empty name list", "image1", []string{}, "image1"},
}
for _, tc := range tt {
img := &storage.Image{Names: tc.names}
res := getImageName(tc.name, img)
if res != tc.expected {
t.Errorf("test case '%s' failed: expected %#v but got %#v", tc.caseName, tc.expected, res)
}
}
}

run.go

@@ -1,7 +1,9 @@
package buildah
import (
"bufio"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/exec"
@@ -9,6 +11,8 @@ import (
"strings"
"github.com/containers/storage/pkg/ioutils"
"github.com/docker/docker/profiles/seccomp"
units "github.com/docker/go-units"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/runtime-spec/specs-go"
"github.com/opencontainers/runtime-tools/generate"
@@ -65,9 +69,85 @@ type RunOptions struct {
// decision can be overridden by specifying either WithTerminal or
// WithoutTerminal.
Terminal int
// Quiet tells the run to turn off output to stdout.
Quiet bool
}
func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts []specs.Mount, bindFiles, volumes []string) error {
func addRlimits(ulimit []string, g *generate.Generator) error {
var (
ul *units.Ulimit
err error
)
for _, u := range ulimit {
if ul, err = units.ParseUlimit(u); err != nil {
return errors.Wrapf(err, "failed to parse ulimit option %q (expected name=SOFT:HARD)", u)
}
g.AddProcessRlimits("RLIMIT_"+strings.ToUpper(ul.Name), uint64(ul.Hard), uint64(ul.Soft))
}
return nil
}
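The ulimit strings accepted above follow the `name=SOFT:HARD` convention that `units.ParseUlimit` understands. As a rough, stdlib-only sketch of that parsing (a hypothetical helper for illustration, not the actual go-units implementation), it might look like:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUlimit is an illustrative stand-in for units.ParseUlimit: it accepts
// "name=SOFT:HARD" (or "name=VALUE", which sets both limits) and returns the
// rlimit name in the RLIMIT_* form used by the spec generator above.
func parseUlimit(s string) (name string, soft, hard uint64, err error) {
	parts := strings.SplitN(s, "=", 2)
	if len(parts) != 2 {
		return "", 0, 0, fmt.Errorf("ulimit %q: expected name=SOFT:HARD", s)
	}
	name = "RLIMIT_" + strings.ToUpper(parts[0])
	limits := strings.SplitN(parts[1], ":", 2)
	if soft, err = strconv.ParseUint(limits[0], 10, 64); err != nil {
		return "", 0, 0, fmt.Errorf("ulimit %q: %v", s, err)
	}
	hard = soft // a single value sets both soft and hard limits
	if len(limits) == 2 {
		if hard, err = strconv.ParseUint(limits[1], 10, 64); err != nil {
			return "", 0, 0, fmt.Errorf("ulimit %q: %v", s, err)
		}
	}
	return name, soft, hard, nil
}

func main() {
	name, soft, hard, err := parseUlimit("nofile=1024:2048")
	fmt.Println(name, soft, hard, err) // RLIMIT_NOFILE 1024 2048 <nil>
}
```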
func addHostsToFile(hosts []string) error {
if len(hosts) == 0 {
return nil
}
file, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, os.ModeAppend)
if err != nil {
return err
}
defer file.Close()
w := bufio.NewWriter(file)
for _, host := range hosts {
fmt.Fprintln(w, host)
}
return w.Flush()
}
func addCommonOptsToSpec(commonOpts *CommonBuildOptions, g *generate.Generator) error {
// RESOURCES - CPU
if commonOpts.CPUPeriod != 0 {
g.SetLinuxResourcesCPUPeriod(commonOpts.CPUPeriod)
}
if commonOpts.CPUQuota != 0 {
g.SetLinuxResourcesCPUQuota(commonOpts.CPUQuota)
}
if commonOpts.CPUShares != 0 {
g.SetLinuxResourcesCPUShares(commonOpts.CPUShares)
}
if commonOpts.CPUSetCPUs != "" {
g.SetLinuxResourcesCPUCpus(commonOpts.CPUSetCPUs)
}
if commonOpts.CPUSetMems != "" {
g.SetLinuxResourcesCPUMems(commonOpts.CPUSetMems)
}
// RESOURCES - MEMORY
if commonOpts.Memory != 0 {
g.SetLinuxResourcesMemoryLimit(commonOpts.Memory)
}
if commonOpts.MemorySwap != 0 {
g.SetLinuxResourcesMemorySwap(commonOpts.MemorySwap)
}
if commonOpts.CgroupParent != "" {
g.SetLinuxCgroupsPath(commonOpts.CgroupParent)
}
if err := addRlimits(commonOpts.Ulimit, g); err != nil {
return err
}
if err := addHostsToFile(commonOpts.AddHost); err != nil {
return err
}
logrus.Debugln("Resources:", commonOpts)
return nil
}
func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts []specs.Mount, bindFiles, builtinVolumes, volumeMounts []string, shmSize string) error {
// The passed-in mounts matter the most to us.
mounts := make([]specs.Mount, len(optionMounts))
copy(mounts, optionMounts)
@@ -82,6 +162,9 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
}
// Add mounts from the generated list, unless they conflict.
for _, specMount := range spec.Mounts {
if specMount.Destination == "/dev/shm" {
specMount.Options = []string{"nosuid", "noexec", "nodev", "mode=1777", "size=" + shmSize}
}
if haveMount(specMount.Destination) {
// Already have something to mount there, so skip this one.
continue
@@ -113,6 +196,7 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
secretMounts, err := secretMounts(file, b.MountLabel, cdir)
if err != nil {
logrus.Warn("error mounting secrets, skipping...")
continue
}
for _, mount := range secretMounts {
if haveMount(mount.Destination) {
@@ -123,7 +207,7 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
}
// Add temporary copies of the contents of volume locations at the
// volume locations, unless we already have something there.
for _, volume := range volumes {
for _, volume := range builtinVolumes {
if haveMount(volume) {
// Already mounting something there, no need to bother.
continue
@@ -139,7 +223,7 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
return errors.Wrapf(err, "error relabeling directory %q for volume %q in container %q", volumePath, volume, b.ContainerID)
}
srcPath := filepath.Join(mountPoint, volume)
if err = copyFileWithTar(srcPath, volumePath); err != nil && !os.IsNotExist(err) {
if err = copyWithTar(srcPath, volumePath); err != nil && !os.IsNotExist(err) {
return errors.Wrapf(err, "error populating directory %q for volume %q in container %q using contents of %q", volumePath, volume, b.ContainerID, srcPath)
}
@@ -152,6 +236,57 @@ func (b *Builder) setupMounts(mountPoint string, spec *specs.Spec, optionMounts
Options: []string{"bind"},
})
}
// Bind mount volumes given by the user at execution
var options []string
for _, i := range volumeMounts {
spliti := strings.Split(i, ":")
if len(spliti) > 2 {
options = strings.Split(spliti[2], ",")
}
if haveMount(spliti[1]) {
continue
}
options = append(options, "rbind")
var foundrw, foundro, foundz, foundZ bool
var rootProp string
for _, opt := range options {
switch opt {
case "rw":
foundrw = true
case "ro":
foundro = true
case "z":
foundz = true
case "Z":
foundZ = true
case "private", "rprivate", "slave", "rslave", "shared", "rshared":
rootProp = opt
}
}
if !foundrw && !foundro {
options = append(options, "rw")
}
if foundz {
if err := label.Relabel(spliti[0], spec.Linux.MountLabel, true); err != nil {
return errors.Wrapf(err, "relabel failed %q", spliti[0])
}
}
if foundZ {
if err := label.Relabel(spliti[0], spec.Linux.MountLabel, false); err != nil {
return errors.Wrapf(err, "relabel failed %q", spliti[0])
}
}
if rootProp == "" {
options = append(options, "private")
}
mounts = append(mounts, specs.Mount{
Destination: spliti[1],
Type: "bind",
Source: spliti[0],
Options: options,
})
}
// Set the list in the spec.
spec.Mounts = mounts
return nil
@@ -178,6 +313,15 @@ func (b *Builder) Run(command []string, options RunOptions) error {
g.AddProcessEnv(env[0], env[1])
}
}
if b.CommonBuildOpts == nil {
return errors.Errorf("invalid container format, you must recreate the container")
}
if err := addCommonOptsToSpec(b.CommonBuildOpts, &g); err != nil {
return err
}
if len(command) > 0 {
g.SetProcessArgs(command)
} else {
@@ -248,11 +392,7 @@ func (b *Builder) Run(command []string, options RunOptions) error {
return errors.Wrapf(err, "error removing network namespace for run")
}
}
if options.User != "" {
user, err = getUser(mountPoint, options.User)
} else {
user, err = getUser(mountPoint, b.User())
}
user, err = b.user(mountPoint, options.User)
if err != nil {
return err
}
@@ -266,8 +406,40 @@ func (b *Builder) Run(command []string, options RunOptions) error {
return errors.Wrapf(err, "error ensuring working directory %q exists", spec.Process.Cwd)
}
//Security Opts
g.SetProcessApparmorProfile(b.CommonBuildOpts.ApparmorProfile)
// HANDLE SECCOMP
if b.CommonBuildOpts.SeccompProfilePath != "unconfined" {
if b.CommonBuildOpts.SeccompProfilePath != "" {
seccompProfile, err := ioutil.ReadFile(b.CommonBuildOpts.SeccompProfilePath)
if err != nil {
return errors.Wrapf(err, "opening seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
seccompConfig, err := seccomp.LoadProfile(string(seccompProfile), spec)
if err != nil {
return errors.Wrapf(err, "loading seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
spec.Linux.Seccomp = seccompConfig
} else {
seccompConfig, err := seccomp.GetDefaultProfile(spec)
if err != nil {
return errors.Wrapf(err, "loading seccomp profile (%s) failed", b.CommonBuildOpts.SeccompProfilePath)
}
spec.Linux.Seccomp = seccompConfig
}
}
cgroupMnt := specs.Mount{
Destination: "/sys/fs/cgroup",
Type: "cgroup",
Source: "cgroup",
Options: []string{"nosuid", "noexec", "nodev", "relatime", "ro"},
}
g.AddMount(cgroupMnt)
bindFiles := []string{"/etc/hosts", "/etc/resolv.conf"}
err = b.setupMounts(mountPoint, spec, options.Mounts, bindFiles, b.Volumes())
err = b.setupMounts(mountPoint, spec, options.Mounts, bindFiles, b.Volumes(), b.CommonBuildOpts.Volumes, b.CommonBuildOpts.ShmSize)
if err != nil {
return errors.Wrapf(err, "error resolving mountpoints for container")
}
@@ -289,6 +461,9 @@ func (b *Builder) Run(command []string, options RunOptions) error {
cmd.Dir = mountPoint
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
if options.Quiet {
cmd.Stdout = nil
}
cmd.Stderr = os.Stderr
err = cmd.Run()
if err != nil {


@@ -3,7 +3,6 @@ package buildah
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -23,12 +22,6 @@ var (
OverrideMountsFile = "/etc/containers/mounts.conf"
)
// SecretData info
type SecretData struct {
Name string
Data []byte
}
func getMounts(filePath string) []string {
file, err := os.Open(filePath)
if err != nil {
@@ -48,67 +41,6 @@ func getMounts(filePath string) []string {
return mounts
}
// SaveTo saves secret data to given directory
func (s SecretData) SaveTo(dir string) error {
path := filepath.Join(dir, s.Name)
if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil && !os.IsExist(err) {
return err
}
return ioutil.WriteFile(path, s.Data, 0700)
}
func readAll(root, prefix string) ([]SecretData, error) {
path := filepath.Join(root, prefix)
data := []SecretData{}
files, err := ioutil.ReadDir(path)
if err != nil {
if os.IsNotExist(err) {
return data, nil
}
return nil, err
}
for _, f := range files {
fileData, err := readFile(root, filepath.Join(prefix, f.Name()))
if err != nil {
// If the file did not exist, might be a dangling symlink
// Ignore the error
if os.IsNotExist(err) {
continue
}
return nil, err
}
data = append(data, fileData...)
}
return data, nil
}
func readFile(root, name string) ([]SecretData, error) {
path := filepath.Join(root, name)
s, err := os.Stat(path)
if err != nil {
return nil, err
}
if s.IsDir() {
dirData, err2 := readAll(root, name)
if err2 != nil {
return nil, err2
}
return dirData, nil
}
bytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
return []SecretData{{Name: name, Data: bytes}}, nil
}
// getHostAndCtrDir separates the host:container paths
func getMountsMap(path string) (string, string, error) {
arr := strings.SplitN(path, ":", 2)
@@ -118,15 +50,6 @@ func getMountsMap(path string) (string, string, error) {
return "", "", errors.Errorf("unable to get host and container dir")
}
func getHostSecretData(hostDir string) ([]SecretData, error) {
var allSecrets []SecretData
hostSecrets, err := readAll(hostDir, "")
if err != nil {
return nil, errors.Wrapf(err, "failed to read secrets from %q", hostDir)
}
return append(allSecrets, hostSecrets...), nil
}
// secretMount copies the contents of host directory to container directory
// and returns a list of mounts
func secretMounts(filePath, mountLabel, containerWorkingDir string) ([]rspec.Mount, error) {
@@ -157,16 +80,10 @@ func secretMounts(filePath, mountLabel, containerWorkingDir string) ([]rspec.Mou
return nil, err
}
data, err := getHostSecretData(hostDir)
if err != nil {
return nil, errors.Wrapf(err, "getting host secret data failed")
}
for _, s := range data {
err = s.SaveTo(ctrDirOnHost)
if err != nil {
return nil, err
}
if err = copyWithTar(hostDir, ctrDirOnHost); err != nil && !os.IsNotExist(err) {
return nil, errors.Wrapf(err, "error getting host secret data")
}
err = label.Relabel(ctrDirOnHost, mountLabel, false)
if err != nil {
return nil, errors.Wrap(err, "error applying correct labels")

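`getMountsMap` above splits a secrets mount entry on the first colon only, so the container path may itself contain colons. A minimal sketch of that behavior (an illustrative helper, not the exported API):

```go
package main

import (
	"fmt"
	"strings"
)

// splitHostContainer reproduces the SplitN(path, ":", 2) behavior used by
// getMountsMap: everything before the first colon is the host directory,
// everything after it is the container directory.
func splitHostContainer(path string) (host, ctr string, err error) {
	arr := strings.SplitN(path, ":", 2)
	if len(arr) == 2 {
		return arr[0], arr[1], nil
	}
	return "", "", fmt.Errorf("unable to get host and container dir from %q", path)
}

func main() {
	host, ctr, _ := splitHostContainer("/usr/share/rhel/secrets:/run/secrets")
	fmt.Println(host, ctr) // /usr/share/rhel/secrets /run/secrets
}
```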

@@ -10,19 +10,39 @@ load helpers
[ "$status" -eq 0 ]
# This should fail
run buildah push localhost:5000/my-alpine --signature-policy ${TESTSDIR}/policy.json --tls-verify=true
run buildah push --signature-policy ${TESTSDIR}/policy.json --tls-verify=true localhost:5000/my-alpine
[ "$status" -ne 0 ]
# This should fail
run buildah from localhost:5000/my-alpine --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds baduser:badpassword
run buildah from --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds baduser:badpassword localhost:5000/my-alpine
[ "$status" -ne 0 ]
# This should work
run buildah from localhost:5000/my-alpine --name "my-alpine" --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds testuser:testpassword
run buildah from --name "my-alpine" --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds testuser:testpassword localhost:5000/my-alpine
[ "$status" -eq 0 ]
# Create Dockerfile for bud tests
FILE=./Dockerfile
/bin/cat <<EOM >$FILE
FROM localhost:5000/my-alpine
EOM
chmod +x $FILE
# Remove containers and images before bud tests
buildah rm --all
buildah rmi -f --all
# bud test bad password should fail
run buildah bud -f ./Dockerfile --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds=testuser:badpassword
[ "$status" -ne 0 ]
# bud test this should work
run buildah bud -f ./Dockerfile --signature-policy ${TESTSDIR}/policy.json --tls-verify=false --creds=testuser:testpassword
echo $status
[ "$status" -eq 0 ]
# Clean up
buildah rm my-alpine
buildah rm alpine
buildah rmi -f $(buildah --debug=false images -q)
rm -f ./Dockerfile
buildah rm -a
buildah rmi -f --all
}


@@ -7,7 +7,7 @@ load helpers
buildah rm $cid
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
buildah rm $cid
cid=$(buildah from alpine --pull --signature-policy ${TESTSDIR}/policy.json --name i-love-naming-things)
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json --name i-love-naming-things alpine)
buildah rm i-love-naming-things
}


@@ -139,7 +139,7 @@ load helpers
@test "bud-http-context-dir-with-Dockerfile-post" {
starthttpd ${TESTSDIR}/bud/http-context-subdir
target=scratch-image
buildah bud http://0.0.0.0:${HTTP_SERVER_PORT}/context.tar --signature-policy ${TESTSDIR}/policy.json -t ${target} -f context/Dockerfile
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} -f context/Dockerfile http://0.0.0.0:${HTTP_SERVER_PORT}/context.tar
stophttpd
cid=$(buildah from ${target})
buildah rm ${cid}
@@ -236,3 +236,29 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-maintainer" {
target=alpine-image
buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/maintainer
run buildah --debug=false inspect --type=image --format '{{.Docker.Author}}' ${target}
[ "$status" -eq 0 ]
[ "$output" = kilroy ]
run buildah --debug=false inspect --type=image --format '{{.OCIv1.Author}}' ${target}
[ "$status" -eq 0 ]
[ "$output" = kilroy ]
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}
@test "bud-unrecognized-instruction" {
target=alpine-image
run buildah bud --signature-policy ${TESTSDIR}/policy.json -t ${target} ${TESTSDIR}/bud/unrecognized
[ "$status" -ne 0 ]
[[ "$output" =~ BOGUS ]]
buildah rmi $(buildah --debug=false images -q)
run buildah --debug=false images -q
[ "$status" -eq 0 ]
[ "$output" = "" ]
}


@@ -0,0 +1,2 @@
FROM alpine
MAINTAINER kilroy


@@ -0,0 +1,2 @@
FROM alpine
BOGUS nope-nope-nope


@@ -114,9 +114,12 @@ load helpers
@test "copy --chown" {
mkdir -p ${TESTDIR}/subdir
createrandom ${TESTDIR}/randomfile
mkdir -p ${TESTDIR}/other-subdir
createrandom ${TESTDIR}/subdir/randomfile
createrandom ${TESTDIR}/subdir/other-randomfile
createrandom ${TESTDIR}/randomfile
createrandom ${TESTDIR}/other-subdir/randomfile
createrandom ${TESTDIR}/other-subdir/other-randomfile
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
@@ -129,5 +132,8 @@ load helpers
test $(stat -c "%U:%g" $root/randomfile2) = "root:1"
test $(stat -c "%U" $root/randomfile3) = "nobody"
(cd $root/subdir/; for i in *; do test $(stat -c "%U:%G" $i) = "nobody:root"; done)
buildah copy --chown root:root $cid ${TESTDIR}/other-subdir /subdir
(cd $root/subdir/; for i in *randomfile; do test $(stat -c "%U:%G" $i) = "root:root"; done)
test $(stat -c "%U:%G" $root/subdir) = "nobody:root"
buildah rm $cid
}


@@ -19,6 +19,10 @@ fromreftest() {
fromreftest kubernetes/pause@sha256:f8cd50c5a287dd8c5f226cf69c60c737d34ed43726c14b8a746d9de2d23eda2b
}
@test "from-by-digest-s1-a-discarded-layer" {
fromreftest docker/whalesay@sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
}
@test "from-by-tag-s1" {
fromreftest kubernetes/pause:go
}


@@ -0,0 +1,333 @@
package integration
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
"testing"
"encoding/json"
"github.com/containers/image/copy"
"github.com/containers/image/signature"
"github.com/containers/image/storage"
"github.com/containers/image/transports/alltransports"
"github.com/containers/image/types"
sstorage "github.com/containers/storage"
"github.com/containers/storage/pkg/reexec"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/onsi/gomega/gexec"
"github.com/pkg/errors"
"github.com/projectatomic/buildah"
)
var (
INTEGRATION_ROOT string
STORAGE_OPTIONS = "--storage-driver vfs"
ARTIFACT_DIR = "/tmp/.artifacts"
CACHE_IMAGES = []string{"alpine", "busybox", FEDORA_MINIMAL}
RESTORE_IMAGES = []string{"alpine", "busybox"}
ALPINE = "docker.io/library/alpine:latest"
BB_GLIBC = "docker.io/library/busybox:glibc"
FEDORA_MINIMAL = "registry.fedoraproject.org/fedora-minimal:latest"
defaultWaitTimeout = 90
)
// BuildAhSession wraps gexec.Session so we can extend it
type BuildAhSession struct {
*gexec.Session
}
// BuildAhTest struct for command line options
type BuildAhTest struct {
BuildAhBinary string
RunRoot string
StorageOptions string
ArtifactPath string
TempDir string
SignaturePath string
Root string
RegistriesConf string
}
// TestBuildAh ginkgo master function
func TestBuildAh(t *testing.T) {
if reexec.Init() {
os.Exit(1)
}
RegisterFailHandler(Fail)
RunSpecs(t, "Buildah Suite")
}
var _ = BeforeSuite(func() {
//Cache images
cwd, _ := os.Getwd()
INTEGRATION_ROOT = filepath.Join(cwd, "../../")
buildah := BuildahCreate("/tmp")
buildah.ArtifactPath = ARTIFACT_DIR
if _, err := os.Stat(ARTIFACT_DIR); os.IsNotExist(err) {
if err = os.Mkdir(ARTIFACT_DIR, 0777); err != nil {
fmt.Printf("%q\n", err)
os.Exit(1)
}
}
for _, image := range CACHE_IMAGES {
fmt.Printf("Caching %s...\n", image)
if err := buildah.CreateArtifact(image); err != nil {
fmt.Printf("%q\n", err)
os.Exit(1)
}
}
})
// CreateTempDirInTempDir creates a temporary directory inside the system temp dir
func CreateTempDirInTempDir() (string, error) {
return ioutil.TempDir("", "buildah_test")
}
// BuildahCreate creates a BuildAhTest instance for the tests
func BuildahCreate(tempDir string) BuildAhTest {
cwd, _ := os.Getwd()
buildAhBinary := filepath.Join(cwd, "../../buildah")
if os.Getenv("BUILDAH_BINARY") != "" {
buildAhBinary = os.Getenv("BUILDAH_BINARY")
}
storageOptions := STORAGE_OPTIONS
if os.Getenv("STORAGE_OPTIONS") != "" {
storageOptions = os.Getenv("STORAGE_OPTIONS")
}
return BuildAhTest{
BuildAhBinary: buildAhBinary,
RunRoot: filepath.Join(tempDir, "runroot"),
Root: filepath.Join(tempDir, "root"),
StorageOptions: storageOptions,
ArtifactPath: ARTIFACT_DIR,
TempDir: tempDir,
SignaturePath: "../../tests/policy.json",
RegistriesConf: "../../registries.conf",
}
}
//MakeOptions assembles all the buildah main options
func (p *BuildAhTest) MakeOptions() []string {
return strings.Split(fmt.Sprintf("--root %s --runroot %s --registries-conf %s",
p.Root, p.RunRoot, p.RegistriesConf), " ")
}
// BuildAh is the exec call to buildah on the filesystem
func (p *BuildAhTest) BuildAh(args []string) *BuildAhSession {
buildAhOptions := p.MakeOptions()
buildAhOptions = append(buildAhOptions, strings.Split(p.StorageOptions, " ")...)
buildAhOptions = append(buildAhOptions, args...)
fmt.Printf("Running: %s %s\n", p.BuildAhBinary, strings.Join(buildAhOptions, " "))
command := exec.Command(p.BuildAhBinary, buildAhOptions...)
session, err := gexec.Start(command, GinkgoWriter, GinkgoWriter)
if err != nil {
Fail(fmt.Sprintf("unable to run buildah command: %s", strings.Join(buildAhOptions, " ")))
}
return &BuildAhSession{session}
}
// Cleanup cleans up the temporary store
func (p *BuildAhTest) Cleanup() {
// Nuke tempdir
if err := os.RemoveAll(p.TempDir); err != nil {
fmt.Printf("%q\n", err)
}
}
// GrepString takes session output and behaves like grep. It returns a bool
// indicating a match and an array of the matching lines
func (s *BuildAhSession) GrepString(term string) (bool, []string) {
var (
greps []string
matches bool
)
for _, line := range strings.Split(s.OutputToString(), "\n") {
if strings.Contains(line, term) {
matches = true
greps = append(greps, line)
}
}
return matches, greps
}
// OutputToString formats session output to string
func (s *BuildAhSession) OutputToString() string {
fields := strings.Fields(fmt.Sprintf("%s", s.Out.Contents()))
return strings.Join(fields, " ")
}
// OutputToStringArray returns the output as a []string
// where each array item is a line split by newline
func (s *BuildAhSession) OutputToStringArray() []string {
output := fmt.Sprintf("%s", s.Out.Contents())
return strings.Split(output, "\n")
}
// IsJSONOutputValid attempts to unmarshal the session buffer
// and, if successful, returns true, else false
func (s *BuildAhSession) IsJSONOutputValid() bool {
var i interface{}
if err := json.Unmarshal(s.Out.Contents(), &i); err != nil {
fmt.Println(err)
return false
}
return true
}
func (s *BuildAhSession) WaitWithDefaultTimeout() {
s.Wait(defaultWaitTimeout)
}
// SystemExec is used to exec a system command to check its exit code or output
func (p *BuildAhTest) SystemExec(command string, args []string) *BuildAhSession {
c := exec.Command(command, args...)
session, err := gexec.Start(c, GinkgoWriter, GinkgoWriter)
if err != nil {
Fail(fmt.Sprintf("unable to run command: %s %s", command, strings.Join(args, " ")))
}
return &BuildAhSession{session}
}
// CreateArtifact creates a cached image in the artifact dir
func (p *BuildAhTest) CreateArtifact(image string) error {
imageName := fmt.Sprintf("docker://%s", image)
systemContext := types.SystemContext{
SignaturePolicyPath: p.SignaturePath,
}
policy, err := signature.DefaultPolicy(&systemContext)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
defer func() {
_ = policyContext.Destroy()
}()
options := &copy.Options{}
importRef, err := alltransports.ParseImageName(imageName)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", image, err)
}
imageDir := strings.Replace(image, "/", "_", -1)
exportTo := filepath.Join("dir:", p.ArtifactPath, imageDir)
exportRef, err := alltransports.ParseImageName(exportTo)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", exportTo, err)
}
return copy.Image(policyContext, exportRef, importRef, options)
}
// RestoreArtifact puts the cached image into our test store
func (p *BuildAhTest) RestoreArtifact(image string) error {
storeOptions := sstorage.DefaultStoreOptions
storeOptions.GraphDriverName = "vfs"
//storeOptions.GraphDriverOptions = storageOptions
storeOptions.GraphRoot = p.Root
storeOptions.RunRoot = p.RunRoot
store, err := sstorage.GetStore(storeOptions)
options := &copy.Options{}
if err != nil {
return errors.Errorf("error opening storage: %v", err)
}
defer func() {
_, _ = store.Shutdown(false)
}()
storage.Transport.SetStore(store)
ref, err := storage.Transport.ParseStoreReference(store, image)
if err != nil {
return errors.Errorf("error parsing image name: %v", err)
}
imageDir := strings.Replace(image, "/", "_", -1)
importFrom := fmt.Sprintf("dir:%s", filepath.Join(p.ArtifactPath, imageDir))
importRef, err := alltransports.ParseImageName(importFrom)
if err != nil {
return errors.Errorf("error parsing image name %v: %v", image, err)
}
systemContext := types.SystemContext{
SignaturePolicyPath: p.SignaturePath,
}
policy, err := signature.DefaultPolicy(&systemContext)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
policyContext, err := signature.NewPolicyContext(policy)
if err != nil {
return errors.Errorf("error loading signature policy: %v", err)
}
defer func() {
_ = policyContext.Destroy()
}()
err = copy.Image(policyContext, ref, importRef, options)
if err != nil {
return errors.Errorf("error importing %s: %v", importFrom, err)
}
return nil
}
// RestoreAllArtifacts unpacks all cached images
func (p *BuildAhTest) RestoreAllArtifacts() error {
for _, image := range RESTORE_IMAGES {
if err := p.RestoreArtifact(image); err != nil {
return err
}
}
return nil
}
// StringInSlice determines if a string is in a string slice, returns bool
func StringInSlice(s string, sl []string) bool {
for _, i := range sl {
if i == s {
return true
}
}
return false
}
// LineInOutputStartsWith returns true if a line in the
// session output starts with the supplied string
func (s *BuildAhSession) LineInOuputStartsWith(term string) bool {
for _, i := range s.OutputToStringArray() {
if strings.HasPrefix(i, term) {
return true
}
}
return false
}
// LineInOutputContains returns true if a line in the
// session output contains the supplied string
func (s *BuildAhSession) LineInOuputContains(term string) bool {
for _, i := range s.OutputToStringArray() {
if strings.Contains(i, term) {
return true
}
}
return false
}
// InspectImageJSON takes the session output of an image inspect
// and unmarshals it into a BuilderInfo
func (s *BuildAhSession) InspectImageJSON() buildah.BuilderInfo {
var i buildah.BuilderInfo
err := json.Unmarshal(s.Out.Contents(), &i)
Expect(err).To(BeNil())
return i
}

tests/e2e/inspect_test.go

@@ -0,0 +1,92 @@
package integration
import (
"os"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("buildah inspect", func() {
var (
tempdir string
err error
buildahtest BuildAhTest
)
BeforeEach(func() {
tempdir, err = CreateTempDirInTempDir()
if err != nil {
os.Exit(1)
}
buildahtest = BuildahCreate(tempdir)
})
AfterEach(func() {
buildahtest.Cleanup()
})
It("buildah inspect json", func() {
b := buildahtest.BuildAh([]string{"from", "--pull=false", "scratch"})
b.WaitWithDefaultTimeout()
Expect(b.ExitCode()).To(Equal(0))
cid := b.OutputToString()
result := buildahtest.BuildAh([]string{"inspect", cid})
result.WaitWithDefaultTimeout()
Expect(result.ExitCode()).To(Equal(0))
Expect(result.IsJSONOutputValid()).To(BeTrue())
})
It("buildah inspect format", func() {
b := buildahtest.BuildAh([]string{"from", "--pull=false", "scratch"})
b.WaitWithDefaultTimeout()
Expect(b.ExitCode()).To(Equal(0))
cid := b.OutputToString()
result := buildahtest.BuildAh([]string{"inspect", "--format", "\"{{.}}\"", cid})
result.WaitWithDefaultTimeout()
Expect(result.ExitCode()).To(Equal(0))
})
It("buildah inspect image", func() {
b := buildahtest.BuildAh([]string{"from", "--pull=false", "scratch"})
b.WaitWithDefaultTimeout()
Expect(b.ExitCode()).To(Equal(0))
cid := b.OutputToString()
commit := buildahtest.BuildAh([]string{"commit", cid, "scratchy-image"})
commit.WaitWithDefaultTimeout()
Expect(commit.ExitCode()).To(Equal(0))
result := buildahtest.BuildAh([]string{"inspect", "--type", "image", "scratchy-image"})
result.WaitWithDefaultTimeout()
Expect(result.ExitCode()).To(Equal(0))
Expect(result.IsJSONOutputValid()).To(BeTrue())
result = buildahtest.BuildAh([]string{"inspect", "--type", "image", "scratchy-image:latest"})
result.WaitWithDefaultTimeout()
Expect(result.ExitCode()).To(Equal(0))
Expect(result.IsJSONOutputValid()).To(BeTrue())
})
It("buildah HTML escaped", func() {
b := buildahtest.BuildAh([]string{"from", "--pull=false", "scratch"})
b.WaitWithDefaultTimeout()
Expect(b.ExitCode()).To(Equal(0))
cid := b.OutputToString()
config := buildahtest.BuildAh([]string{"config", "--label", "maintainer=\"Darth Vader <dvader@darkside.io>\"", cid})
config.WaitWithDefaultTimeout()
Expect(config.ExitCode()).To(Equal(0))
commit := buildahtest.BuildAh([]string{"commit", cid, "darkside-image"})
commit.WaitWithDefaultTimeout()
Expect(commit.ExitCode()).To(Equal(0))
result := buildahtest.BuildAh([]string{"inspect", "--type", "image", "darkside-image"})
result.WaitWithDefaultTimeout()
Expect(result.ExitCode()).To(Equal(0))
data := result.InspectImageJSON()
Expect(data.Docker.Config.Labels["maintainer"]).To(Equal("\"Darth Vader <dvader@darkside.io>\""))
})
})


@@ -12,12 +12,10 @@ load helpers
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json scratch)
@@ -26,12 +24,10 @@ load helpers
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = elsewhere-img-working-container ]
cid=$(buildah from --pull-always --signature-policy ${TESTSDIR}/policy.json dir:${elsewhere})
buildah rm $cid
buildah rmi ${elsewhere}
[ "$cid" = `basename ${elsewhere}`-working-container ]
}
@@ -73,7 +69,6 @@ load helpers
}
@test "from-authenticate-cert-and-creds" {
mkdir -p ${TESTDIR}/auth
# Create creds and store in ${TESTDIR}/auth/htpasswd
# docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > ${TESTDIR}/auth/htpasswd
@@ -112,3 +107,105 @@ load helpers
# buildah rm $ctrid
# buildah rmi -f $(buildah --debug=false images -q)
}
@test "from-tagged-image" {
# GitHub #396: Make sure the container name starts with the correct image even when it's tagged.
cid=$(buildah from --pull=false --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json "$cid" scratch2
buildah rm $cid
buildah tag scratch2 scratch3
cid=$(buildah from --signature-policy ${TESTSDIR}/policy.json scratch3)
[ "$cid" == scratch3-working-container ]
buildah rm ${cid}
buildah rmi scratch2 scratch3
# GitHub https://github.com/projectatomic/buildah/issues/396#issuecomment-360949396
cid=$(buildah from --pull=true --signature-policy ${TESTSDIR}/policy.json alpine)
buildah rm $cid
buildah tag alpine alpine2
cid=$(buildah from --signature-policy ${TESTSDIR}/policy.json docker.io/alpine2)
[ "$cid" == alpine2-working-container ]
buildah rm ${cid}
buildah rmi alpine alpine2
}
@test "from cpu-period test" {
cid=$(buildah from --cpu-period=5000 --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
echo $output
[ "$status" -eq 0 ]
[[ "$output" =~ "5000" ]]
buildah rm $cid
}
@test "from cpu-quota test" {
cid=$(buildah from --cpu-quota=5000 --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
echo "$output"
[ "$status" -eq 0 ]
[[ "$output" =~ 5000 ]]
buildah rm $cid
}
@test "from cpu-shares test" {
cid=$(buildah from --cpu-shares=2 --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/cpu/cpu.shares
echo "$output"
[ "$status" -eq 0 ]
[[ "$output" =~ 2 ]]
buildah rm $cid
}
@test "from cpuset-cpus test" {
cid=$(buildah from --cpuset-cpus=0 --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/cpuset/cpuset.cpus
echo "$output"
[ "$status" -eq 0 ]
[[ "$output" =~ 0 ]]
buildah rm $cid
}
@test "from cpuset-mems test" {
cid=$(buildah from --cpuset-mems=0 --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/cpuset/cpuset.mems
echo "$output"
[ "$status" -eq 0 ]
[[ "$output" =~ 0 ]]
buildah rm $cid
}
@test "from memory test" {
cid=$(buildah from --memory=40m --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid cat /sys/fs/cgroup/memory/memory.limit_in_bytes
echo $output
[ "$status" -eq 0 ]
[[ "$output" =~ 41943040 ]]
buildah rm $cid
}
@test "from volume test" {
cid=$(buildah from --volume=${TESTDIR}:/myvol --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid -- cat /proc/mounts
echo $output
[ "$status" -eq 0 ]
[[ "$output" =~ /myvol ]]
buildah rm $cid
}
@test "from volume ro test" {
cid=$(buildah from --volume=${TESTDIR}:/myvol:ro --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid -- cat /proc/mounts
echo $output
[ "$status" -eq 0 ]
[[ "$output" =~ /myvol ]]
buildah rm $cid
}
@test "from shm-size test" {
cid=$(buildah from --shm-size=80m --pull --signature-policy ${TESTSDIR}/policy.json alpine)
run buildah run $cid -- df -h
echo $output
[ "$status" -eq 0 ]
[[ "$output" =~ 80 ]]
buildah rm $cid
}


@@ -1,28 +0,0 @@
#!/usr/bin/env bats
load helpers
@test "inspect-json" {
cid=$(buildah from --pull=false --signature-policy ${TESTSDIR}/policy.json scratch)
run buildah --debug=false inspect "$cid"
[ "$status" -eq 0 ]
[ "$output" != "" ]
}
@test "inspect-format" {
cid=$(buildah from --pull=false --signature-policy ${TESTSDIR}/policy.json scratch)
run buildah --debug=false inspect --format '{{.}}' "$cid"
[ "$status" -eq 0 ]
[ "$output" != "" ]
}
@test "inspect-image" {
cid=$(buildah from --pull=false --signature-policy ${TESTSDIR}/policy.json scratch)
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid scratchy-image
run buildah --debug=false inspect --type image scratchy-image
[ "$status" -eq 0 ]
[ "$output" != "" ]
run buildah --debug=false inspect --type image scratchy-image:latest
[ "$status" -eq 0 ]
[ "$output" != "" ]
}

tests/rm.bats (new file)

@@ -0,0 +1,12 @@
#!/usr/bin/env bats
load helpers
@test "remove multiple containers errors" {
run buildah --debug=false rm mycontainer1 mycontainer2 mycontainer3
[ "${lines[0]}" == "error removing container \"mycontainer1\": error reading build container: container not known" ]
[ "${lines[1]}" == "error removing container \"mycontainer2\": error reading build container: container not known" ]
[ "${lines[2]}" == "error removing container \"mycontainer3\": error reading build container: container not known" ]
[ $(wc -l <<< "$output") -eq 3 ]
[ "${status}" -eq 1 ]
}

tests/rmi.bats (new file)

@@ -0,0 +1,80 @@
#!/usr/bin/env bats
load helpers
@test "remove one image" {
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
buildah rm "$cid"
buildah rmi alpine
run buildah --debug=false images -q
[ "$output" == "" ]
}
@test "remove multiple images" {
cid2=$(buildah from --signature-policy ${TESTSDIR}/policy.json alpine)
cid3=$(buildah from --signature-policy ${TESTSDIR}/policy.json busybox)
run buildah rmi alpine busybox
[ "$status" -eq 1 ]
run buildah --debug=false images -q
[ "$output" != "" ]
buildah rmi -f alpine busybox
run buildah --debug=false images -q
[ "$output" == "" ]
}
@test "remove all images" {
cid1=$(buildah from --signature-policy ${TESTSDIR}/policy.json scratch)
cid2=$(buildah from --signature-policy ${TESTSDIR}/policy.json alpine)
cid3=$(buildah from --signature-policy ${TESTSDIR}/policy.json busybox)
buildah rmi -a -f
run buildah --debug=false images -q
[ "$output" == "" ]
cid1=$(buildah from --signature-policy ${TESTSDIR}/policy.json scratch)
cid2=$(buildah from --signature-policy ${TESTSDIR}/policy.json alpine)
cid3=$(buildah from --signature-policy ${TESTSDIR}/policy.json busybox)
run buildah rmi --all
[ "$status" -eq 1 ]
run buildah --debug=false images -q
[ "$output" != "" ]
buildah rmi --all --force
run buildah --debug=false images -q
[ "$output" == "" ]
}
@test "use prune to remove dangling images" {
createrandom ${TESTDIR}/randomfile
createrandom ${TESTDIR}/other-randomfile
cid=$(buildah from --signature-policy ${TESTSDIR}/policy.json busybox)
run buildah --debug=false images -q
[ $(wc -l <<< "$output") -eq 1 ]
root=$(buildah mount $cid)
cp ${TESTDIR}/randomfile $root/randomfile
buildah unmount $cid
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid containers-storage:new-image
run buildah --debug=false images -q
[ $(wc -l <<< "$output") -eq 2 ]
root=$(buildah mount $cid)
cp ${TESTDIR}/other-randomfile $root/other-randomfile
buildah unmount $cid
buildah commit --signature-policy ${TESTSDIR}/policy.json $cid containers-storage:new-image
run buildah --debug=false images -q
[ $(wc -l <<< "$output") -eq 3 ]
buildah rmi --prune
run buildah --debug=false images -q
[ $(wc -l <<< "$output") -eq 2 ]
buildah rmi --all --force
run buildah --debug=false images -q
[ "$output" == "" ]
}


@@ -27,7 +27,7 @@ load helpers
buildah --debug=false run $cid -- rpmbuild --define "_topdir /rpmbuild" -ba /rpmbuild/SPECS/buildah.spec
# Build a second new container.
cid2=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json registry.fedoraproject.org/fedora:26)
cid2=$(buildah --debug=false from --pull --signature-policy ${TESTSDIR}/policy.json registry.fedoraproject.org/fedora:27)
root2=$(buildah --debug=false mount $cid2)
# Copy the binary packages from the first container to the second one, and build a list of


@@ -10,11 +10,11 @@ load helpers
createrandom ${TESTDIR}/randomfile
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
root=$(buildah mount $cid)
buildah config $cid --workingdir /tmp
buildah config --workingdir /tmp $cid
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /tmp ]
buildah config $cid --workingdir /root
buildah config --workingdir /root $cid
run buildah --debug=false run $cid pwd
[ "$status" -eq 0 ]
[ "$output" = /root ]
@@ -34,7 +34,7 @@ load helpers
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
# This should fail, because buildah run doesn't have a -n flag.
run buildah --debug=false run $cid echo -n test
run buildah --debug=false run -n $cid echo test
[ "$status" -ne 0 ]
# This should succeed, because buildah run stops caring at the --, which is preserved as part of the command.
@@ -69,35 +69,35 @@ load helpers
skip
fi
cid=$(buildah from --pull --signature-policy ${TESTSDIR}/policy.json alpine)
buildah config $cid --workingdir /tmp
buildah config --workingdir /tmp $cid
buildah config $cid --entrypoint ""
buildah config $cid --cmd pwd
buildah config --entrypoint "" $cid
buildah config --cmd pwd $cid
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = /tmp ]
buildah config $cid --entrypoint echo
buildah config --entrypoint echo $cid
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = pwd ]
buildah config $cid --cmd ""
buildah config --cmd "" $cid
run buildah --debug=false run $cid
[ "$status" -eq 0 ]
[ "$output" = "" ]
buildah config $cid --entrypoint ""
buildah config --entrypoint "" $cid
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
buildah config $cid --cmd echo
buildah config --cmd echo $cid
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
buildah config $cid --entrypoint echo
buildah config --entrypoint echo $cid
run buildah --debug=false run $cid echo that-other-thing
[ "$status" -eq 0 ]
[ "$output" = that-other-thing ]
@@ -127,7 +127,7 @@ load helpers
echo "$testuser:x:$testuid:$testgid:Jimbo Jenkins:/home/$testuser:/bin/sh" >> $root/etc/passwd
echo "$testgroup:x:$testgroupid:" >> $root/etc/group
buildah config $cid -u ""
buildah config -u "" $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -136,7 +136,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = 0 ]
buildah config $cid -u ${testuser}
buildah config -u ${testuser} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -145,7 +145,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgid ]
buildah config $cid -u ${testuid}
buildah config -u ${testuid} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -154,7 +154,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgid ]
buildah config $cid -u ${testuser}:${testgroup}
buildah config -u ${testuser}:${testgroup} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -163,7 +163,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testuid}:${testgroup}
buildah config -u ${testuid}:${testgroup} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -172,7 +172,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testotheruid}:${testgroup}
buildah config -u ${testotheruid}:${testgroup} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -181,7 +181,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testotheruid}
buildah config -u ${testotheruid} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -190,7 +190,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = 0 ]
buildah config $cid -u ${testuser}:${testgroupid}
buildah config -u ${testuser}:${testgroupid} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -199,7 +199,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testuid}:${testgroupid}
buildah config -u ${testuid}:${testgroupid} $cid
buildah run -- $cid id
run buildah --debug=false run -- $cid id -u
[ "$status" -eq 0 ]
@@ -208,7 +208,7 @@ load helpers
[ "$status" -eq 0 ]
[ "$output" = $testgroupid ]
buildah config $cid -u ${testbogususer}
buildah config -u ${testbogususer} $cid
run buildah --debug=false run -- $cid id -u
[ "$status" -ne 0 ]
[[ "$output" =~ "unknown user" ]]
@@ -217,7 +217,7 @@ load helpers
[[ "$output" =~ "unknown user" ]]
ln -vsf /etc/passwd $root/etc/passwd
buildah config $cid -u ${testuser}:${testgroup}
buildah config -u ${testuser}:${testgroup} $cid
run buildah --debug=false run -- $cid id -u
echo "$output"
[ "$status" -ne 0 ]


@@ -97,22 +97,27 @@ buildah images
docker logout localhost:5000
########
# Push using only certs, this should fail.
# Push using only certs, this should FAIL.
########
buildah push --cert-dir /root/auth --tls-verify=true alpine docker://localhost:5000/my-alpine
########
# Push using creds, certs and no transport, this should work.
# Push using creds, certs and no transport (docker://), this should work.
########
buildah push --cert-dir ~/auth --tls-verify=true --creds=testuser:testpassword alpine localhost:5000/my-alpine
########
# No creds anywhere, only the certificate, this should fail.
# Push using a bad password, this should FAIL.
########
buildah push --cert-dir ~/auth --tls-verify=true --creds=testuser:badpassword alpine localhost:5000/my-alpine
########
# No creds anywhere, only the certificate, this should FAIL.
########
buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true
########
# Log in with creds, this should work
# From with creds and certs, this should work
########
ctrid=$(buildah from localhost:5000/my-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword)
@@ -154,7 +159,7 @@ buildah images
########
########
# No credentials, this should fail.
# No credentials, this should FAIL.
########
buildah commit --cert-dir /root/auth --tls-verify=true alpine-working-container docker://localhost:5000/my-commit-alpine
@@ -163,10 +168,51 @@ buildah commit --cert-dir /root/auth --tls-verify=true alpine-working-container
########
buildah commit --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword alpine-working-container docker://localhost:5000/my-commit-alpine
########
# Use bad password on from/pull, this should FAIL
########
buildah from localhost:5000/my-commit-alpine --pull-always --cert-dir /root/auth --tls-verify=true --creds=testuser:badpassword
########
# Pull the new image that we just committed
########
buildah from localhost:5000/my-commit-alpine --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword
buildah from localhost:5000/my-commit-alpine --pull-always --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword
########
# Show stuff
########
docker ps --all
docker images
buildah containers
buildah images
########
# Create Dockerfile
########
FILE=./Dockerfile
/bin/cat <<EOM >$FILE
FROM localhost:5000/my-commit-alpine
EOM
chmod +x $FILE
########
# Clean up Buildah
########
buildah rm --all
buildah rmi -f $(buildah --debug=false images -q)
########
# Try Buildah bud with creds but no auth, this should FAIL
########
buildah bud -f ./Dockerfile --tls-verify=true --creds=testuser:testpassword
########
# Try Buildah bud with creds and auth, this should work
########
buildah bud -f ./Dockerfile --cert-dir /root/auth --tls-verify=true --creds=testuser:testpassword
########
# Show stuff
@@ -182,6 +228,9 @@ buildah images
########
# Clean up
########
read -p "Press enter to continue and clean up everything"
rm -f ./Dockerfile
rm -rf ${TESTDIR}/auth
docker rm -f $(docker ps --all -q)
docker rmi -f $(docker images -q)


@@ -17,6 +17,22 @@
buildah images
buildah containers
########
# Run ls in redis container, this should work
########
ctrid=$(buildah from registry.access.redhat.com/rhscl/redis-32-rhel7)
buildah run $ctrid ls /
########
# Validate that touch works after installing httpd; covers an
# SELinux issue that previously failed and should now work.
########
ctr=$(buildah from scratch)
mnt=$(buildah mount $ctr)
dnf -y install --installroot=$mnt --releasever=27 httpd
buildah run $ctr touch /test
########
# Create Fedora based container
########
@@ -189,5 +205,5 @@ buildah run $whalesays
########
# Clean up Buildah
########
buildah rm $(buildah containers -q)
buildah rmi -f $(buildah --debug=false images -q)
buildah rm --all
buildah rmi --all


@@ -11,6 +11,7 @@ exec gometalinter.v1 \
--enable-gc \
--exclude='error return value not checked.*(Close|Log|Print).*\(errcheck\)$' \
--exclude='.*_test\.go:.*error return value not checked.*\(errcheck\)$' \
--exclude='declaration of.*err.*shadows declaration.*\(vetshadow\)$'\
--exclude='duplicate of.*_test.go.*\(dupl\)$' \
--exclude='vendor\/.*' \
--disable=gotype \


@@ -9,6 +9,7 @@ load helpers
}
@test "buildah version up to date in .spec file" {
skip "cni doesn't version the same"
run buildah version
[ "$status" -eq 0 ]
bversion=$(echo "$output" | awk '/^Version:/ { print $NF }')


@@ -1,6 +1,7 @@
package util
import (
"net/url"
"path"
"strings"
@@ -9,6 +10,7 @@ import (
is "github.com/containers/image/storage"
"github.com/containers/image/types"
"github.com/containers/storage"
"github.com/docker/distribution/registry/api/errcode"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -152,3 +154,17 @@ func AddImageNames(store storage.Store, image *storage.Image, addNames []string)
}
return nil
}
// GetFailureCause checks the type of the error "err" and returns a new
// error message that reflects the reason of the failure.
// In case err type is not a familiar one the error "defaultError" is returned.
func GetFailureCause(err, defaultError error) error {
switch nErr := errors.Cause(err).(type) {
case errcode.Errors:
return err
case errcode.Error, *url.Error:
return nErr
default:
return defaultError
}
}


@@ -54,3 +54,4 @@ github.com/containerd/continuity master
github.com/gogo/protobuf master
github.com/xeipuuv/gojsonpointer master
github.com/pquerna/ffjson d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac
github.com/projectatomic/libpod master


@@ -368,7 +368,10 @@ func (ic *imageCopier) copyLayers() error {
srcInfos := ic.src.LayerInfos()
destInfos := []types.BlobInfo{}
diffIDs := []digest.Digest{}
updatedSrcInfos := ic.src.LayerInfosForCopy()
updatedSrcInfos, err := ic.src.LayerInfosForCopy()
if err != nil {
return err
}
srcInfosUpdated := false
if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
if !ic.canModifyManifest {


@@ -46,6 +46,11 @@ func (ic *imageCopier) determineManifestConversion(destSupportedManifestMIMEType
if err != nil { // This should have been cached?!
return "", nil, errors.Wrap(err, "Error reading manifest")
}
normalizedSrcType := manifest.NormalizedMIMEType(srcType)
if srcType != normalizedSrcType {
logrus.Debugf("Source manifest MIME type %s, treating it as %s", srcType, normalizedSrcType)
srcType = normalizedSrcType
}
if forceManifestMIMEType != "" {
destSupportedManifestMIMETypes = []string{forceManifestMIMEType}


@@ -12,7 +12,7 @@ import (
"github.com/sirupsen/logrus"
)
const version = "Directory Transport Version: 1.0\n"
const version = "Directory Transport Version: 1.1\n"
// ErrNotContainerImageDir indicates that the directory doesn't match the expected contents of a directory created
// using the 'dir' transport
@@ -70,7 +70,7 @@ func newImageDestination(ref dirReference, compress bool) (types.ImageDestinatio
}
}
// create version file
err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0755)
err = ioutil.WriteFile(d.ref.versionPath(), []byte(version), 0644)
if err != nil {
return nil, errors.Wrapf(err, "error creating version file %q", d.ref.versionPath())
}


@@ -52,11 +52,11 @@ func (s *dirImageSource) GetManifest(instanceDigest *digest.Digest) ([]byte, str
func (s *dirImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
r, err := os.Open(s.ref.layerPath(info.Digest))
if err != nil {
return nil, 0, nil
return nil, -1, err
}
fi, err := r.Stat()
if err != nil {
return nil, 0, nil
return nil, -1, err
}
return r, fi.Size(), nil
}
@@ -84,6 +84,6 @@ func (s *dirImageSource) GetSignatures(ctx context.Context, instanceDigest *dige
}
// LayerInfosForCopy() returns updated layer info that should be used when copying, in preference to values in the manifest, if specified.
func (s *dirImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *dirImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
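The `LayerInfosForCopy` change above threads an error through what was previously an infallible accessor; implementations with no substitute layer info now return `(nil, nil)`, and callers must check the error. A reduced sketch of the pattern (the interface and `BlobInfo` here are illustrative stand-ins for the real `types.ImageSource` and `types.BlobInfo`):

```go
package main

import "fmt"

// BlobInfo is a reduced stand-in for types.BlobInfo in containers/image.
type BlobInfo struct {
	Digest string
	Size   int64
}

// layerInfoSource captures just the changed method: the lookup can now
// fail, so callers check the error instead of assuming success.
type layerInfoSource interface {
	LayerInfosForCopy() ([]BlobInfo, error)
}

// dirSource has no per-copy layer substitutions, so it returns (nil, nil),
// matching the updated dir/archive/daemon sources in the diff.
type dirSource struct{}

func (dirSource) LayerInfosForCopy() ([]BlobInfo, error) { return nil, nil }

func main() {
	var s layerInfoSource = dirSource{}
	infos, err := s.LayerInfosForCopy()
	fmt.Println(infos == nil, err == nil)
}
```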


@@ -5,14 +5,13 @@ import (
"path/filepath"
"strings"
"github.com/pkg/errors"
"github.com/containers/image/directory/explicitfilepath"
"github.com/containers/image/docker/reference"
"github.com/containers/image/image"
"github.com/containers/image/transports"
"github.com/containers/image/types"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
)
func init() {
@@ -173,7 +172,7 @@ func (ref dirReference) manifestPath() string {
// layerPath returns a path for a layer tarball within a directory using our conventions.
func (ref dirReference) layerPath(digest digest.Digest) string {
// FIXME: Should we keep the digest identification?
return filepath.Join(ref.path, digest.Hex()+".tar")
return filepath.Join(ref.path, digest.Hex())
}
// signaturePath returns a path for a signature within a directory using our conventions.


@@ -36,6 +36,6 @@ func (s *archiveImageSource) Close() error {
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *archiveImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *archiveImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}


@@ -83,6 +83,6 @@ func (s *daemonImageSource) Close() error {
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *daemonImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *daemonImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}


@@ -8,7 +8,10 @@ import (
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"time"
@@ -24,10 +27,9 @@ import (
)
const (
dockerHostname = "docker.io"
dockerRegistry = "registry-1.docker.io"
systemPerHostCertDirPath = "/etc/docker/certs.d"
dockerHostname = "docker.io"
dockerV1Hostname = "index.docker.io"
dockerRegistry = "registry-1.docker.io"
resolvedPingV2URL = "%s://%s/v2/"
resolvedPingV1URL = "%s://%s/v1/_ping"
@@ -49,6 +51,7 @@ var (
ErrV1NotSupported = errors.New("can't talk to a V1 docker registry")
// ErrUnauthorizedForCredentials is returned when the status code returned is 401
ErrUnauthorizedForCredentials = errors.New("unable to retrieve auth token: invalid username/password")
systemPerHostCertDirPaths = [2]string{"/etc/containers/certs.d", "/etc/docker/certs.d"}
)
// extensionSignature and extensionSignatureList come from github.com/openshift/origin/pkg/dockerregistry/server/signaturedispatcher.go:
@@ -66,9 +69,10 @@ type extensionSignatureList struct {
}
type bearerToken struct {
Token string `json:"token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
Token string `json:"token"`
AccessToken string `json:"access_token"`
ExpiresIn int `json:"expires_in"`
IssuedAt time.Time `json:"issued_at"`
}
// dockerClient is configuration for dealing with a single Docker registry.
@@ -96,6 +100,24 @@ type authScope struct {
actions string
}
func newBearerTokenFromJSONBlob(blob []byte) (*bearerToken, error) {
token := new(bearerToken)
if err := json.Unmarshal(blob, &token); err != nil {
return nil, err
}
if token.Token == "" {
token.Token = token.AccessToken
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return token, nil
}
// this is cloned from docker/go-connections because upstream docker has changed
// it and make deps here fails otherwise.
// We'll drop this once we upgrade to docker 1.13.x deps.
@@ -109,19 +131,42 @@ func serverDefault() *tls.Config {
}
// dockerCertDir returns a path to a directory to be consumed by tlsclientconfig.SetupCertificates() depending on ctx and hostPort.
func dockerCertDir(ctx *types.SystemContext, hostPort string) string {
func dockerCertDir(ctx *types.SystemContext, hostPort string) (string, error) {
if ctx != nil && ctx.DockerCertPath != "" {
return ctx.DockerCertPath
return ctx.DockerCertPath, nil
}
var hostCertDir string
if ctx != nil && ctx.DockerPerHostCertDirPath != "" {
hostCertDir = ctx.DockerPerHostCertDirPath
} else if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
return filepath.Join(ctx.DockerPerHostCertDirPath, hostPort), nil
}
return filepath.Join(hostCertDir, hostPort)
var (
hostCertDir string
fullCertDirPath string
)
for _, systemPerHostCertDirPath := range systemPerHostCertDirPaths {
if ctx != nil && ctx.RootForImplicitAbsolutePaths != "" {
hostCertDir = filepath.Join(ctx.RootForImplicitAbsolutePaths, systemPerHostCertDirPath)
} else {
hostCertDir = systemPerHostCertDirPath
}
fullCertDirPath = filepath.Join(hostCertDir, hostPort)
_, err := os.Stat(fullCertDirPath)
if err == nil {
break
}
if os.IsNotExist(err) {
continue
}
if os.IsPermission(err) {
logrus.Debugf("error accessing certs directory due to permissions: %v", err)
continue
}
if err != nil {
return "", err
}
}
return fullCertDirPath, nil
}
// newDockerClientFromRef returns a new dockerClient instance for refHostname (a host a specified in the Docker image reference, not canonicalized to dockerRegistry)
@@ -155,7 +200,10 @@ func newDockerClientWithDetails(ctx *types.SystemContext, registry, username, pa
// dockerHostname here, because it is more symmetrical to read the configuration in that case as well, and because
// generally the UI hides the existence of the different dockerRegistry. But note that this behavior is
// undocumented and may change if docker/docker changes.
certDir := dockerCertDir(ctx, hostName)
certDir, err := dockerCertDir(ctx, hostName)
if err != nil {
return nil, err
}
if err := tlsclientconfig.SetupCertificates(certDir, tr.TLSClientConfig); err != nil {
return nil, err
}
@@ -202,6 +250,100 @@ func CheckAuth(ctx context.Context, sCtx *types.SystemContext, username, passwor
}
}
// SearchResult holds the information of each matching image
// It matches the output returned by the v1 endpoint
type SearchResult struct {
Name string `json:"name"`
Description string `json:"description"`
// StarCount states the number of stars the image has
StarCount int `json:"star_count"`
IsTrusted bool `json:"is_trusted"`
// IsAutomated states whether the image is an automated build
IsAutomated bool `json:"is_automated"`
// IsOfficial states whether the image is an official build
IsOfficial bool `json:"is_official"`
}
// SearchRegistry queries a registry for images that contain "image" in their name
// The limit is the max number of results desired
// Note: The limit value doesn't work with all registries
// for example registry.access.redhat.com returns all the results without limiting it to the limit value
func SearchRegistry(ctx context.Context, sCtx *types.SystemContext, registry, image string, limit int) ([]SearchResult, error) {
type V2Results struct {
// Repositories holds the results returned by the /v2/_catalog endpoint
Repositories []string `json:"repositories"`
}
type V1Results struct {
// Results holds the results returned by the /v1/search endpoint
Results []SearchResult `json:"results"`
}
v2Res := &V2Results{}
v1Res := &V1Results{}
// The /v2/_catalog endpoint has been disabled for docker.io therefore the call made to that endpoint will fail
// So using the v1 hostname for docker.io for simplicity of implementation and the fact that it returns search results
if registry == dockerHostname {
registry = dockerV1Hostname
}
client, err := newDockerClientWithDetails(sCtx, registry, "", "", "", nil, "")
if err != nil {
return nil, errors.Wrapf(err, "error creating new docker client")
}
logrus.Debugf("trying to talk to v2 search endpoint\n")
resp, err := client.makeRequest(ctx, "GET", "/v2/_catalog", nil, nil)
if err != nil {
logrus.Debugf("error getting search results from v2 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Debugf("error getting search results from v2 endpoint %q, status code %q", registry, resp.StatusCode)
} else {
if err := json.NewDecoder(resp.Body).Decode(v2Res); err != nil {
return nil, err
}
searchRes := []SearchResult{}
for _, repo := range v2Res.Repositories {
if strings.Contains(repo, image) {
res := SearchResult{
Name: repo,
}
searchRes = append(searchRes, res)
}
}
return searchRes, nil
}
}
// set up the query values for the v1 endpoint
u := url.URL{
Path: "/v1/search",
}
q := u.Query()
q.Set("q", image)
q.Set("n", strconv.Itoa(limit))
u.RawQuery = q.Encode()
logrus.Debugf("trying to talk to v1 search endpoint\n")
resp, err = client.makeRequest(ctx, "GET", u.String(), nil, nil)
if err != nil {
logrus.Debugf("error getting search results from v1 endpoint %q: %v", registry, err)
} else {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logrus.Debugf("error getting search results from v1 endpoint %q, status code %d", registry, resp.StatusCode)
} else {
if err := json.NewDecoder(resp.Body).Decode(v1Res); err != nil {
return nil, err
}
return v1Res.Results, nil
}
}
return nil, errors.Wrapf(err, "couldn't search registry %q", registry)
}
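The v1 fallback above assembles its request with net/url query values. A minimal standalone sketch of that query construction (the image name and limit here are illustrative, not taken from the diff):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// buildV1SearchURL mirrors the query construction used for the /v1/search
// fallback: the search term goes in "q" and the result limit in "n".
func buildV1SearchURL(image string, limit int) string {
	u := url.URL{Path: "/v1/search"}
	q := u.Query()
	q.Set("q", image)
	q.Set("n", strconv.Itoa(limit))
	u.RawQuery = q.Encode() // Encode sorts keys, so "n" precedes "q"
	return u.String()
}

func main() {
	fmt.Println(buildV1SearchURL("alpine", 25))
}
```

Because url.Values.Encode sorts keys, the produced path is stable, which keeps request logs and tests deterministic.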
// makeRequest creates and executes a http.Request with the specified parameters, adding authentication and TLS options for the Docker client.
// The host name and scheme are taken from the client or autodetected, and the path is relative to it, i.e. the path usually starts with /v2/.
func (c *dockerClient) makeRequest(ctx context.Context, method, path string, headers map[string][]string, stream io.Reader) (*http.Response, error) {
@@ -332,18 +474,8 @@ func (c *dockerClient) getBearerToken(ctx context.Context, realm, service, scope
if err != nil {
return nil, err
}
var token bearerToken
if err := json.Unmarshal(tokenBlob, &token); err != nil {
return nil, err
}
if token.ExpiresIn < minimumTokenLifetimeSeconds {
token.ExpiresIn = minimumTokenLifetimeSeconds
logrus.Debugf("Increasing token expiration to: %d seconds", token.ExpiresIn)
}
if token.IssuedAt.IsZero() {
token.IssuedAt = time.Now().UTC()
}
return &token, nil
return newBearerTokenFromJSONBlob(tokenBlob)
}
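The hunk above replaces the inline token handling with a call to newBearerTokenFromJSONBlob. A hedged sketch of what that helper plausibly does, reconstructed from the deleted lines (the struct fields and the 60-second floor are assumptions based on the removed code, not the vendored implementation):

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// minimumTokenLifetimeSeconds matches the floor the deleted inline code enforced.
const minimumTokenLifetimeSeconds = 60

type bearerToken struct {
	Token     string    `json:"token"`
	ExpiresIn int       `json:"expires_in"`
	IssuedAt  time.Time `json:"issued_at"`
}

// newBearerTokenFromJSONBlob is a sketch of the extracted helper: parse the
// blob, clamp very short lifetimes, and default a missing issue time to now.
func newBearerTokenFromJSONBlob(blob []byte) (*bearerToken, error) {
	var token bearerToken
	if err := json.Unmarshal(blob, &token); err != nil {
		return nil, err
	}
	if token.ExpiresIn < minimumTokenLifetimeSeconds {
		token.ExpiresIn = minimumTokenLifetimeSeconds
	}
	if token.IssuedAt.IsZero() {
		token.IssuedAt = time.Now().UTC()
	}
	return &token, nil
}

func main() {
	tok, err := newBearerTokenFromJSONBlob([]byte(`{"token":"abc","expires_in":5}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(tok.ExpiresIn)
}
```

Extracting the parsing into one constructor keeps the clamping and defaulting behavior in a single place instead of scattered across callers.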
// detectProperties detects various properties of the registry.

View File

@@ -131,7 +131,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
defer res.Body.Close()
if res.StatusCode != http.StatusAccepted {
logrus.Debugf("Error initiating layer upload, response %#v", *res)
return types.BlobInfo{}, errors.Errorf("Error initiating layer upload to %s, status %d", uploadPath, res.StatusCode)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error initiating layer upload to %s", uploadPath)
}
uploadLocation, err := res.Location()
if err != nil {
@@ -167,7 +167,7 @@ func (d *dockerImageDestination) PutBlob(stream io.Reader, inputInfo types.BlobI
defer res.Body.Close()
if res.StatusCode != http.StatusCreated {
logrus.Debugf("Error uploading layer, response %#v", *res)
return types.BlobInfo{}, errors.Errorf("Error uploading layer to %s, status %d", uploadLocation, res.StatusCode)
return types.BlobInfo{}, errors.Wrapf(client.HandleErrorResponse(res), "Error uploading layer to %s", uploadLocation)
}
logrus.Debugf("Upload of layer %s complete", computedDigest)
@@ -196,7 +196,7 @@ func (d *dockerImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
return true, getBlobSize(res), nil
case http.StatusUnauthorized:
logrus.Debugf("... not authorized")
return false, -1, errors.Errorf("not authorized to read from destination repository %s", reference.Path(d.ref.ref))
return false, -1, client.HandleErrorResponse(res)
case http.StatusNotFound:
logrus.Debugf("... not present")
return false, -1, nil
@@ -447,7 +447,7 @@ sigExists:
logrus.Debugf("Error body %s", string(body))
}
logrus.Debugf("Error uploading signature, status %d, %#v", res.StatusCode, res)
return errors.Errorf("Error uploading signature to %s, status %d", path, res.StatusCode)
return errors.Wrapf(client.HandleErrorResponse(res), "Error uploading signature to %s", path)
}
}

View File

@@ -53,8 +53,8 @@ func (s *dockerImageSource) Close() error {
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *dockerImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *dockerImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
// simplifyContentType drops parameters from a HTTP media type (see https://tools.ietf.org/html/rfc7231#section-3.1.1.1)

View File

@@ -95,7 +95,7 @@ func (m *manifestSchema1) imageInspectInfo() (*types.ImageInspectInfo, error) {
// This is a horribly specific interface, but computing InformationOnly.LayerDiffIDs can be very expensive
// (most importantly it forces us to download the full layers even if they are already present at the destination).
func (m *manifestSchema1) UpdatedImageNeedsLayerDiffIDs(options types.ManifestUpdateOptions) bool {
return options.ManifestMIMEType == manifest.DockerV2Schema2MediaType
return (options.ManifestMIMEType == manifest.DockerV2Schema2MediaType || options.ManifestMIMEType == imgspecv1.MediaTypeImageManifest)
}
// UpdatedImage returns a types.Image modified according to options.

View File

@@ -65,6 +65,6 @@ func (i *memoryImage) Inspect() (*types.ImageInspectInfo, error) {
// LayerInfosForCopy returns an updated set of layer blob information which may not match the manifest.
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *memoryImage) LayerInfosForCopy() []types.BlobInfo {
return nil
func (i *memoryImage) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}

View File

@@ -149,6 +149,16 @@ func (m *manifestOCI1) UpdatedImage(options types.ManifestUpdateOptions) (types.
switch options.ManifestMIMEType {
case "": // No conversion, OK
case manifest.DockerV2Schema1MediaType, manifest.DockerV2Schema1SignedMediaType:
// We can't directly convert to V1, but we can transitively convert via a V2 image
m2, err := copy.convertToManifestSchema2()
if err != nil {
return nil, err
}
return m2.UpdatedImage(types.ManifestUpdateOptions{
ManifestMIMEType: options.ManifestMIMEType,
InformationOnly: options.InformationOnly,
})
case manifest.DockerV2Schema2MediaType:
return copy.convertToManifestSchema2()
default:

View File

@@ -101,6 +101,6 @@ func (i *sourcedImage) Inspect() (*types.ImageInspectInfo, error) {
return inspectManifest(i.genericManifest)
}
func (i *sourcedImage) LayerInfosForCopy() []types.BlobInfo {
func (i *sourcedImage) LayerInfosForCopy() ([]types.BlobInfo, error) {
return i.UnparsedImage.LayerInfosForCopy()
}

View File

@@ -97,6 +97,6 @@ func (i *UnparsedImage) Signatures(ctx context.Context) ([][]byte, error) {
// LayerInfosForCopy returns an updated set of layer blob information which may not match the manifest.
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
func (i *UnparsedImage) LayerInfosForCopy() []types.BlobInfo {
func (i *UnparsedImage) LayerInfosForCopy() ([]types.BlobInfo, error) {
return i.src.LayerInfosForCopy()
}

View File

@@ -90,6 +90,6 @@ func (s *ociArchiveImageSource) GetSignatures(ctx context.Context, instanceDiges
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *ociArchiveImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *ociArchiveImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}

View File

@@ -144,8 +144,8 @@ func (s *ociImageSource) getExternalBlob(urls []string) (io.ReadCloser, int64, e
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *ociImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *ociImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
func getBlobSize(resp *http.Response) int64 {

View File

@@ -247,8 +247,8 @@ func (s *openshiftImageSource) GetSignatures(ctx context.Context, instanceDigest
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *openshiftImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (s *openshiftImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}
// ensureImageIsResolved sets up s.docker and s.imageStreamImageName

View File

@@ -14,6 +14,7 @@ import (
"os/exec"
"path/filepath"
"strconv"
"strings"
"syscall"
"time"
"unsafe"
@@ -175,7 +176,10 @@ func fixFiles(selinuxHnd *C.struct_selabel_handle, root string, dir string, user
if err != nil {
return err
}
relPath = fmt.Sprintf("/%s", relPath)
// Handle /exports/hostfs as a special case. Files under this directory are copied to the host,
// so we benefit from giving them the same SELinux label they would have on the host: that lets
// us use hard links instead of copying the files.
relPath = fmt.Sprintf("/%s", strings.TrimPrefix(relPath, "exports/hostfs/"))
relPathC := C.CString(relPath)
defer C.free(unsafe.Pointer(relPathC))
@@ -226,33 +230,36 @@ func (d *ostreeImageDestination) ostreeCommit(repo *otbuiltin.Repo, branch strin
return err
}
func generateTarSplitMetadata(output *bytes.Buffer, file string) error {
func generateTarSplitMetadata(output *bytes.Buffer, file string) (digest.Digest, int64, error) {
mfz := gzip.NewWriter(output)
defer mfz.Close()
metaPacker := storage.NewJSONPacker(mfz)
stream, err := os.OpenFile(file, os.O_RDONLY, 0)
if err != nil {
return err
return "", -1, err
}
defer stream.Close()
gzReader, err := gzip.NewReader(stream)
gzReader, err := archive.DecompressStream(stream)
if err != nil {
return err
return "", -1, err
}
defer gzReader.Close()
its, err := asm.NewInputTarStream(gzReader, metaPacker, nil)
if err != nil {
return err
return "", -1, err
}
_, err = io.Copy(ioutil.Discard, its)
digester := digest.Canonical.Digester()
written, err := io.Copy(digester.Hash(), its)
if err != nil {
return err
return "", -1, err
}
return nil
return digester.Digest(), written, nil
}
func (d *ostreeImageDestination) importBlob(selinuxHnd *C.struct_selabel_handle, repo *otbuiltin.Repo, blob *blobToImport) error {
@@ -267,7 +274,8 @@ func (d *ostreeImageDestination) importBlob(selinuxHnd *C.struct_selabel_handle,
}()
var tarSplitOutput bytes.Buffer
if err := generateTarSplitMetadata(&tarSplitOutput, blob.BlobPath); err != nil {
uncompressedDigest, uncompressedSize, err := generateTarSplitMetadata(&tarSplitOutput, blob.BlobPath)
if err != nil {
return err
}
@@ -289,6 +297,8 @@ func (d *ostreeImageDestination) importBlob(selinuxHnd *C.struct_selabel_handle,
}
}
return d.ostreeCommit(repo, ostreeBranch, destinationPath, []string{fmt.Sprintf("docker.size=%d", blob.Size),
fmt.Sprintf("docker.uncompressed_size=%d", uncompressedSize),
fmt.Sprintf("docker.uncompressed_digest=%s", uncompressedDigest.String()),
fmt.Sprintf("tarsplit.output=%s", base64.StdEncoding.EncodeToString(tarSplitOutput.Bytes()))})
}
@@ -311,7 +321,17 @@ func (d *ostreeImageDestination) HasBlob(info types.BlobInfo) (bool, int64, erro
}
branch := fmt.Sprintf("ociimage/%s", info.Digest.Hex())
found, data, err := readMetadata(d.repo, branch, "docker.size")
found, data, err := readMetadata(d.repo, branch, "docker.uncompressed_digest")
if err != nil || !found {
return found, -1, err
}
found, data, err = readMetadata(d.repo, branch, "docker.uncompressed_size")
if err != nil || !found {
return found, -1, err
}
found, data, err = readMetadata(d.repo, branch, "docker.size")
if err != nil || !found {
return found, -1, err
}
@@ -383,7 +403,7 @@ func (d *ostreeImageDestination) Commit() error {
var selinuxHnd *C.struct_selabel_handle
if os.Getuid() == 0 && selinux.GetEnabled() {
selinuxHnd, err := C.selabel_open(C.SELABEL_CTX_FILE, nil, 0)
selinuxHnd, err = C.selabel_open(C.SELABEL_CTX_FILE, nil, 0)
if selinuxHnd == nil {
return errors.Wrapf(err, "cannot open the SELinux DB")
}

View File

@@ -37,11 +37,13 @@ type ostreeImageSource struct {
ref ostreeReference
tmpDir string
repo *C.struct_OstreeRepo
// get the compressed layer by its uncompressed checksum
compressed map[digest.Digest]digest.Digest
}
// newImageSource returns an ImageSource for reading from an existing directory.
func newImageSource(ctx *types.SystemContext, tmpDir string, ref ostreeReference) (types.ImageSource, error) {
return &ostreeImageSource{ref: ref, tmpDir: tmpDir}, nil
return &ostreeImageSource{ref: ref, tmpDir: tmpDir, compressed: nil}, nil
}
// Reference returns the reference used to set up this source.
@@ -255,7 +257,21 @@ func (s *ostreeImageSource) readSingleFile(commit, path string) (io.ReadCloser,
// GetBlob returns a stream for the specified blob, and the blob's size.
func (s *ostreeImageSource) GetBlob(info types.BlobInfo) (io.ReadCloser, int64, error) {
blob := info.Digest.Hex()
// Ensure s.compressed is initialized. It is built by LayerInfosForCopy.
if s.compressed == nil {
_, err := s.LayerInfosForCopy()
if err != nil {
return nil, -1, err
}
}
compressedBlob, found := s.compressed[info.Digest]
if found {
blob = compressedBlob.Hex()
}
branch := fmt.Sprintf("ociimage/%s", blob)
if s.repo == nil {
@@ -348,7 +364,45 @@ func (s *ostreeImageSource) GetSignatures(ctx context.Context, instanceDigest *d
return signatures, nil
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (s *ostreeImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
// LayerInfosForCopy() returns the list of layer blobs that make up the root filesystem of
// the image, after they've been decompressed.
func (s *ostreeImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
updatedBlobInfos := []types.BlobInfo{}
manifestBlob, manifestType, err := s.GetManifest(nil)
if err != nil {
return nil, err
}
man, err := manifest.FromBlob(manifestBlob, manifestType)
if err != nil {
return nil, err
}
s.compressed = make(map[digest.Digest]digest.Digest)
layerBlobs := man.LayerInfos()
for _, layerBlob := range layerBlobs {
branch := fmt.Sprintf("ociimage/%s", layerBlob.Digest.Hex())
found, uncompressedDigestStr, err := readMetadata(s.repo, branch, "docker.uncompressed_digest")
if err != nil || !found {
return nil, err
}
found, uncompressedSizeStr, err := readMetadata(s.repo, branch, "docker.uncompressed_size")
if err != nil || !found {
return nil, err
}
uncompressedSize, err := strconv.ParseInt(uncompressedSizeStr, 10, 64)
if err != nil {
return nil, err
}
uncompressedDigest := digest.Digest(uncompressedDigestStr)
blobInfo := types.BlobInfo{
Digest: uncompressedDigest,
Size: uncompressedSize,
MediaType: layerBlob.MediaType,
}
s.compressed[uncompressedDigest] = layerBlob.Digest
updatedBlobInfos = append(updatedBlobInfos, blobInfo)
}
return updatedBlobInfos, nil
}

View File

@@ -177,18 +177,16 @@ func (s *storageImageSource) GetManifest(instanceDigest *digest.Digest) (manifes
// LayerInfosForCopy() returns the list of layer blobs that make up the root filesystem of
// the image, after they've been decompressed.
func (s *storageImageSource) LayerInfosForCopy() []types.BlobInfo {
func (s *storageImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
simg, err := s.imageRef.transport.store.Image(s.ID)
if err != nil {
logrus.Errorf("error reading image %q: %v", s.ID, err)
return nil
return nil, errors.Wrapf(err, "error reading image %q", s.ID)
}
updatedBlobInfos := []types.BlobInfo{}
layerID := simg.TopLayer
_, manifestType, err := s.GetManifest(nil)
if err != nil {
logrus.Errorf("error reading image manifest for %q: %v", s.ID, err)
return nil
return nil, errors.Wrapf(err, "error reading image manifest for %q", s.ID)
}
uncompressedLayerType := ""
switch manifestType {
@@ -201,16 +199,13 @@ func (s *storageImageSource) LayerInfosForCopy() []types.BlobInfo {
for layerID != "" {
layer, err := s.imageRef.transport.store.Layer(layerID)
if err != nil {
logrus.Errorf("error reading layer %q in image %q: %v", layerID, s.ID, err)
return nil
return nil, errors.Wrapf(err, "error reading layer %q in image %q", layerID, s.ID)
}
if layer.UncompressedDigest == "" {
logrus.Errorf("uncompressed digest for layer %q is unknown", layerID)
return nil
return nil, errors.Errorf("uncompressed digest for layer %q is unknown", layerID)
}
if layer.UncompressedSize < 0 {
logrus.Errorf("uncompressed size for layer %q is unknown", layerID)
return nil
return nil, errors.Errorf("uncompressed size for layer %q is unknown", layerID)
}
blobInfo := types.BlobInfo{
Digest: layer.UncompressedDigest,
@@ -220,7 +215,7 @@ func (s *storageImageSource) LayerInfosForCopy() []types.BlobInfo {
updatedBlobInfos = append([]types.BlobInfo{blobInfo}, updatedBlobInfos...)
layerID = layer.Parent
}
return updatedBlobInfos
return updatedBlobInfos, nil
}
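The storage source above walks from the image's TopLayer down through Parent links, prepending each entry so the returned slice ends up base-first. That ordering trick can be shown with a toy parent chain (the layer IDs here are made up for illustration):

```go
package main

import "fmt"

type layer struct {
	ID     string
	Parent string
}

// chainBaseFirst walks from the top layer toward the base via Parent,
// prepending each ID — mirroring how LayerInfosForCopy orders its
// BlobInfo slice so ancestors come before descendants.
func chainBaseFirst(layers map[string]layer, topID string) []string {
	out := []string{}
	for id := topID; id != ""; {
		l := layers[id]
		out = append([]string{l.ID}, out...) // prepend: parents end up first
		id = l.Parent
	}
	return out
}

func main() {
	layers := map[string]layer{
		"base": {ID: "base"},
		"mid":  {ID: "mid", Parent: "base"},
		"top":  {ID: "top", Parent: "mid"},
	}
	fmt.Println(chainBaseFirst(layers, "top"))
}
```

Prepending during a top-down walk avoids a separate reverse pass and keeps the result in the order manifests list layers.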
// GetSignatures() parses the image's signatures blob into a slice of byte slices.

View File

@@ -89,5 +89,5 @@ func (r *tarballReference) DeleteImage(ctx *types.SystemContext) error {
}
func (r *tarballReference) NewImageDestination(ctx *types.SystemContext) (types.ImageDestination, error) {
return nil, fmt.Errorf("destination not implemented yet")
return nil, fmt.Errorf(`"tarball:" locations can only be read from, not written to`)
}

View File

@@ -255,6 +255,6 @@ func (is *tarballImageSource) Reference() types.ImageReference {
}
// LayerInfosForCopy() returns updated layer info that should be used when reading, in preference to values in the manifest, if specified.
func (*tarballImageSource) LayerInfosForCopy() []types.BlobInfo {
return nil
func (*tarballImageSource) LayerInfosForCopy() ([]types.BlobInfo, error) {
return nil, nil
}

View File

@@ -129,7 +129,7 @@ type ImageSource interface {
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer blobsums that are listed in the image's manifest.
// The Digest field is guaranteed to be provided; Size may be -1.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfosForCopy() []BlobInfo
LayerInfosForCopy() ([]BlobInfo, error)
}
// ImageDestination is a service, possibly remote (= slow), to store components of a single image.
@@ -218,7 +218,7 @@ type UnparsedImage interface {
// LayerInfosForCopy returns either nil (meaning the values in the manifest are fine), or updated values for the layer blobsums that are listed in the image's manifest.
// The Digest field is guaranteed to be provided, Size may be -1 and MediaType may be optionally provided.
// WARNING: The list may contain duplicates, and they are semantically relevant.
LayerInfosForCopy() []BlobInfo
LayerInfosForCopy() ([]BlobInfo, error)
}
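With the interface change, callers of LayerInfosForCopy must now distinguish three outcomes: an error, a nil slice (the manifest's values are fine), and a non-nil slice of substitutes. A hedged caller-side sketch of that convention (the helper and the simplified BlobInfo struct are illustrative, not part of the vendored API):

```go
package main

import "fmt"

// BlobInfo is a pared-down stand-in for the types.BlobInfo in the diff.
type BlobInfo struct {
	Digest string
	Size   int64
}

// pickLayerInfos applies the nil-means-use-manifest convention from the
// updated interface: only a non-nil result overrides the manifest's layers.
func pickLayerInfos(manifestLayers, updated []BlobInfo, err error) ([]BlobInfo, error) {
	if err != nil {
		return nil, err
	}
	if updated == nil {
		return manifestLayers, nil
	}
	return updated, nil
}

func main() {
	manifest := []BlobInfo{{Digest: "sha256:aaa", Size: 10}}
	got, _ := pickLayerInfos(manifest, nil, nil)
	fmt.Println(len(got), got[0].Digest)
}
```

Returning an explicit error (instead of logging and returning nil, as the old storage implementation did) lets callers stop a copy instead of silently falling back to possibly-wrong manifest values.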
// Image is the primary API for inspecting properties of images.

View File

@@ -463,9 +463,9 @@ func (a *Driver) isParent(id, parent string) bool {
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (a *Driver) Diff(id, parent string) (io.ReadCloser, error) {
func (a *Driver) Diff(id, parent, mountLabel string) (io.ReadCloser, error) {
if !a.isParent(id, parent) {
return a.naiveDiff.Diff(id, parent)
return a.naiveDiff.Diff(id, parent, mountLabel)
}
// AUFS doesn't need the parent layer to produce a diff.
@@ -502,9 +502,9 @@ func (a *Driver) applyDiff(id string, diff io.Reader) error {
// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
func (a *Driver) DiffSize(id, parent, mountLabel string) (size int64, err error) {
if !a.isParent(id, parent) {
return a.naiveDiff.DiffSize(id, parent)
return a.naiveDiff.DiffSize(id, parent, mountLabel)
}
// AUFS doesn't need the parent layer to calculate the diff size.
return directory.Size(path.Join(a.rootPath(), "diff", id))
@@ -513,9 +513,9 @@ func (a *Driver) DiffSize(id, parent string) (size int64, err error) {
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
func (a *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
func (a *Driver) ApplyDiff(id, parent, mountLabel string, diff io.Reader) (size int64, err error) {
if !a.isParent(id, parent) {
return a.naiveDiff.ApplyDiff(id, parent, diff)
return a.naiveDiff.ApplyDiff(id, parent, mountLabel, diff)
}
// AUFS doesn't need the parent id to apply the diff if it is the direct parent.
@@ -523,14 +523,14 @@ func (a *Driver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err e
return
}
return a.DiffSize(id, parent)
return a.DiffSize(id, parent, mountLabel)
}
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (a *Driver) Changes(id, parent string) ([]archive.Change, error) {
func (a *Driver) Changes(id, parent, mountLabel string) ([]archive.Change, error) {
if !a.isParent(id, parent) {
return a.naiveDiff.Changes(id, parent)
return a.naiveDiff.Changes(id, parent, mountLabel)
}
// AUFS doesn't have snapshots, so we need to get changes from all parent

View File

@@ -92,19 +92,19 @@ type ProtoDriver interface {
type DiffDriver interface {
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
Diff(id, parent string) (io.ReadCloser, error)
Diff(id, parent, mountLabel string) (io.ReadCloser, error)
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
Changes(id, parent string) ([]archive.Change, error)
Changes(id, parent, mountLabel string) ([]archive.Change, error)
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
// The io.Reader must be an uncompressed stream.
ApplyDiff(id, parent string, diff io.Reader) (size int64, err error)
ApplyDiff(id, parent, mountLabel string, diff io.Reader) (size int64, err error)
// DiffSize calculates the changes between the specified id
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
DiffSize(id, parent string) (size int64, err error)
DiffSize(id, parent, mountLabel string) (size int64, err error)
}
// Driver is the interface for layered/snapshot file system drivers.

View File

@@ -31,10 +31,10 @@ type NaiveDiffDriver struct {
// NewNaiveDiffDriver returns a fully functional driver that wraps the
// given ProtoDriver and adds the capability of the following methods which
// it may or may not support on its own:
// Diff(id, parent string) (io.ReadCloser, error)
// Changes(id, parent string) ([]archive.Change, error)
// ApplyDiff(id, parent string, diff io.Reader) (size int64, err error)
// DiffSize(id, parent string) (size int64, err error)
// Diff(id, parent, mountLabel string) (io.ReadCloser, error)
// Changes(id, parent, mountLabel string) ([]archive.Change, error)
// ApplyDiff(id, parent, mountLabel string, diff io.Reader) (size int64, err error)
// DiffSize(id, parent, mountLabel string) (size int64, err error)
func NewNaiveDiffDriver(driver ProtoDriver, uidMaps, gidMaps []idtools.IDMap) Driver {
return &NaiveDiffDriver{ProtoDriver: driver,
uidMaps: uidMaps,
@@ -43,11 +43,11 @@ func NewNaiveDiffDriver(driver ProtoDriver, uidMaps, gidMaps []idtools.IDMap) Dr
// Diff produces an archive of the changes between the specified
// layer and its parent layer which may be "".
func (gdw *NaiveDiffDriver) Diff(id, parent string) (arch io.ReadCloser, err error) {
func (gdw *NaiveDiffDriver) Diff(id, parent, mountLabel string) (arch io.ReadCloser, err error) {
startTime := time.Now()
driver := gdw.ProtoDriver
layerFs, err := driver.Get(id, "")
layerFs, err := driver.Get(id, mountLabel)
if err != nil {
return nil, err
}
@@ -70,7 +70,7 @@ func (gdw *NaiveDiffDriver) Diff(id, parent string) (arch io.ReadCloser, err err
}), nil
}
parentFs, err := driver.Get(parent, "")
parentFs, err := driver.Get(parent, mountLabel)
if err != nil {
return nil, err
}
@@ -101,10 +101,10 @@ func (gdw *NaiveDiffDriver) Diff(id, parent string) (arch io.ReadCloser, err err
// Changes produces a list of changes between the specified layer
// and its parent layer. If parent is "", then all changes will be ADD changes.
func (gdw *NaiveDiffDriver) Changes(id, parent string) ([]archive.Change, error) {
func (gdw *NaiveDiffDriver) Changes(id, parent, mountLabel string) ([]archive.Change, error) {
driver := gdw.ProtoDriver
layerFs, err := driver.Get(id, "")
layerFs, err := driver.Get(id, mountLabel)
if err != nil {
return nil, err
}
@@ -113,7 +113,7 @@ func (gdw *NaiveDiffDriver) Changes(id, parent string) ([]archive.Change, error)
parentFs := ""
if parent != "" {
parentFs, err = driver.Get(parent, "")
parentFs, err = driver.Get(parent, mountLabel)
if err != nil {
return nil, err
}
@@ -126,11 +126,11 @@ func (gdw *NaiveDiffDriver) Changes(id, parent string) ([]archive.Change, error)
// ApplyDiff extracts the changeset from the given diff into the
// layer with the specified id and parent, returning the size of the
// new layer in bytes.
func (gdw *NaiveDiffDriver) ApplyDiff(id, parent string, diff io.Reader) (size int64, err error) {
func (gdw *NaiveDiffDriver) ApplyDiff(id, parent, mountLabel string, diff io.Reader) (size int64, err error) {
driver := gdw.ProtoDriver
// Mount the root filesystem so we can apply the diff/layer.
layerFs, err := driver.Get(id, "")
layerFs, err := driver.Get(id, mountLabel)
if err != nil {
return
}
@@ -151,15 +151,15 @@ func (gdw *NaiveDiffDriver) ApplyDiff(id, parent string, diff io.Reader) (size i
// DiffSize calculates the changes between the specified layer
// and its parent and returns the size in bytes of the changes
// relative to its base filesystem directory.
func (gdw *NaiveDiffDriver) DiffSize(id, parent string) (size int64, err error) {
func (gdw *NaiveDiffDriver) DiffSize(id, parent, mountLabel string) (size int64, err error) {
driver := gdw.ProtoDriver
changes, err := gdw.Changes(id, parent)
changes, err := gdw.Changes(id, parent, mountLabel)
if err != nil {
return
}
layerFs, err := driver.Get(id, "")
layerFs, err := driver.Get(id, mountLabel)
if err != nil {
return
}

View File

@@ -142,16 +142,18 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, err
}
supportsDType, err := supportsOverlay(home, fsMagic, rootUID, rootGID)
if err != nil {
return nil, errors.Wrap(graphdriver.ErrNotSupported, "kernel does not support overlay fs")
}
// Create the driver home dir
if err := idtools.MkdirAllAs(path.Join(home, linkDir), 0700, rootUID, rootGID); err != nil && !os.IsExist(err) {
return nil, err
}
supportsDType, err := supportsOverlay(home, fsMagic, rootUID, rootGID)
if err != nil {
os.Remove(filepath.Join(home, linkDir))
os.Remove(home)
return nil, errors.Wrap(graphdriver.ErrNotSupported, "kernel does not support overlay fs")
}
if err := mount.MakePrivate(home); err != nil {
return nil, err
}
@@ -181,7 +183,7 @@ func Init(home string, options []string, uidMaps, gidMaps []idtools.IDMap) (grap
return nil, fmt.Errorf("Storage option overlay.size only supported for backingFS XFS. Found %v", backingFs)
}
logrus.Debugf("backingFs=%s, projectQuotaSupported=%v", backingFs, projectQuotaSupported)
logrus.Debugf("backingFs=%s, projectQuotaSupported=%v, useNativeDiff=%v", backingFs, projectQuotaSupported, !useNaiveDiff(home))
return d, nil
}
@@ -245,9 +247,7 @@ func supportsOverlay(home string, homeMagic graphdriver.FsMagic, rootUID, rootGI
return false, err
}
if !supportsDType {
logrus.Warn(overlayutils.ErrDTypeNotSupported("overlay", backingFs))
// TODO: Will make fatal when CRI-O Has AMI built on RHEL7.4
// return nil, overlayutils.ErrDTypeNotSupported("overlay", backingFs)
return false, overlayutils.ErrDTypeNotSupported("overlay", backingFs)
}
// Try a test mount in the specific location we're looking at using.
@@ -426,11 +426,6 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
return err
}
// if no parent directory, done
if parent == "" {
return nil
}
if err := idtools.MkdirAs(path.Join(dir, "work"), 0700, rootUID, rootGID); err != nil {
return err
}
@@ -438,6 +433,11 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
return err
}
// if no parent directory, create a dummy lower directory and skip writing a "lowers" file
if parent == "" {
return idtools.MkdirAs(path.Join(dir, "empty"), 0700, rootUID, rootGID)
}
lower, err := d.getLower(parent)
if err != nil {
return err
@@ -558,11 +558,7 @@ func (d *Driver) Get(id, mountLabel string) (_ string, retErr error) {
diffDir := path.Join(dir, "diff")
lowers, err := ioutil.ReadFile(path.Join(dir, lowerFile))
if err != nil {
// If no lower, just return diff directory
if os.IsNotExist(err) {
return diffDir, nil
}
if err != nil && !os.IsNotExist(err) {
return "", err
}
@@ -590,6 +586,10 @@ func (d *Driver) Get(id, mountLabel string) (_ string, retErr error) {
newlowers = newlowers + ":" + lower
}
}
if len(lowers) == 0 {
newlowers = path.Join(dir, "empty")
lowers = []byte(newlowers)
}
mergedDir := path.Join(dir, "merged")
if count := d.ctr.Increment(mergedDir); count > 1 {
@@ -660,11 +660,7 @@ func (d *Driver) Put(id string) error {
if count := d.ctr.Decrement(mountpoint); count > 0 {
return nil
}
if _, err := ioutil.ReadFile(path.Join(dir, lowerFile)); err != nil {
// If no lower, we used the diff directory, so no work to do
if os.IsNotExist(err) {
return nil
}
if _, err := ioutil.ReadFile(path.Join(dir, lowerFile)); err != nil && !os.IsNotExist(err) {
return err
}
if err := unix.Unmount(mountpoint, unix.MNT_DETACH); err != nil {
@@ -701,9 +697,9 @@ func (d *Driver) isParent(id, parent string) bool {
}
 // ApplyDiff applies the new layer into a root
-func (d *Driver) ApplyDiff(id string, parent string, diff io.Reader) (size int64, err error) {
+func (d *Driver) ApplyDiff(id, parent, mountLabel string, diff io.Reader) (size int64, err error) {
 	if !d.isParent(id, parent) {
-		return d.naiveDiff.ApplyDiff(id, parent, diff)
+		return d.naiveDiff.ApplyDiff(id, parent, mountLabel, diff)
 	}
 	applyDir := d.getDiffPath(id)
@@ -730,18 +726,18 @@ func (d *Driver) getDiffPath(id string) string {
 // DiffSize calculates the changes between the specified id
 // and its parent and returns the size in bytes of the changes
 // relative to its base filesystem directory.
-func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
+func (d *Driver) DiffSize(id, parent, mountLabel string) (size int64, err error) {
 	if useNaiveDiff(d.home) || !d.isParent(id, parent) {
-		return d.naiveDiff.DiffSize(id, parent)
+		return d.naiveDiff.DiffSize(id, parent, mountLabel)
 	}
 	return directory.Size(d.getDiffPath(id))
 }
 // Diff produces an archive of the changes between the specified
 // layer and its parent layer which may be "".
-func (d *Driver) Diff(id, parent string) (io.ReadCloser, error) {
+func (d *Driver) Diff(id, parent, mountLabel string) (io.ReadCloser, error) {
 	if useNaiveDiff(d.home) || !d.isParent(id, parent) {
-		return d.naiveDiff.Diff(id, parent)
+		return d.naiveDiff.Diff(id, parent, mountLabel)
 	}
 	diffPath := d.getDiffPath(id)
@@ -756,9 +752,9 @@ func (d *Driver) Diff(id, parent string) (io.ReadCloser, error) {
 // Changes produces a list of changes between the specified layer
 // and its parent layer. If parent is "", then all changes will be ADD changes.
-func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
+func (d *Driver) Changes(id, parent, mountLabel string) ([]archive.Change, error) {
 	if useNaiveDiff(d.home) || !d.isParent(id, parent) {
-		return d.naiveDiff.Changes(id, parent)
+		return d.naiveDiff.Changes(id, parent, mountLabel)
 	}
 	// Overlay doesn't have snapshots, so we need to get changes from all parent
 	// layers.


@@ -472,7 +472,7 @@ func (d *Driver) Cleanup() error {
 // Diff produces an archive of the changes between the specified
 // layer and its parent layer which may be "".
 // The layer should be mounted when calling this function
-func (d *Driver) Diff(id, parent string) (_ io.ReadCloser, err error) {
+func (d *Driver) Diff(id, parent, mountLabel string) (_ io.ReadCloser, err error) {
 	panicIfUsedByLcow()
 	rID, err := d.resolveID(id)
 	if err != nil {
@@ -509,7 +509,7 @@ func (d *Driver) Diff(id, parent string) (_ io.ReadCloser, err error) {
 // Changes produces a list of changes between the specified layer
 // and its parent layer. If parent is "", then all changes will be ADD changes.
 // The layer should not be mounted when calling this function.
-func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
+func (d *Driver) Changes(id, parent, mountLabel string) ([]archive.Change, error) {
 	panicIfUsedByLcow()
 	rID, err := d.resolveID(id)
 	if err != nil {
@@ -565,7 +565,7 @@ func (d *Driver) Changes(id, parent string) ([]archive.Change, error) {
 // layer with the specified id and parent, returning the size of the
 // new layer in bytes.
 // The layer should not be mounted when calling this function
-func (d *Driver) ApplyDiff(id, parent string, diff io.Reader) (int64, error) {
+func (d *Driver) ApplyDiff(id, parent, mountLabel string, diff io.Reader) (int64, error) {
 	panicIfUsedByLcow()
 	var layerChain []string
 	if parent != "" {
@@ -600,14 +600,14 @@ func (d *Driver) ApplyDiff(id, parent string, diff io.Reader) (int64, error) {
 // DiffSize calculates the changes between the specified layer
 // and its parent and returns the size in bytes of the changes
 // relative to its base filesystem directory.
-func (d *Driver) DiffSize(id, parent string) (size int64, err error) {
+func (d *Driver) DiffSize(id, parent, mountLabel string) (size int64, err error) {
 	panicIfUsedByLcow()
 	rPId, err := d.resolveID(parent)
 	if err != nil {
 		return
 	}
-	changes, err := d.Changes(id, rPId)
+	changes, err := d.Changes(id, rPId, mountLabel)
 	if err != nil {
 		return
 	}
