Compare commits


110 Commits

Author SHA1 Message Date
Ettore Di Giacinto
b93357e36c Tag 0.11.0 2021-02-10 09:07:02 +01:00
Ettore Di Giacinto
518fb16067 Add IsVirtual() to compile spec 2021-02-09 19:05:16 +01:00
Ettore Di Giacinto
4d9297e3da Be sure to copy exact folder structure when generating final images
'COPY *' has a different behavior than 'COPY .': when wildcard patterns are
involved, COPY behaves differently, unpacking the directory contents to the
root.

Enhance unit test to cover the scenario as well
2021-02-09 18:27:53 +01:00
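To illustrate the behavior this commit describes, the fix boils down to preferring 'COPY . /' over 'COPY * /' in the generated Dockerfile. The sketch below is illustrative only; writeFinalImageDockerfile is a hypothetical name, not luet's actual code.

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// writeFinalImageDockerfile emits a Dockerfile that copies the whole build
// context with "COPY . /", preserving the folder structure. "COPY * /" would
// instead expand the wildcard and unpack the content of each matched
// directory directly into the image root.
func writeFinalImageDockerfile(contextDir string) error {
    dockerfile := "FROM scratch\nCOPY . /\n"
    return os.WriteFile(filepath.Join(contextDir, "Dockerfile"), []byte(dockerfile), 0o644)
}

func main() {
    if err := writeFinalImageDockerfile("."); err != nil {
        fmt.Println("error:", err)
    }
}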
Ettore Di Giacinto
544895e051 Handle empty packages when pushing final images
We used to create Dockerfiles blindly, assuming there is content, but
that's not the case for virtual packages.

Due to https://github.com/moby/moby/issues/38039 we are forced into an
"unpleasant" workaround, as we can't create empty FROM scratch images
and export them.
2021-02-09 18:27:53 +01:00
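A minimal sketch of this kind of workaround, assuming the build context is a plain directory; the ".virtual" placeholder name is an assumption for illustration, not necessarily what luet writes.

package artifact

import (
    "os"
    "path/filepath"
)

// ensureNonEmptyContext writes a placeholder file into an empty build
// context, since a completely empty "FROM scratch" image cannot be created
// and exported (see moby/moby#38039).
func ensureNonEmptyContext(contextDir string) error {
    entries, err := os.ReadDir(contextDir)
    if err != nil {
        return err
    }
    if len(entries) > 0 {
        return nil // the context already has content, nothing to do
    }
    return os.WriteFile(filepath.Join(contextDir, ".virtual"), nil, 0o644)
}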
Ettore Di Giacinto
fd80bb526e Add DirectoryIsEmpty 2021-02-09 16:51:10 +01:00
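A minimal sketch of such a helper (the actual implementation in luet may differ): open the directory and read at most one entry, treating io.EOF as "empty".

package helpers

import (
    "io"
    "os"
)

// DirectoryIsEmpty reports whether dir contains no entries. It reads at most
// one directory entry, so it stays cheap even on large trees.
func DirectoryIsEmpty(dir string) (bool, error) {
    f, err := os.Open(dir)
    if err != nil {
        return false, err
    }
    defer f.Close()

    _, err = f.Readdirnames(1)
    if err == io.EOF {
        return true, nil
    }
    return false, err
}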
Ettore Di Giacinto
505f07f056 Add asciinema to README 2021-02-01 12:25:31 +01:00
Ettore Di Giacinto
8bce3f1f00 Merge pull request #181 from mudler/trim_domain_name_from_cached_image_reference
Trim the Domain Name from cached image references
2021-01-29 16:25:50 +01:00
David Cassany
18e9ce4557 Trim the Domain Name from cached image references
This commit removes the Domain Name, if any, from the cached image
reference before computing the image fingerprint. This way the same
image, if stored in some other mirror, is still seen as the same one.

Fixes #158
2021-01-29 15:11:52 +01:00
Daniele Rondina
9f73a334b3 cmd/tree/validate: Avoid nil pointer if solution doesn't contain the dependency 2021-01-25 19:13:33 +01:00
Ettore Di Giacinto
4eab1eb738 Tag bugfix release 0.10.2 2021-01-25 14:36:01 +01:00
Ettore Di Giacinto
685bbf46a6 Refactor common code while compiling regexes 2021-01-25 12:20:04 +01:00
Ettore Di Giacinto
d89225f37d Handle namedpipe copy
CopyFile relies on copy.Copy from https://github.com/otiai10/copy, which
doesn't handle copying named pipes. Handle it here until
https://github.com/otiai10/copy/issues/47 is fixed.

This caused luet to hang while copying packages that have named pipes in
them.

Also invert the compression argument for gzip, as it causes slowness.
2021-01-25 12:20:04 +01:00
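A hedged, Unix-only sketch of the named-pipe handling described above; copyNamedPipe is a hypothetical helper, not luet's actual code. The idea is to detect the FIFO and recreate it with Mkfifo instead of copying its contents, which would block.

package helpers

import (
    "os"
    "syscall"
)

// copyNamedPipe recreates a FIFO at dst when src is a named pipe, returning
// true when it handled the file so callers can fall back to the regular copy
// path otherwise. Copying a FIFO's contents would block forever and hang the
// package copy.
func copyNamedPipe(src, dst string) (bool, error) {
    fi, err := os.Lstat(src)
    if err != nil {
        return false, err
    }
    if fi.Mode()&os.ModeNamedPipe == 0 {
        return false, nil
    }
    return true, syscall.Mkfifo(dst, uint32(fi.Mode().Perm()))
}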
Ettore Di Giacinto
55d34a3b40 Adapt artifact tests after delta changes 2021-01-24 20:07:33 +01:00
Ettore Di Giacinto
85b5c96bdd Promote image-building messages to info
The user wants to know what's going on in this case. Image builds can
also take a long time.
2021-01-24 19:09:09 +01:00
Ettore Di Giacinto
6f5f400765 Don't bail out if image doesn't exist locally
The backend will figure out whether we have the image or not, and will
otherwise attempt to pull it.

Skip the retrieve integration test with img, as it's not supported.
2021-01-24 19:05:21 +01:00
Ettore Di Giacinto
be87861657 img: pull image if not locally present while extracting 2021-01-24 13:17:11 +01:00
Ettore Di Giacinto
76e5d37895 ci: temporary disable final image tests with img 2021-01-24 13:06:23 +01:00
Ettore Di Giacinto
8aca246f51 ci: login to quay for integration tests 2021-01-24 12:57:15 +01:00
Ettore Di Giacinto
be7b56bae3 Split ImageDefinitionToTar test
ImageDefinitionToTar is not actually used by the compiler code, but it can
be handy from an API perspective, so we keep it.
2021-01-24 12:56:25 +01:00
Ettore Di Giacinto
eae2382764 unpackDelta needs a rootfs to extract files from 2021-01-24 12:41:56 +01:00
Ettore Di Giacinto
76076c8f51 Run integration tests on img as well 2021-01-24 12:34:44 +01:00
Ettore Di Giacinto
7d11df3225 Simplify delta generation, and avoid two-pass with img backend
This changeset also drops --keep-exported-images, which is largely unused
and can be replaced with a plugin, or by manually exporting the
resulting images.
2021-01-24 12:27:07 +01:00
Ettore Di Giacinto
0ae8cbb877 Tag 0.10.1 2021-01-23 22:02:18 +01:00
Ettore Di Giacinto
b9f0ef1c55 Implement ImageExists in the img backend 2021-01-23 22:01:29 +01:00
Ettore Di Giacinto
8b4b249211 Tag 0.10.0 2021-01-22 21:21:59 +01:00
Niklas Engvall
23bc42bb15 Latest version, sudo check, autoinstall repos
Grab the latest version, falling back to a default since luet auto-updates.
Check whether the script is run as a normal user or as sudo/root.
Auto-install repos with -y.
2021-01-22 21:19:02 +01:00
Ettore Di Giacinto
715ee1db08 Add unit test for creating docker repositories 2021-01-22 19:59:26 +01:00
Ettore Di Giacinto
d5f70aea26 Add test for Artifact GetUncompressedName() 2021-01-22 19:15:54 +01:00
Ettore Di Giacinto
3bbc2c4691 Add multiarch-build-small target to shrink release builds 2021-01-22 19:07:17 +01:00
Ettore Di Giacinto
c7e5c9b1fd Enhance create-repo CLI args help 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
7c507fe272 Set GHA workflow to access to testing docker image 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
a037bc545b Add comment on required permission to unpack 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
d2e6409451 Add docker client unit test 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
a2df02e1bf Add comments on repository files #159 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
f0b8e4556e Make warning messages less prominent 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
161b5f40f7 Add file/line/function to debug messages 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
485b8d8c89 Add integration test for docker repository types 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
cad1deb2c6 Allow to run single integration tests 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
928c305ff1 Respect metadata fields about the tree filename
Previously, what was explicitly asked for during repo creation was
always ignored.
2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
a192d48610 Reword CLI help for --force-push 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
a6e7a3059c Respect artifact extension when populating cache
Previously, the cache did not hit correctly.
2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
9a2ff0a3e2 Don't clean up images from the system during integration tests
Should fix #167
2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
9b2b877a53 Avoid building images if already present 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
75fad993f3 Fixup repository revision bump 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
4b4c3a2e14 Adapt tests to new constructor changes 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
c24a3a35f1 Update gomod and vendor 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
163f93067c Temporarily rework genuinetools/img code for pull/unpack without Docker 2021-01-22 16:55:51 +01:00
Ettore Di Giacinto
91ea2ed99f Make docker image repositories actually work
Several changes are included:
- Expose ensureDir in helpers, and call it in the Docker client. In
  other implementations that was handled by CopyFile behind the scenes,
  but that's not the case here
- Create accessor in Artifact to create Artifact objects from files.
  This is handy when we have to carry over downloaded package content
  into caches when artifacts are already verified
- Fix various issues around the imagePush flag, so now trees are pushed
  forcefully each time
- Take into consideration the real artifact name when pushing single
  files in the docker image. This behavior should be changed eventually,
  because single files which aren't repository packages are now in their
  own docker image, but we should have just one that brings the required
  metadata altogether.
2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
b27b146b45 Refactor artifact Verify() 2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
7b25a54653 Update gomod and vendor 2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
dbd37afced Add docker client #169 2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
2f459c0469 Use only one docker image reference to push repository.
Instead of generating different images, which are harder to track and
clean, we generate a single image with various tags, corresponding to
the packages available in the repositories.

Tagging and pushing separate images will be possible with the plugin
mechanism.
2021-01-22 16:54:19 +01:00
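A rough sketch of the single-image tagging scheme described above; the tag layout and sanitizing rules here are assumptions for illustration, not the exact format luet uses.

package repository

import (
    "fmt"
    "strings"
)

// packageImageTag builds a tag for a package inside the single repository
// image, e.g. "quay.io/org/repo:cat-foo-1.0". Characters that are not valid
// in a docker tag (such as "/" or "+") are replaced.
func packageImageTag(repositoryImage, category, name, version string) string {
    tag := fmt.Sprintf("%s-%s-%s", category, name, version)
    tag = strings.NewReplacer("/", "-", "+", "-").Replace(tag)
    return fmt.Sprintf("%s:%s", repositoryImage, tag)
}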
Ettore Di Giacinto
ad455edafc Allow pushing images in create-repo
Also add the --force flag to allow image overwrites

Related to #169
2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
d9286a1a1e Download repository metadata with client DownloadFile, uniform downloads for Docker repositories 2021-01-22 16:54:19 +01:00
Ettore Di Giacinto
322fe72ef2 Generate repository metadata and packages for docker repository type
Drop image-repository on create-repo. In case of a docker repository, --output is the image reference to use.
Also restore default output build dir.

See also: #169
2021-01-22 16:53:52 +01:00
Ettore Di Giacinto
88b5576611 Expect full image name to GenerateFinalImage
We will also re-use this method when generating repository metadata
2021-01-18 12:26:22 +01:00
Ettore Di Giacinto
a1f4c28973 Add GenerateFinalImage to package artifacts
GenerateFinalImage generates a docker image from scratch with the
artifact content.

Related to #169
2021-01-18 12:08:47 +01:00
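A hedged sketch of what generating an image "from scratch with the artifact content" can look like; the Backend interface, generateFinalImage and unpack are hypothetical stand-ins, not luet's actual API.

package artifact

import (
    "os"
    "path/filepath"
)

// Backend is a hypothetical subset of a compiler backend: it builds an image
// from a context directory containing a Dockerfile.
type Backend interface {
    BuildImage(contextDir, dockerfile, imageName string) error
}

// generateFinalImage unpacks the artifact into a temporary build context,
// writes a minimal "FROM scratch" Dockerfile and asks the backend to build
// the image. A real implementation would also exclude the Dockerfile itself
// from the copy (e.g. via .dockerignore).
func generateFinalImage(b Backend, artifactPath, imageName string, unpack func(src, dst string) error) error {
    ctx, err := os.MkdirTemp("", "luet-final-image")
    if err != nil {
        return err
    }
    defer os.RemoveAll(ctx)

    if err := unpack(artifactPath, ctx); err != nil {
        return err
    }
    dockerfile := filepath.Join(ctx, "Dockerfile")
    if err := os.WriteFile(dockerfile, []byte("FROM scratch\nCOPY . /\n"), 0o644); err != nil {
        return err
    }
    return b.BuildImage(ctx, dockerfile, imageName)
}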
Ettore Di Giacinto
43b0b11028 Define a build context for backends 2021-01-18 11:06:54 +01:00
Ettore Di Giacinto
429e9757db Select cwd as default tree path for commands 2021-01-18 10:40:41 +01:00
Ettore Di Giacinto
3d086c9b17 Merge pull request #172 from trappiz/patch-2
Update version, utilize curl fully instead of mixing.
2021-01-16 20:26:08 +01:00
Niklas Engvall
bb610dff49 Update version, utilize curl fully instead of mixing. 2021-01-16 19:55:46 +01:00
Ettore Di Giacinto
21d8ce6050 Add link to gophersat 2021-01-12 15:54:51 +01:00
Ettore Di Giacinto
f8c64c38d3 Link to building strategies in README 2021-01-12 15:54:12 +01:00
Ettore Di Giacinto
e219caf720 Fixup broken link in README 2021-01-12 15:53:08 +01:00
Ettore Di Giacinto
ecae2873d6 zstd extension suffix is zst, not zstd
Fixes #163
2021-01-12 15:52:34 +01:00
Ettore Di Giacinto
48f17dbc7a Tag 0.9.26 2021-01-12 11:25:05 +01:00
Ettore Di Giacinto
bd3e483f0f Be sure to cache packages with same fingerprint only once
This makes sure that we are also fast when processing invalid trees.
2021-01-12 10:30:31 +01:00
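A tiny sketch of the de-duplication this describes, keyed on the package fingerprint (types and names are illustrative only).

package tree

// fingerprintCache remembers which fingerprints were already processed, so a
// package appearing several times in a (possibly invalid) tree is handled
// only once.
type fingerprintCache map[string]struct{}

// seen records fp and reports whether it had already been recorded.
func (c fingerprintCache) seen(fp string) bool {
    if _, ok := c[fp]; ok {
        return true
    }
    c[fp] = struct{}{}
    return false
}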
Ettore Di Giacinto
40fc948c6e Stabilize tests after changes
With BuildWorld() we get more results back (now we return the whole
model, including the false assertions).

Besides, solving with BuildWorld() now detects an invalid case:
when we supply a provided package, the definitionDB shouldn't also
explicitly supply the package that is being provided. This would cause
packages to be 'shadowed' between repositories.

The test was invalid before, and shouldn't have contained A1. Moved the
test to Pending to inspect it further in subsequent dev iterations.
2021-01-11 23:35:18 +01:00
Ettore Di Giacinto
186fa58ab0 Use BuildWorld() instead of BuildPartialWorld() in solver.Solve
We now have a stronger cache system, and we also pre-compute RevDeps in
a hashmap; this makes BuildWorld() much more performant.
2021-01-11 20:11:51 +01:00
Ettore Di Giacinto
7e04ad67f6 Merge branch 'master' into develop 2021-01-08 17:43:25 +01:00
Ettore Di Giacinto
6e7ec890ca Update issue templates 2021-01-08 17:42:35 +01:00
Ettore Di Giacinto
7390623e40 Don't warn the user about accepting the license in case of uninstall
Closes #160
2021-01-07 10:36:57 +01:00
Ettore Di Giacinto
bd0d2765aa Mark executed finalizers at beginning
Don't retry failing finalizers, but mark them as executed right away.
2021-01-04 17:04:20 +01:00
Ettore Di Giacinto
ddd61f769c Tag 0.9.25 2021-01-04 00:19:14 +01:00
Ettore Di Giacinto
1dd91b06bd Cleanup docker images on test teardown 2021-01-03 23:56:59 +01:00
Ettore Di Giacinto
11df314a26 Avoid clashing fixture version 2021-01-03 23:56:49 +01:00
Ettore Di Giacinto
2cbf547873 Add new fixtures 2021-01-03 23:37:34 +01:00
Ettore Di Giacinto
43f5b69c18 Let the build fail when depending on virtuals
This is currently not a valid use case. Virtuals are empty packages and
if the `build.yaml` is completely empty, nothing could depend on them.

Let's try not to be too smart: build the package image if a source
image is supplied, and fail hard when we depend on a virtual at build
time.
2021-01-03 23:03:01 +01:00
Ettore Di Giacinto
1fdef757b6 Adapt other bunch of fixtures to changes 2021-01-03 22:22:32 +01:00
Ettore Di Giacinto
6c27af18c8 Adapt fixture to changes 2021-01-03 21:40:35 +01:00
Ettore Di Giacinto
f57f0f9588 Adapt complex selection fixtures to new changes
We no longer generate images if packages are empty - those are now
virtuals, which just generate empty artifacts.

Virtuals are not meant to be required by other packages at build time,
because that would violate the purpose of virtual packages (they are
only useful at runtime).

This test was used to verify version selection of the best match during build
time, not to actually test any build process. Inject steps so images are
actually generated and they can depend on each other.
2021-01-03 20:50:32 +01:00
Ettore Di Giacinto
457acd0d8a Add virtual packages support 2021-01-03 20:08:04 +01:00
Ettore Di Giacinto
f2ba9e02d7 Tag 0.9.24 2021-01-02 22:53:08 +01:00
Ettore Di Giacinto
a81d0bc3a3 Build assertions when swapping
When we are swapping packages, we do not run the solver to gather things
to install, but we trust the given list when calling computeInstall. In this case, the assertion
returned by computeInstall is empty, as we force l.Options.NoDeps.

This change generates the assertion list while calling computeSwap so
it's available later when we call ExecuteFinalizer.
2021-01-02 21:28:54 +01:00
Ettore Di Giacinto
45e8553d26 Tag 0.9.23 2020-12-30 02:51:19 +01:00
Ettore Di Giacinto
bb48326039 Adapt solver tests after changes 2020-12-30 02:05:55 +01:00
Ettore Di Giacinto
dce8b52293 Use Conflicts() which already lists revdeps on failure 2020-12-30 01:17:31 +01:00
Ettore Di Giacinto
0652fce55e Update revdeps table while populating Cache
When we cycle, we don't necessarily have all the packages in the DB
yet.

With this change, luet annotates the reverse dependency without any version, and we try to
update the revdeps table when new items get added, by checking the version
required in the selector.

Thanks to @joostruis for noticing the issue
2020-12-30 01:12:35 +01:00
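A simplified, hypothetical sketch of the idea: record the reverse dependency even when the required package is not in the DB yet, then reconcile pending entries against the selector whenever a new package gets added. All types and names below are illustrative only.

package database

// pendingRevdep records a reverse dependency whose target package is not in
// the DB yet: we only know the requirer and the version selector it asked for.
type pendingRevdep struct {
    Requirer string
    Selector string // e.g. ">=1.0"
}

// revdeps maps a package name to the requirers still waiting for it.
type revdeps map[string][]pendingRevdep

// resolveOnAdd is called when a new package version is added: every pending
// entry whose selector matches the version becomes a concrete reverse
// dependency. matches() stands in for the real selector check.
func (r revdeps) resolveOnAdd(name, version string, matches func(version, selector string) bool) []pendingRevdep {
    var resolved, remaining []pendingRevdep
    for _, p := range r[name] {
        if matches(version, p.Selector) {
            resolved = append(resolved, p)
        } else {
            remaining = append(remaining, p)
        }
    }
    r[name] = remaining
    return resolved
}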
Ettore Di Giacinto
38c9540a1d Use DB copy in GetRevdeps in BoltDB 2020-12-30 01:12:09 +01:00
Ettore Di Giacinto
90278a034b Use ConflictsWith to check conflicts when uninstalling packages 2020-12-29 23:43:39 +01:00
Ettore Di Giacinto
55ab1894e9 Add unit test for Uninstall in Installer 2020-12-29 22:58:03 +01:00
Ettore Di Giacinto
ddebe66859 Merge pull request #157 from trappiz/patch-1
Set import name for zstd
2020-12-29 22:25:14 +01:00
Niklas Engvall
bfbcb81210 Set import name for zstd 2020-12-29 22:22:53 +01:00
Ettore Di Giacinto
062e75bc25 Add unit test for Uninstall without full 2020-12-29 22:13:26 +01:00
Ettore Di Giacinto
498edc95c8 Tag 0.9.22 2020-12-27 21:17:22 +01:00
Ettore Di Giacinto
b81ce66914 Reduce download verbosity 2020-12-27 20:21:05 +01:00
Ettore Di Giacinto
68030baf98 Revert "Dockerfile: Initialize /tmp dir"
There is no mkdir or sh in an image built from scratch.

This reverts commit 981fe5b04a.
2020-12-26 18:29:48 +01:00
Ettore Di Giacinto
9e868b69fc Update vendor 2020-12-25 10:35:18 +01:00
Ettore Di Giacinto
f871111e50 Collect errors from finalizer runs
Instead of failing and depending on the --force flag, always execute
finalizers and collect errors to determine whether the install was
successful or not.
2020-12-25 10:35:09 +01:00
Daniele Rondina
981fe5b04a Dockerfile: Initialize /tmp dir 2020-12-21 18:09:22 +01:00
Ettore Di Giacinto
8371d7aa7b Tag 0.9.21 2020-12-19 18:23:08 +01:00
Ettore Di Giacinto
736c9470cf Add db copy and clone 2020-12-19 17:45:50 +01:00
Ettore Di Giacinto
e52bc4f2b2 Refactor: get systemdb from config, which knows which one to load 2020-12-19 17:23:59 +01:00
Ettore Di Giacinto
96e877fc0b Allow uninstall to take multiple packages
And treat them as a list, instead of handling each one individually.
2020-12-19 17:16:58 +01:00
Ettore Di Giacinto
1331c3551a Renaming clashing test func 2020-12-19 16:23:30 +01:00
Ettore Di Giacinto
bfb2bdc230 Add test for replace --nodeps 2020-12-19 15:44:42 +01:00
Ettore Di Giacinto
525bfb5ebf Respect --nodeps when calling Swap from the public interface 2020-12-19 15:26:18 +01:00
Ettore Di Giacinto
f4e2f32aff Return candidate not found when appropriate 2020-12-19 14:57:42 +01:00
Ettore Di Giacinto
7cf650a8f6 Break Swap in computeSwap() and display uninstall dialog only when asked 2020-12-19 14:55:59 +01:00
Ettore Di Giacinto
2b6fe2baa1 Add luet build --wait
It allows waiting for intermediate images to be available instead of
building all of them.
2020-12-18 23:19:18 +01:00
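A hedged sketch of the waiting behaviour: instead of building an intermediate image locally, poll until it can be pulled. waitForImage, the available callback and the timings are assumptions, not luet's actual API.

package compiler

import (
    "fmt"
    "time"
)

// waitForImage polls until the image is reported as available (or the timeout
// expires), so that builds run with --wait reuse intermediate images produced
// elsewhere instead of rebuilding them locally.
func waitForImage(image string, available func(image string) bool, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for !available(image) {
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out waiting for image %s", image)
        }
        time.Sleep(5 * time.Second)
    }
    return nil
}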
867 changed files with 163765 additions and 5355 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,31 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: mudler
---
<!-- Thanks for helping us to improve Luet! We welcome all bug reports. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
**Luet version:**
<!-- Provide the output from "luet --version" -->
**CPU architecture, OS, and Version:**
<!-- Provide the output from "uname -a" -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!-- Steps to reproduce the behavior, including the luet command used -->
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Logs**
<!-- If applicable, add logs with the "--debug" flag enabled to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. -->


@@ -0,0 +1,22 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: mudler
---
<!-- Thanks for helping us to improve Luet! We welcome all feature requests. Please fill out each area of the template so we can better help you. Comments like this will be hidden when you post but you can delete them if you wish. -->
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->


@@ -11,10 +11,34 @@ jobs:
go-version: 1.14.x
- name: Checkout code
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Login to quay
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo docker login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Install deps
run: |
sudo apt-get install -y upx && sudo -E env "PATH=$PATH" make deps
sudo curl -fSL "https://github.com/genuinetools/img/releases/download/v0.5.11/img-linux-amd64" -o "/usr/bin/img"
sudo chmod a+x "/usr/bin/img"
- name: Build test
run: sudo -E env "PATH=$PATH" make multiarch-build-small
- name: Login to quay with img
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo img login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Tests with Img backend
run: |
sudo -E env "PATH=$PATH" \
env "LUET_BACKEND=img" \
make test-integration
- name: Tests
run: sudo -E env "PATH=$PATH" make deps multiarch-build test-integration test-coverage
run: |
sudo -E \
env "PATH=$PATH" \
env "TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE_REPOSITORY=${{ secrets.DOCKER_TESTING_UNIT_TEST_IMAGE }}" \
make test-integration test-coverage
- name: Build
run: sudo -E env "PATH=$PATH" make multiarch-build && sudo chmod -R 777 release/
run: sudo -E env "PATH=$PATH" make multiarch-build-small && sudo chmod -R 777 release/
- name: Release
uses: fnkr/github-action-ghr@v1
if: startsWith(github.ref, 'refs/tags/')


@@ -17,5 +17,14 @@ jobs:
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Install deps
run: |
sudo apt-get install -y upx && sudo -E env "PATH=$PATH" make deps
sudo curl -fSL "https://github.com/genuinetools/img/releases/download/v0.5.11/img-linux-amd64" -o "/usr/bin/img"
sudo chmod a+x "/usr/bin/img"
- name: Build
run: sudo -E env "PATH=$PATH" make multiarch-build-small
- name: Tests with Img backend
run: sudo -E env "PATH=$PATH" env "LUET_BACKEND=img" make test-integration
- name: Tests
run: sudo -E env "PATH=$PATH" make deps multiarch-build test-integration test-coverage
run: sudo -E env "PATH=$PATH" make test-integration test-coverage


@@ -88,6 +88,11 @@ test-docker:
--workdir /go/src/github.com/mudler/luet -ti golang:latest \
bash -c "make test"
.PHONY: multiarch-build
multiarch-build:
CGO_ENABLED=0 gox $(BUILD_PLATFORMS) -ldflags '$(LDFLAGS)' -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
multiarch-build-small:
@$(MAKE) LDFLAGS+="-s -w" multiarch-build
for file in $(ROOT_DIR)/release/* ; do \
upx --brute -1 $${file} ; \
done


@@ -6,6 +6,8 @@
[![GoDoc](https://godoc.org/github.com/mudler/luet?status.svg)](https://godoc.org/github.com/mudler/luet)
[![codecov](https://codecov.io/gh/mudler/luet/branch/master/graph/badge.svg)](https://codecov.io/gh/mudler/luet)
[![asciicast](https://asciinema.org/a/388348.svg)](https://asciinema.org/a/388348)
Luet is a multi-platform Package Manager based off from containers - it uses Docker (and others) to build packages. It has zero dependencies and it is well suitable for "from scratch" environments. It can also version entire rootfs and enables delivery of OTA-alike updates, making it a perfect fit for the Edge computing era and IoT embedded devices.
It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/) in YAML notation to define both [packages](https://luet-lab.github.io/docs/docs/concepts/packages/) and [rootfs](https://luet-lab.github.io/docs/docs/concepts/packages/#package-layers). As it is based on containers, it can be also used to build stages for Linux From Scratch installations and it can build and track updates for those systems.
@@ -18,8 +20,8 @@ It is written entirely in Golang and where used as package manager, it can run i
- It builds, installs, uninstalls and perform upgrades on machines
- Installer doesn't depend on anything ( 0 dep installer !), statically built
- You can install it aside also with your current distro package manager, and start building and distributing your packages
- Support for packages as "layers"
- [It uses SAT solving techniques to solve the deptree](https://luet-lab.github.io/docs/docs/concepts/constraints/) ( Inspired by [OPIUM](https://ranjitjhala.github.io/static/opium.pdf) )
- [Support for packages as "layers"](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#building-strategies)
- [It uses SAT solving techniques to solve the deptree](https://luet-lab.github.io/docs/docs/concepts/overview/constraints/) ( Inspired by [OPIUM](https://ranjitjhala.github.io/static/opium.pdf) )
- Support for [collections](https://luet-lab.github.io/docs/docs/concepts/packages/collections/) and [templated package definitions](https://luet-lab.github.io/docs/docs/concepts/packages/templates/)
- [Can be extended with Plugins and Extensions](https://luet-lab.github.io/docs/docs/concepts/plugins-and-extensions/)
- [Can build packages in Kubernetes (experimental)](https://github.com/mudler/luet-k8s)
@@ -51,7 +53,7 @@ run `luet --help`, any subcommand is documented as well, try e.g.: `luet build
# Dependency solving
Luet uses SAT and Reinforcement learning engine for dependency solving.
It encodes the package requirements into a SAT problem, using gophersat to solve the dependency tree and give a concrete model as result.
It encodes the package requirements into a SAT problem, using [gophersat](https://github.com/crillab/gophersat) to solve the dependency tree and give a concrete model as result.
## SAT encoding


@@ -18,6 +18,7 @@ import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/ghodss/yaml"
helpers "github.com/mudler/luet/cmd/helpers"
@@ -75,10 +76,9 @@ Build packages specifying multiple definition trees:
viper.BindPFlag("image-repository", cmd.Flags().Lookup("image-repository"))
viper.BindPFlag("push", cmd.Flags().Lookup("push"))
viper.BindPFlag("pull", cmd.Flags().Lookup("pull"))
viper.BindPFlag("wait", cmd.Flags().Lookup("wait"))
viper.BindPFlag("keep-images", cmd.Flags().Lookup("keep-images"))
LuetCfg.Viper.BindPFlag("keep-exported-images", cmd.Flags().Lookup("keep-exported-images"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
@@ -97,13 +97,12 @@ Build packages specifying multiple definition trees:
compressionType := viper.GetString("compression")
imageRepository := viper.GetString("image-repository")
values := viper.GetString("values")
wait := viper.GetBool("wait")
push := viper.GetBool("push")
pull := viper.GetBool("pull")
keepImages := viper.GetBool("keep-images")
nodeps := viper.GetBool("nodeps")
onlydeps := viper.GetBool("onlydeps")
keepExportedImages := viper.GetBool("keep-exported-images")
onlyTarget, _ := cmd.Flags().GetBool("only-target-package")
full, _ := cmd.Flags().GetBool("full")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
@@ -115,14 +114,9 @@ Build packages specifying multiple definition trees:
}
pretend, _ := cmd.Flags().GetBool("pretend")
compilerSpecs := compiler.NewLuetCompilationspecs()
var compilerBackend compiler.CompilerBackend
var db pkg.PackageDatabase
switch backendType {
case "img":
compilerBackend = backend.NewSimpleImgBackend()
case "docker":
compilerBackend = backend.NewSimpleDockerBackend()
}
compilerBackend := backend.NewBackend(backendType)
switch databaseType {
case "memory":
@@ -140,10 +134,6 @@ Build packages specifying multiple definition trees:
generalRecipe := tree.NewCompilerRecipe(db)
if len(treePaths) <= 0 {
Fatal("No tree path supplied!")
}
for _, src := range treePaths {
Info("Loading tree", src)
@@ -175,7 +165,7 @@ Build packages specifying multiple definition trees:
opts.Push = push
opts.OnlyDeps = onlydeps
opts.NoDeps = nodeps
opts.KeepImageExport = keepExportedImages
opts.Wait = wait
opts.PackageTargetOnly = onlyTarget
opts.BuildValuesFile = values
var solverOpts solver.Options
@@ -303,7 +293,7 @@ func init() {
if err != nil {
Fatal(err)
}
buildCmd.Flags().StringSliceP("tree", "t", []string{}, "Path of the tree to use.")
buildCmd.Flags().StringSliceP("tree", "t", []string{path}, "Path of the tree to use.")
buildCmd.Flags().String("backend", "docker", "backend used (docker,img)")
buildCmd.Flags().Bool("privileged", false, "Privileged (Keep permissions)")
buildCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")
@@ -312,15 +302,15 @@ func init() {
buildCmd.Flags().Bool("full", false, "Build all packages (optimized)")
buildCmd.Flags().String("values", "", "Build values file to interpolate with each package")
buildCmd.Flags().String("destination", path, "Destination folder")
buildCmd.Flags().String("destination", filepath.Join(path, "build"), "Destination folder")
buildCmd.Flags().String("compression", "none", "Compression alg: none, gzip, zstd")
buildCmd.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
buildCmd.Flags().Bool("push", false, "Push images to a hub")
buildCmd.Flags().Bool("pull", false, "Pull images from a hub")
buildCmd.Flags().Bool("wait", false, "Don't build all intermediate images, but wait for them until they are available")
buildCmd.Flags().Bool("keep-images", true, "Keep built docker images in the host")
buildCmd.Flags().Bool("nodeps", false, "Build only the target packages, skipping deps (it works only if you already built the deps locally, or by using --pull) ")
buildCmd.Flags().Bool("onlydeps", false, "Build only package dependencies")
buildCmd.Flags().Bool("keep-exported-images", false, "Keep exported images used during building")
buildCmd.Flags().Bool("only-target-package", false, "Build packages of only the required target. Otherwise builds all the necessary ones not present in the destination")
buildCmd.Flags().String("solver-type", "", "Solver strategy")
buildCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")


@@ -16,8 +16,10 @@ package cmd
import (
"os"
"path/filepath"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
@@ -54,6 +56,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
viper.BindPFlag("packages", cmd.Flags().Lookup("packages"))
viper.BindPFlag("tree", cmd.Flags().Lookup("tree"))
viper.BindPFlag("output", cmd.Flags().Lookup("output"))
viper.BindPFlag("backend", cmd.Flags().Lookup("backend"))
viper.BindPFlag("name", cmd.Flags().Lookup("name"))
viper.BindPFlag("descr", cmd.Flags().Lookup("descr"))
viper.BindPFlag("urls", cmd.Flags().Lookup("urls"))
@@ -64,6 +67,10 @@ Create a repository from the metadata description defined in the luet.yaml confi
viper.BindPFlag("meta-filename", cmd.Flags().Lookup("meta-filename"))
viper.BindPFlag("reset-revision", cmd.Flags().Lookup("reset-revision"))
viper.BindPFlag("repo", cmd.Flags().Lookup("repo"))
viper.BindPFlag("force-push", cmd.Flags().Lookup("force-push"))
viper.BindPFlag("push-images", cmd.Flags().Lookup("push-images"))
},
Run: func(cmd *cobra.Command, args []string) {
var err error
@@ -82,9 +89,13 @@ Create a repository from the metadata description defined in the luet.yaml confi
metatype := viper.GetString("meta-compression")
metaName := viper.GetString("meta-filename")
source_repo := viper.GetString("repo")
backendType := viper.GetString("backend")
treeFile := installer.NewDefaultTreeRepositoryFile()
metaFile := installer.NewDefaultMetaRepositoryFile()
compilerBackend := backend.NewBackend(backendType)
force := viper.GetBool("force-push")
imagePush := viper.GetBool("push-images")
if source_repo != "" {
// Search for system repository
@@ -107,11 +118,11 @@ Create a repository from the metadata description defined in the luet.yaml confi
lrepo.Priority,
packages,
treePaths,
pkg.NewInMemoryDatabase(false))
pkg.NewInMemoryDatabase(false), compilerBackend, dst, imagePush, force)
} else {
repo, err = installer.GenerateRepository(name, descr, t, urls, 1, packages,
treePaths, pkg.NewInMemoryDatabase(false))
treePaths, pkg.NewInMemoryDatabase(false), compilerBackend, dst, imagePush, force)
}
if err != nil {
@@ -137,7 +148,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
repo.SetRepositoryFile(installer.REPOFILE_TREE_KEY, treeFile)
repo.SetRepositoryFile(installer.REPOFILE_META_KEY, metaFile)
err = repo.Write(dst, reset)
err = repo.Write(dst, reset, true)
if err != nil {
Fatal("Error: " + err.Error())
}
@@ -149,15 +160,19 @@ func init() {
if err != nil {
Fatal(err)
}
createrepoCmd.Flags().String("packages", path, "Packages folder (output from build)")
createrepoCmd.Flags().StringSliceP("tree", "t", []string{}, "Path of the source trees to use.")
createrepoCmd.Flags().String("output", path, "Destination folder")
createrepoCmd.Flags().String("packages", filepath.Join(path, "build"), "Packages folder (output from build)")
createrepoCmd.Flags().StringSliceP("tree", "t", []string{path}, "Path of the source trees to use.")
createrepoCmd.Flags().String("output", filepath.Join(path, "build"), "Destination for generated archives. With 'docker' repository type, it should be an image reference (e.g 'foo/bar')")
createrepoCmd.Flags().String("name", "luet", "Repository name")
createrepoCmd.Flags().String("descr", "luet", "Repository description")
createrepoCmd.Flags().StringSlice("urls", []string{}, "Repository URLs")
createrepoCmd.Flags().String("type", "disk", "Repository type (disk)")
createrepoCmd.Flags().String("type", "disk", "Repository type (disk, http, docker)")
createrepoCmd.Flags().Bool("reset-revision", false, "Reset repository revision.")
createrepoCmd.Flags().String("repo", "", "Use repository defined in configuration.")
createrepoCmd.Flags().String("backend", "docker", "backend used (docker,img)")
createrepoCmd.Flags().Bool("force-push", false, "Force overwrite of docker images if already present online")
createrepoCmd.Flags().Bool("push-images", false, "Enable/Disable docker image push for docker repositories")
createrepoCmd.Flags().String("tree-compression", "gzip", "Compression alg: none, gzip, zstd")
createrepoCmd.Flags().String("tree-filename", installer.TREE_TARBALL, "Repository tree filename")


@@ -17,7 +17,6 @@ package cmd_database
import (
"io/ioutil"
"path/filepath"
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/logger"
@@ -51,7 +50,7 @@ For reference, inspect a "metadata.yaml" file generated while running "luet buil
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
systemDB := LuetCfg.GetSystemDB()
for _, a := range args {
dat, err := ioutil.ReadFile(a)
@@ -63,13 +62,6 @@ For reference, inspect a "metadata.yaml" file generated while running "luet buil
Fatal("Failed reading yaml ", a, ": ", err.Error())
}
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
files := art.GetFiles()
if _, err := systemDB.CreatePackage(art.GetCompileSpec().GetPackage()); err != nil {


@@ -16,13 +16,10 @@
package cmd_database
import (
"path/filepath"
. "github.com/mudler/luet/pkg/logger"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
@@ -44,7 +41,7 @@ This commands takes multiple packages as arguments and prunes their entries from
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
systemDB := LuetCfg.GetSystemDB()
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
@@ -52,13 +49,6 @@ This commands takes multiple packages as arguments and prunes their entries from
Fatal("Invalid package string ", a, ": ", err.Error())
}
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
if err := systemDB.RemovePackage(pack); err != nil {
Fatal("Failed removing ", a, ": ", err.Error())
}


@@ -16,7 +16,6 @@ package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
@@ -63,7 +62,6 @@ To force install a package:
},
Run: func(cmd *cobra.Command, args []string) {
var toInstall pkg.Packages
var systemDB pkg.PackageDatabase
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
@@ -120,13 +118,7 @@ To force install a package:
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
err := inst.Install(toInstall, system)
if err != nil {
Fatal("Error: " + err.Error())


@@ -16,13 +16,11 @@ package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
@@ -42,7 +40,6 @@ var reclaimCmd = &cobra.Command{
It scans the target file system, and if finds a match with a package available in the repositories, it marks as installed in the system database.
`,
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
repos := installer.Repositories{}
@@ -65,13 +62,7 @@ It scans the target file system, and if finds a match with a package available i
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
err := inst.Reclaim(system)
if err != nil {
Fatal("Error: " + err.Error())


@@ -16,7 +16,6 @@ package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
@@ -54,7 +53,6 @@ var replaceCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
var toUninstall pkg.Packages
var toAdd pkg.Packages
var systemDB pkg.PackageDatabase
f := LuetCfg.Viper.GetStringSlice("for")
stype := LuetCfg.Viper.GetString("solver.type")
@@ -120,13 +118,7 @@ var replaceCmd = &cobra.Command{
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
err := inst.Swap(toUninstall, toAdd, system)
if err != nil {
Fatal("Error: " + err.Error())


@@ -40,7 +40,7 @@ var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}
const (
LuetCLIVersion = "0.9.20"
LuetCLIVersion = "0.11.0"
LuetEnvPrefix = "LUET"
)


@@ -17,7 +17,6 @@ package cmd
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/ghodss/yaml"
@@ -112,7 +111,6 @@ Search can also return results in the terminal in different ways: as terminal ou
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
var results Results
if len(args) > 1 {
Fatal("Wrong number of arguments (expected 1)")
@@ -213,13 +211,7 @@ Search can also return results in the terminal in different ways: as terminal ou
}
} else {
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
var err error
iMatches := pkg.Packages{}


@@ -18,6 +18,7 @@ package cmd_tree
import (
"fmt"
"os"
//. "github.com/mudler/luet/pkg/config"
"github.com/ghodss/yaml"
@@ -127,9 +128,12 @@ func NewTreeImageCommand() *cobra.Command {
}
},
}
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().StringArrayP("tree", "t", []string{}, "Path of the tree to use.")
ans.Flags().StringArrayP("tree", "t", []string{path}, "Path of the tree to use.")
ans.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
return ans


@@ -18,6 +18,7 @@ package cmd_tree
import (
"fmt"
"os"
"sort"
//. "github.com/mudler/luet/pkg/config"
@@ -266,7 +267,10 @@ func NewTreePkglistCommand() *cobra.Command {
},
}
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
ans.Flags().BoolP("buildtime", "b", false, "Build time match")
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().Bool("revdeps", false, "Search package reverse dependencies")
@@ -274,7 +278,7 @@ func NewTreePkglistCommand() *cobra.Command {
ans.Flags().BoolP("verbose", "v", false, "Add package version")
ans.Flags().BoolP("full", "f", false, "Show package detail")
ans.Flags().StringArrayP("tree", "t", []string{}, "Path of the tree to use.")
ans.Flags().StringArrayP("tree", "t", []string{path}, "Path of the tree to use.")
ans.Flags().StringSliceVarP(&matches, "matches", "m", []string{},
"Include only matched packages from list. (Use string as regex).")
ans.Flags().StringSliceVarP(&excludes, "exclude", "e", []string{},


@@ -244,10 +244,33 @@ func validatePackage(p pkg.Package, checkType string, opts *ValidateOpts, recipe
Spinner(32)
solution, err := depSolver.Install(pkg.Packages{r})
ass := solution.SearchByName(r.GetPackageName())
if err == nil {
_, err = solution.Order(reciper.GetDatabase(), ass.Package.GetFingerPrint())
}
SpinnerStop()
if err == nil {
if ass == nil {
ans = errors.New(
fmt.Sprintf("[%9s] %s/%s-%s: solution doesn't retrieve package %s/%s-%s.",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
))
if LuetCfg.GetGeneral().Debug {
for idx, pa := range solution {
fmt.Println(fmt.Sprintf("[%9s] %s/%s-%s: solution %d: %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(), idx,
pa.Package.GetPackageName()))
}
}
Error(ans.Error())
opts.IncrBrokenDeps()
validpkg = false
} else {
_, err = solution.Order(reciper.GetDatabase(), ass.Package.GetFingerPrint())
}
}
if err != nil {
@@ -458,12 +481,15 @@ func NewTreeValidateCommand() *cobra.Command {
}
},
}
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
ans.Flags().Bool("only-runtime", false, "Check only runtime dependencies.")
ans.Flags().Bool("only-buildtime", false, "Check only buildtime dependencies.")
ans.Flags().BoolP("with-solver", "s", false,
"Enable check of requires also with solver.")
ans.Flags().StringSliceVarP(&treePaths, "tree", "t", []string{},
ans.Flags().StringSliceVarP(&treePaths, "tree", "t", []string{path},
"Path of the tree to use.")
ans.Flags().StringSliceVarP(&excludes, "exclude", "e", []string{},
"Exclude matched packages from analysis. (Use string as regex).")


@@ -16,7 +16,6 @@ package cmd
import (
"os"
"path/filepath"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
@@ -45,64 +44,58 @@ var uninstallCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
toRemove := []pkg.Package{}
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
toRemove = append(toRemove, pack)
}
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps, _ := cmd.Flags().GetBool("nodeps")
full, _ := cmd.Flags().GetBool("full")
checkconflicts, _ := cmd.Flags().GetBool("conflictscheck")
fullClean, _ := cmd.Flags().GetBool("full-clean")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps, _ := cmd.Flags().GetBool("nodeps")
full, _ := cmd.Flags().GetBool("full")
checkconflicts, _ := cmd.Flags().GetBool("conflictscheck")
fullClean, _ := cmd.Flags().GetBool("full-clean")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts
if concurrent {
LuetCfg.GetSolverOptions().Implementation = solver.ParallelSimple
} else {
LuetCfg.GetSolverOptions().Implementation = solver.SingleCoreSimple
}
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts
if concurrent {
LuetCfg.GetSolverOptions().Implementation = solver.ParallelSimple
} else {
LuetCfg.GetSolverOptions().Implementation = solver.SingleCoreSimple
}
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
// Load config protect configs
installer.LoadConfigProtectConfs(LuetCfg)
// Load config protect configs
installer.LoadConfigProtectConfs(LuetCfg)
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
FullUninstall: full,
FullCleanUninstall: fullClean,
CheckConflicts: checkconflicts,
Ask: !yes,
PreserveSystemEssentialData: true,
})
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
FullUninstall: full,
FullCleanUninstall: fullClean,
CheckConflicts: checkconflicts,
Ask: !yes,
PreserveSystemEssentialData: true,
})
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err = inst.Uninstall(pack, system)
if err != nil {
Fatal("Error: " + err.Error())
}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
if err := inst.Uninstall(system, toRemove...); err != nil {
Fatal("Error: " + err.Error())
}
},
}


@@ -16,12 +16,10 @@ package cmd
import (
"os"
"path/filepath"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/spf13/cobra"
@@ -43,7 +41,6 @@ var upgradeCmd = &cobra.Command{
},
Long: `Upgrades packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
@@ -97,14 +94,7 @@ var upgradeCmd = &cobra.Command{
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
system := &installer.System{Database: LuetCfg.GetSystemDB(), Target: LuetCfg.GetSystem().Rootfs}
if err := inst.Upgrade(system); err != nil {
Fatal("Error: " + err.Error())
}


@@ -1,14 +1,19 @@
#!/bin/bash
if [ $(id -u) -ne 0 ]
then echo "Please run the installer with sudo/as root"
exit
fi
set -ex
export LUET_NOLOCK=true
LUET_VERSION=0.8.6
LUET_VERSION=$(curl -s https://api.github.com/repos/mudler/luet/releases/latest | ( grep -oP '"tag_name": "\K(.*)(?=")' || echo "0.9.24" ))
LUET_ROOTFS=${LUET_ROOTFS:-/}
LUET_DATABASE_PATH=${LUET_DATABASE_PATH:-/var/luet/db}
LUET_DATABASE_ENGINE=${LUET_DATABASE_ENGINE:-boltdb}
LUET_CONFIG_PROTECT=${LUET_CONFIG_PROTECT:-1}
wget -q https://github.com/mudler/luet/releases/download/0.8.6/luet-0.8.6-linux-amd64 -O luet
curl -L https://github.com/mudler/luet/releases/download/${LUET_VERSION}/luet-${LUET_VERSION}-linux-amd64 --output luet
chmod +x luet
mkdir -p /etc/luet/repos.conf.d || true
@@ -17,9 +22,9 @@ mkdir -p /var/tmp/luet || true
if [ "${LUET_CONFIG_PROTECT}" = "1" ] ; then
mkdir -p /etc/luet/config.protect.d || true
wget -q https://raw.githubusercontent.com/mudler/luet/master/contrib/config/config.protect.d/01_etc.yml.example -O /etc/luet/config.protect.d/01_etc.yml
curl -L https://raw.githubusercontent.com/mudler/luet/master/contrib/config/config.protect.d/01_etc.yml.example --output /etc/luet/config.protect.d/01_etc.yml
fi
wget -q https://raw.githubusercontent.com/mocaccinoOS/repository-index/master/packages/mocaccino-repository-index.yml -O /etc/luet/repos.conf.d/mocaccino-repository-index.yml
curl -L https://raw.githubusercontent.com/mocaccinoOS/repository-index/master/packages/mocaccino-repository-index.yml --output /etc/luet/repos.conf.d/mocaccino-repository-index.yml
cat > /etc/luet/luet.yaml <<EOF
general:
@@ -31,7 +36,8 @@ system:
tmpdir_base: "/var/tmp/luet"
EOF
./luet install repository/luet repository/mocaccino-repository-index
./luet install system/luet system/luet-extensions
./luet install -y repository/luet repository/mocaccino-repository-index
./luet install -y system/luet system/luet-extensions
rm -rf luet

go.mod

@@ -1,20 +1,25 @@
module github.com/mudler/luet
go 1.12
go 1.14
require (
github.com/DataDog/zstd v1.4.4 // indirect
github.com/Sabayon/pkgs-checker v0.7.2
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/briandowns/spinner v1.7.0
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec
github.com/containerd/containerd v1.4.1-0.20201117152358-0edc412565dc
github.com/crillab/gophersat v1.3.2-0.20201023142334-3fc2ac466765
github.com/docker/docker v17.12.0-ce-rc1.0.20200417035958-130b0bc6032c+incompatible
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/distribution v2.7.1+incompatible
github.com/docker/docker v20.10.0-beta1.0.20201110211921-af34b94a78a1+incompatible
github.com/docker/go-units v0.4.0
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/fsouza/go-dockerclient v1.6.4
github.com/genuinetools/img v0.5.11
github.com/ghodss/yaml v1.0.0
github.com/google/go-containerregistry v0.2.1
github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/go-version v1.2.0
github.com/jedib0t/go-pretty v4.3.0+incompatible
github.com/jedib0t/go-pretty/v6 v6.0.5
@@ -25,27 +30,37 @@ require (
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 // indirect
github.com/moby/buildkit v0.7.2
github.com/moby/sys/mount v0.2.0 // indirect
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290
github.com/onsi/ginkgo v1.14.2
github.com/onsi/gomega v1.10.3
github.com/opencontainers/image-spec v1.0.1
github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.7.1
github.com/sirupsen/logrus v1.6.0
github.com/spf13/cobra v1.0.0
github.com/spf13/viper v1.6.3
go.etcd.io/bbolt v1.3.4
github.com/spf13/viper v1.7.0
go.etcd.io/bbolt v1.3.5
go.uber.org/atomic v1.5.1 // indirect
go.uber.org/multierr v1.4.0 // indirect
go.uber.org/zap v1.13.0
gopkg.in/yaml.v2 v2.3.0
gotest.tools/v3 v3.0.2 // indirect
helm.sh/helm/v3 v3.3.4
)
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200605210607-749178b8f80d+incompatible
replace github.com/containerd/containerd => github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55
replace github.com/hashicorp/go-immutable-radix => github.com/tonistiigi/go-immutable-radix v0.0.0-20170803185627-826af9ccf0fe
replace github.com/jaguilar/vt100 => github.com/tonistiigi/vt100 v0.0.0-20190402012908-ad4c4a574305
replace github.com/opencontainers/runc => github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4

go.sum

@@ -15,6 +15,7 @@ cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNF
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
@@ -22,6 +23,8 @@ cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiy
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/AkihiroSuda/containerd-fuse-overlayfs v0.0.0-20200220082720-bb896865146c h1:2pWkaq3X2yFR5o5OI7QP0CYNNKtfE2ZCK3hMRaTkhmc=
github.com/AkihiroSuda/containerd-fuse-overlayfs v0.0.0-20200220082720-bb896865146c/go.mod h1:K4kx7xAA5JimeQCnN+dbeLlfaBxzZLaLiDD8lusFI8w=
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v35.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v38.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
@@ -70,13 +73,16 @@ github.com/Masterminds/sprig/v3 v3.1.0 h1:j7GpgZ7PdFqNsmncycTHsLmVPf5/3wJtlgW9TN
github.com/Masterminds/sprig/v3 v3.1.0/go.mod h1:ONGMf7UfYGAbMXCZmQLy8x3lCDIPrEZE/rU8pmrbihA=
github.com/Masterminds/squirrel v1.4.0/go.mod h1:yaPeOnPG5ZRwL9oKdTsO/prlkPbXWZlRVMQ/gGlzIuA=
github.com/Masterminds/vcs v1.13.1/go.mod h1:N09YCmOQr6RLxC6UNHzuVwAdodYbbnycGHSmwVJjcKA=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873 h1:93nQ7k53GjoMQ07HVP8g6Zj1fQZDDj7Xy2VkNNtvX8o=
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7 h1:ptnOoufxGSzauVTsdE+wMYnCWA301PdoN4xg5oRdZpg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -98,6 +104,7 @@ github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuy
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
github.com/apache/thrift v0.0.0-20161221203622-b2a4d4ae21c7/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apex/log v1.1.1 h1:BwhRZ0qbjYtTob0I+2M+smavV0kOC8XgcnGZcyL9liA=
@@ -114,6 +121,8 @@ github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:l
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 h1:4daAzAu0S6Vi7/lbWECcX0j45yZReDZ56BQsrVBOEEY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef h1:46PFijGLmAjMPwCCCo7Jf0W6f9slllCkkv7vyc1yOSg=
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154 h1:2lbe+CPe6eQf2EA3jjLdLFZKGv3cbYqVIDjKnzcyOXg=
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154/go.mod h1:cMLKpjHSP4q0P133fV15ojQgwWWB2IMv+hrFsmBF/wI=
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
@@ -131,6 +140,7 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
@@ -153,33 +163,53 @@ github.com/chuckpreslar/emission v0.0.0-20170206194824-a7ddd980baf9/go.mod h1:2w
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 h1:sDMmm+q/3+BukdIpxwO365v/Rbspp2Nt5XntgQRXq8Q=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d h1:UKAt78F1OvM4ceTn1VvXuYuatXohsFU1eSI2IBtTw9g=
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d/go.mod h1:CStdkl05lBnJej94BPFoJ7vB8cELKXwViS+dgfW0/M8=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0 h1:xjvXQWABwS2uiv3TWgQt5Uth60Gu86LTGZXMJkjc7rY=
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.4 h1:3o0smo5SKY7H6AJCmJhsnCjR2/V2T8VmiHt7seN2/kI=
github.com/containerd/containerd v1.3.4/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d h1:VuiIRfgJ2M3vYEU0F6E5lg3+V0l9YpbGQr3jpZor5fo=
github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55 h1:FGO0nwSBESgoGCakj+w3OQXyrMLsz2omdo9b2UfG/BQ=
github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/continuity v0.0.0-20180921161001-7f53d412b9eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20181001140422-bd77b46c8352/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200107194136-26c1120b8d41/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY=
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c h1:8ahmSVELW1wghbjerVAyuEYD5+Dio66RYvSS0iGfL1M=
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20191213151349-ff969a566b00 h1:lsjC5ENBl+Zgf38+B0ymougXFp0BaubeIVETltYZTQw=
github.com/containerd/fifo v0.0.0-20191213151349-ff969a566b00/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
github.com/containerd/go-cni v0.0.0-20200107172653-c154a49e2c75 h1:5Q5C6jDObSVpjeX8CuZ5yac8d/KIYuPzUHbUzdL+NFw=
github.com/containerd/go-cni v0.0.0-20200107172653-c154a49e2c75/go.mod h1:0mg8r6FCdbxvLDqCXwAx2rO+KA37QICjKL8+wHOG5OE=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328 h1:PRTagVMbJcCezLcHXe8UJvR1oBzp2lG3CEumeFOLOds=
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de h1:dlfGmNcE3jDAecLqwKPMNX6nk2qh1c1Vg1/YTzpOOF4=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
github.com/containerd/ttrpc v0.0.0-20200121165050-0be804eadb15 h1:+jgiLE5QylzgADj0Yldb4id1NQNRrDOROj7KDvY9PEc=
github.com/containerd/ttrpc v0.0.0-20200121165050-0be804eadb15/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd h1:JNn81o/xG+8NEo3bC/vx9pbi/g2WI8mtP2/nXzu297Y=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
github.com/containerd/typeurl v0.0.0-20200205145503-b45ef1f1f737 h1:HovfQDS/K3Mr7eyS0QJLxE1CbVUhjZCl6g3OhFJgP1o=
github.com/containerd/typeurl v0.0.0-20200205145503-b45ef1f1f737/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
github.com/containernetworking/cni v0.7.1 h1:fE3r16wpSEyaqY4Z4oFrLMmIGfBYIKpPrHK31EJ9FzE=
github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/clair v0.0.0-20180919182544-44ae4bc9590a/go.mod h1:uXhHPWAoRqw0jJc2f8RrPCwRhIo9otQ8OEWUFtpCiwA=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
@@ -187,7 +217,10 @@ github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHo
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.0.0 h1:XJIw/+VlJ+87J+doOxznsAWIdmWuViOVhkQamW5YV28=
github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180108230652-97fdf19511ea/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
@@ -215,22 +248,36 @@ github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
github.com/docker/cli v0.0.0-20180920165730-54c19e67f69c/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v0.0.0-20191017083524-a8ff7f821017/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v0.0.0-20200130152716-5d0cf8839492 h1:FwssHbCDJD025h+BchanCwE1Q8fyMgqDr2mOQAWOLGw=
github.com/docker/cli v0.0.0-20200130152716-5d0cf8839492/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24 h1:bjsfAvm8BVtvQFxV7TYznmKa35J8+fmgrRJWvcS3yJo=
github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v0.0.0-20180920194744-16128bbac47f/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v0.0.0-20191216044856-a8371794149d/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
github.com/docker/distribution v0.0.0-20200223014041-6b972e50feee/go.mod h1:xgJxuOjyp98AvnpRTR1+lGOqQ493ylRnRPmewD5GWtc=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker-ce v0.0.0-20180924210327-f53bd8bb8e43/go.mod h1:l1FUGRYBvbjnZ8MS6A2xOji4aZFlY/Qmgz7p4oXH7ac=
github.com/docker/docker-credential-helpers v0.6.0/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.1/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.3 h1:zI2p9+1NQYdnG6sMU26EX4aVGlqbInSQxQXLvzJ4RPQ=
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/go-connections v0.0.0-20180821093606-97c2040d34df/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
github.com/docker/go-units v0.3.1/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libnetwork v0.8.0-dev.2.0.20200226230617-d8334ccdb9be h1:GJzljYRqZapOwyfeRyExF2/5qfv8f1feNDMiz0hmRPY=
github.com/docker/libnetwork v0.8.0-dev.2.0.20200226230617-d8334ccdb9be/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 h1:UhxFibDNY/bfvqU5CAUmr9zpesgbU6SWc8/B4mflAE4=
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
@@ -256,6 +303,7 @@ github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZM
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fernet/fernet-go v0.0.0-20180830025343-9eac43b88a5e/go.mod h1:2H9hjfbpSMHwY503FclkV/lZTBh2YlOmLLSda12uL8c=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
@@ -266,6 +314,10 @@ github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4
github.com/fsouza/go-dockerclient v1.6.4 h1:B+L+1lz1LUrNgEUUh8PSG76s70EYC49ssv2xvTefTMM=
github.com/fsouza/go-dockerclient v1.6.4/go.mod h1:GOdftxWLWIbIWKbIMDroKFJzPdg6Iw7r+jX1DDZdVsA=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
github.com/genuinetools/img v0.5.11 h1:qIP3nVgRYbXmtSNMjmcdx+TBzWjZS03AdqTNBf6QtGY=
github.com/genuinetools/img v0.5.11/go.mod h1:m58lsi0AC97XSTXBTKpqZE1zCZpMCg9nLjikvgLCo0A=
github.com/genuinetools/pkg v0.0.0-20180910213200-1c141f661797/go.mod h1:XTcrCYlXPxnxL2UpnwuRn7tcaTn9HAhxFoFJucootk8=
github.com/genuinetools/reg v0.16.0/go.mod h1:12Fe9EIvK3dG/qWhNk5e9O96I8SGmCKLsJ8GsXUbk+Y=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -339,15 +391,23 @@ github.com/gobuffalo/packd v0.3.0/go.mod h1:zC7QkmNkYVGKPw4tHpBQ+ml7W/3tIebgeo1b
github.com/gobuffalo/packr/v2 v2.7.1/go.mod h1:qYEvAazPaVxy7Y7KR0W8qYEE+RymX74kETFqjFoFlOc=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e h1:BWhy2j3IXJhjCbC68FptL43tDKIq8FladmaTs3Xs7Z8=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
github.com/godbus/dbus/v5 v5.0.3 h1:ZqHaoEF7TBzh4jzPmqVhE/5A1z9of6orkAe5uHoAeME=
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godror/godror v0.13.3/go.mod h1:2ouUT4kdhUBk7TAkHWD4SN0CdI0pgEQbo8FVHhbSKWg=
github.com/gofrs/flock v0.7.0/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gofrs/flock v0.7.1 h1:DP+LD/t0njgoPBvT5MJLeliUIVQR03hiKR6vezdwHlc=
github.com/gofrs/flock v0.7.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/googleapis v1.3.2 h1:kX1es4djPJrsDhY7aZKJy7aZasdcB5oSOEphMjSB53c=
github.com/gogo/googleapis v1.3.2/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
@@ -360,6 +420,7 @@ github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
@@ -408,6 +469,8 @@ github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9 h1:JM174NTeGNJ2m/oLH3UOWOvWQQKd+BoL3hcSCUWFLt0=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -430,26 +493,33 @@ github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc=
github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
@@ -457,6 +527,7 @@ github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.3 h1:YPkqC67at8FYaadspW/6uE0COsBxS2656RLEr8Bppgk=
github.com/hashicorp/golang-lru v0.5.3/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
@@ -464,6 +535,8 @@ github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hashicorp/uuid v0.0.0-20160311170451-ebb0a03e909c h1:nQcv325vxv2fFHJsOt53eSRf1eINt6vOdYUFfXs4rgk=
github.com/hashicorp/uuid v0.0.0-20160311170451-ebb0a03e909c/go.mod h1:fHzc09UnyJyqyW+bFuq864eh+wC7dj65aXmXLRe5to0=
github.com/heroku/docker-registry-client v0.0.0-20181004091502-47ecf50fd8d4 h1:44WMsEqwiYnpHA3E4Rg1K379MH5iZllp2sO5nzXARI0=
github.com/heroku/docker-registry-client v0.0.0-20181004091502-47ecf50fd8d4/go.mod h1:ceV82AfTGFCOL/b0cdpP54uKVSL1Gef0TBSTGFDuqyY=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
@@ -473,11 +546,14 @@ github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.8 h1:CGgOkSJeqMRmt0D9XLWExdT4m4F1vd3FV3VPt+0VxkQ=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/ishidawataru/sctp v0.0.0-20191218070446-00ab2ac2db07 h1:rw3IAne6CDuVFlZbPOkA7bhxlqawFh7RJJ+CejfMaxE=
github.com/ishidawataru/sctp v0.0.0-20191218070446-00ab2ac2db07/go.mod h1:co9pwDoBCm1kGxawmb4sPq0cSIOOWNPT4KnHotMP1Zg=
github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo=
github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag=
github.com/jedib0t/go-pretty/v6 v6.0.5 h1:oOo0/jSb3NEYKT6l1hhFXoX2UZnkanMuCE2DVT1mqnE=
@@ -592,6 +668,9 @@ github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrk
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=
github.com/mitchellh/hashstructure v1.0.0 h1:ZkRJX1CyOoTkar7p/mLS5TZU4nJ1Rn/F8u9dGS02Q3Y=
github.com/mitchellh/hashstructure v1.0.0/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
@@ -599,10 +678,12 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
github.com/mitchellh/reflectwalk v1.0.0 h1:9D+8oIskB4VJBN5SFlmc27fSlIBZaov1Wpk/IfikLNY=
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 h1:8oWZ4vNoz5UeQc0l0lJMTfIjsEP5fyUI+TuK0gbrm0o=
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840/go.mod h1:FVQFLDRWwyBjDTBNQXDlWnSFREqOo3OKX9aqhmeoo74=
github.com/moby/sys/mountinfo v0.1.0 h1:r8vMRbMAFEAfiNptYVokP+nfxPJzvRuia5e2vzXtENo=
github.com/moby/sys/mountinfo v0.1.0/go.mod h1:w2t2Avltqx8vE7gX5l+QiBKxODu2TX0+Syr3h52Tw4o=
github.com/moby/buildkit v0.7.2 h1:wp4R0QMXSqwjTJKhhWlJNOCSQ/OVPnsCf3N8rs09+vQ=
github.com/moby/buildkit v0.7.2/go.mod h1:D3DN/Nl4DyMH1LkwpRUJuoghqdigdXd1A6HXt5aZS40=
github.com/moby/sys/mount v0.2.0 h1:WhCW5B355jtxndN5ovugJlMFJawbUODuW8fSnEH6SSM=
github.com/moby/sys/mount v0.2.0/go.mod h1:aAivFE2LB3W4bACsUXChRHQ0qKWsetY4Y9V7sxOougM=
github.com/moby/sys/mountinfo v0.4.0 h1:1KInV3Huv18akCu58V7lzNlt+jFmqlu1EaErnEHE/VM=
github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -612,6 +693,7 @@ github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/morikuni/aec v0.0.0-20170113033406-39771216ff4c/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d h1:fKh+rvwZQCA+TPzK0EMwwbqhjvRHaQ6H8AsVU1Wt+NQ=
@@ -650,6 +732,7 @@ github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.12.1 h1:mFwc4LvZ0xpSvDZ3E+k8Yte0hLOMxXUlP+yXtJqkYfQ=
@@ -657,6 +740,7 @@ github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108
github.com/onsi/ginkgo v1.14.2 h1:8mVmC9kjFFmA8H4pKMUhcblgifdkOIXPvbhN1T36q1M=
github.com/onsi/ginkgo v1.14.2/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
@@ -680,16 +764,22 @@ github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3I
github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v0.1.1 h1:GlxAyO6x8rfZYN9Tt0Kti5a/cP41iuiO2yYT0IJGY8Y=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4 h1:JhRvjyrjq24YPSDS0MQo9KJHQh95naK5fYl9IT+dzPM=
github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.1 h1:wY4pOY8fBdSIvs9+IDHC55thBuEulhzfSgKeC1yFvzQ=
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
github.com/opencontainers/selinux v1.3.2 h1:DR4lL9SYVjgcTZKEZIncvDU06fKSc/eygjmNGOA3E1s=
github.com/opencontainers/selinux v1.3.2/go.mod h1:yTcKuYAh6R95iDpefGLQaPaRwJFwyzAJufJyiTt7s0g=
github.com/opentracing-contrib/go-observer v0.0.0-20170622124052-a52f23424492/go.mod h1:Ngi6UdF0k5OKD5t5wlmGhe/EDKPoUM3BXZSSfIuJbis=
github.com/opentracing-contrib/go-stdlib v0.0.0-20171029140428-b1a47cfbdd75/go.mod h1:PLldrQSroqzH70Xl+1DQcGnefIbqsKR7UDaiux3zV+w=
github.com/opentracing-contrib/go-stdlib v0.0.0-20180702182724-07a764486eb1 h1:gmB1XmLjI0RXG8rJCP0PK6g8rwhX8COSGFTiOgJ4Wx4=
github.com/opentracing-contrib/go-stdlib v0.0.0-20180702182724-07a764486eb1/go.mod h1:PLldrQSroqzH70Xl+1DQcGnefIbqsKR7UDaiux3zV+w=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v0.0.0-20171003133519-1361b9cd60be/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin-contrib/zipkin-go-opentracing v0.4.5/go.mod h1:/wsWhb9smxSfWAKL3wpBW7V8scJMt8N8gnaMCS9E/cA=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
@@ -708,11 +798,11 @@ github.com/otiai10/mint v1.3.1/go.mod h1:/yxELlJQ0ufhjUwhshSj+wFjZ78CnZ48/1wtmBH
github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/peterhellberg/link v1.0.0/go.mod h1:gtSlOT4jmkY8P47hbTc8PTgiDDWpdPbFYl75keYyBB8=
github.com/phayes/freeport v0.0.0-20180830031419-95f893ade6f2/go.mod h1:iIss55rKnNBTvrwdmkUpLnDpZoAHvWaiq5+iMmen4AE=
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f h1:WyCn68lTiytVSkk7W1K9nBiSGTSRlUOdyTnSjwrIlok=
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f/go.mod h1:/iRjX3DdSK956SzsUdV55J+wIsQ+2IBWmBrB4RvZfk4=
@@ -731,6 +821,7 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.0.0-20180924113449-f69c853d21c1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
@@ -744,15 +835,18 @@ github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20180920065004-418d78d0b9a7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
@@ -784,9 +878,13 @@ github.com/schollz/progressbar/v3 v3.7.1/go.mod h1:CG/f0JmacksUc6TkZToO7tVq4t03z
github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002/go.mod h1:/yeG0My1xr/u+HZrFQ1tOQQQQrOawfyMUH13ai5brBc=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.3/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
@@ -831,13 +929,18 @@ github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DM
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.3 h1:pDDu1OyEDTKzpJwdq4TiuLyMsUgRa/BT5cn5O62NoHs=
github.com/spf13/viper v1.6.3/go.mod h1:jUMtyi0/lB5yZH/FjyGAoH7IMNrIhlBf6pXZmbMDvzw=
github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
@@ -848,6 +951,8 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2 h1:b6uOv7YOFK0TYG7HtkIgExQo+2RdLuwRft63jn2HWj8=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
@@ -856,16 +961,30 @@ github.com/tj/go-kinesis v0.0.0-20171128231115-08b17f58cb1b/go.mod h1:/yhzCV0xPf
github.com/tj/go-spin v1.1.0/go.mod h1:Mg1mzmePZm4dva8Qz60H2lHwmJ2loum4VIrLgVnKwh4=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tonistiigi/fsutil v0.0.0-20200326231323-c2c7d7b0e144 h1:6RY1EKxCnPQShPM46xFDHta2JSOd+YKCgHyyBHtKuo8=
github.com/tonistiigi/fsutil v0.0.0-20200326231323-c2c7d7b0e144/go.mod h1:0G1sLZ/0ttFf09xvh7GR4AEECnjifHRNJN/sYbLianU=
github.com/tonistiigi/go-immutable-radix v0.0.0-20170803185627-826af9ccf0fe h1:pd7hrFSqUPxYS9IB+UMG1AB/8EXGXo17ssx0bSQ5L6Y=
github.com/tonistiigi/go-immutable-radix v0.0.0-20170803185627-826af9ccf0fe/go.mod h1:/+MCh11CJf2oz0BXmlmqyopK/ad1rKkcOXPoYuPCJYU=
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea/go.mod h1:WPnis/6cRcDZSUvVmezrxJPkiO87ThFYsoUiMwWNDJk=
github.com/tonistiigi/vt100 v0.0.0-20190402012908-ad4c4a574305/go.mod h1:gXOLibKqQTRAVuVZ9gX7G9Ykky8ll8yb4slxsEMoY0c=
github.com/uber/jaeger-client-go v0.0.0-20180103221425-e02c85f9069e/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v1.2.1/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1 h1:+mkCCcOFKPnCmVYVcURKps1Xe+3zP90gSYGNfRkjoIY=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/urfave/cli v1.22.2 h1:gsqYFH8bb9ekPA12kRo0hfjngWQjkJPlN9R0N78BoUo=
github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vbatts/go-mtree v0.4.4 h1:+CncqETnSpxBCCUhRnNQBvxhsjWXNuc+ExZsLSNaj5o=
github.com/vbatts/go-mtree v0.4.4/go.mod h1:3sazBqLG4bZYmgRTgdh9X3iKTzwBpp5CrREJDzrNSXY=
github.com/vdemeester/k8s-pkg-credentialprovider v1.18.1-0.20201019120933-f1d16962a4db/go.mod h1:grWy0bkr1XO6hqbaaCKaPXqkBVlMGHYG6PGykktwbJc=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
github.com/vishvananda/netlink v1.0.0 h1:bqNY2lgheFIu1meHUFSH3d7vG93AFyqg3oGbJCOJgSM=
github.com/vishvananda/netlink v1.0.0/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc h1:R83G5ikgLMxrBvLh22JhdfI8K6YXEPHx5P03Uu3DRs4=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vmihailenco/msgpack v4.0.1+incompatible h1:RMF1enSPeKTlXrXdOcqjFUElywVZjjC6pqse21bKbEU=
github.com/vmihailenco/msgpack v4.0.1+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/vmware/govmomi v0.20.3/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU=
@@ -888,8 +1007,8 @@ github.com/ziutek/mymysql v1.5.4/go.mod h1:LMSpPZ6DbqWFxNCHW77HeMg9I646SAhApZ/wK
go.etcd.io/bbolt v1.3.0/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.4 h1:hi1bXHMVrlQh6WwxAy+qZCV/SYIlqo+Ushwdpa4tAKg=
go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/bbolt v1.3.5 h1:XAzx9gjCb0Rxj7EoqcClPD1d5ZBxZJk0jbuoPHenBt0=
go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
@@ -920,6 +1039,7 @@ go.uber.org/zap v1.13.0 h1:nR6NoDBgAf67s68NhaXbsojM+2gxp3S1hWkHDl27pVU=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180910181607-0e37d006457b/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -939,6 +1059,7 @@ golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200221231518-2aa609cf4a9d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
@@ -959,6 +1080,7 @@ golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EH
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -984,6 +1106,7 @@ golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180925072008-f04abc6bdfa7/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1042,6 +1165,7 @@ golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180925112736-b09afc3d579e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1058,6 +1182,7 @@ golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190515120540-06a5c4944438/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1072,12 +1197,15 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1087,6 +1215,8 @@ golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201113135734-0a15ea8d9b02 h1:5Ftd3YbC/kANXWCBjvppvUmv1BMakgFcBKA7MpYYp4M=
@@ -1137,6 +1267,7 @@ golang.org/x/tools v0.0.0-20191004055002-72853e10c5a3/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -1193,10 +1324,12 @@ google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180924164928-221a8d4f7494/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
@@ -1207,15 +1340,18 @@ google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvx
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200227132054-3f1135a288c9/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece h1:1YM0uhfumvoDu9sx8+RyWwTI63zoCQvI23IYFRlvte0=
google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.15.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@@ -1226,6 +1362,7 @@ google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ij
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
@@ -1284,6 +1421,7 @@ gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2 h1:kG1BFyqVHuQoVQiR1bWGnfz/fmHvvuiSPIV7rvl360E=

View File

@@ -28,7 +28,7 @@ import (
"regexp"
system "github.com/docker/docker/pkg/system"
"github.com/klauspost/compress/zstd"
zstd "github.com/klauspost/compress/zstd"
gzip "github.com/klauspost/pgzip"
//"strconv"
@@ -144,14 +144,14 @@ func (a *PackageArtifact) Hash() error {
func (a *PackageArtifact) Verify() error {
sum := Checksums{}
err := sum.Generate(a)
if err != nil {
if err := sum.Generate(a); err != nil {
return err
}
err = sum.Compare(a.Checksums)
if err != nil {
if err := sum.Compare(a.Checksums); err != nil {
return err
}
return nil
}
@@ -237,11 +237,92 @@ func (a *PackageArtifact) GetPath() string {
return a.Path
}
func (a *PackageArtifact) GetFileName() string {
return path.Base(a.GetPath())
}
func (a *PackageArtifact) SetPath(p string) {
a.Path = p
}
// Compress Archives and compress (TODO) to the artifact path
func (a *PackageArtifact) genDockerfile() string {
return `
FROM scratch
COPY . /`
}
// CreateArtifactForFile creates a new artifact from the given file
func CreateArtifactForFile(s string, opts ...func(*PackageArtifact)) (*PackageArtifact, error) {
fileName := path.Base(s)
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
}
defer os.RemoveAll(archive) // clean up
helpers.CopyFile(s, filepath.Join(archive, fileName))
artifact, err := LuetCfg.GetSystem().TempDir("artifact")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
}
a := &PackageArtifact{Path: filepath.Join(artifact, fileName)}
for _, o := range opts {
o(a)
}
return a, a.Compress(archive, 1)
}
// GenerateFinalImage takes an artifact and builds a Docker image with its content
func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend, keepPerms bool) (CompilerBackendOptions, error) {
builderOpts := CompilerBackendOptions{}
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return builderOpts, errors.Wrap(err, "error met while creating tempdir for "+a.Path)
}
defer os.RemoveAll(archive) // clean up
uncompressedFiles := filepath.Join(archive, "files")
dockerFile := filepath.Join(archive, "Dockerfile")
if err := os.MkdirAll(uncompressedFiles, os.ModePerm); err != nil {
return builderOpts, errors.Wrap(err, "error met while creating tempdir for "+a.Path)
}
if err := a.Unpack(uncompressedFiles, keepPerms); err != nil {
return builderOpts, errors.Wrap(err, "error met while uncompressing artifact "+a.Path)
}
empty, err := helpers.DirectoryIsEmpty(uncompressedFiles)
if err != nil {
return builderOpts, errors.Wrap(err, "error met while checking if directory is empty "+uncompressedFiles)
}
// See https://github.com/moby/moby/issues/38039.
// We can't generate FROM scratch empty images. Docker will refuse to export them
// workaround: Inject a .virtual empty file
if empty {
helpers.Touch(filepath.Join(uncompressedFiles, ".virtual"))
}
data := a.genDockerfile()
if err := ioutil.WriteFile(dockerFile, []byte(data), 0644); err != nil {
return builderOpts, errors.Wrap(err, "error met while rendering artifact dockerfile "+a.Path)
}
builderOpts = CompilerBackendOptions{
ImageName: imageName,
SourcePath: archive,
DockerFileName: dockerFile,
Context: uncompressedFiles,
}
return builderOpts, b.BuildImage(builderOpts)
}
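
For orientation, a minimal usage sketch of this API, mirroring the "Generates packages images" spec further down (the package qualifiers and the workdir/contentDir paths are assumptions, not part of the diff):

	// Archive the package content and publish it as a single-layer image.
	artifact := compiler.NewPackageArtifact(filepath.Join(workdir, "foo.package.tar"))
	artifact.SetCompileSpec(&compiler.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}})
	if err := artifact.Compress(contentDir, 1); err != nil {
		return err
	}
	b := backend.NewSimpleDockerBackend()
	opts, err := artifact.GenerateFinalImage("repo/foo--1.0", b, false)
	if err != nil {
		return err
	}
	// opts.ImageName now points at the built image, ready to be exported or pushed by the backend.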
// Compress is responsible to archive and compress to the artifact Path.
// It accepts a source path, which is the content to be archived/compressed
// and a concurrency parameter.
func (a *PackageArtifact) Compress(src string, concurrency int) error {
switch a.CompressionType {
@@ -256,7 +337,7 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
}
defer original.Close()
zstdFile := a.Path + ".zstd"
zstdFile := a.getCompressedName()
bufferedReader := bufio.NewReader(original)
// Open a file for writing.
@@ -279,6 +360,8 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
}
os.RemoveAll(a.Path) // Remove original
Debug("Removed artifact", a.Path)
a.Path = zstdFile
return nil
case GZip:
@@ -292,7 +375,7 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
}
defer original.Close()
gzipfile := a.Path + ".gz"
gzipfile := a.getCompressedName()
bufferedReader := bufio.NewReader(original)
// Open a file for writing.
@@ -302,7 +385,7 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
}
// Create gzip writer.
w := gzip.NewWriter(dst)
w.SetConcurrency(concurrency, 10)
w.SetConcurrency(1<<20, concurrency)
defer w.Close()
defer dst.Close()
_, err = io.Copy(w, bufferedReader)
@@ -311,6 +394,7 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
}
w.Close()
os.RemoveAll(a.Path) // Remove original
Debug("Removed artifact", a.Path)
// a.CompressedPath = gzipfile
a.Path = gzipfile
return nil
@@ -318,12 +402,31 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
// Defaults to tar only (covers when "none" is supplied)
default:
return helpers.Tar(src, a.Path)
return helpers.Tar(src, a.getCompressedName())
}
return errors.New("Compression type must be supplied")
}
func (a *PackageArtifact) getCompressedName() string {
switch a.CompressionType {
case Zstandard:
return a.Path + ".zst"
case GZip:
return a.Path + ".gz"
}
return a.Path
}
// GetUncompressedName returns the artifact path without the extension suffix
func (a *PackageArtifact) GetUncompressedName() string {
switch a.CompressionType {
case Zstandard, GZip:
return strings.TrimSuffix(a.Path, filepath.Ext(a.Path))
}
return a.Path
}
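
A tiny illustration of the naming round-trip these helpers provide, mirroring the "Retrieves uncompressed name" spec below (paths are examples only):

	a := compiler.NewPackageArtifact("foo.tar.gz")
	a.SetCompressionType(compiler.GZip)
	_ = a.GetUncompressedName() // "foo.tar": the compression suffix is trimmed

	z := compiler.NewPackageArtifact("foo.tar.zst")
	z.SetCompressionType(compiler.Zstandard)
	_ = z.GetUncompressedName() // "foo.tar"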
func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists, rename the target file with a suffix.
var destPath string
@@ -613,6 +716,19 @@ func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
}
}
func compileRegexes(regexes []string) []*regexp.Regexp {
var result []*regexp.Regexp
for _, i := range regexes {
r, e := regexp.Compile(i)
if e != nil {
Warning("Failed compiling regex:", e)
continue
}
result = append(result, r)
}
return result
}
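
For illustration, the include/exclude branches below consume the helper roughly like this (the patterns and file names are made up):

	includeRegexp := compileRegexes([]string{"^/usr/bin/.*", "^/etc/foo$"})
	for _, file := range []string{"/usr/bin/foo", "/var/log/foo.log"} {
		for _, r := range includeRegexp {
			if r.MatchString(file) {
				// the file matches an include pattern and is kept in the package
				break
			}
		}
	}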
// ExtractArtifactFromDelta extracts deltas from ArtifactLayer from an image in tar format
func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, excludes []string, t CompressionImplementation) (Artifact, error) {
@@ -646,15 +762,7 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
// Handle includes in spec. If specified they filter what gets in the package
if len(includes) > 0 && len(excludes) == 0 {
var includeRegexp []*regexp.Regexp
for _, i := range includes {
r, e := regexp.Compile(i)
if e != nil {
Warning("Failed compiling regex:", e)
continue
}
includeRegexp = append(includeRegexp, r)
}
includeRegexp := compileRegexes(includes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
ADDS:
@@ -675,15 +783,7 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
}
} else if len(includes) == 0 && len(excludes) != 0 {
var excludeRegexp []*regexp.Regexp
for _, i := range excludes {
r, e := regexp.Compile(i)
if e != nil {
Warning("Failed compiling regex:", e)
continue
}
excludeRegexp = append(excludeRegexp, r)
}
excludeRegexp := compileRegexes(excludes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
ADD:
@@ -704,25 +804,8 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
}
} else if len(includes) != 0 && len(excludes) != 0 {
var includeRegexp []*regexp.Regexp
for _, i := range includes {
r, e := regexp.Compile(i)
if e != nil {
Warning("Failed compiling regex:", e)
continue
}
includeRegexp = append(includeRegexp, r)
}
var excludeRegexp []*regexp.Regexp
for _, i := range excludes {
r, e := regexp.Compile(i)
if e != nil {
Warning("Failed compiling regex:", e)
continue
}
excludeRegexp = append(excludeRegexp, r)
}
includeRegexp := compileRegexes(includes)
excludeRegexp := compileRegexes(excludes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only

View File

@@ -20,6 +20,7 @@ import (
"os"
"path/filepath"
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/solver"
@@ -86,7 +87,8 @@ ENV PACKAGE_CATEGORY=app-admin`))
DockerFileName: "Dockerfile",
Destination: filepath.Join(tmpdir2, "output1.tar"),
}
Expect(b.ImageDefinitionToTar(opts)).ToNot(HaveOccurred())
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
@@ -109,7 +111,8 @@ RUN echo bar > /test2`))
DockerFileName: "LuetDockerfile",
Destination: filepath.Join(tmpdir, "output2.tar"),
}
Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
diffs, err := b.Changes(opts, opts2)
Expect(err).ToNot(HaveOccurred())
@@ -126,13 +129,13 @@ RUN echo bar > /test2`))
Expect(diffs).To(Equal(
[]ArtifactLayer{{
FromImage: filepath.Join(tmpdir2, "output1.tar"),
ToImage: filepath.Join(tmpdir, "output2.tar"),
FromImage: "luet/base",
ToImage: "test",
Diffs: ArtifactDiffs{
Additions: artifacts,
},
}}))
err = b.ExtractRootfs(CompilerBackendOptions{SourcePath: filepath.Join(tmpdir, "output2.tar"), Destination: rootfs}, false)
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: "test", Destination: rootfs}, false)
Expect(err).ToNot(HaveOccurred())
artifact, err := ExtractArtifactFromDelta(rootfs, filepath.Join(tmpdir, "package.tar"), diffs, 2, false, []string{}, []string{}, None)
@@ -159,5 +162,107 @@ RUN echo bar > /test2`))
Expect(err).To(HaveOccurred())
})
It("Generates packages images", func() {
b := NewSimpleDockerBackend()
imageprefix := "foo/"
testString := []byte(`funky test data`)
tmpdir, err := ioutil.TempDir(os.TempDir(), "artifact")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
tmpWork, err := ioutil.TempDir(os.TempDir(), "artifact2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpWork) // clean up
Expect(os.MkdirAll(filepath.Join(tmpdir, "foo", "bar"), os.ModePerm)).ToNot(HaveOccurred())
err = ioutil.WriteFile(filepath.Join(tmpdir, "test"), testString, 0644)
Expect(err).ToNot(HaveOccurred())
err = ioutil.WriteFile(filepath.Join(tmpdir, "foo", "bar", "test"), testString, 0644)
Expect(err).ToNot(HaveOccurred())
artifact := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
artifact.SetCompileSpec(&LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}})
err = artifact.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := artifact.GenerateFinalImage(resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
Expect(b.ImageExists(resultingImage)).To(BeTrue())
result, err := ioutil.TempDir(os.TempDir(), "result")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: resultingImage, Destination: result}, false)
Expect(err).ToNot(HaveOccurred())
content, err := ioutil.ReadFile(filepath.Join(result, "test"))
Expect(err).ToNot(HaveOccurred())
Expect(content).To(Equal(testString))
content, err = ioutil.ReadFile(filepath.Join(result, "foo", "bar", "test"))
Expect(err).ToNot(HaveOccurred())
Expect(content).To(Equal(testString))
})
It("Generates empty packages images", func() {
b := NewSimpleDockerBackend()
imageprefix := "foo/"
tmpdir, err := ioutil.TempDir(os.TempDir(), "artifact")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
tmpWork, err := ioutil.TempDir(os.TempDir(), "artifact2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpWork) // clean up
artifact := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
artifact.SetCompileSpec(&LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}})
err = artifact.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := artifact.GenerateFinalImage(resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
Expect(b.ImageExists(resultingImage)).To(BeTrue())
result, err := ioutil.TempDir(os.TempDir(), "result")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: resultingImage, Destination: result}, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.DirectoryIsEmpty(result)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(result, ".virtual"))
Expect(err).ToNot(HaveOccurred())
Expect(string(content)).To(Equal(""))
})
It("Retrieves uncompressed name", func() {
a := NewPackageArtifact("foo.tar.gz")
a.SetCompressionType(compiler.GZip)
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
a = NewPackageArtifact("foo.tar.zst")
a.SetCompressionType(compiler.Zstandard)
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
a = NewPackageArtifact("foo.tar")
a.SetCompressionType(compiler.None)
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
})
})
})

View File

@@ -17,9 +17,27 @@ package backend
import (
"github.com/google/go-containerregistry/pkg/crane"
"github.com/mudler/luet/pkg/compiler"
)
const (
ImgBackend = "img"
DockerBackend = "docker"
)
func imageAvailable(image string) bool {
_, err := crane.Digest(image)
return err == nil
}
func NewBackend(s string) compiler.CompilerBackend {
var compilerBackend compiler.CompilerBackend
switch s {
case ImgBackend:
compilerBackend = NewSimpleImgBackend()
case DockerBackend:
compilerBackend = NewSimpleDockerBackend()
}
return compilerBackend
}
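
A possible way to consume the factory from calling code (the flag handling is an assumption; note that an unknown name yields a nil backend):

	// Select the container backend by name, e.g. from a CLI flag or config value.
	b := backend.NewBackend(backend.DockerBackend) // or backend.ImgBackend
	if b == nil {
		// the supplied name matched neither "docker" nor "img"
	}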

View File

@@ -21,6 +21,8 @@ import (
"path/filepath"
"strings"
. "github.com/mudler/luet/pkg/logger"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/pkg/errors"
@@ -54,10 +56,7 @@ import (
// ]
func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.CompilerBackendOptions) ([]compiler.ArtifactLayer, error) {
srcImage := fromImage.Destination
dstImage := toImage.Destination
res := compiler.ArtifactLayer{FromImage: srcImage, ToImage: dstImage}
res := compiler.ArtifactLayer{FromImage: fromImage.ImageName, ToImage: toImage.ImageName}
tmpdiffs, err := config.LuetCfg.GetSystem().TempDir("extraction")
if err != nil {
@@ -77,62 +76,24 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
}
defer os.RemoveAll(dstRootFS) // clean up
// Handle both files (.tar) or images. If parameters are beginning with / , don't export the images
if !strings.HasPrefix(srcImage, "/") {
srcImageTar, err := ioutil.TempFile(tmpdiffs, "srctar")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.Remove(srcImageTar.Name()) // clean up
srcImageExport := compiler.CompilerBackendOptions{
ImageName: srcImage,
Destination: srcImageTar.Name(),
}
err = b.ExportImage(srcImageExport)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while exporting src image "+srcImage)
}
srcImage = srcImageTar.Name()
}
srcImageExtract := compiler.CompilerBackendOptions{
SourcePath: srcImage,
ImageName: fromImage.ImageName,
Destination: srcRootFS,
}
Debug("Extracting source image", fromImage.ImageName)
err = b.ExtractRootfs(srcImageExtract, false) // No need to keep permissions as we just collect file diffs
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+srcImage)
}
// Handle both files (.tar) or images. If parameters are beginning with / , don't export the images
if !strings.HasPrefix(dstImage, "/") {
dstImageTar, err := ioutil.TempFile(tmpdiffs, "dsttar")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.Remove(dstImageTar.Name()) // clean up
dstImageExport := compiler.CompilerBackendOptions{
ImageName: dstImage,
Destination: dstImageTar.Name(),
}
err = b.ExportImage(dstImageExport)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while exporting dst image "+dstImage)
}
dstImage = dstImageTar.Name()
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+fromImage.ImageName)
}
dstImageExtract := compiler.CompilerBackendOptions{
SourcePath: dstImage,
ImageName: toImage.ImageName,
Destination: dstRootFS,
}
Debug("Extracting destination image", toImage.ImageName)
err = b.ExtractRootfs(dstImageExtract, false)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+dstImage)
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+toImage.ImageName)
}
// Get Additions/Changes. dst -> src

View File

@@ -33,8 +33,7 @@ var _ = Describe("Docker image diffs", func() {
Context("Generate diffs from docker images", func() {
It("Detect no changes", func() {
opts := compiler.CompilerBackendOptions{
ImageName: "alpine:latest",
Destination: "alpine:latest",
ImageName: "alpine:latest",
}
err := b.DownloadImage(opts)
Expect(err).ToNot(HaveOccurred())
@@ -58,11 +57,9 @@ var _ = Describe("Docker image diffs", func() {
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(b, compiler.CompilerBackendOptions{
ImageName: "quay.io/mocaccino/micro",
Destination: "quay.io/mocaccino/micro",
ImageName: "quay.io/mocaccino/micro",
}, compiler.CompilerBackendOptions{
ImageName: "quay.io/mocaccino/extra",
Destination: "quay.io/mocaccino/extra",
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())
Expect(len(layers)).To(Equal(1))

View File

@@ -46,10 +46,14 @@ func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
name := opts.ImageName
path := opts.SourcePath
dockerfileName := opts.DockerFileName
context := opts.Context
buildarg := []string{"build", "-f", dockerfileName, "-t", name, "."}
if context == "" {
context = "."
}
buildarg := []string{"build", "-f", dockerfileName, "-t", name, context}
Debug(":whale2: Building image " + name)
Info(":whale2: Building image " + name)
cmd := exec.Command("docker", buildarg...)
cmd.Dir = path
out, err := cmd.CombinedOutput()
@@ -171,7 +175,7 @@ func (*SimpleDocker) ExportImage(opts compiler.CompilerBackendOptions) error {
return errors.Wrap(err, "Failed exporting image: "+string(out))
}
Info(":whale: Exported image:", name)
Debug(":whale: Exported image:", name)
return nil
}
@@ -179,10 +183,36 @@ type ManifestEntry struct {
Layers []string `json:"Layers"`
}
func (*SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
src := opts.SourcePath
func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
name := opts.ImageName
dst := opts.Destination
tempexport, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tempexport) // clean up
imageExport := filepath.Join(tempexport, "image.tar")
if err := b.ExportImage(compiler.CompilerBackendOptions{ImageName: name, Destination: imageExport}); err != nil {
return errors.Wrap(err, "failed while extracting rootfs for "+name)
}
src := imageExport
if src == "" && opts.ImageName != "" {
tempUnpack, err := ioutil.TempDir(dst, "tempUnpack")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tempUnpack) // clean up
imageExport := filepath.Join(tempUnpack, "image.tar")
if err := b.ExportImage(compiler.CompilerBackendOptions{ImageName: opts.ImageName, Destination: imageExport}); err != nil {
return errors.Wrap(err, "while exporting image before extraction")
}
src = imageExport
}
rootfs, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
@@ -223,7 +253,7 @@ func (*SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPer
}
}
}
// TODO: Drop capi in favor of the img approach already used in pkg/installer/repository
export, err := capi.CreateExport(rootfs)
if err != nil {
return err

View File

@@ -77,9 +77,10 @@ ENV PACKAGE_CATEGORY=app-admin`))
DockerFileName: "Dockerfile",
Destination: filepath.Join(tmpdir2, "output1.tar"),
}
Expect(b.ImageDefinitionToTar(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
@@ -100,7 +101,9 @@ RUN echo bar > /test2`))
DockerFileName: "LuetDockerfile",
Destination: filepath.Join(tmpdir, "output2.tar"),
}
Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
artifacts := []ArtifactNode{{
@@ -115,13 +118,23 @@ RUN echo bar > /test2`))
Expect(b.Changes(opts, opts2)).To(Equal(
[]ArtifactLayer{{
FromImage: filepath.Join(tmpdir2, "output1.tar"),
ToImage: filepath.Join(tmpdir, "output2.tar"),
FromImage: "luet/base",
ToImage: "test",
Diffs: ArtifactDiffs{
Additions: artifacts,
},
}}))
opts2 = CompilerBackendOptions{
ImageName: "test",
SourcePath: tmpdir,
DockerFileName: "LuetDockerfile",
Destination: filepath.Join(tmpdir, "output3.tar"),
}
Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(b.ImageExists(opts2.ImageName)).To(BeFalse())
})
It("Detects available images", func() {

View File

@@ -18,6 +18,7 @@ package backend
import (
"os"
"os/exec"
"strings"
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/logger"
@@ -35,12 +36,17 @@ func NewSimpleImgBackend() compiler.CompilerBackend {
func (*SimpleImg) BuildImage(opts compiler.CompilerBackendOptions) error {
name := opts.ImageName
path := opts.SourcePath
context := opts.Context
if context == "" {
context = "."
}
dockerfileName := opts.DockerFileName
buildarg := []string{"build", "-f", dockerfileName, "-t", name, "."}
buildarg := []string{"build", "-f", dockerfileName, "-t", name, context}
Spinner(22)
defer SpinnerStop()
Debug(":tea: Building image " + name)
Info(":tea: Building image " + name)
cmd := exec.Command("img", buildarg...)
cmd.Dir = path
out, err := cmd.CombinedOutput()
@@ -101,10 +107,16 @@ func (*SimpleImg) ImageAvailable(imagename string) bool {
return imageAvailable(imagename)
}
// ImageExists check if the given image is available locally
func (*SimpleImg) ImageExists(imagename string) bool {
// NOOP: not implemented
// TODO: Since img doesn't have an inspect command,
// we need to parse the ls output manually
cmd := exec.Command("img", "ls")
out, err := cmd.Output()
if err != nil {
return false
}
if strings.Contains(string(out), imagename) {
return true
}
return false
}
@@ -134,11 +146,17 @@ func (*SimpleImg) ExportImage(opts compiler.CompilerBackendOptions) error {
return nil
}
// TODO: Dup in docker, refactor common code in helpers for shared parts
func (*SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
// ExtractRootfs extracts the docker image content inside the destination
func (s *SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
name := opts.ImageName
path := opts.Destination
if !s.ImageExists(name) {
if err := s.DownloadImage(opts); err != nil {
return errors.Wrap(err, "failed pulling image "+name+" during extraction")
}
}
os.RemoveAll(path)
buildarg := []string{"unpack", "-o", path, name}
Debug(":tea: Extracting image " + name)
@@ -146,9 +164,8 @@ func (*SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms
if err != nil {
return errors.Wrap(err, "Failed extracting image: "+string(out))
}
Info(":tea: Image " + name + " extracted")
Debug(":tea: Image " + name + " extracted")
return nil
//return NewSimpleDockerBackend().ExtractRootfs(opts, keepPerms)
}
// TODO: Use container-diff (https://github.com/GoogleContainerTools/container-diff) for checking out layer diffs

View File

@@ -16,7 +16,6 @@
package compiler
import (
"archive/tar"
"fmt"
"io/ioutil"
"os"
@@ -237,7 +236,20 @@ func (cs *LuetCompiler) stripFromRootfs(includes []string, rootfs string, includ
return nil
}
func (cs *LuetCompiler) unpackFs(rootfs string, concurrency int, p CompilationSpec) (Artifact, error) {
func (cs *LuetCompiler) unpackFs(concurrency int, keepPermissions bool, p CompilationSpec, runnerOpts CompilerBackendOptions) (Artifact, error) {
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
err = cs.Backend.ExtractRootfs(CompilerBackendOptions{
ImageName: runnerOpts.ImageName, Destination: rootfs}, keepPermissions)
if err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
}
if p.GetPackageDir() != "" {
Info(":tophat: Packing from output dir", p.GetPackageDir())
rootfs = filepath.Join(rootfs, p.GetPackageDir())
@@ -248,7 +260,7 @@ func (cs *LuetCompiler) unpackFs(rootfs string, concurrency int, p CompilationSp
cs.stripFromRootfs(p.GetIncludes(), rootfs, true)
}
if len(p.GetExcludes()) > 0 {
// strip from includes
// strip from excludes
cs.stripFromRootfs(p.GetExcludes(), rootfs, false)
}
artifact := NewPackageArtifact(p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar"))
@@ -262,27 +274,33 @@ func (cs *LuetCompiler) unpackFs(rootfs string, concurrency int, p CompilationSp
return artifact, nil
}
func (cs *LuetCompiler) unpackDelta(rootfs string, concurrency int, keepPermissions bool, p CompilationSpec, builderOpts, runnerOpts CompilerBackendOptions) (Artifact, error) {
func (cs *LuetCompiler) unpackDelta(concurrency int, keepPermissions bool, p CompilationSpec, builderOpts, runnerOpts CompilerBackendOptions) (Artifact, error) {
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
pkgTag := ":package: " + p.GetPackage().HumanReadableString()
if cs.Options.PullFirst && !cs.Backend.ImageExists(builderOpts.ImageName) && cs.Backend.ImageAvailable(builderOpts.ImageName) {
err := cs.Backend.DownloadImage(builderOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not pull image")
}
} else if !cs.Backend.ImageExists(builderOpts.ImageName) {
return nil, errors.New("No image found for " + builderOpts.ImageName)
}
if err := cs.Backend.ExportImage(builderOpts); err != nil {
return nil, errors.Wrap(err, "Could not export image"+builderOpts.ImageName)
}
if !cs.Options.KeepImageExport {
defer os.Remove(builderOpts.Destination)
}
Info(pkgTag, ":hammer: Generating delta")
diffs, err := cs.Backend.Changes(builderOpts, runnerOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not generate changes from layers")
}
Debug("Extracting image to grab files from delta")
if err := cs.Backend.ExtractRootfs(CompilerBackendOptions{
ImageName: runnerOpts.ImageName, Destination: rootfs}, keepPermissions); err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
}
artifact, err := ExtractArtifactFromDelta(rootfs, p.Rel(p.GetPackage().GetFingerPrint()+".package.tar"), diffs, concurrency, keepPermissions, p.GetIncludes(), p.GetExcludes(), cs.CompressionType)
if err != nil {
return nil, errors.Wrap(err, "Could not generate deltas")
@@ -304,12 +322,10 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
// so the hash is unique also in cases where
// some package deps do have completely different
// depgraphs
// TODO: As the salt contains the packageImage ( in registry/organization/imagename:tag format)
// the images hashes are broken with registry mirrors.
// We should use the image tag, or pass by the package assertion hash which is unique
// TODO: We should use the image tag, or pass by the package assertion hash which is unique
// and identifies the deptree of the package.
fp := p.GetPackage().HashFingerprint(packageImage)
fp := p.GetPackage().HashFingerprint(helpers.StripRegistryFromImage(packageImage))
if buildertaggedImage == "" {
buildertaggedImage = cs.ImageRepository + ":builder-" + fp
@@ -427,64 +443,45 @@ func (cs *LuetCompiler) genArtifact(p CompilationSpec, builderOpts, runnerOpts C
var artifact Artifact
var rootfs string
var err error
unpack := p.ImageUnpack()
pkgTag := ":package: " + p.GetPackage().HumanReadableString()
// If package_dir was specified in the spec, we want to treat the content of the directory
// as the root of our archive. ImageUnpack is implied to be true. override it
if p.GetPackageDir() != "" {
unpack = true
}
if len(p.BuildSteps()) == 0 && len(p.GetPreBuildSteps()) == 0 && !unpack {
// We can't generate delta in this case. It implies the package is a virtual, and nothing has to be done really
if p.EmptyPackage() {
fakePackage := p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar")
// We can't generate delta in this case. It implies the package is a virtual, and nothing has to be done really
file, err := os.Create(fakePackage)
rootfs, err = ioutil.TempDir(p.GetOutputPath(), "rootfs")
if err != nil {
return nil, errors.Wrap(err, "Failed creating virtual package")
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer file.Close()
tw := tar.NewWriter(file)
defer tw.Close()
defer os.RemoveAll(rootfs) // clean up
artifact := NewPackageArtifact(fakePackage)
artifact.SetCompressionType(cs.CompressionType)
if err := artifact.Compress(rootfs, concurrency); err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
artifact.SetCompileSpec(p)
artifact.GetCompileSpec().GetPackage().SetBuildTimestamp(time.Now().String())
err = artifact.WriteYaml(p.GetOutputPath())
if err != nil {
return artifact, errors.Wrap(err, "Failed while writing metadata file")
}
Info(pkgTag, " :white_check_mark: done (empty virtual package)")
return artifact, nil
}
// prepare folder content of the image with the package compiled inside
if err := cs.Backend.ExportImage(runnerOpts); err != nil {
return nil, errors.Wrap(err, "Failed exporting image")
}
if !cs.Options.KeepImageExport {
defer os.Remove(runnerOpts.Destination)
}
rootfs, err = ioutil.TempDir(p.GetOutputPath(), "rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
// TODO: Compression and such
err = cs.Backend.ExtractRootfs(CompilerBackendOptions{
ImageName: runnerOpts.ImageName,
SourcePath: runnerOpts.Destination, Destination: rootfs}, keepPermissions)
if err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
}
if unpack {
if p.UnpackedPackage() {
// Take content of container as a base for our package files
artifact, err = cs.unpackFs(rootfs, concurrency, p)
artifact, err = cs.unpackFs(concurrency, keepPermissions, p, runnerOpts)
if err != nil {
return nil, errors.Wrap(err, "Error met while extracting image")
}
} else {
// Generate delta between the two images
artifact, err = cs.unpackDelta(rootfs, concurrency, keepPermissions, p, builderOpts, runnerOpts)
artifact, err = cs.unpackDelta(concurrency, keepPermissions, p, builderOpts, runnerOpts)
if err != nil {
return nil, errors.Wrap(err, "Error met while generating delta")
}
@@ -507,17 +504,37 @@ func (cs *LuetCompiler) genArtifact(p CompilationSpec, builderOpts, runnerOpts C
return artifact, nil
}
func (cs *LuetCompiler) waitForImage(image string) {
if cs.Options.PullFirst && cs.Options.Wait && !cs.Backend.ImageAvailable(image) {
Info(fmt.Sprintf("Waiting for image %s to be available... :zzz:", image))
Spinner(22)
defer SpinnerStop()
for !cs.Backend.ImageAvailable(image) {
Info(fmt.Sprintf("Image %s not available yet, sleeping", image))
time.Sleep(5 * time.Second)
}
}
}
func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage string,
concurrency int,
keepPermissions, keepImg bool,
p CompilationSpec, generateArtifact bool) (Artifact, error) {
// If it is a virtual, check if we have to generate an empty artifact or not.
if generateArtifact && p.IsVirtual() {
return cs.genArtifact(p, CompilerBackendOptions{}, CompilerBackendOptions{}, concurrency, keepPermissions)
} else if p.IsVirtual() {
return &PackageArtifact{}, nil
}
if !generateArtifact {
exists := cs.Backend.ImageExists(packageImage)
if art, err := LoadArtifactFromYaml(p); err == nil && exists { // If YAML is correctly loaded, and both images exists, no reason to rebuild.
Debug("Artifact reloaded from YAML. Skipping build")
return art, err
}
cs.waitForImage(packageImage)
if cs.Options.PullFirst && cs.Backend.ImageAvailable(packageImage) {
return &PackageArtifact{}, nil
}
@@ -636,10 +653,13 @@ func (cs *LuetCompiler) Compile(keepPermissions bool, p CompilationSpec) (Artifa
func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p CompilationSpec) (Artifact, error) {
Info(":package: Compiling", p.GetPackage().HumanReadableString(), ".... :coffee:")
if len(p.GetPackage().GetRequires()) == 0 && p.GetImage() == "" {
Error("Package with no deps and no seed image supplied, bailing out")
return nil, errors.New("Package " + p.GetPackage().GetFingerPrint() +
" with no deps and no seed image supplied, bailing out")
Debug(fmt.Sprintf("%s: has images %t, empty package: %t", p.GetPackage().HumanReadableString(), p.HasImageSource(), p.EmptyPackage()))
if !p.HasImageSource() && !p.EmptyPackage() {
return nil,
fmt.Errorf(
"%s is invalid: package has no dependencies and no seed image supplied while it has steps defined",
p.GetPackage().GetFingerPrint(),
)
}
targetAssertion := p.GetSourceAssertion().Search(p.GetPackage().GetFingerPrint())

View File

@@ -42,6 +42,7 @@ type CompilerBackendOptions struct {
SourcePath string
DockerFileName string
Destination string
Context string
}
type CompilerOptions struct {
@@ -49,8 +50,8 @@ type CompilerOptions struct {
PullFirst, KeepImg, Push bool
Concurrency int
CompressionType CompressionImplementation
KeepImageExport bool
Wait bool
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
@@ -109,9 +110,13 @@ type Artifact interface {
SetFiles(f []string)
GetFiles() []string
GetFileName() string
GetChecksums() Checksums
SetChecksums(c Checksums)
GenerateFinalImage(string, CompilerBackend, bool) (CompilerBackendOptions, error)
GetUncompressedName() string
}
type ArtifactNode struct {
@@ -177,6 +182,11 @@ type CompilationSpec interface {
SetPackageDir(string)
GetPackageDir() string
EmptyPackage() bool
UnpackedPackage() bool
HasImageSource() bool
IsVirtual() bool
}
type CompilationSpecs interface {

View File

@@ -114,12 +114,12 @@ func NewLuetCompilationSpec(b []byte, p pkg.Package) (CompilationSpec, error) {
spec.Package = p.(*pkg.DefaultPackage)
return &spec, nil
}
func (a *LuetCompilationSpec) GetSourceAssertion() solver.PackagesAssertions {
return a.SourceAssertion
func (cs *LuetCompilationSpec) GetSourceAssertion() solver.PackagesAssertions {
return cs.SourceAssertion
}
func (a *LuetCompilationSpec) SetSourceAssertion(as solver.PackagesAssertions) {
a.SourceAssertion = as
func (cs *LuetCompilationSpec) SetSourceAssertion(as solver.PackagesAssertions) {
cs.SourceAssertion = as
}
func (cs *LuetCompilationSpec) GetPackage() pkg.Package {
return cs.Package
@@ -157,6 +157,12 @@ func (cs *LuetCompilationSpec) GetRetrieve() []string {
return cs.Retrieve
}
// IsVirtual returns true if the spec is virtual.
// A spec is virtual if the package is empty, and it has no image source to unpack from.
func (cs *LuetCompilationSpec) IsVirtual() bool {
return cs.EmptyPackage() && !cs.HasImageSource()
}
func (cs *LuetCompilationSpec) GetSeedImage() string {
return cs.Seed
}
@@ -185,6 +191,27 @@ func (cs *LuetCompilationSpec) SetSeedImage(s string) {
cs.Seed = s
}
func (cs *LuetCompilationSpec) EmptyPackage() bool {
return len(cs.BuildSteps()) == 0 && len(cs.GetPreBuildSteps()) == 0 && !cs.UnpackedPackage()
}
func (cs *LuetCompilationSpec) UnpackedPackage() bool {
// If package_dir was specified in the spec, we want to treat the content of the directory
// as the root of our archive. ImageUnpack is implied to be true. override it
unpack := cs.ImageUnpack()
if cs.GetPackageDir() != "" {
unpack = true
}
return unpack
}
// HasImageSource returns true when the compilation spec has an image source.
// A compilation spec has an image source when it depends on other packages or has a source image
// explicitly supplied
func (cs *LuetCompilationSpec) HasImageSource() bool {
return (cs.Package != nil && len(cs.GetPackage().GetRequires()) != 0) || cs.GetImage() != ""
}
func (cs *LuetCompilationSpec) CopyRetrieves(dest string) error {
var err error
if len(cs.Retrieve) > 0 {

View File

@@ -51,6 +51,26 @@ var _ = Describe("Spec", func() {
Expect(newSpec2.All()).To(Equal([]CompilationSpec{testSpec3}))
})
Context("virtuals", func() {
When("is empty", func() {
It("is virtual", func() {
spec := &LuetCompilationSpec{}
Expect(spec.IsVirtual()).To(BeTrue())
})
})
When("has defined steps", func() {
It("is not a virtual", func() {
spec := &LuetCompilationSpec{Steps: []string{"foo"}}
Expect(spec.IsVirtual()).To(BeFalse())
})
})
When("has defined image", func() {
It("is not a virtual", func() {
spec := &LuetCompilationSpec{Image: "foo"}
Expect(spec.IsVirtual()).To(BeFalse())
})
})
})
})
Context("Simple package build definition", func() {

View File

@@ -28,6 +28,7 @@ import (
"time"
"github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
solver "github.com/mudler/luet/pkg/solver"
v "github.com/spf13/viper"
@@ -276,6 +277,16 @@ func GenDefault(viper *v.Viper) {
viper.SetDefault("solver.max_attempts", 9000)
}
func (c *LuetConfig) GetSystemDB() pkg.PackageDatabase {
switch LuetCfg.GetSystem().DatabaseEngine {
case "boltdb":
return pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
default:
return pkg.NewInMemoryDatabase(true)
}
}
func (c *LuetConfig) AddSystemRepository(r LuetRepository) {
c.SystemRepositories = append(c.SystemRepositories, r)
}

View File

@@ -16,14 +16,17 @@
package helpers
import (
"io"
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"syscall"
"time"
copy "github.com/otiai10/copy"
"github.com/pkg/errors"
)
func OrderFiles(target string, files []string) ([]string, []string) {
@@ -80,6 +83,20 @@ func ListDir(dir string) ([]string, error) {
return content, err
}
// DirectoryIsEmpty checks whether the directory is empty or not
func DirectoryIsEmpty(dir string) (bool, error) {
f, err := os.Open(dir)
if err != nil {
return false, err
}
defer f.Close()
if _, err = f.Readdirnames(1); err == io.EOF {
return true, nil
}
return false, nil
}
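
A short sketch of how this helper pairs with Touch for the empty-image workaround shown earlier (the rootfs path is an example):

	empty, err := helpers.DirectoryIsEmpty(rootfs)
	if err != nil {
		return err
	}
	if empty {
		// inject a placeholder so a FROM scratch image contains at least one file
		if err := helpers.Touch(filepath.Join(rootfs, ".virtual")); err != nil {
			return err
		}
	}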
// Touch creates an empty file
func Touch(f string) error {
_, err := os.Stat(f)
@@ -117,14 +134,15 @@ func Read(file string) (string, error) {
return string(dat), nil
}
func ensureDir(fileName string) {
func EnsureDir(fileName string) error {
dirName := filepath.Dir(fileName)
if _, serr := os.Stat(dirName); serr != nil {
merr := os.MkdirAll(dirName, os.ModePerm) // FIXME: It should preserve permissions from src to dst instead
if merr != nil {
panic(merr)
return merr
}
}
return nil
}
// CopyFile copies the contents of the file named src to the file named
@@ -133,7 +151,30 @@ func ensureDir(fileName string) {
// of the source file. The file mode will be copied from the source and
// the copied data is synced/flushed to stable storage.
func CopyFile(src, dst string) (err error) {
return copy.Copy(src, dst, copy.Options{OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
// Workaround for https://github.com/otiai10/copy/issues/47
fi, err := os.Lstat(src)
if err != nil {
return errors.Wrap(err, "error reading file info")
}
fm := fi.Mode()
switch {
case fm&os.ModeNamedPipe != 0:
EnsureDir(dst)
if err := syscall.Mkfifo(dst, uint32(fi.Mode())); err != nil {
return errors.Wrap(err, "failed creating pipe")
}
if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
if err := os.Chown(dst, int(stat.Uid), int(stat.Gid)); err != nil {
return errors.Wrap(err, "failed chowning file")
}
}
return nil
}
return copy.Copy(src, dst, copy.Options{
Sync: true,
OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
}
func IsDirectory(path string) (bool, error) {
@@ -150,5 +191,7 @@ func IsDirectory(path string) (bool, error) {
func CopyDir(src string, dst string) (err error) {
src = filepath.Clean(src)
dst = filepath.Clean(dst)
return copy.Copy(src, dst, copy.Options{OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
return copy.Copy(src, dst, copy.Options{
Sync: true,
OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
}

View File

@@ -33,6 +33,23 @@ var _ = Describe("Helpers", func() {
})
})
Context("DirectoryIsEmpty", func() {
It("Detects empty directory", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
Expect(DirectoryIsEmpty(testDir)).To(BeTrue())
})
It("Detects directory with files", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
err = Touch(filepath.Join(testDir, "foo"))
Expect(err).ToNot(HaveOccurred())
Expect(DirectoryIsEmpty(testDir)).To(BeFalse())
})
})
Context("Orders dir and files correctly", func() {
It("puts files first and folders at end", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")

pkg/helpers/references.go (new file, 30 lines)
View File

@@ -0,0 +1,30 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// David Cassany <dcassany@suse.com>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
import (
"github.com/asaskevich/govalidator"
"strings"
)
func StripRegistryFromImage(image string) string {
img := strings.SplitN(image, "/", 2)
if len(img) == 2 && govalidator.IsURL(img[0]) {
return img[1]
}
return image
}

View File

@@ -0,0 +1,44 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// David Cassany <dcassany@suse.com>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers_test
import (
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Helpers", func() {
Context("StripRegistryFromImage", func() {
It("Strips the domain name", func() {
out := StripRegistryFromImage("valid.domain.org/base/image:tag")
Expect(out).To(Equal("base/image:tag"))
})
It("Strips the domain name when port is included", func() {
out := StripRegistryFromImage("valid.domain.org:5000/base/image:tag")
Expect(out).To(Equal("base/image:tag"))
})
It("Does not strip the domain name", func() {
out := StripRegistryFromImage("not-a-domain/base/image:tag")
Expect(out).To(Equal("not-a-domain/base/image:tag"))
})
It("Does not strip the domain name on invalid domains", func() {
out := StripRegistryFromImage("-invaliddomain.org/base/image:tag")
Expect(out).To(Equal("-invaliddomain.org/base/image:tag"))
})
})
})

View File

@@ -0,0 +1,186 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package client
import (
"fmt"
"os"
"path"
"path/filepath"
"github.com/docker/go-units"
"github.com/pkg/errors"
imgworker "github.com/mudler/luet/pkg/installer/client/imgworker"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
)
type DockerClient struct {
RepoData RepoData
}
func NewDockerClient(r RepoData) *DockerClient {
return &DockerClient{RepoData: r}
}
func downloadAndExtractDockerImage(image, dest string) error {
temp, err := config.LuetCfg.GetSystem().TempDir("contentstore")
if err != nil {
return err
}
defer os.RemoveAll(temp)
Debug("Temporary directory", temp)
c, err := imgworker.New(temp)
if err != nil {
return errors.Wrapf(err, "failed creating client")
}
defer c.Close()
// FROM Slightly adapted from genuinetools/img https://github.com/genuinetools/img/blob/54d0ca981c1260546d43961a538550eef55c87cf/pull.go
Debug("Pulling image", image)
listedImage, err := c.Pull(image)
if err != nil {
return errors.Wrapf(err, "failed listing images")
}
Debug("Pulled:", listedImage.Target.Digest)
Debug("Size:", units.BytesSize(float64(listedImage.ContentSize)))
Debug("Unpacking", image, "to", dest)
os.RemoveAll(dest)
return c.Unpack(image, dest)
}
func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
//var u *url.URL = nil
var err error
var temp string
var resultingArtifact compiler.Artifact
artifactName := path.Base(artifact.GetPath())
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
Debug("Cache file", cacheFile)
if err := helpers.EnsureDir(cacheFile); err != nil {
return nil, errors.Wrapf(err, "could not create cache folder %s for %s", config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), cacheFile)
}
ok := false
// TODO:
// Files are in URI/packagename:version (GetPackageImageName() method)
// use downloadAndExtract... and generate an archive to consume. Checksums should already be checked while downloading the image
// with the above functions, because Docker images already contain such metadata
// - Check how verification is done when calling DownloadArtifact outside, similarly we need to check DownloadFile, and how verification
// is done in such cases (see repository.go)
// Check if file is already in cache
if helpers.Exists(cacheFile) {
Debug("Cache hit for artifact", artifactName)
resultingArtifact = artifact
resultingArtifact.SetPath(cacheFile)
resultingArtifact.SetChecksums(compiler.Checksums{})
} else {
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return nil, err
}
defer os.RemoveAll(temp)
for _, uri := range c.RepoData.Urls {
Debug("Downloading artifact", artifactName, "from", uri)
imageName := fmt.Sprintf("%s:%s", uri, artifact.GetCompileSpec().GetPackage().GetFingerPrint())
// imageName := fmt.Sprintf("%s/%s", uri, artifact.GetCompileSpec().GetPackage().GetPackageImageName())
err = downloadAndExtractDockerImage(imageName, temp)
if err != nil {
Debug("Failed download of image", imageName)
continue
}
Debug("\nCompressing result ", filepath.Join(temp), "to", cacheFile)
newart := artifact
// We discard checksums, which are already checked during pull and unpack
newart.SetChecksums(compiler.Checksums{})
newart.SetPath(cacheFile) // First set to cache file
newart.SetPath(newart.GetUncompressedName()) // Calculate the real path from cacheFile
err = newart.Compress(temp, 1)
if err != nil {
Error(fmt.Sprintf("Failed compressing package %s: %s", imageName, err.Error()))
continue
}
resultingArtifact = newart
ok = true
break
}
if !ok {
return nil, err
}
}
return resultingArtifact, nil
}
func (c *DockerClient) DownloadFile(name string) (string, error) {
var file *os.File = nil
var err error
var temp string
// Files should be in URI/repository:<file>
ok := false
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return "", err
}
for _, uri := range c.RepoData.Urls {
file, err = config.LuetCfg.GetSystem().TempFile("DockerClient")
if err != nil {
continue
}
Debug("Downloading file", name, "from", uri)
imageName := fmt.Sprintf("%s:%s", uri, name)
//imageName := fmt.Sprintf("%s/%s:%s", uri, "repository", name)
err = downloadAndExtractDockerImage(imageName, temp)
if err != nil {
Debug("Failed download of image", imageName)
continue
}
Debug("\nCopying file ", filepath.Join(temp, name), "to", file.Name())
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
if err != nil {
continue
}
ok = true
break
}
if !ok {
return "", err
}
return file.Name(), err
}

View File

@@ -0,0 +1,77 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package client_test
import (
"io/ioutil"
"os"
"path/filepath"
compiler "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/installer/client"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
// This test expects that the repository defined in UNIT_TEST_DOCKER_IMAGE is in zstd format.
// The repository is built by the 01_simple_docker.sh integration test file.
// This test also requires root. At the moment, unpacking docker images with 'img' requires root permission to
// mount/unmount layers.
var _ = Describe("Docker client", func() {
Context("With repository", func() {
repoImage := os.Getenv("UNIT_TEST_DOCKER_IMAGE")
var repoURL []string
var c *DockerClient
BeforeEach(func() {
if repoImage == "" {
Skip("UNIT_TEST_DOCKER_IMAGE not specified")
}
repoURL = []string{repoImage}
c = NewDockerClient(RepoData{Urls: repoURL})
})
It("Downloads single files", func() {
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("Test Repo"))
os.RemoveAll(f)
})
It("Downloads artifacts", func() {
f, err := c.DownloadArtifact(&compiler.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "c",
Category: "test",
Version: "1.0",
},
},
})
Expect(err).ToNot(HaveOccurred())
tmpdir, err := ioutil.TempDir("", "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(f.Unpack(tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Read(filepath.Join(tmpdir, "c"))).To(Equal("c\n"))
Expect(helpers.Read(filepath.Join(tmpdir, "cd"))).To(Equal("c\n"))
os.RemoveAll(f.GetPath())
})
})
})

View File

@@ -78,7 +78,7 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
// Check if file is already in cache
if helpers.Exists(cacheFile) {
Info("Use artifact", artifactName, "from cache.")
Debug("Use artifact", artifactName, "from cache.")
} else {
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
@@ -90,7 +90,7 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
client := grab.NewClient()
for _, uri := range c.RepoData.Urls {
Info("Downloading artifact", artifactName, "from", uri)
Debug("Downloading artifact", artifactName, "from", uri)
u, err = url.Parse(uri)
if err != nil {
@@ -202,7 +202,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
}
u.Path = path.Join(u.Path, name)
Info("Downloading", u.String())
Debug("Downloading", u.String())
req, err = c.PrepareReq(temp, u.String())
if err != nil {

View File

@@ -0,0 +1,78 @@
package imgworker
// Slightly adapted from the genuinetools/img worker
import (
"context"
"os"
"path/filepath"
"github.com/containerd/containerd/namespaces"
"github.com/genuinetools/img/types"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/util/appcontext"
"github.com/moby/buildkit/worker/base"
"github.com/pkg/errors"
)
// Client holds the information for the client we will use for communicating
// with the buildkit controller.
type Client struct {
backend string
localDirs map[string]string
root string
sessionManager *session.Manager
controller *control.Controller
opts *base.WorkerOpt
sess *session.Session
ctx context.Context
}
// New returns a new client for communicating with the buildkit controller.
func New(root string) (*Client, error) {
// Native backend is fine, our images have just one layer. No need to depend on anything
backend := types.NativeBackend
// Create the root directory.
root = filepath.Join(root, "runc", backend)
if err := os.MkdirAll(root, 0700); err != nil {
return nil, err
}
c := &Client{
backend: types.NativeBackend,
root: root,
localDirs: nil,
}
if err := c.prepare(); err != nil {
return nil, errors.Wrapf(err, "failed preparing client")
}
// Return the prepared client.
return c, nil
}
func (c *Client) Close() {
c.sess.Close()
}
func (c *Client) prepare() error {
ctx := appcontext.Context()
sess, sessDialer, err := c.Session(ctx)
if err != nil {
return errors.Wrapf(err, "failed creating Session")
}
ctx = session.NewContext(ctx, sess.ID())
ctx = namespaces.WithNamespace(ctx, "buildkit")
c.ctx = ctx
c.sess = sess
go func() {
sess.Run(ctx, sessDialer)
}()
return nil
}

View File

@@ -0,0 +1,129 @@
package imgworker
// Slightly adapted from the genuinetools/img worker
import (
"fmt"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/docker/distribution/reference"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/exporter"
imageexporter "github.com/moby/buildkit/exporter/containerimage"
"github.com/moby/buildkit/source"
"github.com/moby/buildkit/source/containerimage"
)
// ListedImage represents an image structure returned from ListImages.
// It extends containerd/images.Image with extra fields.
type ListedImage struct {
images.Image
ContentSize int64
}
// Pull retrieves an image from a remote registry.
func (c *Client) Pull(image string) (*ListedImage, error) {
ctx := c.ctx
sm, err := c.getSessionManager()
if err != nil {
return nil, err
}
// Parse the image name and tag.
named, err := reference.ParseNormalizedNamed(image)
if err != nil {
return nil, fmt.Errorf("parsing image name %q failed: %v", image, err)
}
// Add the latest tag if they did not provide one.
named = reference.TagNameOnly(named)
image = named.String()
// Get the identifier for the image.
identifier, err := source.NewImageIdentifier(image)
if err != nil {
return nil, err
}
// Create the worker opts.
opt, err := c.createWorkerOpt()
if err != nil {
return nil, fmt.Errorf("creating worker opt failed: %v", err)
}
cm, err := cache.NewManager(cache.ManagerOpt{
Snapshotter: opt.Snapshotter,
MetadataStore: opt.MetadataStore,
ContentStore: opt.ContentStore,
LeaseManager: opt.LeaseManager,
GarbageCollect: opt.GarbageCollect,
Applier: opt.Applier,
})
if err != nil {
return nil, err
}
// Create the source for the pull.
srcOpt := containerimage.SourceOpt{
Snapshotter: opt.Snapshotter,
ContentStore: opt.ContentStore,
Applier: opt.Applier,
CacheAccessor: cm,
ImageStore: opt.ImageStore,
RegistryHosts: opt.RegistryHosts,
LeaseManager: opt.LeaseManager,
}
src, err := containerimage.NewSource(srcOpt)
if err != nil {
return nil, err
}
s, err := src.Resolve(ctx, identifier, sm)
if err != nil {
return nil, err
}
ref, err := s.Snapshot(ctx)
if err != nil {
return nil, err
}
// Create the exporter for the pull.
iw, err := imageexporter.NewImageWriter(imageexporter.WriterOpt{
Snapshotter: opt.Snapshotter,
ContentStore: opt.ContentStore,
Differ: opt.Differ,
})
if err != nil {
return nil, err
}
expOpt := imageexporter.Opt{
SessionManager: sm,
ImageWriter: iw,
Images: opt.ImageStore,
RegistryHosts: opt.RegistryHosts,
LeaseManager: opt.LeaseManager,
}
exp, err := imageexporter.New(expOpt)
if err != nil {
return nil, err
}
e, err := exp.Resolve(ctx, map[string]string{"name": image})
if err != nil {
return nil, err
}
if _, err := e.Export(ctx, exporter.Source{Ref: ref}); err != nil {
return nil, err
}
// Get the image.
img, err := opt.ImageStore.Get(ctx, image)
if err != nil {
return nil, fmt.Errorf("getting image %s from image store failed: %v", image, err)
}
size, err := img.Size(ctx, opt.ContentStore, platforms.Default())
if err != nil {
return nil, fmt.Errorf("calculating size of image %s failed: %v", img.Name, err)
}
return &ListedImage{Image: img, ContentSize: size}, nil
}

View File

@@ -0,0 +1,51 @@
package imgworker
// Slightly adapted from the genuinetools/img worker
import (
"context"
"os"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/session/filesync"
"github.com/moby/buildkit/session/testutil"
"github.com/pkg/errors"
)
func (c *Client) getSessionManager() (*session.Manager, error) {
if c.sessionManager == nil {
var err error
c.sessionManager, err = session.NewManager()
if err != nil {
return nil, err
}
}
return c.sessionManager, nil
}
// Session creates the session manager and returns the session and its
// dialer.
func (c *Client) Session(ctx context.Context) (*session.Session, session.Dialer, error) {
m, err := c.getSessionManager()
if err != nil {
return nil, nil, errors.Wrap(err, "failed to create session manager")
}
sessionName := "img"
s, err := session.NewSession(ctx, sessionName, "")
if err != nil {
return nil, nil, errors.Wrap(err, "failed to create session")
}
syncedDirs := make([]filesync.SyncedDir, 0, len(c.localDirs))
for name, d := range c.localDirs {
syncedDirs = append(syncedDirs, filesync.SyncedDir{Name: name, Dir: d})
}
s.Allow(filesync.NewFSSyncProvider(syncedDirs))
s.Allow(authprovider.NewDockerAuthProvider(os.Stderr))
return s, sessionDialer(s, m), err
}
func sessionDialer(s *session.Session, m *session.Manager) session.Dialer {
// FIXME: rename testutil
return session.Dialer(testutil.TestStream(testutil.Handler(m.HandleConn)))
}

View File

@@ -0,0 +1,82 @@
package imgworker
// Slightly adapted from the genuinetools/img worker
import (
"errors"
"fmt"
"os"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/docker/distribution/reference"
"github.com/docker/docker/pkg/archive"
"github.com/sirupsen/logrus"
)
// TODO: this requires root permissions to mount/unmount layers, although it shouldn't be required.
// See how backends are unpacking images without asking for root permissions.
// Unpack exports an image to a rootfs destination directory.
func (c *Client) Unpack(image, dest string) error {
ctx := c.ctx
if len(dest) < 1 {
return errors.New("destination directory for rootfs cannot be empty")
}
if _, err := os.Stat(dest); err == nil {
return fmt.Errorf("destination directory already exists: %s", dest)
}
// Parse the image name and tag.
named, err := reference.ParseNormalizedNamed(image)
if err != nil {
return fmt.Errorf("parsing image name %q failed: %v", image, err)
}
// Add the latest tag if they did not provide one.
named = reference.TagNameOnly(named)
image = named.String()
// Create the worker opts.
opt, err := c.createWorkerOpt()
if err != nil {
return fmt.Errorf("creating worker opt failed: %v", err)
}
if opt.ImageStore == nil {
return errors.New("image store is nil")
}
img, err := opt.ImageStore.Get(ctx, image)
if err != nil {
return fmt.Errorf("getting image %s from image store failed: %v", image, err)
}
manifest, err := images.Manifest(ctx, opt.ContentStore, img.Target, platforms.Default())
if err != nil {
return fmt.Errorf("getting image manifest failed: %v", err)
}
for _, desc := range manifest.Layers {
logrus.Debugf("Unpacking layer %s", desc.Digest.String())
// Read the blob from the content store.
layer, err := opt.ContentStore.ReaderAt(ctx, desc)
if err != nil {
return fmt.Errorf("getting reader for digest %s failed: %v", desc.Digest.String(), err)
}
// Unpack the tarfile to the rootfs path.
// FROM: https://godoc.org/github.com/moby/moby/pkg/archive#TarOptions
if err := archive.Untar(content.NewReader(layer), dest, &archive.TarOptions{
NoLchown: true,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
}); err != nil {
return fmt.Errorf("extracting tar for %s to directory %s failed: %v", desc.Digest.String(), dest, err)
}
}
return nil
}
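Not part of the changeset: taken together, the imgworker files above form a small pull-and-unpack API. A minimal usage sketch, assuming the import path github.com/mudler/luet/pkg/installer/client/imgworker and an illustrative image reference:

package main

import (
	"log"

	// assumed import path for the worker shown above
	imgworker "github.com/mudler/luet/pkg/installer/client/imgworker"
)

func main() {
	// Create a client rooted in a scratch directory; unpacking layers currently requires root.
	c, err := imgworker.New("/var/tmp/imgworker")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	image := "quay.io/example/repo:latest" // illustrative reference

	// Pull fetches the image into the local content store.
	if _, err := c.Pull(image); err != nil {
		log.Fatal(err)
	}

	// Unpack extracts the layers to a rootfs directory that must not exist yet.
	if err := c.Unpack(image, "/var/tmp/rootfs"); err != nil {
		log.Fatal(err)
	}
}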

View File

@@ -0,0 +1,106 @@
package imgworker
// Slightly adapted from the genuinetools/img worker
import (
"context"
"fmt"
"path/filepath"
"github.com/containerd/containerd/content/local"
"github.com/containerd/containerd/diff/apply"
"github.com/containerd/containerd/diff/walking"
ctdmetadata "github.com/containerd/containerd/metadata"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/remotes/docker"
ctdsnapshot "github.com/containerd/containerd/snapshots"
"github.com/containerd/containerd/snapshots/native"
"github.com/moby/buildkit/cache/metadata"
containerdsnapshot "github.com/moby/buildkit/snapshot/containerd"
"github.com/moby/buildkit/util/binfmt_misc"
"github.com/moby/buildkit/util/leaseutil"
"github.com/moby/buildkit/worker/base"
specs "github.com/opencontainers/image-spec/specs-go/v1"
bolt "go.etcd.io/bbolt"
)
// createWorkerOpt creates a base.WorkerOpt to be used for a new worker.
func (c *Client) createWorkerOpt() (opt base.WorkerOpt, err error) {
if c.opts != nil {
return *c.opts, nil
}
// Create the metadata store.
md, err := metadata.NewStore(filepath.Join(c.root, "metadata.db"))
if err != nil {
return opt, err
}
snapshotRoot := filepath.Join(c.root, "snapshots")
s, err := native.NewSnapshotter(snapshotRoot)
if err != nil {
return opt, fmt.Errorf("creating %s snapshotter failed: %v", c.backend, err)
}
// Create the content store locally.
contentStore, err := local.NewStore(filepath.Join(c.root, "content"))
if err != nil {
return opt, err
}
// Open the bolt database for metadata.
db, err := bolt.Open(filepath.Join(c.root, "containerdmeta.db"), 0644, nil)
if err != nil {
return opt, err
}
// Create the new database for metadata.
mdb := ctdmetadata.NewDB(db, contentStore, map[string]ctdsnapshot.Snapshotter{
c.backend: s,
})
if err := mdb.Init(context.TODO()); err != nil {
return opt, err
}
// Create the image store.
imageStore := ctdmetadata.NewImageStore(mdb)
contentStore = containerdsnapshot.NewContentStore(mdb.ContentStore(), "buildkit")
id, err := base.ID(c.root)
if err != nil {
return opt, err
}
xlabels := base.Labels("oci", c.backend)
var supportedPlatforms []specs.Platform
for _, p := range binfmt_misc.SupportedPlatforms(false) {
parsed, err := platforms.Parse(p)
if err != nil {
return opt, err
}
supportedPlatforms = append(supportedPlatforms, platforms.Normalize(parsed))
}
opt = base.WorkerOpt{
ID: id,
Labels: xlabels,
MetadataStore: md,
Snapshotter: containerdsnapshot.NewSnapshotter(c.backend, mdb.Snapshotter(c.backend), "buildkit", nil),
ContentStore: contentStore,
Applier: apply.NewFileSystemApplier(contentStore),
Differ: walking.NewWalkingDiff(contentStore),
ImageStore: imageStore,
Platforms: supportedPlatforms,
RegistryHosts: docker.ConfigureDefaultRegistries(),
LeaseManager: leaseutil.WithNamespace(ctdmetadata.NewLeaseManager(mdb), "buildkit"),
GarbageCollect: mdb.GarbageCollect,
}
c.opts = &opt
return opt, err
}

View File

@@ -51,7 +51,7 @@ func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Art
// Check if file is already in cache
if helpers.Exists(cacheFile) {
Info("Use artifact", artifactName, "from cache.")
Debug("Use artifact", artifactName, "from cache.")
} else {
ok := false
for _, uri := range c.RepoData.Urls {

View File

@@ -175,7 +175,7 @@ func (l *LuetInstaller) Upgrade(s *System) error {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(syncedRepos, uninstall, toInstall, s)
return l.swap(syncedRepos, uninstall, toInstall, s, true)
} else {
return errors.New("Aborted by user")
}
@@ -183,7 +183,7 @@ func (l *LuetInstaller) Upgrade(s *System) error {
Spinner(32)
defer SpinnerStop()
return l.swap(syncedRepos, uninstall, toInstall, s)
return l.swap(syncedRepos, uninstall, toInstall, s, true)
}
func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
@@ -215,8 +215,6 @@ func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *S
}
toRemoveFinal := pkg.Packages{}
toInstallFinal := pkg.Packages{}
for _, p := range toRemove {
packs, _ := s.Database.FindPackages(p)
if len(packs) == 0 {
@@ -227,40 +225,46 @@ func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *S
}
}
match, _, _, _, err := l.computeInstall(syncedRepos, toInstall, s)
return l.swap(syncedRepos, toRemoveFinal, toInstall, s, false)
}
func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
toInstall = syncedRepos.ResolveSelectors(toInstall)
// First check what would have been done
installedtmp, err := s.Database.Copy()
if err != nil {
return err
}
for _, m := range match {
toInstallFinal = append(toInstallFinal, m.Package)
return nil, nil, nil, nil, errors.Wrap(err, "Failed create temporary in-memory db")
}
if len(toRemove) > 0 {
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toRemove)).BgBlack().String())
}
systemAfterChanges := &System{Database: installedtmp}
if len(toInstall) > 0 {
Info(":zap:Packages that are going to be installed in the system:\n ", Green(packsToList(toInstall)).BgBlack().String())
packs, err := l.computeUninstall(systemAfterChanges, toRemove...)
if err != nil && !l.Options.Force {
Error("Failed computing uninstall for ", packsToList(toRemove))
return nil, nil, nil, nil, errors.Wrap(err, "computing uninstall "+packsToList(toRemove))
}
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(syncedRepos, toRemoveFinal, toInstallFinal, s)
} else {
return errors.New("Aborted by user")
for _, p := range packs {
err = systemAfterChanges.Database.RemovePackage(p)
if err != nil {
return nil, nil, nil, nil, errors.Wrap(err, "Failed removing package from database")
}
}
return l.swap(syncedRepos, toRemoveFinal, toInstallFinal, s)
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, toInstall, systemAfterChanges)
for _, p := range toInstall {
assertions = append(assertions, solver.PackageAssert{Package: p.(*pkg.DefaultPackage), Value: true})
}
return match, packages, assertions, allRepos, err
}
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
toInstall = syncedRepos.ResolveSelectors(toInstall)
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System, forceNodeps bool) error {
forced := l.Options.Force
nodeps := l.Options.NoDeps
// We don't want any conflict with the installed to raise during the upgrade.
// In this way we both force uninstalls and we avoid to check with conflicts
@@ -269,51 +273,42 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, to
// if the old A results installed in the system. This is due to the fact that
// now the solver enforces the constraints and explictly denies two packages
// of the same version installed.
forced := l.Options.Force
nodeps := l.Options.NoDeps
l.Options.Force = true
l.Options.NoDeps = true
// First check what would have been done
installedtmp := pkg.NewInMemoryDatabase(false)
for _, i := range s.Database.World() {
_, err := installedtmp.CreatePackage(i)
if err != nil {
return errors.Wrap(err, "Failed create temporary in-memory db")
}
}
systemAfterChanges := &System{Database: installedtmp}
for _, u := range toRemove {
packs, err := l.computeUninstall(u, systemAfterChanges)
if err != nil && !l.Options.Force {
Error("Failed computing uninstall for ", u.HumanReadableString())
return errors.Wrap(err, "computing uninstall "+u.HumanReadableString())
}
for _, p := range packs {
err = systemAfterChanges.Database.RemovePackage(p)
if err != nil {
return errors.Wrap(err, "Failed removing package from database")
}
}
if forceNodeps {
l.Options.NoDeps = true
}
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, toInstall, systemAfterChanges)
match, packages, assertions, allRepos, err := l.computeSwap(syncedRepos, toRemove, toInstall, s)
if err != nil {
return errors.Wrap(err, "computing installation")
return errors.Wrap(err, "failed computing package replacement")
}
if l.Options.Ask {
if len(toRemove) > 0 {
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toRemove)).BgBlack().String())
}
if len(match) > 0 {
Info("Packages that are going to be installed in the system: \n ", Green(matchesToList(match)).BgBlack().String())
}
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
} else {
return errors.New("Aborted by user")
}
}
// First match packages against repositories by priority
if err := l.download(syncedRepos, match); err != nil {
return errors.Wrap(err, "Pre-downloading packages")
}
for _, u := range toRemove {
err := l.Uninstall(u, s)
if err != nil && !l.Options.Force {
Error("Failed uninstall for ", u.HumanReadableString())
return errors.Wrap(err, "uninstalling "+u.HumanReadableString())
}
err = l.Uninstall(s, toRemove...)
if err != nil && !l.Options.Force {
Error("Failed uninstall for ", packsToList(toRemove))
return errors.Wrap(err, "uninstalling "+packsToList(toRemove))
}
l.Options.Force = forced
@@ -358,7 +353,6 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
}
}
}
Info("Packages that are going to be installed in the system: \n ", Green(matchesToList(match)).BgBlack().String())
if l.Options.Ask {
@@ -597,7 +591,7 @@ func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]A
}
}
return s.ExecuteFinalizers(toFinalize, l.Options.Force)
return s.ExecuteFinalizers(toFinalize)
}
func (l *LuetInstaller) downloadPackage(a ArtifactMatch) (compiler.Artifact, error) {
@@ -765,7 +759,7 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
return nil
}
func (l *LuetInstaller) computeUninstall(p pkg.Package, s *System) (pkg.Packages, error) {
func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.Packages, error) {
var toUninstall pkg.Packages
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) TODO - mark the uninstallation in db
@@ -779,13 +773,10 @@ func (l *LuetInstaller) computeUninstall(p pkg.Package, s *System) (pkg.Packages
// Create a temporary DB with the installed packages
// so the solver is much faster finding the deptree
installedtmp := pkg.NewInMemoryDatabase(false)
for _, i := range s.Database.World() {
_, err := installedtmp.CreatePackage(i)
if err != nil {
return toUninstall, errors.Wrap(err, "Failed create temporary in-memory db")
}
// First check what would have been done
installedtmp, err := s.Database.Copy()
if err != nil {
return toUninstall, errors.Wrap(err, "Failed create temporary in-memory db")
}
if !l.Options.NoDeps {
@@ -793,12 +784,12 @@ func (l *LuetInstaller) computeUninstall(p pkg.Package, s *System) (pkg.Packages
var solution pkg.Packages
var err error
if l.Options.FullCleanUninstall {
solution, err = solv.UninstallUniverse(pkg.Packages{p})
solution, err = solv.UninstallUniverse(packs)
if err != nil {
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
} else {
solution, err = solv.Uninstall(checkConflicts, full, p)
solution, err = solv.Uninstall(checkConflicts, full, packs...)
if err != nil && !l.Options.Force {
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
@@ -808,19 +799,22 @@ func (l *LuetInstaller) computeUninstall(p pkg.Package, s *System) (pkg.Packages
toUninstall = append(toUninstall, p)
}
} else {
toUninstall = append(toUninstall, p)
toUninstall = append(toUninstall, packs...)
}
return toUninstall, nil
}
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
if packs, _ := s.Database.FindPackages(p); len(packs) == 0 {
return errors.New("Package not found in the system")
func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
for _, p := range packs {
if packs, _ := s.Database.FindPackages(p); len(packs) == 0 {
return errors.New("Package not found in the system")
}
}
Spinner(32)
toUninstall, err := l.computeUninstall(p, s)
toUninstall, err := l.computeUninstall(s, packs...)
if err != nil {
return errors.Wrap(err, "while computing uninstall")
}
@@ -841,10 +835,8 @@ func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
return nil
}
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toUninstall)).BgBlack().String())
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toUninstall)).BgBlack().String())
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return uninstall()
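Not part of the diff: with the rework above, both Upgrade and the public Swap entrypoint funnel into swap/computeSwap, which copy the system database, simulate the removals on the copy, and only then compute the installation. A hedged call-site sketch, using the Swap signature shown in the hunk headers above and assuming inst and system are set up as in the installer tests:

// Replace an installed package with a newer version in one transaction.
err := inst.Swap(
	pkg.Packages{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}},
	pkg.Packages{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"}},
	system,
)
if err != nil {
	// handle error
}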

View File

@@ -24,6 +24,7 @@ import (
compiler "github.com/mudler/luet/pkg/compiler"
backend "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/installer"
solver "github.com/mudler/luet/pkg/solver"
. "github.com/mudler/luet/pkg/installer"
@@ -33,6 +34,18 @@ import (
. "github.com/onsi/gomega"
)
func stubRepo(tmpdir, tree string) (installer.Repository, error) {
return GenerateRepository(
"test",
"description",
"disk",
[]string{tmpdir},
1,
tmpdir,
[]string{tree},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
}
var _ = Describe("Installer", func() {
Context("Writes a repository definition", func() {
It("Writes a repo and can install packages from it", func() {
@@ -84,13 +97,13 @@ var _ = Describe("Installer", func() {
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -134,7 +147,7 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(p.GetName()).To(Equal("b"))
err = inst.Uninstall(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}, system)
err = inst.Uninstall(system, &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
@@ -200,7 +213,9 @@ urls:
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
treeFile := NewDefaultTreeRepositoryFile()
treeFile.SetCompressionType(compiler.None)
repo.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
@@ -209,7 +224,7 @@ urls:
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -253,7 +268,7 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(p.GetName()).To(Equal("b"))
err = inst.Uninstall(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}, system)
err = inst.Uninstall(system, &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
@@ -318,13 +333,20 @@ urls:
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository(
"test",
"description",
"disk",
[]string{tmpdir}, 1,
tmpdir,
[]string{"../../tests/fixtures/buildable"},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -373,7 +395,7 @@ urls:
Expect(files).To(Equal([]string{"artifact42", "test5", "test6"}))
Expect(err).ToNot(HaveOccurred())
err = inst.Uninstall(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}, system)
err = inst.Uninstall(system, &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
@@ -436,13 +458,21 @@ urls:
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository(
"test",
"description",
"disk",
[]string{tmpdir},
1,
tmpdir,
[]string{"../../tests/fixtures/buildable"},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -502,9 +532,9 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, []string{"../../tests/fixtures/alpine"}, pkg.NewInMemoryDatabase(false))
repo, err = stubRepo(tmpdir2, "../../tests/fixtures/alpine")
Expect(err).ToNot(HaveOccurred())
err = repo.Write(tmpdir2, false)
err = repo.Write(tmpdir2, false, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err = ioutil.TempDir("", "fakeroot")
@@ -571,13 +601,13 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -697,18 +727,18 @@ urls:
_, errs = c2.CompileParallel(false, compiler.NewLuetCompilationspecs(spec2))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade_old_repo"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
repoupgrade, err := GenerateRepository("test", "description", "disk", []string{tmpdirnewrepo}, 1, tmpdirnewrepo, []string{"../../tests/fixtures/upgrade_new_repo"}, pkg.NewInMemoryDatabase(false))
repoupgrade, err := stubRepo(tmpdirnewrepo, "../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
err = repoupgrade.Write(tmpdirnewrepo, false)
err = repoupgrade.Write(tmpdirnewrepo, false, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err := ioutil.TempDir("", "fakeroot")
@@ -820,13 +850,13 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
@@ -899,6 +929,47 @@ urls:
})
Context("Uninstallation", func() {
It("fails if package is required by others which are installed", func() {
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1, CheckConflicts: true})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("calamares", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("kpmcore", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
Z := pkg.NewPackage("chromium", "", []*pkg.DefaultPackage{A}, []*pkg.DefaultPackage{})
F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{Z, B}, []*pkg.DefaultPackage{})
Z.SetVersion("86.0.4240.193+2")
Z.SetCategory("www-client")
B.SetVersion("3.2.32.1+5")
B.SetCategory("app-admin")
C.SetVersion("4.2.0+2")
C.SetCategory("sys-libs-5")
D.SetVersion("5.19.5+9")
D.SetCategory("layers")
for _, p := range []pkg.Package{A, B, C, D, Z, F} {
_, err := systemDB.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
err = inst.Uninstall(system, D)
Expect(err).To(HaveOccurred())
})
})
Context("Existing files", func() {
It("Reclaims them", func() {
//repo:=NewLuetSystemRepository()
@@ -937,13 +1008,13 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
@@ -1039,13 +1110,13 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade_old_repo"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
@@ -1125,10 +1196,10 @@ urls:
Expect(errs).To(BeEmpty())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, []string{"../../tests/fixtures/upgrade_new_repo"}, pkg.NewInMemoryDatabase(false))
repo, err = stubRepo(tmpdir2, "../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
err = repo.Write(tmpdir2, false)
err = repo.Write(tmpdir2, false, false)
Expect(err).ToNot(HaveOccurred())
inst = NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})

View File

@@ -24,7 +24,7 @@ import (
type Installer interface {
Install(pkg.Packages, *System) error
Uninstall(pkg.Package, *System) error
Uninstall(*System, ...pkg.Package) error
Upgrade(s *System) error
Reclaim(s *System) error
@@ -51,7 +51,7 @@ type Repository interface {
SetIndex(i compiler.ArtifactIndex)
GetTree() tree.Builder
SetTree(tree.Builder)
Write(path string, resetRevision bool) error
Write(path string, resetRevision, force bool) error
Sync(bool) (Repository, error)
GetTreePath() string
SetTreePath(string)
@@ -72,4 +72,6 @@ type Repository interface {
SetRepositoryFile(string, LuetRepositoryFile)
SetName(p string)
Serialize() (*LuetSystemRepositoryMetadata, LuetSystemRepositorySerialized)
GetBackend() compiler.CompilerBackend
SetBackend(b compiler.CompilerBackend)
}
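Not part of the diff: a hedged sketch of the updated call shapes, assuming inst, system and repo are set up as in the tests in this changeset. Uninstall now takes the system first and accepts any number of packages, and Write gained a third force argument:

// Remove several packages in one call with the new variadic signature.
err := inst.Uninstall(system,
	&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"},
	&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"},
)
if err != nil {
	// handle error
}

// Write the repository metadata; the third argument is the new force flag.
if err := repo.Write("/srv/repo", false, false); err != nil {
	// handle error
}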

View File

@@ -46,6 +46,10 @@ const (
REPOFILE_TREE_KEY = "tree"
REPOFILE_META_KEY = "meta"
DiskRepositoryType = "disk"
HttpRepositoryType = "http"
DockerRepositoryType = "docker"
)
type LuetRepositoryFile struct {
@@ -60,6 +64,9 @@ type LuetSystemRepository struct {
Index compiler.ArtifactIndex `json:"index"`
Tree tree.Builder `json:"-"`
RepositoryFiles map[string]LuetRepositoryFile `json:"repo_files"`
Backend compiler.CompilerBackend `json:"-"`
PushImages bool `json:"-"`
ForcePush bool `json:"-"`
}
type LuetSystemRepositorySerialized struct {
@@ -157,27 +164,54 @@ func NewDefaultMetaRepositoryFile() LuetRepositoryFile {
}
}
// SetFileName sets the name of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this sets the filename that the client will pull
func (f *LuetRepositoryFile) SetFileName(n string) {
f.FileName = n
}
// GetFileName returns the name of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this gets the filename that the client will pull
func (f *LuetRepositoryFile) GetFileName() string {
return f.FileName
}
// SetCompressionType sets the compression type of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this sets the compression type that the client will use to uncompress the artifact
func (f *LuetRepositoryFile) SetCompressionType(c compiler.CompressionImplementation) {
f.CompressionType = c
}
// GetCompressionType gets the compression type of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this gets the compression type that the client will use to uncompress the artifact
func (f *LuetRepositoryFile) GetCompressionType() compiler.CompressionImplementation {
return f.CompressionType
}
// SetChecksums sets the checksums of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this sets the checksums that the client will use to verify the artifact
func (f *LuetRepositoryFile) SetChecksums(c compiler.Checksums) {
f.Checksums = c
}
// GetChecksums gets the checksums of the repository file.
// Each repository can ship arbitrary files that will be downloaded by the client
// when needed; this gets the checksums that the client will use to verify the artifact
func (f *LuetRepositoryFile) GetChecksums() compiler.Checksums {
return f.Checksums
}
func GenerateRepository(name, descr, t string, urls []string, priority int, src string, treesDir []string, db pkg.PackageDatabase) (Repository, error) {
// GenerateRepository generates a new repository from the given arguments.
// If the repository is of the docker type, it will also push the package images.
// If the repository is local, it builds the package index.
func GenerateRepository(name, descr, t string, urls []string,
priority int, src string, treesDir []string, db pkg.PackageDatabase,
b compiler.CompilerBackend, imagePrefix string, pushImages, force bool) (Repository, error) {
tr := tree.NewInstallerRecipe(db)
@@ -188,14 +222,31 @@ func GenerateRepository(name, descr, t string, urls []string, priority int, src
}
}
art, err := buildPackageIndex(src, tr.GetDatabase())
if err != nil {
return nil, err
// if !strings.HasSuffix(imagePrefix, "/") {
// imagePrefix = imagePrefix + "/"
// }
var art []compiler.Artifact
var err error
switch t {
case DiskRepositoryType, HttpRepositoryType:
art, err = buildPackageIndex(src, tr.GetDatabase())
if err != nil {
return nil, err
}
case DockerRepositoryType:
art, err = generatePackageImages(b, imagePrefix, src, tr.GetDatabase(), pushImages, force)
if err != nil {
return nil, err
}
}
return NewLuetSystemRepository(
repo := NewLuetSystemRepository(
config.NewLuetRepository(name, t, descr, urls, priority, true, false),
art, tr), nil
art, tr, pushImages, force)
repo.SetBackend(b)
return repo, nil
}
func NewSystemRepository(repo config.LuetRepository) Repository {
@@ -205,12 +256,14 @@ func NewSystemRepository(repo config.LuetRepository) Repository {
}
}
func NewLuetSystemRepository(repo *config.LuetRepository, art []compiler.Artifact, builder tree.Builder) Repository {
func NewLuetSystemRepository(repo *config.LuetRepository, art []compiler.Artifact, builder tree.Builder, pushImages, force bool) Repository {
return &LuetSystemRepository{
LuetRepository: repo,
Index: art,
Tree: builder,
RepositoryFiles: map[string]LuetRepositoryFile{},
PushImages: pushImages,
ForcePush: force,
}
}
@@ -244,6 +297,71 @@ func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repos
return r, err
}
func pushImage(b compiler.CompilerBackend, image string, force bool) error {
if b.ImageAvailable(image) && !force {
Debug("Image", image, "already present, skipping")
return nil
}
return b.Push(compiler.CompilerBackendOptions{ImageName: image})
}
func generatePackageImages(b compiler.CompilerBackend, imagePrefix, path string, db pkg.PackageDatabase, imagePush, force bool) ([]compiler.Artifact, error) {
Info("Generating docker images for packages in", imagePrefix)
var art []compiler.Artifact
var ff = func(currentpath string, info os.FileInfo, err error) error {
if !strings.HasSuffix(info.Name(), ".metadata.yaml") {
return nil // Skip with no errors
}
dat, err := ioutil.ReadFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
artifact, err := compiler.NewPackageArtifactFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// We want to include packages that are ONLY referenced in the tree.
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(artifact.GetCompileSpec().GetPackage()); notfound != nil {
Debug(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
artifact.GetCompileSpec().GetPackage().HumanReadableString()))
return nil
}
packageImage := fmt.Sprintf("%s:%s", imagePrefix, artifact.GetCompileSpec().GetPackage().GetFingerPrint())
if imagePush && b.ImageAvailable(packageImage) && !force {
Info("Image", packageImage, "already present, skipping. use --force-push to override")
} else {
Info("Generating final image", packageImage,
"for package ", artifact.GetCompileSpec().GetPackage().HumanReadableString())
if opts, err := artifact.GenerateFinalImage(packageImage, b, true); err != nil {
return errors.Wrap(err, "Failed generating metadata tree"+opts.ImageName)
}
}
if imagePush {
if err := pushImage(b, packageImage, force); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", packageImage)
}
}
art = append(art, artifact)
return nil
}
err := filepath.Walk(path, ff)
if err != nil {
return nil, err
}
return art, nil
}
func buildPackageIndex(path string, db pkg.PackageDatabase) ([]compiler.Artifact, error) {
var art []compiler.Artifact
@@ -266,7 +384,7 @@ func buildPackageIndex(path string, db pkg.PackageDatabase) ([]compiler.Artifact
// We want to include packages that are ONLY referenced in the tree.
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(artifact.GetCompileSpec().GetPackage()); notfound != nil {
Info(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
Debug(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
artifact.GetCompileSpec().GetPackage().HumanReadableString()))
return nil
}
@@ -306,6 +424,13 @@ func (r *LuetSystemRepository) SetType(p string) {
r.LuetRepository.Type = p
}
func (r *LuetSystemRepository) GetBackend() compiler.CompilerBackend {
return r.Backend
}
func (r *LuetSystemRepository) SetBackend(b compiler.CompilerBackend) {
r.Backend = b
}
func (r *LuetSystemRepository) SetName(p string) {
r.LuetRepository.Name = p
}
@@ -400,7 +525,7 @@ func (r *LuetSystemRepository) ReadSpecFile(file string, removeFile bool) (Repos
return repo, err
}
func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
func (r *LuetSystemRepository) genLocalRepo(dst string, resetRevision bool) error {
err := os.MkdirAll(dst, os.ModePerm)
if err != nil {
return err
@@ -522,24 +647,240 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
Repo: *r,
Path: dst,
})
return nil
}
func (r *LuetSystemRepository) genDockerRepo(imagePrefix string, resetRevision, force bool) error {
// - Iterate over the metadata, build the final images, and push them if necessary
// - while pushing, check if the image already exists, and if it does, push it only when --force is supplied
// - Generate final images for metadata and push
imageRepository := fmt.Sprintf("%s:%s", imagePrefix, REPOSITORY_SPECFILE)
r.LastUpdate = strconv.FormatInt(time.Now().Unix(), 10)
repoTemp, err := config.LuetCfg.GetSystem().TempDir("repo")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for repository")
}
defer os.RemoveAll(repoTemp) // clean up
if r.GetBackend().ImageAvailable(imageRepository) {
if err := r.GetBackend().DownloadImage(compiler.CompilerBackendOptions{ImageName: imageRepository}); err != nil {
return errors.Wrapf(err, "while downloading '%s'", imageRepository)
}
if err := r.GetBackend().ExtractRootfs(compiler.CompilerBackendOptions{ImageName: imageRepository, Destination: repoTemp}, false); err != nil {
return errors.Wrapf(err, "while extracting '%s'", imageRepository)
}
}
repospec := filepath.Join(repoTemp, REPOSITORY_SPECFILE)
if resetRevision {
r.Revision = 0
} else {
if _, err := os.Stat(repospec); !os.IsNotExist(err) {
// Read existing file for retrieve revision
spec, err := r.ReadSpecFile(repospec, false)
if err != nil {
return err
}
r.Revision = spec.GetRevision()
}
}
r.Revision++
Info(fmt.Sprintf(
"For repository %s creating revision %d and last update %s...",
r.Name, r.Revision, r.LastUpdate,
))
bus.Manager.Publish(bus.EventRepositoryPreBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: imageRepository,
})
// Create tree and repository file
archive, err := config.LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for archive")
}
defer os.RemoveAll(archive) // clean up
err = r.GetTree().Save(archive)
if err != nil {
return errors.Wrap(err, "Error met while saving the tree")
}
treeFile, err := r.GetRepositoryFile(REPOFILE_TREE_KEY)
if err != nil {
treeFile = NewDefaultTreeRepositoryFile()
r.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
}
a := compiler.NewPackageArtifact(filepath.Join(repoTemp, treeFile.GetFileName()))
a.SetCompressionType(treeFile.GetCompressionType())
err = a.Compress(archive, 1)
if err != nil {
return errors.Wrap(err, "Error met while creating package archive")
}
// Update the tree name with the name created by compression selected.
treeFile.SetFileName(a.GetFileName())
err = a.Hash()
if err != nil {
return errors.Wrap(err, "Failed generating checksums for tree")
}
treeFile.SetChecksums(a.GetChecksums())
r.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
treeArchive, err := compiler.CreateArtifactForFile(a.GetPath())
if err != nil {
return errors.Wrap(err, "Failed generating checksums for tree")
}
imageTree := fmt.Sprintf("%s:%s", imagePrefix, a.GetFileName())
Debug("Generating image", imageTree)
if opts, err := treeArchive.GenerateFinalImage(imageTree, r.GetBackend(), false); err != nil {
return errors.Wrap(err, "Failed generating metadata tree "+opts.ImageName)
}
if r.PushImages {
if err := pushImage(r.GetBackend(), imageTree, true); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", imageTree)
}
}
// Create Metadata struct and serialized repository
meta, serialized := r.Serialize()
// Create metadata file and repository file
metaTmpDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for metadata")
}
defer os.RemoveAll(metaTmpDir) // clean up
metaFile, err := r.GetRepositoryFile(REPOFILE_META_KEY)
if err != nil {
metaFile = NewDefaultMetaRepositoryFile()
r.SetRepositoryFile(REPOFILE_META_KEY, metaFile)
}
repoMetaSpec := filepath.Join(metaTmpDir, REPOSITORY_METAFILE)
// Create repository.meta.yaml file
err = meta.WriteFile(repoMetaSpec)
if err != nil {
return err
}
// create temp dir for metafile
metaDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for metadata")
}
defer os.RemoveAll(metaDir) // clean up
a = compiler.NewPackageArtifact(filepath.Join(metaDir, metaFile.GetFileName()))
a.SetCompressionType(metaFile.GetCompressionType())
err = a.Compress(metaTmpDir, 1)
if err != nil {
return errors.Wrap(err, "Error met while archiving repository metadata")
}
metaFile.SetFileName(a.GetFileName())
r.SetRepositoryFile(REPOFILE_META_KEY, metaFile)
err = a.Hash()
if err != nil {
return errors.Wrap(err, "Failed generating checksums for metadata")
}
metaFile.SetChecksums(a.GetChecksums())
// Files are downloaded as-is from docker images
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
metaArchive, err := compiler.CreateArtifactForFile(a.GetPath())
if err != nil {
return errors.Wrap(err, "Failed generating checksums for tree")
}
imageMetaTree := fmt.Sprintf("%s:%s", imagePrefix, a.GetFileName())
if opts, err := metaArchive.GenerateFinalImage(imageMetaTree, r.GetBackend(), false); err != nil {
return errors.Wrap(err, "Failed generating metadata tree"+opts.ImageName)
}
if r.PushImages {
if err := pushImage(r.GetBackend(), imageMetaTree, true); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", imageMetaTree)
}
}
data, err := yaml.Marshal(serialized)
if err != nil {
return err
}
err = ioutil.WriteFile(repospec, data, os.ModePerm)
if err != nil {
return err
}
tempRepoFile := filepath.Join(metaDir, REPOSITORY_SPECFILE+".tar")
if err := helpers.Tar(repospec, tempRepoFile); err != nil {
return errors.Wrap(err, "Error met while archiving repository file")
}
a = compiler.NewPackageArtifact(tempRepoFile)
imageRepo := fmt.Sprintf("%s:%s", imagePrefix, REPOSITORY_SPECFILE)
if opts, err := a.GenerateFinalImage(imageRepo, r.GetBackend(), false); err != nil {
return errors.Wrap(err, "Failed generating repository image"+opts.ImageName)
}
if r.PushImages {
if err := pushImage(r.GetBackend(), imageRepo, true); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", imageRepo)
}
}
bus.Manager.Publish(bus.EventRepositoryPostBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: imagePrefix,
})
return nil
}
// Write writes the repository metadata to the supplied destination
func (r *LuetSystemRepository) Write(dst string, resetRevision, force bool) error {
switch r.GetType() {
case DiskRepositoryType, HttpRepositoryType:
return r.genLocalRepo(dst, resetRevision)
case DockerRepositoryType:
return r.genDockerRepo(dst, resetRevision, force)
}
return errors.New("invalid repository type")
}
func (r *LuetSystemRepository) Client() Client {
switch r.GetType() {
case "disk":
case DiskRepositoryType:
return client.NewLocalClient(client.RepoData{Urls: r.GetUrls()})
case "http":
case HttpRepositoryType:
return client.NewHttpClient(
client.RepoData{
Urls: r.GetUrls(),
Authentication: r.GetAuthentication(),
})
}
case DockerRepositoryType:
return client.NewDockerClient(
client.RepoData{
Urls: r.GetUrls(),
Authentication: r.GetAuthentication(),
})
}
return nil
}
func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
var repoUpdated bool = false
var treefs, metafs string
@@ -548,7 +889,7 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
Debug("Sync of the repository", r.Name, "in progress...")
c := r.Client()
if c == nil {
return nil, errors.New("No client could be generated from repository.")
return nil, errors.New("no client could be generated from repository")
}
// Retrieve remote repository.yaml for retrieve revision and date
@@ -562,7 +903,7 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
if err != nil {
return nil, err
}
// Remove temporary file that contains repository.html.
// Remove temporary file that contains repository.yaml
// Example: /tmp/HttpClient236052003
defer os.RemoveAll(file)
@@ -592,15 +933,10 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
// Note: if we always remove them, later on no other structure can access
// the tree, e.g. to retrieve finalizers
metafs, err = config.LuetCfg.GetSystem().TempDir("metafs")
if err != nil {
return nil, errors.Wrap(err, "Error met whilte creating tempdir for metafs")
}
//defer os.RemoveAll(metafs)
}
// POST: treeFile and metaFile are present. I check this inside
@@ -611,17 +947,18 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
if !repoUpdated {
// Get Tree
a := compiler.NewPackageArtifact(treeFile.GetFileName())
artifactTree, err := c.DownloadArtifact(a)
downloadedTreeFile, err := c.DownloadFile(treeFile.GetFileName())
if err != nil {
return nil, errors.Wrap(err, "While downloading "+treeFile.GetFileName())
}
defer os.Remove(artifactTree.GetPath())
defer os.Remove(downloadedTreeFile)
artifactTree.SetChecksums(treeFile.GetChecksums())
artifactTree.SetCompressionType(treeFile.GetCompressionType())
// Treat the file as artifact, in order to verify it
treeFileArtifact := compiler.NewPackageArtifact(downloadedTreeFile)
treeFileArtifact.SetChecksums(treeFile.GetChecksums())
treeFileArtifact.SetCompressionType(treeFile.GetCompressionType())
err = artifactTree.Verify()
err = treeFileArtifact.Verify()
if err != nil {
return nil, errors.Wrap(err, "Tree integrity check failure")
}
@@ -629,17 +966,17 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
Debug("Tree tarball for the repository " + r.GetName() + " downloaded correctly.")
// Get Repository Metadata
a = compiler.NewPackageArtifact(metaFile.GetFileName())
artifactMeta, err := c.DownloadArtifact(a)
downloadedMeta, err := c.DownloadFile(metaFile.GetFileName())
if err != nil {
return nil, errors.Wrap(err, "While downloading "+metaFile.GetFileName())
}
defer os.Remove(artifactMeta.GetPath())
defer os.Remove(downloadedMeta)
artifactMeta.SetChecksums(metaFile.GetChecksums())
artifactMeta.SetCompressionType(metaFile.GetCompressionType())
metaFileArtifact := compiler.NewPackageArtifact(downloadedMeta)
metaFileArtifact.SetChecksums(metaFile.GetChecksums())
metaFileArtifact.SetCompressionType(metaFile.GetCompressionType())
err = artifactMeta.Verify()
err = metaFileArtifact.Verify()
if err != nil {
return nil, errors.Wrap(err, "Metadata integrity check failure")
}
@@ -659,7 +996,7 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
}
Debug("Decompress tree of the repository " + r.Name + "...")
err = artifactTree.Unpack(treefs, true)
err = treeFileArtifact.Unpack(treefs, true)
if err != nil {
return nil, errors.Wrap(err, "Error met while unpacking tree")
}
@@ -667,7 +1004,7 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
// FIXME: It seems that tar with only one file doesn't create the destination
// directory. Create the directory directly for now.
os.MkdirAll(metafs, os.ModePerm)
err = artifactMeta.Unpack(metafs, true)
err = metaFileArtifact.Unpack(metafs, true)
if err != nil {
return nil, errors.Wrap(err, "Error met while unpacking metadata")
}
@@ -829,13 +1166,11 @@ PACKAGE:
c, err := r.GetTree().GetDatabase().FindPackageCandidate(pack)
// If FindPackageCandidate returns the same package, it means it couldn't find one.
// Skip this repository and keep looking.
if c.String() == pack.String() {
if err != nil { //c.String() == pack.String() {
continue REPOSITORY
}
if err == nil {
matches = append(matches, c)
continue PACKAGE
}
matches = append(matches, c)
continue PACKAGE
} else {
// If it's not a selector, just append it
matches = append(matches, pack)
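Not part of the diff: the changes above make Write dispatch to genLocalRepo for disk/http repositories and to genDockerRepo for docker ones, and extend GenerateRepository to take a backend, an image prefix and push flags. A minimal sketch of building a docker-backed repository, with illustrative paths and image names (the test helper dockerStubRepo in the next file does the same):

packagesDir := "/srv/packages" // illustrative: where artifacts and *.metadata.yaml files live
treeDir := "/srv/tree"         // illustrative: the package tree definition

repo, err := installer.GenerateRepository(
	"test",                           // name
	"description",                    // description
	installer.DockerRepositoryType,   // repository type
	[]string{"quay.io/example/repo"}, // urls clients will pull from
	1,                                // priority
	packagesDir,                      // source directory
	[]string{treeDir},                // tree paths
	pkg.NewInMemoryDatabase(false),   // package database
	backend.NewSimpleDockerBackend(), // backend used to build and push images
	"quay.io/example/repo",           // image prefix for per-package images
	true,                             // push images
	false,                            // force push
)
if err != nil {
	// handle error
}
// For docker repositories the Write destination is the image prefix, not a filesystem path.
if err := repo.Write("quay.io/example/repo", false, false); err != nil {
	// handle error
}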

View File

@@ -19,13 +19,16 @@ import (
// . "github.com/mudler/luet/pkg/installer"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/mudler/luet/pkg/compiler"
backend "github.com/mudler/luet/pkg/compiler/backend"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
@@ -34,6 +37,18 @@ import (
. "github.com/onsi/gomega"
)
func dockerStubRepo(tmpdir, tree, image string, push, force bool) (installer.Repository, error) {
return GenerateRepository(
"test",
"description",
"docker",
[]string{image},
1,
tmpdir,
[]string{tree},
pkg.NewInMemoryDatabase(false), backend.NewSimpleDockerBackend(), image, push, force)
}
var _ = Describe("Repository", func() {
Context("Generation", func() {
It("Generate repository metadata", func() {
@@ -84,13 +99,13 @@ var _ = Describe("Repository", func() {
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
@@ -166,13 +181,13 @@ var _ = Describe("Repository", func() {
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
@@ -216,6 +231,166 @@ urls:
Expect(matches).To(Equal([]PackageMatch{{Repo: repo1, Package: package1}}))
})
})
Context("Docker repository", func() {
repoImage := os.Getenv("UNIT_TEST_DOCKER_IMAGE_REPOSITORY")
BeforeEach(func() {
if repoImage == "" {
Skip("UNIT_TEST_DOCKER_IMAGE_REPOSITORY not specified")
}
})
It("generates images", func() {
b := backend.NewSimpleDockerBackend()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := localcompiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
localcompiler.SetConcurrency(1)
artifact, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/buildable", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "tree.tar.gz"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "repository.meta.yaml.tar"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "repository.yaml"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "b-test-1.0"))).To(BeTrue())
extracted, err := ioutil.TempDir("", "extracted")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(extracted) // clean up
c := repo.Client()
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
a, err := c.DownloadArtifact(&compiler.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "b",
Category: "test",
Version: "1.0",
},
},
})
Expect(err).ToNot(HaveOccurred())
Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())
Expect(helpers.Read(filepath.Join(extracted, "test6"))).To(Equal("artifact6\n"))
})
It("generates images of virtual packages", func() {
b := backend.NewSimpleDockerBackend()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/virtuals")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(5))
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := localcompiler.FromPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.99"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
localcompiler.SetConcurrency(1)
artifact, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/virtuals", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "tree.tar.gz"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "repository.meta.yaml.tar"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "repository.yaml"))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", repoImage, "a-test-1.99"))).To(BeTrue())
extracted, err := ioutil.TempDir("", "extracted")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(extracted) // clean up
c := repo.Client()
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
a, err := c.DownloadArtifact(&compiler.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "a",
Category: "test",
Version: "1.99",
},
},
})
Expect(err).ToNot(HaveOccurred())
Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())
Expect(helpers.DirectoryIsEmpty(extracted)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(extracted, ".virtual"))
Expect(err).ToNot(HaveOccurred())
Expect(string(content)).To(Equal(""))
})
})
})

View File

@@ -1,12 +1,11 @@
package installer
import (
. "github.com/mudler/luet/pkg/logger"
"github.com/hashicorp/go-multierror"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
"github.com/pkg/errors"
)
type System struct {
@@ -20,28 +19,35 @@ func (s *System) World() (pkg.Packages, error) {
type templatedata map[string]interface{}
func (s *System) ExecuteFinalizers(packs []pkg.Package, force bool) error {
func (s *System) ExecuteFinalizers(packs []pkg.Package) error {
var errs error
executedFinalizer := map[string]bool{}
for _, p := range packs {
if helpers.Exists(p.Rel(tree.FinalizerFile)) {
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile), "")
if err != nil && !force {
return errors.Wrap(err, "reading file "+p.Rel(tree.FinalizerFile))
if err != nil {
Warning("Failed rendering finalizer for ", p.HumanReadableString(), err.Error())
errs = multierror.Append(errs, err)
continue
}
if _, exists := executedFinalizer[p.GetFingerPrint()]; !exists {
executedFinalizer[p.GetFingerPrint()] = true
Info("Executing finalizer for " + p.HumanReadableString())
finalizer, err := NewLuetFinalizerFromYaml([]byte(out))
if err != nil && !force {
return errors.Wrap(err, "Error reading finalizer "+p.Rel(tree.FinalizerFile))
if err != nil {
Warning("Failed reading finalizer for ", p.HumanReadableString(), err.Error())
errs = multierror.Append(errs, err)
continue
}
err = finalizer.RunInstall(s)
if err != nil && !force {
return errors.Wrap(err, "Error executing install finalizer "+p.Rel(tree.FinalizerFile))
if err != nil {
Warning("Failed running finalizer for ", p.HumanReadableString(), err.Error())
errs = multierror.Append(errs, err)
continue
}
executedFinalizer[p.GetFingerPrint()] = true
}
}
}
return nil
return errs
}
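For illustration (not from the luet sources): the ExecuteFinalizers change above trades the old fail-fast/--force behaviour for error aggregation via hashicorp/go-multierror, so a single broken finalizer no longer aborts the remaining ones. A minimal, self-contained sketch of that pattern; the runAll helper and step names are illustrative only:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

// runAll executes every step, collecting failures instead of bailing out
// on the first one, and returns the aggregate (nil when everything passed).
func runAll(steps map[string]func() error) error {
	var errs error
	for name, step := range steps {
		if err := step(); err != nil {
			errs = multierror.Append(errs, fmt.Errorf("%s: %w", name, err))
			continue // keep going, as the patched ExecuteFinalizers now does
		}
	}
	return errs
}

func main() {
	err := runAll(map[string]func() error{
		"ok":     func() error { return nil },
		"broken": func() error { return errors.New("boom") },
	})
	fmt.Println(err) // aggregate error listing only the failed steps
}
```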

View File

@@ -3,7 +3,9 @@ package logger
import (
"fmt"
"os"
"path"
"regexp"
"runtime"
"strings"
. "github.com/mudler/luet/pkg/config"
@@ -196,7 +198,7 @@ func msg(level string, withoutColor bool, msg ...interface{}) {
} else {
switch level {
case "warning":
levelMsg = Bold(Yellow(":construction: " + message)).BgBlack().String()
levelMsg = Yellow(":construction: warning" + message).BgBlack().String()
case "debug":
levelMsg = White(message).BgBlack().String()
case "info":
@@ -228,6 +230,11 @@ func Warning(mess ...interface{}) {
}
func Debug(mess ...interface{}) {
pc, file, line, ok := runtime.Caller(1)
if ok {
mess = append([]interface{}{fmt.Sprintf("DEBUG (%s:#%d:%v)",
path.Base(file), line, runtime.FuncForPC(pc).Name())}, mess...)
}
msg("debug", false, mess...)
}
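For illustration (not from the luet sources): the patched Debug() prefixes each message with the caller's file, line and function obtained from runtime.Caller(1). A standalone sketch of the same idea, using a hypothetical debugf helper:

```go
package main

import (
	"fmt"
	"path"
	"runtime"
)

// debugf prefixes the message with file:line and the calling function,
// mirroring what the patched Debug() does before delegating to msg().
func debugf(format string, args ...interface{}) {
	prefix := ""
	if pc, file, line, ok := runtime.Caller(1); ok {
		prefix = fmt.Sprintf("DEBUG (%s:#%d:%v) ", path.Base(file), line, runtime.FuncForPC(pc).Name())
	}
	fmt.Printf(prefix+format+"\n", args...)
}

func main() {
	debugf("unpacking %s", "tree.tar.gz")
}
```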

View File

@@ -28,6 +28,9 @@ type PackageDatabase interface {
}
type PackageSet interface {
Clone(PackageDatabase) error
Copy() (PackageDatabase, error)
GetRevdeps(p Package) (Packages, error)
GetPackages() []string //Ids
CreatePackage(pkg Package) (string, error)

View File

@@ -47,6 +47,14 @@ func NewBoltDatabase(path string) PackageDatabase {
return &BoltDatabase{Path: path, ProvidesDatabase: map[string]map[string]Package{}}
}
func (db *BoltDatabase) Clone(to PackageDatabase) error {
return clone(db, to)
}
func (db *BoltDatabase) Copy() (PackageDatabase, error) {
return copy(db)
}
func (db *BoltDatabase) Get(s string) (string, error) {
bolt, err := storm.Open(db.Path, storm.BoltOptions(0600, &bbolt.Options{Timeout: 30 * time.Second}))
if err != nil {
@@ -90,9 +98,9 @@ func (db *BoltDatabase) Retrieve(ID string) ([]byte, error) {
// TODO: Keep an in-memory instance for boltdb so we don't recompute this on every call,
// as it is REALLY expensive. We don't usually perform these operations on a file db, though.
func (db *BoltDatabase) GetRevdeps(p Package) (Packages, error) {
memory := NewInMemoryDatabase(false)
for _, p := range db.World() {
memory.CreatePackage(p)
memory, err := db.Copy()
if err != nil {
return nil, errors.New("Failed copying bolt db to memory")
}
return memory.GetRevdeps(p)
}
@@ -341,17 +349,18 @@ func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
required, err := db.FindPackage(p)
if err != nil {
err = nil
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := p.Expand(db)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
required = p
err = errors.Wrap(err, "Candidate not found")
} else {
required = packages.Best(nil)
}
return required, nil
return required, err
//required = &DefaultPackage{Name: "test"}
}

View File

@@ -0,0 +1,38 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg
import "github.com/pkg/errors"
func clone(src, dst PackageDatabase) error {
for _, i := range src.World() {
_, err := dst.CreatePackage(i)
if err != nil {
return errors.Wrap(err, "Failed create package "+i.HumanReadableString())
}
}
return nil
}
func copy(src PackageDatabase) (PackageDatabase, error) {
dst := NewInMemoryDatabase(false)
if err := clone(src, dst); err != nil {
return dst, errors.Wrap(err, "Failed create temporary in-memory db")
}
return dst, nil
}
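For illustration (not from the luet sources): a usage sketch of the new Clone()/Copy() helpers, assuming the PackageDatabase API visible elsewhere in this diff (NewInMemoryDatabase, NewPackage, CreatePackage, World). The solver hunks further down switch to exactly this snapshot pattern instead of looping over World() and calling CreatePackage by hand:

```go
package main

import (
	"fmt"

	pkg "github.com/mudler/luet/pkg/package"
)

func main() {
	definitions := pkg.NewInMemoryDatabase(false)
	a := pkg.NewPackage("a", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
	if _, err := definitions.CreatePackage(a); err != nil {
		panic(err)
	}

	// Copy() hands back a detached in-memory snapshot of the database,
	// so callers can add installed packages to it without touching the original.
	universe, err := definitions.Copy()
	if err != nil {
		panic(err)
	}

	b := pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
	if _, err := universe.CreatePackage(b); err != nil {
		panic(err)
	}

	fmt.Println(len(definitions.World()), len(universe.World())) // 1 2
}
```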

View File

@@ -41,6 +41,7 @@ type InMemoryDatabase struct {
CacheNoVersion map[string]map[string]interface{}
ProvidesDatabase map[string]map[string]Package
RevDepsDatabase map[string]map[string]Package
cached map[string]interface{}
}
func NewInMemoryDatabase(singleton bool) PackageDatabase {
@@ -53,6 +54,7 @@ func NewInMemoryDatabase(singleton bool) PackageDatabase {
CacheNoVersion: map[string]map[string]interface{}{},
ProvidesDatabase: map[string]map[string]Package{},
RevDepsDatabase: map[string]map[string]Package{},
cached: map[string]interface{}{},
}
}
return DBInMemoryInstance
@@ -197,6 +199,11 @@ func (db *InMemoryDatabase) populateCaches(p Package) {
// Create extra cache between package -> []versions
db.Lock()
if _, ok := db.cached[p.GetFingerPrint()]; ok {
db.Unlock()
return
}
db.cached[p.GetFingerPrint()] = nil
// Provides: Store package provides, we will reuse this when walking deps
for _, provide := range pd.Provides {
@@ -213,6 +220,25 @@ func (db *InMemoryDatabase) populateCaches(p Package) {
db.CacheNoVersion[p.GetPackageName()] = make(map[string]interface{})
}
db.CacheNoVersion[p.GetPackageName()][p.GetVersion()] = nil
// Updating Revdeps
// Given that we don't have the full db at hand when we populate the cache,
// we cycle over the reverse dependencies of a package and update their entries
// if they match the version selector.
toUpdate, ok := db.RevDepsDatabase[pd.GetPackageName()]
if ok {
for _, pp := range toUpdate {
for _, re := range pp.GetRequires() {
if match, _ := pd.VersionMatchSelector(re.GetVersion(), nil); match {
_, ok = db.RevDepsDatabase[pd.GetFingerPrint()]
if !ok {
db.RevDepsDatabase[pd.GetFingerPrint()] = make(map[string]Package)
}
db.RevDepsDatabase[pd.GetFingerPrint()][pp.GetFingerPrint()] = pp
}
}
}
}
db.Unlock()
for _, re := range pd.GetRequires() {
@@ -225,12 +251,23 @@ func (db *InMemoryDatabase) populateCaches(p Package) {
db.RevDepsDatabase[pa.GetFingerPrint()] = make(map[string]Package)
}
db.RevDepsDatabase[pa.GetFingerPrint()][pd.GetFingerPrint()] = pd
_, ok = db.RevDepsDatabase[pa.GetPackageName()]
if !ok {
db.RevDepsDatabase[pa.GetPackageName()] = make(map[string]Package)
}
db.RevDepsDatabase[pa.GetPackageName()][pd.GetPackageName()] = pd
}
_, ok := db.RevDepsDatabase[re.GetFingerPrint()]
if !ok {
db.RevDepsDatabase[re.GetFingerPrint()] = make(map[string]Package)
}
db.RevDepsDatabase[re.GetFingerPrint()][pd.GetFingerPrint()] = pd
_, ok = db.RevDepsDatabase[re.GetPackageName()]
if !ok {
db.RevDepsDatabase[re.GetPackageName()] = make(map[string]Package)
}
db.RevDepsDatabase[re.GetPackageName()][pd.GetPackageName()] = pd
db.Unlock()
}
}
@@ -270,6 +307,14 @@ func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
return db.FindPackage(pa)
}
func (db *InMemoryDatabase) Clone(to PackageDatabase) error {
return clone(db, to)
}
func (db *InMemoryDatabase) Copy() (PackageDatabase, error) {
return copy(db)
}
func (db *InMemoryDatabase) encodePackage(p Package) (string, string, error) {
pd, ok := p.(*DefaultPackage)
if !ok {
@@ -434,16 +479,17 @@ func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
required, err := db.FindPackage(p)
if err != nil {
err = nil
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := p.Expand(db)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
required = p
err = errors.Wrap(err, "Candidate not found")
} else {
required = packages.Best(nil)
}
return required, nil
return required, err
//required = &DefaultPackage{Name: "test"}
}

View File

@@ -46,6 +46,7 @@ type Package interface {
GetFingerPrint() string
GetPackageName() string
GetPackageImageName() string
Requires([]*DefaultPackage) Package
Conflicts([]*DefaultPackage) Package
Revdeps(PackageDatabase) Packages
@@ -288,6 +289,10 @@ func (p *DefaultPackage) GetPackageName() string {
return fmt.Sprintf("%s-%s", p.Name, p.Category)
}
func (p *DefaultPackage) GetPackageImageName() string {
return fmt.Sprintf("%s-%s:%s", p.Name, p.Category, p.Version)
}
// GetBuildTimestamp returns the package build timestamp
func (p *DefaultPackage) GetBuildTimestamp() string {
return p.BuildTimestamp

View File

@@ -446,10 +446,9 @@ func (s *Parallel) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAsse
removed := pkg.Packages{}
// TODO: this is memory expensive, we need to optimize this
universe := pkg.NewInMemoryDatabase(false)
for _, p := range s.DefinitionDatabase.World() {
universe.CreatePackage(p)
universe, err := s.DefinitionDatabase.Copy()
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't build world copy")
}
for _, p := range s.Installed() {
universe.CreatePackage(p)
@@ -558,9 +557,9 @@ func (s *Parallel) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAss
toInstall := pkg.Packages{}
// we do this in memory so we take provides into account
universe := pkg.NewInMemoryDatabase(false)
for _, p := range s.DefinitionDatabase.World() {
universe.CreatePackage(p)
universe, err := s.DefinitionDatabase.Copy()
if err != nil {
return nil, nil, errors.Wrap(err, "Could not copy def db")
}
installedcopy := pkg.NewInMemoryDatabase(false)
@@ -742,7 +741,7 @@ func (s *Parallel) Uninstall(checkconflicts, full bool, packs ...pkg.Package) (p
func (s *Parallel) BuildFormula() (bf.Formula, error) {
var formulas []bf.Formula
r, err := s.BuildPartialWorld(false)
r, err := s.BuildWorld(false)
if err != nil {
return nil, err
}

View File

@@ -111,7 +111,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).ToNot(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: true}))
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(5))
})
It("Solves correctly if the selected package to install has requirements", func() {
@@ -401,7 +401,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(4))
Expect(err).ToNot(HaveOccurred())
})
@@ -529,7 +529,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(5))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -570,7 +570,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(5))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})

View File

@@ -79,7 +79,7 @@ var _ = Describe("Resolver", func() {
solution, err := s.Install([]pkg.Package{D, F}) // D and F should go as they have no deps. A/E should be filtered by QLearn
Expect(err).ToNot(HaveOccurred())
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(6))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: A, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: B, Value: true}))
@@ -117,7 +117,7 @@ var _ = Describe("Resolver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))
Expect(len(solution)).To(Equal(2))
Expect(len(solution)).To(Equal(4))
})
It("will find out that we can install D and F by ignoring E and A", func() {
@@ -148,8 +148,7 @@ var _ = Describe("Resolver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: F, Value: true}))
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(6))
})
})

View File

@@ -167,7 +167,6 @@ func (s *Solver) BuildWorld(includeInstalled bool) (bf.Formula, error) {
}
for _, p := range s.World() {
solvable, err := p.BuildFormula(s.DefinitionDatabase, s.SolverDatabase)
if err != nil {
return nil, err
@@ -266,7 +265,7 @@ func (s *Solver) Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error) {
if revdepsErr == nil {
revdepsErr = errors.New("")
}
revdepsErr = errors.New(fmt.Sprintf("%s\n%s", revdepsErr.Error(), r.HumanReadableString()))
revdepsErr = fmt.Errorf("%s\n%s", revdepsErr.Error(), r.HumanReadableString())
}
return len(revdeps) != 0, revdepsErr
@@ -278,7 +277,6 @@ func (s *Solver) ConflictsWith(pack pkg.Package, lsp pkg.Packages) (bool, error)
p, err := s.DefinitionDatabase.FindPackage(pack)
if err != nil {
p = pack //Relax search, otherwise we cannot compute solutions for packages not in definitions
// return false, errors.Wrap(err, "Package not found in definition db")
}
@@ -402,10 +400,11 @@ func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssert
replacements := map[pkg.Package]pkg.Package{}
// TODO: this is memory expensive, we need to optimize this
universe := pkg.NewInMemoryDatabase(false)
for _, p := range s.DefinitionDatabase.World() {
universe.CreatePackage(p)
universe, err := s.DefinitionDatabase.Copy()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed copying db")
}
for _, p := range s.Installed() {
universe.CreatePackage(p)
}
@@ -501,10 +500,10 @@ func (s *Solver) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAsser
toUninstall := pkg.Packages{}
toInstall := pkg.Packages{}
// we do this in memory so we take into account of provides
universe := pkg.NewInMemoryDatabase(false)
for _, p := range s.DefinitionDatabase.World() {
universe.CreatePackage(p)
// we do this in memory so we take provides into account, and it's faster
universe, err := s.DefinitionDatabase.Copy()
if err != nil {
return nil, nil, errors.Wrap(err, "failed creating db copy")
}
installedcopy := pkg.NewInMemoryDatabase(false)
@@ -590,7 +589,7 @@ func (s *Solver) Uninstall(checkconflicts, full bool, packs ...pkg.Package) (pkg
if !full && checkconflicts {
for _, candidate := range toRemove {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
return nil, errors.Wrap(err, "while searching for "+candidate.HumanReadableString()+" conflicts")
}
}
return toRemove, nil
@@ -661,7 +660,7 @@ func (s *Solver) Uninstall(checkconflicts, full bool, packs ...pkg.Package) (pkg
// BuildFormula builds the main solving formula that is evaluated by the sat solver.
func (s *Solver) BuildFormula() (bf.Formula, error) {
var formulas []bf.Formula
r, err := s.BuildPartialWorld(false)
r, err := s.BuildWorld(false)
if err != nil {
return nil, err
}

View File

@@ -111,7 +111,7 @@ var _ = Describe("Solver", func() {
// Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: false}))
//Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(5))
})
It("Solves correctly if the selected package to install has requirements", func() {
@@ -401,7 +401,7 @@ var _ = Describe("Solver", func() {
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(3))
Expect(len(solution)).To(Equal(4))
Expect(err).ToNot(HaveOccurred())
})
@@ -529,7 +529,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(5))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -570,7 +570,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(5))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -753,6 +753,33 @@ var _ = Describe("Solver", func() {
})
It("Fails to uninstall if a package is required", func() {
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
Z := pkg.NewPackage("Z", "", []*pkg.DefaultPackage{A}, []*pkg.DefaultPackage{})
F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{Z, B}, []*pkg.DefaultPackage{})
Z.SetVersion("1.4101.dvw.dqc.")
B.SetVersion("1.4101qe.eq.ff..dvw.dqc.")
C.SetVersion("1.aaaa.eq.ff..dvw.dqc.")
for _, p := range []pkg.Package{A, B, C, D, Z, F} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D, Z, F} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(true, false, B)
Expect(err).To(HaveOccurred())
Expect(len(solution)).To(Equal(0))
})
It("UninstallUniverse simple package correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
@@ -1256,7 +1283,37 @@ var _ = Describe("Solver", func() {
})
It("upgrades correctly with provides", func() {
B.SetProvides([]*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=0", Category: "test"}, &pkg.DefaultPackage{Name: "c", Version: ">=0", Category: "test"}})
B.SetProvides([]*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "a", Version: ">=0", Category: "test"},
&pkg.DefaultPackage{Name: "c", Version: ">=0", Category: "test"},
})
for _, p := range []pkg.Package{B} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, C} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.Upgrade(true, true)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(2))
Expect(uninstall).To(ContainElement(C))
Expect(uninstall).To(ContainElement(A))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(len(solution)).To(Equal(1))
})
PIt("upgrades correctly with provides, also if definitiondb contains both a provide, and the package to be provided", func() {
B.SetProvides([]*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "a", Version: ">=0", Category: "test"},
&pkg.DefaultPackage{Name: "c", Version: ">=0", Category: "test"},
})
for _, p := range []pkg.Package{A1, B} {
_, err := dbDefinitions.CreatePackage(p)
@@ -1271,10 +1328,9 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(2))
Expect(uninstall[1].GetName()).To(Equal("c"))
Expect(uninstall[1].GetVersion()).To(Equal("1.5"))
Expect(uninstall[0].GetName()).To(Equal("a"))
Expect(uninstall[0].GetVersion()).To(Equal("1.1"))
Expect(uninstall).To(ContainElement(C))
Expect(uninstall).To(ContainElement(A))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(len(solution)).To(Equal(1))

View File

@@ -1,2 +1,4 @@
image: "alpine:3.5"
steps:
- echo "foo"

View File

@@ -15,3 +15,6 @@ requires:
- category: "layer"
name: "build-sabayon-overlay"
version: "0.20191212"
steps:
- echo "foo"

View File

@@ -5,3 +5,6 @@ requires:
includes:
- /usr/portage/packages/.*
steps:
- echo "foo"

View File

@@ -16,3 +16,6 @@ requires:
- category: "layer"
name: "build-sabayon-overlay"
version: ">=0.1"
steps:
- echo "foo"

View File

@@ -2,3 +2,5 @@ requires:
- category: "layer"
name: "build"
version: ">=0.1"
steps:
- echo "foo"

View File

@@ -2,3 +2,5 @@ requires:
- category: "layer"
version: "0.1"
name: "build"
steps:
- echo "foo"

View File

@@ -5,3 +5,5 @@ requires:
- category: "layer"
name: "sabayon-build-portage"
version: ">=0.1"
steps:
- echo "foo"

View File

@@ -1,3 +1,5 @@
category: "layer"
name: "build-sabayon-overlay"
version: "0.20191205"
steps:
- echo "foo"

View File

@@ -5,3 +5,5 @@ requires:
- category: "layer"
name: "sabayon-build-portage"
version: ">=0.1"
steps:
- echo "foo"

View File

@@ -2,6 +2,7 @@ requires:
- category: "layer"
version: "0.1"
name: "build-sabayon-overlays"
steps:
- echo "foo"
includes:
- /usr/portage/packages/.*

View File

@@ -0,0 +1,6 @@
image: "alpine"
unpack: true
excludes:
- /sys/.*
- /proc/.*
- /etc/.*

View File

@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "0.9"

View File

@@ -0,0 +1,6 @@
image: "alpine"
unpack: true
excludes:
- /sys/.*
- /proc/.*
- /etc/.*

View File

@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "1.0"

View File

@@ -0,0 +1,2 @@
install:
- echo "$0" > /tmp/foo

tests/fixtures/virtuals/a/build.yaml (new, empty file)
View File

View File

@@ -0,0 +1,3 @@
category: test
name: "a"
version: "1.99"

tests/fixtures/virtuals/b/build.yaml (new file)
View File

@@ -0,0 +1,4 @@
requires:
- category: "test"
name: "a"
version: ">=0"

View File

@@ -0,0 +1,3 @@
category: test
name: "b"
version: "1.0"

tests/fixtures/virtuals/c/build.yaml (new file)
View File

@@ -0,0 +1,4 @@
# Invalid, no source image or requires defined, but we define steps
steps:
- echo "fail!"

View File

@@ -0,0 +1,3 @@
category: test
name: "c"
version: "1.0"

View File

@@ -0,0 +1 @@
image: "busybox"

View File

@@ -0,0 +1,3 @@
category: test
name: "image"
version: "1.0"

View File

@@ -0,0 +1,4 @@
requires:
- category: "test"
name: "image"
version: ">=0"

View File

@@ -0,0 +1,3 @@
category: test
name: "virtual"
version: "1.0"

View File

@@ -0,0 +1,115 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression zstd test/c@1.0 > /dev/null
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.zst' ]"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.zst' ]"
}
testRepo() {
# Disable tests which require a DOCKER registry
[ -z "${TEST_DOCKER_IMAGE:-}" ] && startSkipping
luet create-repo --tree "$ROOT_DIR/tests/fixtures/buildableseed" \
--output "${TEST_DOCKER_IMAGE}" \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--tree-compression zstd \
--tree-filename foo.tar \
--meta-filename repository.meta.tar \
--meta-compression zstd \
--type docker --push-images --force-push
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "docker"
enable: true
urls:
- "${TEST_DOCKER_IMAGE}"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
# Disable tests which require a DOCKER registry
[ -z "${TEST_DOCKER_IMAGE:-}" ] && startSkipping
luet install -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testReInstall() {
# Disable tests which require a DOCKER registry
[ -z "${TEST_DOCKER_IMAGE:-}" ] && startSkipping
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertContains 'contains warning' "$output" 'No packages to install'
}
testUnInstall() {
# Disable tests which require a DOCKER registry
[ -z "${TEST_DOCKER_IMAGE:-}" ] && startSkipping
luet uninstall -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
}
testInstallAgain() {
# Disable tests which require a DOCKER registry
[ -z "${TEST_DOCKER_IMAGE:-}" ] && startSkipping
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.zst' ]"
}
testCleanup() {
luet cleanup --config $tmpdir/luet.yaml
installst=$?
assertEquals 'cleanup test successfully' "$installst" "0"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -15,8 +15,8 @@ testBuild() {
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression zstd test/c@1.0 > /dev/null
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.zstd' ]"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.zstd' ]"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.zst' ]"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.zst' ]"
}
testRepo() {
@@ -36,9 +36,9 @@ testRepo() {
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
assertTrue 'create named tree in zstd' "[ -e '$tmpdir/testbuild/foo.tar.zstd' ]"
assertTrue 'create named tree in zstd' "[ -e '$tmpdir/testbuild/foo.tar.zst' ]"
assertTrue 'create tree in zstd-only' "[ ! -e '$tmpdir/testbuild/foo.tar' ]"
assertTrue 'create named meta in zstd' "[ -e '$tmpdir/testbuild/repository.meta.tar.zstd' ]"
assertTrue 'create named meta in zstd' "[ -e '$tmpdir/testbuild/repository.meta.tar.zst' ]"
assertTrue 'create meta in zstd-only' "[ ! -e '$tmpdir/testbuild/repository.meta.tar' ]"
}
@@ -93,14 +93,14 @@ testInstallAgain() {
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.zstd' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.zst' ]"
}
testCleanup() {
luet cleanup --config $tmpdir/luet.yaml
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.zstd' ]"
assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.zst' ]"
}
# Load shUnit2.

View File

@@ -12,6 +12,7 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
[ "$LUET_BACKEND" == "img" ] && startSkipping
luet build --tree "$ROOT_DIR/tests/fixtures/retrieve-integration" --destination $tmpdir/testbuild --compression gzip test/b
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
@@ -20,6 +21,7 @@ testBuild() {
}
testRepo() {
[ "$LUET_BACKEND" == "img" ] && startSkipping
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet create-repo --tree "$ROOT_DIR/tests/fixtures/retrieve-integration" \
--output $tmpdir/testbuild \
@@ -59,6 +61,7 @@ EOF
testInstall() {
[ "$LUET_BACKEND" == "img" ] && startSkipping
luet install -y --config $tmpdir/luet.yaml test/b
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
@@ -71,6 +74,7 @@ testInstall() {
testUnInstall() {
[ "$LUET_BACKEND" == "img" ] && startSkipping
luet uninstall -y --full --config $tmpdir/luet.yaml test/b
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
@@ -80,6 +84,7 @@ testUnInstall() {
testCleanup() {
[ "$LUET_BACKEND" == "img" ] && startSkipping
luet cleanup --config $tmpdir/luet.yaml
installst=$?
assertEquals 'install test successfully' "$installst" "0"

View File

@@ -0,0 +1,86 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/finalizers_upgrade" --destination $tmpdir/testbuild --compression gzip --all
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/alpine-seed-1.0.package.tar.gz' ]"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/alpine-seed-0.9.package.tar.gz' ]"
}
testRepo() {
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet create-repo --tree "$ROOT_DIR/tests/fixtures/finalizers_upgrade" \
--output $tmpdir/testbuild \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--type disk > /dev/null
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "disk"
enable: true
urls:
- "$tmpdir/testbuild"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
luet install -y --config $tmpdir/luet.yaml seed/alpine@0.9
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"
assertTrue 'finalizer does not run' "[ ! -e '$tmpdir/testrootfs/tmp/foo' ]"
}
testUpgrade() {
luet upgrade -y --config $tmpdir/luet.yaml
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"
assertTrue 'finalizer runs' "[ -e '$tmpdir/testrootfs/tmp/foo' ]"
assertEquals 'finalizer printed used shell' "$(cat $tmpdir/testrootfs/tmp/foo)" 'sh'
}
testCleanup() {
luet cleanup --config $tmpdir/luet.yaml
installst=$?
assertEquals 'install test successfully' "$installst" "0"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -76,4 +76,3 @@ testReplace() {
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -0,0 +1,78 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/simple_dep" --destination $tmpdir/testbuild test/b
luet build --tree "$ROOT_DIR/tests/fixtures/simple_dep" --destination $tmpdir/testbuild test/a
luet build --tree "$ROOT_DIR/tests/fixtures/simple_dep" --destination $tmpdir/testbuild test/c
assertTrue 'create package B 1.1' "[ -e '$tmpdir/testbuild/b-test-1.1.package.tar' ]"
assertTrue 'create package A 1.2' "[ -e '$tmpdir/testbuild/a-test-1.2.package.tar' ]"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar' ]"
}
testRepo() {
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet create-repo --tree "$ROOT_DIR/tests/fixtures/simple_dep" \
--output $tmpdir/testbuild \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--type http
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "disk"
enable: true
urls:
- "$tmpdir/testbuild"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
luet install -y --config $tmpdir/luet.yaml test/b
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/b' ]"
}
testReplace() {
luet --config $tmpdir/luet.yaml replace --nodeps -y test/b --for test/c
installst=$?
assertEquals 'replace test successfully' "$installst" "0"
echo "$upgrade"
assertTrue 'package uninstalled B' "[ ! -e '$tmpdir/testrootfs/b' ]"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package not installed A' "[ ! -e '$tmpdir/testrootfs/a' ]"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -15,17 +15,15 @@ testBuild() {
luet build --tree "$ROOT_DIR/tests/fixtures/simple_dep" --destination $tmpdir/testbuild1 test/c
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B 1.1' "[ -e '$tmpdir/testbuild1/b-test-1.1.package.tar' ]"
assertTrue 'create package A 1.2' "[ -e '$tmpdir/testbuild1/a-test-1.2.package.tar' ]"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild1/c-test-1.0.package.tar' ]"
}
testBuild() {
testBuildOnlyTarget() {
mkdir $tmpdir/testbuild2
luet build --tree "$ROOT_DIR/tests/fixtures/simple_dep" --destination $tmpdir/testbuild2 --only-target-package test/c
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B 1.1' "[ ! -e '$tmpdir/testbuild2/b-test-1.1.package.tar' ]"
assertTrue 'create package A 1.2' "[ ! -e '$tmpdir/testbuild2/a-test-1.2.package.tar' ]"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild2/c-test-1.0.package.tar' ]"
}

View File

@@ -0,0 +1,58 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuildA() {
mkdir $tmpdir/testbuild1
luet build --tree "$ROOT_DIR/tests/fixtures/virtuals" --debug --compression "gzip" --destination $tmpdir/testbuild1 test/a
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.99' "[ -e '$tmpdir/testbuild1/a-test-1.99.package.tar.gz' ]"
assertTrue 'create package A 1.99' "[ -e '$tmpdir/testbuild1/a-test-1.99.metadata.yaml' ]"
}
testBuildB() {
mkdir $tmpdir/testbuild2
luet build --tree "$ROOT_DIR/tests/fixtures/virtuals" --debug --compression "gzip" --destination $tmpdir/testbuild2 test/b
buildst=$?
assertEquals 'builds of B expected to fail. It depends on a virtual' "$buildst" "1"
}
testBuildC() {
mkdir $tmpdir/testbuild3
luet build --tree "$ROOT_DIR/tests/fixtures/virtuals" --debug --compression "gzip" --destination $tmpdir/testbuild3 test/c
buildst=$?
assertEquals 'builds of C expected to fail. Steps with no source image' "$buildst" "1"
}
testBuildImage() {
mkdir $tmpdir/testbuild4
luet build --tree "$ROOT_DIR/tests/fixtures/virtuals" --debug --compression "gzip" --destination $tmpdir/testbuild4 test/image
buildst=$?
assertEquals 'builds of test/image expected to succeed' "$buildst" "0"
assertTrue 'create package test/image 1.0' "[ -e '$tmpdir/testbuild4/image-test-1.0.package.tar.gz' ]"
assertTrue 'create package test/image 1.0' "[ -e '$tmpdir/testbuild4/image-test-1.0.metadata.yaml' ]"
}
testBuildVirtual() {
mkdir $tmpdir/testbuild5
luet build --tree "$ROOT_DIR/tests/fixtures/virtuals" --debug --compression "gzip" --destination $tmpdir/testbuild5 test/virtual
buildst=$?
assertEquals 'builds of test/virtual expected to succeed' "$buildst" "0"
assertTrue 'create package test/image 1.0' "[ -e '$tmpdir/testbuild5/image-test-1.0.package.tar.gz' ]"
assertTrue 'create package test/image 1.0' "[ -e '$tmpdir/testbuild5/image-test-1.0.metadata.yaml' ]"
assertTrue 'create package test/virtual 1.0' "[ -e '$tmpdir/testbuild5/virtual-test-1.0.package.tar.gz' ]"
assertTrue 'create package test/virtual 1.0' "[ -e '$tmpdir/testbuild5/virtual-test-1.0.metadata.yaml' ]"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -12,7 +12,16 @@ popd
export PATH=$ROOT_DIR/tests/integration/bin/:$PATH
for script in $(ls "$ROOT_DIR/tests/integration/" | grep '^[0-9]*_.*.sh'); do
echo "Executing script '$script'."
$ROOT_DIR/tests/integration/$script
done
if [ -z "$SINGLE_TEST" ]; then
for script in $(ls "$ROOT_DIR/tests/integration/" | grep '^[0-9]*_.*.sh'); do
echo "Executing test '$script'."
$ROOT_DIR/tests/integration/$script
done
else
echo "Executing test '$SINGLE_TEST'."
$ROOT_DIR/tests/integration/$SINGLE_TEST
fi

Some files were not shown because too many files have changed in this diff.