Compare commits

...

109 Commits

Author SHA1 Message Date
Ettore Di Giacinto
5cccc34f32 Tag 0.16.5 2021-06-01 22:54:10 +02:00
Ettore Di Giacinto
5ef1d04055 Create CONTRIBUTING.md 2021-06-01 17:13:23 +02:00
Itxaka
1bd4d520a4 Add 2 new events for image unpacking (#226)
* Reduce possibility of circular dependency

Just by adding an import of bus to anything in the helper dir, we would
run into a circular dependency due to how things are structured. That
means we cannot emit any events for unpacking or for the docker helper
pulling an image.

This commit tries to work around this by doing several things.
 - Remove full imports of the helper module by splitting some modules
   into their own submodules (like docker or match), so using a small
   match function doesn't pull in the whole module
 - Stop importing the full helper module just for a simple check that a
   dir exists, and instead write the function locally (5 lines; see the
   sketch after this commit)
 - Use logrus in the bus module instead of logger, which avoids a
   circular dependency

Signed-off-by: Itxaka <igarcia@suse.com>

* Add two new events for unpacking an image

Both pre and post unpacking an image

Signed-off-by: Itxaka <igarcia@suse.com>
2021-06-01 16:43:31 +02:00
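The second bullet above ("write the function (5 lines)") can look roughly like the following; a minimal sketch assuming only the standard library, with the package and function names chosen here for illustration rather than taken from the luet sources.

```go
package bus

import "os"

// dirExists reports whether path exists and is a directory. Keeping this
// tiny check local avoids importing the full helper package, which is
// what reintroduced the import cycle described above.
func dirExists(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.IsDir()
}
```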
Daniele Rondina
b12c7678d4 get_luet_root.sh: Fix installation on alpine/busybox env (#218)
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2021-05-27 17:17:49 +02:00
Ettore Di Giacinto
32a99a4a49 Update readme with badges and a small logo 2021-05-26 17:57:41 +02:00
Ettore Di Giacinto
56e9c6f82e Tag 0.16.4 2021-05-25 14:55:55 +02:00
Ettore Di Giacinto
92ea69a2b9 Don't always generate artifact
Distinguish whether we should generate artifacts for the dependencies of a
package or not, and ensure that we force artifact generation only for
targets when computing the join image.

Fixes #217
2021-05-25 13:28:14 +02:00
Ettore Di Giacinto
838899aa83 Respect create-repo --force-push with metadata files
Don't force-push package metadata with the docker repository.

We were pushing .metadata.yaml files regardless, even when packages were
not pushed and create-repo was called without --force-push.
2021-05-25 12:13:23 +02:00
Ettore Di Giacinto
76695b2fc8 docker: check during extraction if image exists and pull if needed 2021-05-25 11:04:06 +02:00
Ettore Di Giacinto
5c84e5b0a7 Tag 0.16.3 2021-05-24 22:23:57 +02:00
Ettore Di Giacinto
06fa8b1c87 fixup: jointag index 2021-05-24 22:13:27 +02:00
Ettore Di Giacinto
ff153f367f fixup: enhance output of copy tag when cycling over packages 2021-05-24 21:35:22 +02:00
Ettore Di Giacinto
459676397c Enhance multi-stage output 2021-05-24 19:40:39 +02:00
Ettore Di Giacinto
93057fbf6d Respect PackageTargetOnly during multi-stage copy image generation 2021-05-24 19:40:17 +02:00
Ettore Di Giacinto
5e1a7c50df fixup: display hash of join image 2021-05-23 18:22:14 +02:00
Ettore Di Giacinto
0ceaf09615 fixup: propagate outputpath 2021-05-23 17:36:09 +02:00
Ettore Di Giacinto
0dc78ebe41 Tag 0.16.2 2021-05-22 14:49:58 +02:00
Ettore Di Giacinto
27c2e3c51f fixup: make sure the outputdir exists 2021-05-21 22:25:18 +02:00
Ettore Di Giacinto
e83f600ed3 Tag 0.16.1 2021-05-21 21:28:33 +02:00
Ettore Di Giacinto
6344e47eb3 fixup: resolve multistage and join also when building deps 2021-05-21 19:47:23 +02:00
Ettore Di Giacinto
8b1c5558b2 Tag 0.16.0 2021-05-21 16:10:28 +02:00
Ettore Di Giacinto
c277ac0f94 Add join keyword to generate parent image from final artifacts
A new keyword `join` is introduced to generate the parent image. It
takes precedence over a `requires` or an `image` already defined in a
spec.

It will generate all the artifacts from the packages listed and join
them in a single image which will be used as parent for the package
build process.

This change invalidates previously generated hashes.

Fixes #173
2021-05-21 14:52:48 +02:00
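To make the precedence concrete, here is a hypothetical rendering of the spec shape in Go; the type names, field types, and tags are assumptions rather than the actual luet definitions. Only the documented behavior (`join` wins over `requires` and `image`, and the listed packages' final artifacts are merged into the parent image) comes from the commit above.

```go
// Hypothetical sketch: stand-in types, not luet's real definitions.
package spec

// Package identifies a build target by category/name/version.
type Package struct {
	Name     string `json:"name"`
	Category string `json:"category"`
	Version  string `json:"version"`
}

// CompilationSpec stands in for a build spec. When Join is set it takes
// precedence over Requires and Image: the listed packages are built and
// their final artifacts joined into a single image, which becomes the
// parent for this package's build.
type CompilationSpec struct {
	Image    string     `json:"image"`    // explicit parent image
	Requires []*Package `json:"requires"` // parent derived from build deps
	Join     []*Package `json:"join"`     // parent joined from final artifacts
}
```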
Ettore Di Giacinto
d8c8c2194f Tag 0.15.0 2021-05-20 13:53:53 +02:00
Ettore Di Giacinto
4494385f5b Fixup test rewording match 2021-05-20 11:17:39 +02:00
Ettore Di Giacinto
85a7968ecc Check if we have the image locally and extract it directly 2021-05-20 10:29:41 +02:00
Ettore Di Giacinto
1ba987b0f1 cmd/tree: consume Hashtree to display image hashes 2021-05-19 17:43:14 +02:00
Ettore Di Giacinto
c72b5be364 Update vendor 2021-05-19 17:34:54 +02:00
Ettore Di Giacinto
1ef18ed2c5 Add salted method for assertion hashing
- Add the spec Hash as salt for image hashes
- Add tests and adapt existing ones
- Use a signature to build a spec hash

Fixes: #207
2021-05-19 17:34:01 +02:00
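A toy illustration of the salting idea, using only the standard library: a signature of the spec is hashed, and that spec hash is mixed into each assertion hash, so identical dependency assertions produced from different specs no longer map to the same image hash. All names below are assumptions rather than luet's API.

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// specSignature stands in for the fields that define a build; the real
// spec signature in luet will differ.
type specSignature struct {
	Image  string            `json:"image"`
	Steps  []string          `json:"steps"`
	Values map[string]string `json:"values"`
}

// specHash derives a stable hash from the signature (json.Marshal sorts
// map keys, so the output is deterministic).
func specHash(s specSignature) string {
	b, _ := json.Marshal(s)
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

// saltedHash mixes the spec hash into an assertion hash.
func saltedHash(assertionHash, salt string) string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(assertionHash+salt)))
}

func main() {
	sig := specSignature{Image: "alpine", Steps: []string{"make install"}}
	fmt.Println(saltedHash("deadbeef", specHash(sig)))
}
```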
Ettore Di Giacinto
4b1b711a5c Add single suite tests for spec and artifact
They went missing when the refactoring of the compiler occurred
2021-05-19 17:34:01 +02:00
Ettore Di Giacinto
7f047e4fc2 Multi-stage builds from hash images
Implement multi-stage copy from images that are part of the build cache
of a package.

Note: these are not the final images we are copying files from, but the
underlying build containers.

Skipping the test on the img backend because it fails when pulling
external images during multi-stage build...

Fixes #190
2021-05-18 12:16:35 +02:00
Ettore Di Giacinto
356350f724 Tag 0.14.7 2021-05-17 19:11:44 +02:00
Ettore Di Giacinto
9d2ee1b760 Add hashtree and extract hash logic from compiler
Add unit tests, consume imagehashtree in compiler and cleanup

Fixes: #203
2021-05-17 15:22:08 +02:00
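Judging from the call sites further down in this diff (cmd/build.go and cmd/tree), the new hashtree is consumed roughly as below; the parameter types are inferred from those call sites and may not match the actual signatures exactly.

```go
package example

import (
	"fmt"

	"github.com/mudler/luet/pkg/compiler"
	compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
	pkg "github.com/mudler/luet/pkg/package"
)

// printImageHashes mirrors the pattern used by the CLI: build a hash tree
// from the database, query it for a spec, and walk the resulting
// assertions to read the computed image hashes.
func printImageHashes(c *compiler.LuetCompiler, db pkg.PackageDatabase, spec *compilerspec.LuetCompilationSpec) error {
	ht := compiler.NewHashTree(db)
	hashTree, err := ht.Query(c, spec)
	if err != nil {
		return err
	}
	for _, assertion := range hashTree.Solution {
		fmt.Println(assertion.Hash.PackageHash)
	}
	return nil
}
```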
Ettore Di Giacinto
fd12227d53 plugins: Make execution fail if loaded plugins are erroring, print debug output on emitted responses
Now plugins failing to answer make execution fail, adapt tests
2021-05-17 15:17:03 +02:00
Ettore Di Giacinto
1e617b0c67 Add copy field
Initial implementation that just works with standard docker image
references

Related: #190
2021-05-17 15:16:15 +02:00
Ettore Di Giacinto
77b49d9c4a Tag 0.14.6 patchfix 2021-05-16 23:32:54 +02:00
Ettore Di Giacinto
4c3532e3c6 fixup: unchecked err causes invalid read in certain cases 2021-05-16 23:12:54 +02:00
Ettore Di Giacinto
f2ec065a89 Tag 0.14.5 2021-05-16 21:12:32 +02:00
Ettore Di Giacinto
7193ea03f9 Use hasher to work out also big files 2021-05-16 17:21:19 +02:00
Ettore Di Giacinto
beeb0dcaaa Don't write protect file if it is the same
Add checksum check to config protect files
Add test file

Fixes #214
2021-05-16 11:54:08 +02:00
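The guard described above boils down to comparing digests before writing; a minimal sketch with the standard library only, with the helper name and shape chosen here for illustration.

```go
package protect

import (
	"bytes"
	"crypto/sha256"
	"io/ioutil"
	"os"
)

// writeIfChanged rewrites the protected file only when its checksum
// differs from the new content, leaving identical files untouched.
func writeIfChanged(path string, content []byte, perm os.FileMode) error {
	if old, err := ioutil.ReadFile(path); err == nil {
		oldSum := sha256.Sum256(old)
		newSum := sha256.Sum256(content)
		if bytes.Equal(oldSum[:], newSum[:]) {
			return nil // identical content: skip the write
		}
	}
	return ioutil.WriteFile(path, content, perm)
}
```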
Ettore Di Giacinto
0de3177ddd Tag 0.14.4 2021-05-14 09:48:45 +02:00
Ettore Di Giacinto
45c8dfa19f Update go-pluggable 2021-05-13 18:13:44 +02:00
Ettore Di Giacinto
186ac33156 Tag 0.14.3 2021-05-11 15:24:26 +02:00
Ettore Di Giacinto
bdc24b84a4 Update go-pluggable
- Update vendor and run go mod tidy
2021-05-11 14:02:15 +02:00
Ettore Di Giacinto
e5d6d21178 Tag 0.14.2 2021-05-07 22:11:45 +02:00
Ettore Di Giacinto
0379855592 Drop docker squash support
It can be easily implemented as a plugin

Fixes: #206
2021-05-04 11:17:10 +02:00
Ettore Di Giacinto
958b8c32e1 Add DeepCopyFile for copies with additional permission bits
It's not needed for most of the copying we do, except when we
generate the deltas

See also #204
2021-05-04 11:14:16 +02:00
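A hedged sketch of what a permission-preserving copy can look like; the real DeepCopyFile may also carry ownership and extended attributes, which are omitted here.

```go
package file

import (
	"io"
	"os"
)

// deepCopyFile copies src to dst and carries the mode bits over,
// re-applying them after the copy in case umask stripped some on create.
func deepCopyFile(src, dst string) error {
	info, err := os.Stat(src)
	if err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode().Perm())
	if err != nil {
		return err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return os.Chmod(dst, info.Mode().Perm())
}
```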
Ettore Di Giacinto
b0b95d1721 Skip perms test on img backend 2021-05-01 00:33:46 +02:00
Ettore Di Giacinto
f85891e362 Ensure we carry permissions from dirs while we create packages by delta 2021-05-01 00:14:01 +02:00
Ettore Di Giacinto
946524f90d Tag 0.14.1 2021-04-24 21:09:52 +02:00
Ettore Di Giacinto
2cbd97ff3a Respect replace options 2021-04-24 21:09:25 +02:00
Ettore Di Giacinto
a4d77f8f99 Fixup clean path uninstall
While uninstalling, we weren't checking whether we left any empty dirs
behind. Now we walk the full path to the file in the artifact and check
whether each subdir is empty. If it is, we delete it, as it is claimed by
the package
2021-04-24 19:21:15 +02:00
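The cleanup described above can be sketched as follows (standard library only, illustrative names): starting from the removed file, climb towards the package root and delete every directory that is now empty.

```go
package uninstall

import (
	"io"
	"os"
	"path/filepath"
)

// pruneEmptyDirs climbs from the directory that contained a removed file
// up to root, deleting each directory that is left empty along the way.
func pruneEmptyDirs(file, root string) error {
	for dir := filepath.Dir(file); dir != root && dir != string(os.PathSeparator); dir = filepath.Dir(dir) {
		d, err := os.Open(dir)
		if err != nil {
			return nil // already gone or unreadable: nothing to prune
		}
		_, err = d.Readdirnames(1)
		d.Close()
		if err != io.EOF {
			return nil // not empty (or read failed): stop climbing
		}
		if err := os.Remove(dir); err != nil {
			return err
		}
	}
	return nil
}
```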
Ettore Di Giacinto
adcb459fd2 Inplace upgrades 2021-04-24 18:48:57 +02:00
Ettore Di Giacinto
55ae67be0f Pass by options to compute functions in install 2021-04-24 14:26:26 +02:00
Ettore Di Giacinto
848215eef0 Tag 0.14.0 2021-04-23 19:46:21 +02:00
Ettore Di Giacinto
7bfff97f57 Fixup race when updating compiler options
Instead of updating the compiler options that can be accessed by each
worker, update the compilespec BuildOptions directly
2021-04-23 12:02:41 +02:00
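A minimal illustration of the pattern the commit describes, with stand-in types rather than luet's: each compile spec gets its own copy of the build options before the workers start, so no goroutine mutates a shared compiler-wide struct mid-build.

```go
package build

import "sync"

// Stand-in types, not luet's actual definitions.
type BuildOptions struct{ PullFirst bool }

type CompileSpec struct {
	Name         string
	BuildOptions BuildOptions // per-spec copy, written before workers run
}

func compileAll(specs []*CompileSpec, base BuildOptions) {
	for _, s := range specs {
		s.BuildOptions = base // annotate each spec up front
	}
	var wg sync.WaitGroup
	for _, s := range specs {
		wg.Add(1)
		go func(s *CompileSpec) {
			defer wg.Done()
			_ = s.BuildOptions // workers only read their own spec
		}(s)
	}
	wg.Wait()
}
```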
Ettore Di Giacinto
a73f5f9b65 Add more tests 2021-04-23 12:02:41 +02:00
Ettore Di Giacinto
0288eedbc3 Always resolve buildhash image, add --rebuild to build 2021-04-23 12:02:41 +02:00
Ettore Di Giacinto
b27237b7ff Allow to pull images from multiple-repo while generating all artifacts
Before, without --only-target-package, we were forcing the images to be
built locally. Now we extend this to that use-case as well.

Also revisit how we pass the builder image hash around, so it's easier to
read.

This change also re-enables tagging and building builder images every
time. Previously, we regressed into not tagging build images coming from
out of the tree, with the unpleasant side-effect of not being able to
re-build the same artifacts for those leaf packages.
2021-04-23 12:02:41 +02:00
Ettore Di Giacinto
c9aed37fa7 Fixup on values interpolation and metadata retrieval
- Fixup search path on metadata spec load. Previously we were reading
  the package being passed, and not the one resolved (it failed against
  selectors)
- Inherit push repositories first, so they take precedence over pull ones
- Add test cases to cover build values interpolation by remote
  repositories
- Enhance test cases to check image cache repository inheritance when
  --from-repositories is passed
- Fix race condition when inheriting buildspec options: instead of
  consuming the compiler one, annotate the updates in the package
  BuildOption spec which is passed around
- Update vendor
2021-04-23 12:02:19 +02:00
Ettore Di Giacinto
788b889d14 Enhance integration test to check invalid characters for docker images tags 2021-04-21 11:05:50 +02:00
Ettore Di Giacinto
ef92f23221 Inherit pullimages as pushimages while parsing compilespec metadata
Fixes #200
2021-04-19 17:18:37 +02:00
Ettore Di Giacinto
562fcc2421 Strip invalid chars from docker images also for metadata files
Fixes #199
2021-04-19 17:18:37 +02:00
Ettore Di Giacinto
54be45dcff Tag 0.13.1 2021-04-17 10:20:20 +02:00
Ettore Di Giacinto
413572a8e3 Adapt tests at new behavior change 2021-04-17 00:44:10 +02:00
Ettore Di Giacinto
ecc41ce370 Check if there are upgrades before attempting install 2021-04-16 23:38:10 +02:00
Ettore Di Giacinto
dd6501a642 Check if there are upgrades before attempting install 2021-04-16 22:28:30 +02:00
Ettore Di Giacinto
a7b355ed2f Tag 0.13.0 2021-04-16 21:33:32 +02:00
Ettore Di Giacinto
9f3e7fd0b2 PackageIndex is in repository metadata file only
This was a regression in those changesets: don't replicate the info and,
at the same time, write the artifact path with only the file name
(excluding the path)
2021-04-16 15:12:51 +02:00
Ettore Di Giacinto
ac3843e342 Add integration tests 2021-04-16 14:01:23 +02:00
Ettore Di Giacinto
612477718e Add values interpolation inheritance test 2021-04-16 14:01:23 +02:00
Ettore Di Giacinto
802b0b5201 Add integration test to build with no tree and interpolation values 2021-04-16 14:01:23 +02:00
Ettore Di Giacinto
c5587f9dfc Add simple integration test about remote-repo builds 2021-04-16 14:01:23 +02:00
Ettore Di Giacinto
c27d4d258e Allow create-repo to source trees from remote repositories
This makes it possible to create the repository from external ones.
It also required addressing #26.

Fixes #26
2021-04-16 14:01:13 +02:00
Ettore Di Giacinto
9202bcbbbe Convert raw value to templatedata before merging 2021-04-16 14:00:11 +02:00
Ettore Di Giacinto
fadd5a10a3 Load trees separately while reconstructing spectrees 2021-04-16 14:00:11 +02:00
Ettore Di Giacinto
d297b92483 Update vendor 2021-04-16 14:00:11 +02:00
Ettore Di Giacinto
7ba7add2a8 Allow to pull specfiles from published repositories
- Interpolate values from the repository's compilespec if present
- Automatically merge cache images coming from the specified repository
  when necessary

Fixes #194
2021-04-16 13:58:51 +02:00
Ettore Di Giacinto
57c769b4a5 Refactor compiler and annotate buildoptions into compiler metadata
This allows the values used during the build of each package to be picked up later
2021-04-16 13:57:54 +02:00
Ettore Di Giacinto
44cae094e8 Allow multiple build values files
Fixes #198
2021-04-16 13:56:51 +02:00
Ettore Di Giacinto
c022c75239 Write BuildTree as an additional repository file
Also split Docker repository generation a bit more for readability
2021-04-16 13:56:51 +02:00
Ettore Di Giacinto
3250d63072 Don't merge install and compilation DB. Use a temporary DB for generation 2021-04-16 13:56:51 +02:00
Ettore Di Giacinto
b8961be793 Add accessor to embed custom files to a repository while writing
Split up into methods so it's easier to add custom files
2021-04-16 13:56:45 +02:00
Ettore Di Giacinto
036b5c08c6 Split up repository generators 2021-04-08 15:24:32 +02:00
Ettore Di Giacinto
c7b79bf630 Compiler recipe now saves the entire tree 2021-04-08 15:22:00 +02:00
Ettore Di Giacinto
83cb6a2804 Check target with PathSeparator in finalizer.go 2021-04-08 10:22:24 +02:00
Ettore Di Giacinto
182afa315a Tag 0.12.0 2021-04-08 10:21:22 +02:00
Ettore Di Giacinto
ec19b34ca8 Allow to pull from multiple repositories
Adds a new CLI flag to luet build, `--pull-repository`, which allows
passing a list of docker image references to pull the cache from

Fixes #185
Fixes #184
Closes #161
2021-04-08 09:17:54 +02:00
Daniele Rondina
88307b1912 Restore parsing of gentoo string with condition (#195)
* cmd/helpers: Permit to parse gentoo package str with condition

* Update vendor pkgs-checker @v0.8.1
2021-03-30 09:17:16 +02:00
Ettore Di Giacinto
a83be204e8 Tag 0.11.8 2021-03-18 13:53:04 +01:00
Ettore Di Giacinto
b8352a81a2 Use Lchown when copying bits, lower message to warn 2021-03-18 11:23:42 +01:00
Ettore Di Giacinto
ebf907fb45 Add owner permissions tests 2021-03-18 10:57:09 +01:00
Ettore Di Giacinto
f0a34f1cf0 Enable NoLchown
This caused file permissions to be dropped
2021-03-18 10:48:23 +01:00
Ettore Di Giacinto
4f1e4c0b41 Switch to containerd when unpacking layers 2021-03-18 10:47:58 +01:00
Ettore Di Giacinto
c736c002af build: Set privileged to true by default 2021-03-18 10:47:39 +01:00
Ettore Di Giacinto
2c32af2951 Tag 0.11.7 2021-03-16 15:47:37 +01:00
Ettore Di Giacinto
ce349ca1b7 Set the path relative to the artifact when generating docker repositories 2021-03-16 14:46:46 +01:00
Ettore Di Giacinto
662742851a Generate backend bus events in the backends 2021-03-16 14:46:28 +01:00
Ettore Di Giacinto
f8ef1c0889 Print error when failing downloading docker image 2021-03-16 11:34:36 +01:00
Ettore Di Giacinto
23513f2c75 Include files in search only if we have the artifact 2021-03-15 11:40:34 +01:00
Ettore Di Giacinto
268239a561 Propagate verify when serializing 2021-03-15 11:37:56 +01:00
Ettore Di Giacinto
f8989e464e Update vendor 2021-03-11 17:57:59 +01:00
Ettore Di Giacinto
0028dd3a92 Support content trust images and pull with authentication
Contact the notary server if `--verify` is specified (or `verify: true`
is enabled in the repo config), check whether the image is signed, and
use the returned value to pull the verified image.
2021-03-11 17:57:59 +01:00
Ettore Di Giacinto
caa1cfad5c Tag 0.11.6 2021-03-09 11:37:41 +01:00
Ettore Di Giacinto
39839edda9 Add util to rootcmd 2021-03-09 11:37:22 +01:00
Ettore Di Giacinto
ecd4be4ad3 Tag 0.11.5 2021-03-09 10:55:34 +01:00
Ettore Di Giacinto
675170939d Expose DownloadAndExtractDockerImage as a util
Create a util subcommand to collect utilities that are handy for
development and already present in the luet codebase. In this case we
expose `luet util unpack` to unpack a docker image without a running
docker daemon.
2021-03-09 09:22:37 +01:00
Ettore Di Giacinto
0f1acac89b Tag 0.11.4 2021-03-07 12:38:37 +01:00
Ettore Di Giacinto
42f5210764 Add DownloadOnly option
It allows downloading only the packages, without
installing/upgrading/replacing them

Fixes #179
2021-03-07 11:39:59 +01:00
Ettore Di Giacinto
9eda81667b Use common ImageID() when computing final images
It also adds tests and strips the invalid "+" character, which is not
supported in docker image tags
2021-03-07 11:01:08 +01:00
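A hedged sketch of the tag sanitization mentioned above: docker tags only accept letters, digits, '_', '.' and '-', so characters such as "+" have to be dropped or replaced (this example drops them; whether luet drops or substitutes them is not shown here).

```go
package main

import (
	"fmt"
	"strings"
)

// stripInvalidTagChars removes every character that docker image tags do
// not accept, such as the "+" often found in package versions.
func stripInvalidTagChars(tag string) string {
	return strings.Map(func(r rune) rune {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.', r == '-':
			return r
		default:
			return -1 // drop anything a docker tag cannot contain
		}
	}, tag)
}

func main() {
	fmt.Println(stripInvalidTagChars("foo-1.0+r2")) // prints foo-1.0r2
}
```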
611 changed files with 85208 additions and 5210 deletions

CONTRIBUTING.md

@@ -0,0 +1,53 @@
We love your input! We want to make contributing to this project as easy and transparent as possible, whether it's:
- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing new features
- Becoming a maintainer
## We Develop with Github
We use github to host code, to track issues and feature requests, as well as accept pull requests.
## Stay in touch
Join us in [slack](https://luet.slack.com/join/shared_invite/enQtOTQxMjcyNDQ0MDUxLWQ5ODVlNTI1MTYzNDRkYzkyYmM1YWE5YjM0NTliNDEzNmQwMTkxNDRhNDIzM2Y5NDBlOTZjZTYxYWQyNDE4YzY#/) and hang out with the community! It will make it much easier to get started and take your first steps in contributing to the project.
## All Code Changes Happen Through Pull Requests
Pull requests are the best way to propose changes to the codebase. We actively welcome your pull requests:
1. Fork the repo you want to contribute to and create your branch from `develop`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the [documentation](https://github.com/Luet-lab/docs).
4. Ensure the test suite passes.
5. Make sure your code lints.
6. Issue that pull request!
## Any contributions you make will be under the Software License of the repository
In short, when you submit code changes, your submissions are understood to be under the same License that covers the project. Feel free to contact the maintainers if that's a concern.
## Report bugs using Github's [issues](https://github.com/mudler/luet/issues)
We use GitHub issues to track public bugs. Report a bug by [opening a new issue](https://github.com/mudler/luet/issues/new); it's that easy!
## Write bug reports with detail, background, and sample code
Try to be as descriptive as possible. When opening a new issue you will be prompted to choose between a bug report and a feature request, with a small template to fill in the details. Be specific!
**Great Bug Reports** tend to have:
- A quick summary and/or background
- Steps to reproduce
- Be specific!
- Give sample code if you can.
- What you expected would happen
- What actually happens
- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work)
People *love* thorough bug reports.
## License
By contributing, you agree that your contributions will be licensed under the project Licenses.
## References
This document was adapted from the open-source contribution guidelines from https://gist.github.com/briandk/3d2e8b3ec8daf5a27a62


@@ -1,19 +1,24 @@
<p align="center">
<img width=150 height=150 src="https://user-images.githubusercontent.com/2420543/119691600-0293d700-be4b-11eb-827f-49ff1174a07a.png">
</p>
# luet - Container-based Package manager
[![Docker Repository on Quay](https://quay.io/repository/luet/base/status "Docker Repository on Quay")](https://quay.io/repository/luet/base)
[![Go Report Card](https://goreportcard.com/badge/github.com/mudler/luet)](https://goreportcard.com/report/github.com/mudler/luet)
[![Build Status](https://travis-ci.org/mudler/luet.svg?branch=master)](https://travis-ci.org/mudler/luet)
[![Build and release on push](https://github.com/mudler/luet/actions/workflows/release.yml/badge.svg)](https://github.com/mudler/luet/actions/workflows/release.yml)
[![GoDoc](https://godoc.org/github.com/mudler/luet?status.svg)](https://godoc.org/github.com/mudler/luet)
[![codecov](https://codecov.io/gh/mudler/luet/branch/master/graph/badge.svg)](https://codecov.io/gh/mudler/luet)
[![asciicast](https://asciinema.org/a/388348.svg)](https://asciinema.org/a/388348)
Luet is a multi-platform Package Manager based on containers - it uses Docker (and others) to build packages. It has zero dependencies and is well suited for "from scratch" environments. It can also version entire rootfs and enables delivery of OTA-like updates, making it a perfect fit for the Edge computing era and IoT embedded devices.
It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/) in YAML notation to define both [packages](https://luet-lab.github.io/docs/docs/concepts/packages/) and [rootfs](https://luet-lab.github.io/docs/docs/concepts/packages/#package-layers). As it is based on containers, it can be also used to build stages for Linux From Scratch installations and it can build and track updates for those systems.
It is written entirely in Golang and, when used as a package manager, it can run in a from-scratch environment with zero dependencies.
[![asciicast](https://asciinema.org/a/388348.svg)](https://asciinema.org/a/388348)
## In a glance
- Luet can reuse Gentoo's portage tree hierarchy, and it is heavily inspired by it.


@@ -16,14 +16,18 @@ package cmd
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/ghodss/yaml"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/artifact"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
"github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -40,17 +44,17 @@ var buildCmd = &cobra.Command{
Long: `Builds one or more packages from a tree (current directory is implied):
$ luet build utils/busybox utils/yq ...
Builds all packages
$ luet build --all
Builds only the leaf packages:
$ luet build --full
Build package revdeps:
$ luet build --revdeps utils/yq
Build package without dependencies (needs the images already in the host, or either need to be available online):
@@ -65,7 +69,6 @@ Build packages specifying multiple definition trees:
viper.BindPFlag("destination", cmd.Flags().Lookup("destination"))
viper.BindPFlag("backend", cmd.Flags().Lookup("backend"))
viper.BindPFlag("privileged", cmd.Flags().Lookup("privileged"))
viper.BindPFlag("database", cmd.Flags().Lookup("database"))
viper.BindPFlag("revdeps", cmd.Flags().Lookup("revdeps"))
viper.BindPFlag("all", cmd.Flags().Lookup("all"))
viper.BindPFlag("compression", cmd.Flags().Lookup("compression"))
@@ -97,10 +100,9 @@ Build packages specifying multiple definition trees:
privileged := viper.GetBool("privileged")
revdeps := viper.GetBool("revdeps")
all := viper.GetBool("all")
databaseType := viper.GetString("database")
compressionType := viper.GetString("compression")
imageRepository := viper.GetString("image-repository")
values := viper.GetString("values")
values := viper.GetStringSlice("values")
wait := viper.GetBool("wait")
push := viper.GetBool("push")
pull := viper.GetBool("pull")
@@ -109,6 +111,8 @@ Build packages specifying multiple definition trees:
onlydeps := viper.GetBool("onlydeps")
onlyTarget, _ := cmd.Flags().GetBool("only-target-package")
full, _ := cmd.Flags().GetBool("full")
rebuild, _ := cmd.Flags().GetBool("rebuild")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
var results Results
backendArgs := viper.GetStringSlice("backend-args")
@@ -118,34 +122,28 @@ Build packages specifying multiple definition trees:
LuetCfg.GetLogging().SetLogLevel("error")
}
pretend, _ := cmd.Flags().GetBool("pretend")
compilerSpecs := compiler.NewLuetCompilationspecs()
fromRepo, _ := cmd.Flags().GetBool("from-repositories")
compilerSpecs := compilerspec.NewLuetCompilationspecs()
var db pkg.PackageDatabase
compilerBackend := backend.NewBackend(backendType)
compilerBackend, err := compiler.NewBackend(backendType)
helpers.CheckErr(err)
switch databaseType {
case "memory":
db = pkg.NewInMemoryDatabase(false)
case "boltdb":
tmpdir, err := ioutil.TempDir("", "package")
if err != nil {
Fatal(err)
}
db = pkg.NewBoltDatabase(tmpdir)
}
db = pkg.NewInMemoryDatabase(false)
defer db.Clean()
generalRecipe := tree.NewCompilerRecipe(db)
if fromRepo {
if err := installer.LoadBuildTree(generalRecipe, db, LuetCfg); err != nil {
Warning("errors while loading trees from repositories", err.Error())
}
}
for _, src := range treePaths {
Info("Loading tree", src)
err := generalRecipe.Load(src)
if err != nil {
Fatal("Error: " + err.Error())
}
helpers.CheckErr(generalRecipe.Load(src))
}
Info("Building in", dst)
@@ -154,38 +152,44 @@ Build packages specifying multiple definition trees:
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts
pullRepo, _ := cmd.Flags().GetStringArray("pull-repository")
LuetCfg.GetGeneral().ShowBuildOutput = LuetCfg.Viper.GetBool("general.show_build_output")
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
opts := compiler.NewDefaultCompilerOptions()
opts.SolverOptions = *LuetCfg.GetSolverOptions()
opts.ImageRepository = imageRepository
opts.PullFirst = pull
opts.KeepImg = keepImages
opts.Push = push
opts.OnlyDeps = onlydeps
opts.NoDeps = nodeps
opts.Wait = wait
opts.PackageTargetOnly = onlyTarget
opts.BuildValuesFile = values
var solverOpts solver.Options
if concurrent {
solverOpts = solver.Options{Type: solver.ParallelSimple, Concurrency: concurrency}
} else {
solverOpts = solver.Options{Type: solver.SingleCoreSimple, Concurrency: concurrency}
opts := &LuetSolverOptions{
Type: stype,
LearnRate: float32(rate),
Discount: float32(discount),
MaxAttempts: attempts,
}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(), opts, solverOpts)
luetCompiler.SetBackendArgs(backendArgs)
luetCompiler.SetConcurrency(concurrency)
luetCompiler.SetCompressionType(compiler.CompressionImplementation(compressionType))
Debug("Solver", opts.CompactString())
if concurrent {
opts.Options = solver.Options{Type: solver.ParallelSimple, Concurrency: concurrency}
} else {
opts.Options = solver.Options{Type: solver.SingleCoreSimple, Concurrency: concurrency}
}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(),
options.NoDeps(nodeps),
options.WithBackendType(backendType),
options.PushImages(push),
options.WithBuildValues(values),
options.WithPullRepositories(pullRepo),
options.WithPushRepository(imageRepository),
options.Rebuild(rebuild),
options.WithSolverOptions(*opts),
options.Wait(wait),
options.OnlyTarget(onlyTarget),
options.PullFirst(pull),
options.KeepImg(keepImages),
options.OnlyDeps(onlydeps),
options.BackendArgs(backendArgs),
options.Concurrency(concurrency),
options.WithCompressionType(compression.Implementation(compressionType)),
)
if full {
specs, err := luetCompiler.FromDatabase(generalRecipe.GetDatabase(), true, dst)
if err != nil {
@@ -198,7 +202,6 @@ Build packages specifying multiple definition trees:
}
} else if !all {
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
@@ -226,13 +229,13 @@ Build packages specifying multiple definition trees:
}
}
var artifact []compiler.Artifact
var artifact []*artifact.PackageArtifact
var errs []error
if revdeps {
artifact, errs = luetCompiler.CompileWithReverseDeps(privileged, compilerSpecs)
} else if pretend {
toCalculate := []compiler.CompilationSpec{}
toCalculate := []*compilerspec.LuetCompilationSpec{}
if full {
var err error
toCalculate, err = luetCompiler.ComputeMinimumCompilableSet(compilerSpecs.All()...)
@@ -244,11 +247,12 @@ Build packages specifying multiple definition trees:
}
for _, sp := range toCalculate {
packs, err := luetCompiler.ComputeDepTree(sp)
ht := compiler.NewHashTree(generalRecipe.GetDatabase())
hashTree, err := ht.Query(luetCompiler, sp)
if err != nil {
errs = append(errs, err)
}
for _, p := range packs {
for _, p := range hashTree.Dependencies {
results.Packages = append(results.Packages,
PackageResult{
Name: p.Package.GetName(),
@@ -282,6 +286,7 @@ Build packages specifying multiple definition trees:
}
}
} else {
artifact, errs = luetCompiler.CompileParallel(privileged, compilerSpecs)
}
if len(errs) != 0 {
@@ -291,7 +296,7 @@ Build packages specifying multiple definition trees:
Fatal("Bailing out")
}
for _, a := range artifact {
Info("Artifact generated:", a.GetPath())
Info("Artifact generated:", a.Path)
}
},
}
@@ -304,12 +309,11 @@ func init() {
buildCmd.Flags().StringSliceP("tree", "t", []string{path}, "Path of the tree to use.")
buildCmd.Flags().String("backend", "docker", "backend used (docker,img)")
buildCmd.Flags().Bool("privileged", false, "Privileged (Keep permissions)")
buildCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")
buildCmd.Flags().Bool("privileged", true, "Privileged (Keep permissions)")
buildCmd.Flags().Bool("revdeps", false, "Build with revdeps")
buildCmd.Flags().Bool("all", false, "Build all specfiles in the tree")
buildCmd.Flags().Bool("full", false, "Build all packages (optimized)")
buildCmd.Flags().String("values", "", "Build values file to interpolate with each package")
buildCmd.Flags().StringSlice("values", []string{}, "Build values file to interpolate with each package")
buildCmd.Flags().StringSliceP("backend-args", "a", []string{}, "Backend args")
buildCmd.Flags().String("destination", filepath.Join(path, "build"), "Destination folder")
@@ -328,8 +332,10 @@ func init() {
buildCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
buildCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
buildCmd.Flags().Bool("live-output", LuetCfg.GetGeneral().ShowBuildOutput, "Enable live output of the build phase.")
buildCmd.Flags().Bool("from-repositories", false, "Consume the user-defined repositories to pull specfiles from")
buildCmd.Flags().Bool("rebuild", false, "To combine with --pull. Allows to rebuild the target package even if an image is available, against a local values file")
buildCmd.Flags().Bool("pretend", false, "Just print what packages will be compiled")
buildCmd.Flags().StringArrayP("pull-repository", "p", []string{}, "A list of repositories to pull the cache from")
buildCmd.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")


@@ -22,7 +22,7 @@ import (
. "github.com/mudler/luet/pkg/config"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
"github.com/spf13/cobra"
@@ -47,7 +47,7 @@ var cleanupCmd = &cobra.Command{
LuetCfg.System.DatabasePath = dbpath
LuetCfg.System.Rootfs = rootfs
// Check if cache dir exists
if helpers.Exists(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) {
if fileHelper.Exists(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) {
files, err := ioutil.ReadDir(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath())
if err != nil {


@@ -18,11 +18,13 @@ import (
"os"
"path/filepath"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/compression"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
// . "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
@@ -74,7 +76,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
},
Run: func(cmd *cobra.Command, args []string) {
var err error
var repo installer.Repository
var repo *installer.LuetSystemRepository
treePaths := viper.GetStringSlice("tree")
dst := viper.GetString("output")
@@ -90,19 +92,19 @@ Create a repository from the metadata description defined in the luet.yaml confi
metaName := viper.GetString("meta-filename")
source_repo := viper.GetString("repo")
backendType := viper.GetString("backend")
fromRepo, _ := cmd.Flags().GetBool("from-repositories")
treeFile := installer.NewDefaultTreeRepositoryFile()
metaFile := installer.NewDefaultMetaRepositoryFile()
compilerBackend := backend.NewBackend(backendType)
compilerBackend, err := compiler.NewBackend(backendType)
helpers.CheckErr(err)
force := viper.GetBool("force-push")
imagePush := viper.GetBool("push-images")
if source_repo != "" {
// Search for system repository
lrepo, err := LuetCfg.GetSystemRepository(source_repo)
if err != nil {
Fatal("Error: " + err.Error())
}
helpers.CheckErr(err)
if len(treePaths) <= 0 {
treePaths = []string{lrepo.TreePath}
@@ -118,19 +120,23 @@ Create a repository from the metadata description defined in the luet.yaml confi
lrepo.Priority,
packages,
treePaths,
pkg.NewInMemoryDatabase(false), compilerBackend, dst, imagePush, force)
pkg.NewInMemoryDatabase(false),
compilerBackend,
dst,
imagePush,
force,
fromRepo,
LuetCfg)
helpers.CheckErr(err)
} else {
repo, err = installer.GenerateRepository(name, descr, t, urls, 1, packages,
treePaths, pkg.NewInMemoryDatabase(false), compilerBackend, dst, imagePush, force)
}
if err != nil {
Fatal("Error: " + err.Error())
treePaths, pkg.NewInMemoryDatabase(false), compilerBackend, dst, imagePush, force, fromRepo, LuetCfg)
helpers.CheckErr(err)
}
if treetype != "" {
treeFile.SetCompressionType(compiler.CompressionImplementation(treetype))
treeFile.SetCompressionType(compression.Implementation(treetype))
}
if treeName != "" {
@@ -138,7 +144,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
}
if metatype != "" {
metaFile.SetCompressionType(compiler.CompressionImplementation(metatype))
metaFile.SetCompressionType(compression.Implementation(metatype))
}
if metaName != "" {
@@ -149,17 +155,15 @@ Create a repository from the metadata description defined in the luet.yaml confi
repo.SetRepositoryFile(installer.REPOFILE_META_KEY, metaFile)
err = repo.Write(dst, reset, true)
if err != nil {
Fatal("Error: " + err.Error())
}
helpers.CheckErr(err)
},
}
func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
helpers.CheckErr(err)
createrepoCmd.Flags().String("packages", filepath.Join(path, "build"), "Packages folder (output from build)")
createrepoCmd.Flags().StringSliceP("tree", "t", []string{path}, "Path of the source trees to use.")
createrepoCmd.Flags().String("output", filepath.Join(path, "build"), "Destination for generated archives. With 'docker' repository type, it should be an image reference (e.g 'foo/bar')")
@@ -178,6 +182,7 @@ func init() {
createrepoCmd.Flags().String("tree-filename", installer.TREE_TARBALL, "Repository tree filename")
createrepoCmd.Flags().String("meta-compression", "none", "Compression alg: none, gzip, zstd")
createrepoCmd.Flags().String("meta-filename", installer.REPOSITORY_METAFILE+".tar", "Repository metadata filename")
createrepoCmd.Flags().Bool("from-repositories", false, "Consume the user-defined repositories to pull specfiles from")
RootCmd.AddCommand(createrepoCmd)
}


@@ -18,7 +18,8 @@ package cmd_database
import (
"io/ioutil"
"github.com/mudler/luet/pkg/compiler"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -66,21 +67,21 @@ For reference, inspect a "metadata.yaml" file generated while running "luet buil
if err != nil {
Fatal("Failed reading ", a, ": ", err.Error())
}
art, err := compiler.NewPackageArtifactFromYaml(dat)
art, err := artifact.NewPackageArtifactFromYaml(dat)
if err != nil {
Fatal("Failed reading yaml ", a, ": ", err.Error())
}
files := art.GetFiles()
files := art.Files
if _, err := systemDB.CreatePackage(art.GetCompileSpec().GetPackage()); err != nil {
if _, err := systemDB.CreatePackage(art.CompileSpec.GetPackage()); err != nil {
Fatal("Failed to create ", a, ": ", err.Error())
}
if err := systemDB.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: art.GetCompileSpec().GetPackage().GetFingerPrint(), Files: files}); err != nil {
if err := systemDB.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: art.CompileSpec.GetPackage().GetFingerPrint(), Files: files}); err != nil {
Fatal("Failed setting package files for ", a, ": ", err.Error())
}
Info(art.GetCompileSpec().GetPackage().HumanReadableString(), " created")
Info(art.CompileSpec.GetPackage().HumanReadableString(), " created")
}
},


@@ -22,6 +22,8 @@ import (
"regexp"
"strings"
. "github.com/mudler/luet/pkg/logger"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
pkg "github.com/mudler/luet/pkg/package"
version "github.com/mudler/luet/pkg/versioner"
@@ -56,7 +58,8 @@ func packageData(p string) (string, string) {
}
func ParsePackageStr(p string) (*pkg.DefaultPackage, error) {
if !strings.HasPrefix(p, "=") {
if !(strings.HasPrefix(p, "=") || strings.HasPrefix(p, ">") ||
strings.HasPrefix(p, "<")) {
ver := ">=0"
cat := ""
name := ""
@@ -111,3 +114,9 @@ func ParsePackageStr(p string) (*pkg.DefaultPackage, error) {
return pack, nil
}
func CheckErr(err error) {
if err != nil {
Fatal(err)
}
}


@@ -66,5 +66,37 @@ var _ = Describe("CLI Helpers", func() {
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal("1.2"))
})
It("accept gentoo regex parsing with with condition", func() {
pack, err := ParsePackageStr(">=cat/foo-1.2")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal(">=1.2"))
})
It("accept gentoo regex parsing with with condition2", func() {
pack, err := ParsePackageStr("<cat/foo-1.2")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal("<1.2"))
})
It("accept gentoo regex parsing with with condition3", func() {
pack, err := ParsePackageStr(">cat/foo-1.2")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal(">1.2"))
})
It("accept gentoo regex parsing with with condition4", func() {
pack, err := ParsePackageStr("<=cat/foo-1.2")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal("<=1.2"))
})
})
})


@@ -70,16 +70,6 @@ To force install a package:
toInstall = append(toInstall, pack)
}
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
@@ -89,6 +79,7 @@ To force install a package:
onlydeps := LuetCfg.Viper.GetBool("onlydeps")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
dbpath := LuetCfg.Viper.GetString("system.database_path")
rootfs := LuetCfg.Viper.GetString("system.rootfs")
@@ -110,6 +101,7 @@ To force install a package:
}
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
repos := installer.SystemRepositories(LuetCfg)
// Load config protect configs
installer.LoadConfigProtectConfs(LuetCfg)
@@ -121,6 +113,7 @@ To force install a package:
Force: force,
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
DownloadOnly: downloadOnly,
Ask: !yes,
})
inst.Repositories(repos)
@@ -147,6 +140,7 @@ func init() {
installCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
installCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
installCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
installCmd.Flags().Bool("download-only", false, "Download only")
RootCmd.AddCommand(installCmd)
}


@@ -20,7 +20,9 @@ import (
"time"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/compiler/types/compression"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
@@ -64,21 +66,21 @@ Afterwards, you can use the content generated and associate it with a tree and a
Fatal("Invalid package string ", packageName, ": ", err.Error())
}
spec := &compiler.LuetCompilationSpec{Package: p}
artifact := compiler.NewPackageArtifact(filepath.Join(dst, p.GetFingerPrint()+".package.tar"))
artifact.SetCompressionType(compiler.CompressionImplementation(compressionType))
err = artifact.Compress(sourcePath, concurrency)
spec := &compilerspec.LuetCompilationSpec{Package: p}
a := artifact.NewPackageArtifact(filepath.Join(dst, p.GetFingerPrint()+".package.tar"))
a.CompressionType = compression.Implementation(compressionType)
err = a.Compress(sourcePath, concurrency)
if err != nil {
Fatal("failed compressing ", packageName, ": ", err.Error())
}
artifact.SetCompileSpec(spec)
filelist, err := artifact.FileList()
a.CompileSpec = spec
filelist, err := a.FileList()
if err != nil {
Fatal("failed generating file list for ", packageName, ": ", err.Error())
}
artifact.SetFiles(filelist)
artifact.GetCompileSpec().GetPackage().SetBuildTimestamp(time.Now().String())
err = artifact.WriteYaml(dst)
a.Files = filelist
a.CompileSpec.GetPackage().SetBuildTimestamp(time.Now().String())
err = a.WriteYaml(dst)
if err != nil {
Fatal("failed writing metadata yaml file for ", packageName, ": ", err.Error())
}


@@ -66,6 +66,7 @@ var replaceCmd = &cobra.Command{
dbpath := LuetCfg.Viper.GetString("system.database_path")
rootfs := LuetCfg.Viper.GetString("system.rootfs")
engine := LuetCfg.Viper.GetString("system.database_engine")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
LuetCfg.System.DatabaseEngine = engine
LuetCfg.System.DatabasePath = dbpath
@@ -121,6 +122,7 @@ var replaceCmd = &cobra.Command{
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
Ask: !yes,
DownloadOnly: downloadOnly,
})
inst.Repositories(repos)
@@ -148,6 +150,7 @@ func init() {
replaceCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
replaceCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
replaceCmd.Flags().StringSlice("for", []string{}, "Packages that has to be installed in place of others")
replaceCmd.Flags().Bool("download-only", false, "Download only")
RootCmd.AddCommand(replaceCmd)
}


@@ -72,8 +72,8 @@ func NewRepoListCommand() *cobra.Command {
if repo.Cached {
r := installer.NewSystemRepository(repo)
localRepo, _ := r.(*installer.LuetSystemRepository).ReadSpecFile(filepath.Join(repobasedir,
installer.REPOSITORY_SPECFILE), false)
localRepo, _ := r.ReadSpecFile(filepath.Join(repobasedir,
installer.REPOSITORY_SPECFILE))
if localRepo != nil {
tsec, _ := strconv.ParseInt(localRepo.GetLastUpdate(), 10, 64)
repoRevision = Bold(Red(localRepo.GetRevision())).String() +


@@ -25,6 +25,7 @@ import (
"github.com/marcsauter/single"
bus "github.com/mudler/luet/pkg/bus"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
extensions "github.com/mudler/cobra-extensions"
config "github.com/mudler/luet/pkg/config"
@@ -40,7 +41,7 @@ var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}
const (
LuetCLIVersion = "0.11.3"
LuetCLIVersion = "0.16.5"
LuetEnvPrefix = "LUET"
)
@@ -97,7 +98,7 @@ To build a package, from a tree definition:
plugin := viper.GetStringSlice("plugin")
bus.Manager.Load(plugin...).Register()
bus.Manager.Initialize(plugin...)
if len(bus.Manager.Plugins) != 0 {
Info(":lollipop:Enabled plugins:")
for _, p := range bus.Manager.Plugins {
@@ -252,7 +253,7 @@ func initConfig() {
}
homeDir := helpers.GetHomeDir()
if helpers.Exists(filepath.Join(pwdDir, ".luet.yaml")) || (homeDir != "" && helpers.Exists(filepath.Join(homeDir, ".luet.yaml"))) {
if fileHelper.Exists(filepath.Join(pwdDir, ".luet.yaml")) || (homeDir != "" && fileHelper.Exists(filepath.Join(homeDir, ".luet.yaml"))) {
viper.AddConfigPath(".")
if homeDir != "" {
viper.AddConfigPath(homeDir)


@@ -158,20 +158,23 @@ func searchOnline(term string, l list.Writer, t table.Writer, label, labelMatch,
} else {
matches = synced.Search(term)
}
for _, m := range matches {
if !revdeps {
if !m.Package.IsHidden() || m.Package.IsHidden() && hidden {
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package))
packageToList(l, m.Repo.GetName(), m.Package)
results.Packages = append(results.Packages,
PackageResult{
Name: m.Package.GetName(),
Version: m.Package.GetVersion(),
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
Files: m.Artifact.GetFiles(),
})
r := &PackageResult{
Name: m.Package.GetName(),
Version: m.Package.GetVersion(),
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
}
if m.Artifact != nil {
r.Files = m.Artifact.Files
}
results.Packages = append(results.Packages, *r)
}
} else {
packs, _ := m.Repo.GetTree().GetDatabase().GetRevdeps(m.Package)
@@ -179,15 +182,17 @@ func searchOnline(term string, l list.Writer, t table.Writer, label, labelMatch,
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
t.AppendRow(packageToRow(m.Repo.GetName(), revdep))
packageToList(l, m.Repo.GetName(), revdep)
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: revdep.IsHidden(),
Files: m.Artifact.GetFiles(),
})
r := &PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: revdep.IsHidden(),
}
if m.Artifact != nil {
r.Files = m.Artifact.Files
}
results.Packages = append(results.Packages, *r)
}
}
}
@@ -257,7 +262,7 @@ func searchFiles(term string, l list.Writer, t table.Writer) Results {
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
Files: m.Artifact.GetFiles(),
Files: m.Artifact.Files,
})
}
return results


@@ -25,6 +25,7 @@ import (
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/options"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -57,6 +58,7 @@ func NewTreeImageCommand() *cobra.Command {
treePath, _ := cmd.Flags().GetStringArray("tree")
imageRepository := viper.GetString("image-repository")
pullRepo, _ := cmd.Flags().GetStringArray("pull-repository")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
@@ -73,12 +75,15 @@ func NewTreeImageCommand() *cobra.Command {
}
compilerBackend := backend.NewSimpleDockerBackend()
opts := compiler.NewDefaultCompilerOptions()
opts.SolverOptions = *LuetCfg.GetSolverOptions()
opts.ImageRepository = imageRepository
solverOpts := solver.Options{Type: solver.SingleCoreSimple, Concurrency: 1}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, reciper.GetDatabase(), opts, solverOpts)
opts := *LuetCfg.GetSolverOptions()
opts.Options = solver.Options{Type: solver.SingleCoreSimple, Concurrency: 1}
luetCompiler := compiler.NewLuetCompiler(
compilerBackend,
reciper.GetDatabase(),
options.WithPushRepository(imageRepository),
options.WithPullRepositories(pullRepo),
options.WithSolverOptions(opts),
)
a := args[0]
@@ -91,9 +96,14 @@ func NewTreeImageCommand() *cobra.Command {
if err != nil {
Fatal("Error: " + err.Error())
}
asserts, err := luetCompiler.ComputeDepTree(spec)
for _, assertion := range asserts { //highly dependent on the order
ht := compiler.NewHashTree(reciper.GetDatabase())
hashtree, err := ht.Query(luetCompiler, spec)
if err != nil {
Fatal("Error: " + err.Error())
}
for _, assertion := range hashtree.Solution { //highly dependent on the order
//buildImageHash := imageRepository + ":" + assertion.Hash.BuildHash
currentPackageImageHash := imageRepository + ":" + assertion.Hash.PackageHash
@@ -135,6 +145,7 @@ func NewTreeImageCommand() *cobra.Command {
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().StringArrayP("tree", "t", []string{path}, "Path of the tree to use.")
ans.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
ans.Flags().StringArrayP("pull-repository", "p", []string{}, "A list of repositories to pull the cache from")
return ans
}


@@ -66,6 +66,7 @@ var upgradeCmd = &cobra.Command{
dbpath := LuetCfg.Viper.GetString("system.database_path")
rootfs := LuetCfg.Viper.GetString("system.rootfs")
engine := LuetCfg.Viper.GetString("system.database_engine")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
LuetCfg.System.DatabaseEngine = engine
LuetCfg.System.DatabasePath = dbpath
@@ -96,6 +97,7 @@ var upgradeCmd = &cobra.Command{
UpgradeNewRevisions: sync,
PreserveSystemEssentialData: true,
Ask: !yes,
DownloadOnly: downloadOnly,
})
inst.Repositories(repos)
@@ -123,6 +125,7 @@ func init() {
upgradeCmd.Flags().Bool("sync", false, "Upgrade packages with new revisions (experimental)")
upgradeCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
upgradeCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
upgradeCmd.Flags().Bool("download-only", false, "Download only")
RootCmd.AddCommand(upgradeCmd)
}

cmd/util.go

@@ -0,0 +1,36 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
. "github.com/mudler/luet/cmd/util"
"github.com/spf13/cobra"
)
var utilGroup = &cobra.Command{
Use: "util [command] [OPTIONS]",
Short: "General luet internal utilities exposed",
}
func init() {
RootCmd.AddCommand(utilGroup)
utilGroup.AddCommand(
NewUnpackCommand(),
)
}

cmd/util/unpack.go

@@ -0,0 +1,98 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package util
import (
"fmt"
"os"
"path/filepath"
"github.com/docker/docker/api/types"
"github.com/docker/go-units"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers/docker"
. "github.com/mudler/luet/pkg/logger"
"github.com/spf13/cobra"
)
func NewUnpackCommand() *cobra.Command {
c := &cobra.Command{
Use: "unpack image path",
Short: "Unpack a docker image natively",
Long: `unpack doesn't need the docker daemon to run, and unpacks a docker image in the specified directory:
luet util unpack golang:alpine /alpine
`,
PreRun: func(cmd *cobra.Command, args []string) {
if len(args) != 2 {
Fatal("Expects an image and a path")
}
},
Run: func(cmd *cobra.Command, args []string) {
image := args[0]
destination, err := filepath.Abs(args[1])
if err != nil {
Error("Invalid path %s", destination)
os.Exit(1)
}
verify, _ := cmd.Flags().GetBool("verify")
user, _ := cmd.Flags().GetString("auth-username")
pass, _ := cmd.Flags().GetString("auth-password")
authType, _ := cmd.Flags().GetString("auth-type")
server, _ := cmd.Flags().GetString("auth-server-address")
identity, _ := cmd.Flags().GetString("auth-identity-token")
registryToken, _ := cmd.Flags().GetString("auth-registry-token")
temp, err := config.LuetCfg.GetSystem().TempDir("contentstore")
if err != nil {
Fatal("Cannot create a tempdir", err.Error())
}
Info("Downloading", image, "to", destination)
auth := &types.AuthConfig{
Username: user,
Password: pass,
ServerAddress: server,
Auth: authType,
IdentityToken: identity,
RegistryToken: registryToken,
}
info, err := docker.DownloadAndExtractDockerImage(temp, image, destination, auth, verify)
if err != nil {
Error(err.Error())
os.Exit(1)
}
Info(fmt.Sprintf("Pulled: %s %s", info.Target.Digest, info.Name))
Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.ContentSize))))
},
}
c.Flags().String("auth-username", "", "Username to authenticate to registry/notary")
c.Flags().String("auth-password", "", "Password to authenticate to registry")
c.Flags().String("auth-type", "", "Auth type")
c.Flags().String("auth-server-address", "", "Authentication server address")
c.Flags().String("auth-identity-token", "", "Authentication identity token")
c.Flags().String("auth-registry-token", "", "Authentication registry token")
c.Flags().Bool("verify", false, "Verify signed images to notary before to pull")
return c
}


@@ -7,7 +7,7 @@ fi
set -ex
export LUET_NOLOCK=true
LUET_VERSION=$(curl -s https://api.github.com/repos/mudler/luet/releases/latest | ( grep -oP '"tag_name": "\K(.*)(?=")' || echo "0.9.24" ))
LUET_VERSION=$(curl -s https://api.github.com/repos/mudler/luet/releases/latest | grep tag_name | awk '{ print $2 }' | sed -e 's/\"//g' -e 's/,//g' || echo "0.9.24" )
LUET_ROOTFS=${LUET_ROOTFS:-/}
LUET_DATABASE_PATH=${LUET_DATABASE_PATH:-/var/luet/db}
LUET_DATABASE_ENGINE=${LUET_DATABASE_ENGINE:-boltdb}

go.mod

@@ -4,23 +4,25 @@ go 1.14
require (
github.com/DataDog/zstd v1.4.4 // indirect
github.com/Sabayon/pkgs-checker v0.7.2
github.com/Sabayon/pkgs-checker v0.8.1
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec
github.com/containerd/containerd v1.4.1-0.20201117152358-0edc412565dc
github.com/crillab/gophersat v1.3.2-0.20201023142334-3fc2ac466765
github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24
github.com/docker/distribution v2.7.1+incompatible
github.com/docker/docker v20.10.0-beta1.0.20201110211921-af34b94a78a1+incompatible
github.com/docker/go-units v0.4.0
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/fsouza/go-dockerclient v1.6.4
github.com/genuinetools/img v0.5.11
github.com/ghodss/yaml v1.0.0
github.com/google/go-containerregistry v0.2.1
github.com/google/renameio v1.0.0
github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/go-version v1.2.0
github.com/hashicorp/go-version v1.2.1
github.com/imdario/mergo v0.3.8
github.com/jedib0t/go-pretty v4.3.0+incompatible
github.com/jedib0t/go-pretty/v6 v6.0.5
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
@@ -30,26 +32,30 @@ require (
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/mitchellh/hashstructure/v2 v2.0.1
github.com/moby/buildkit v0.7.2
github.com/moby/sys/mount v0.2.0 // indirect
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82
github.com/mudler/go-pluggable v0.0.0-20210513155700-54c6443073af
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290
github.com/onsi/ginkgo v1.14.2
github.com/onsi/gomega v1.10.3
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.7.1
github.com/sirupsen/logrus v1.6.0
github.com/spf13/cobra v1.0.0
github.com/spf13/viper v1.7.0
github.com/sirupsen/logrus v1.7.0
github.com/spf13/cobra v1.1.1
github.com/spf13/viper v1.7.1
github.com/theupdateframework/notary v0.7.0
go.etcd.io/bbolt v1.3.5
go.uber.org/atomic v1.5.1 // indirect
go.uber.org/multierr v1.4.0 // indirect
go.uber.org/multierr v1.4.0
go.uber.org/zap v1.13.0
google.golang.org/grpc v1.29.1
gopkg.in/yaml.v2 v2.3.0
helm.sh/helm/v3 v3.3.4

go.sum

@@ -81,6 +81,8 @@ github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873/go.mod h1:tT
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7 h1:ptnOoufxGSzauVTsdE+wMYnCWA301PdoN4xg5oRdZpg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
github.com/MottainaiCI/simplestreams-builder v0.1.0 h1:A8KJN22Xkx7NUKC9/zWmd1UhIqRn3bdHo0wv/HsAHx8=
github.com/MottainaiCI/simplestreams-builder v0.1.0/go.mod h1:+Gbv6dg6TPHWq4oDjZY1vn978PLCEZ2hOu8kvn+S7t4=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
@@ -89,10 +91,12 @@ github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbt
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Sabayon/pkgs-checker v0.7.2 h1:mh53u5D7FTCeBJevYQA9cCxAWGTSuKqw7m/x7GsQVb0=
github.com/Sabayon/pkgs-checker v0.7.2/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
github.com/Sabayon/pkgs-checker v0.8.1 h1:pVen975z9WIecq7luntUn+0XzGdiyz2CsDay8w+ZmOw=
github.com/Sabayon/pkgs-checker v0.8.1/go.mod h1:GC9PBUzcq0QVEBGRA1IIMXf6wHxo34KH5BeqoyJsLpo=
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3 h1:Xu7z47ZiE/J+sKXHZMGxEor/oY2q6dq51fkO0JqdSwY=
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
github.com/Shopify/logrus-bugsnag v0.0.0-20170309145241-6dbc35f2c30d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
@@ -134,28 +138,40 @@ github.com/aws/aws-sdk-go v1.28.2/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN
github.com/aws/aws-sdk-go v1.31.6/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/aybabtme/rgbterm v0.0.0-20170906152045-cc83f3b3ce59/go.mod h1:q/89r3U2H7sSsE2t6Kca0lfwTK8JdoNGS/yzM/4iH5I=
github.com/beorn7/perks v0.0.0-20150223135152-b965b613227f/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bitly/go-hostpool v0.1.0 h1:XKmsF6k5el6xHG3WPJ8U0Ku/ye7njX7W81Ng7O2ioR0=
github.com/bitly/go-hostpool v0.1.0/go.mod h1:4gOCgp6+NZnVqlKyZ/iBZFTAJKembaVENUpMkpg42fw=
github.com/bitly/go-simplejson v0.5.0 h1:6IH+V8/tVMab511d5bn4M7EwGXZf9Hj6i2xSwkNEM+Y=
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31 h1:yInAg9pE5qGec5eQ7XdfOTTaGwGxD3bKFVjmD6VKkwc=
github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31/go.mod h1:QOuQk7x+EaDASo80FEXwlwiA+j/PPIcX3FScO+3/ZPQ=
github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/bugsnag-go v1.0.5-0.20150529004307-13fd6b8acda0 h1:s7+5BfS4WFJoVF9pnB8kBk03S7pZXRdKamnV0FOl5Sc=
github.com/bugsnag/bugsnag-go v1.0.5-0.20150529004307-13fd6b8acda0/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembjv71DPz3uX/V/6MMlSyD9JBQ6kQ=
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec h1:4XvMn0XuV7qxCH22gbnR79r+xTUaLOSA0GW/egpO3SQ=
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec/go.mod h1:NbXoa59CCAGqtRm7kRrcZIk2dTCJMRVF8QI3BOD7isY=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/chuckpreslar/emission v0.0.0-20170206194824-a7ddd980baf9 h1:xz6Nv3zcwO2Lila35hcb0QloCQsc38Al13RNEzWRpX4=
@@ -166,12 +182,12 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004 h1:lkAMpLVBDaj17e85keuznYcH5rqI438v41pKcBl4ZxQ=
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 h1:sDMmm+q/3+BukdIpxwO365v/Rbspp2Nt5XntgQRXq8Q=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d h1:UKAt78F1OvM4ceTn1VvXuYuatXohsFU1eSI2IBtTw9g=
@@ -243,6 +259,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/deislabs/oras v0.8.1/go.mod h1:Mx0rMSbBNaNfY9hjpccEnxkOqJL6KGjtxNHPLC4G4As=
github.com/denisenkom/go-mssqldb v0.0.0-20191001013358-cfbb681360f0/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73 h1:OGNva6WhsKst5OZf7eZOklDztV3hwtTHovdrLHV+MsA=
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
@@ -265,12 +283,15 @@ github.com/docker/docker-credential-helpers v0.6.0/go.mod h1:WRaJzqw3CTB9bk10avu
github.com/docker/docker-credential-helpers v0.6.1/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.3 h1:zI2p9+1NQYdnG6sMU26EX4aVGlqbInSQxQXLvzJ4RPQ=
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c/go.mod h1:CADgU4DSXK5QUlFslkQu2yW2TKzFZcXq/leZfM0UH5Q=
github.com/docker/go-connections v0.0.0-20180821093606-97c2040d34df/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916 h1:yWHOI+vFjEsAakUTSrtqc/SAHrhSkmn48pqjidZX3QA=
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
github.com/docker/go-units v0.3.1/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
@@ -284,6 +305,7 @@ github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNE
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dvsekhvalnov/jose2go v0.0.0-20170216131308-f21a8cedbbae/go.mod h1:7BvyPhdbLxMXIYTFPLsyJRFMsKmOZnQmzh6Gb+uquuM=
github.com/eapache/go-resiliency v1.1.0/go.mod h1:kFI+JgMyC7bLPUVY133qvEBtVayf5mFgVsvEsIPBvNs=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
@@ -298,6 +320,8 @@ github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymF
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5 h1:Yzb9+7DPaBjB8zlTR87/ElzFsnQfuHnVUVqpZZIcV5Y=
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a2zkGnVExMxdzMo3M0Hi/3sEU+cWnZpSni0O6/Yb/P0=
github.com/evanphx/json-patch v0.0.0-20200808040245-162e5629780b/go.mod h1:NAJj0yf/KaRKURN6nyi7A9IZydMivZEm9oQLWNjfKDc=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
@@ -379,11 +403,15 @@ github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh
github.com/go-openapi/validate v0.18.0/go.mod h1:Uh4HdOzKt19xGIGm1qHf/ofbX1YQ4Y+MYsct2VUrAJ4=
github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2KDnRCRMUi7GTA=
github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
github.com/go-sql-driver/mysql v1.3.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-yaml/yaml v2.1.0+incompatible h1:RYi2hDdss1u4YE7GwixGzWwVo47T8UQwnTLB6vQiq+o=
github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
github.com/gobuffalo/envy v1.7.1/go.mod h1:FurDp9+EDPE4aIUS3ZLyD+7/9fpx7YRt/ukY6jIHf0w=
github.com/gobuffalo/logger v1.0.1/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs=
@@ -402,6 +430,7 @@ github.com/gofrs/flock v0.7.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14j
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/googleapis v1.3.2 h1:kX1es4djPJrsDhY7aZKJy7aZasdcB5oSOEphMjSB53c=
github.com/gogo/googleapis v1.3.2/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
@@ -410,6 +439,7 @@ github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe h1:lXe2qZdvpiX5WZkZR4hgp4KJVfY3nMkvmwbVkpv1rVY=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -432,6 +462,7 @@ github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5y
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
@@ -447,6 +478,8 @@ github.com/golangplus/fmt v0.0.0-20150411045040-2a5d6d7d2995/go.mod h1:lJgMEyOkY
github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/certificate-transparency-go v1.0.10-0.20180222191210-5ab67e519c93 h1:jc2UWq7CbdszqeH6qu1ougXMIUBfSy8Pbh/anURYbGI=
github.com/google/certificate-transparency-go v1.0.10-0.20180222191210-5ab67e519c93/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
@@ -469,6 +502,8 @@ github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/renameio v1.0.0 h1:xhp2CnJmgQmpJU4RY8chagahUq5mbPPAbiSQstKpVMA=
github.com/google/renameio v1.0.0/go.mod h1:t/HQoYBZSsWSNK35C6CO/TpPLDVWvxOHboWUAweKUpk=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9 h1:JM174NTeGNJ2m/oLH3UOWOvWQQKd+BoL3hcSCUWFLt0=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -487,6 +522,7 @@ github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORR
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc=
@@ -505,6 +541,8 @@ github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
@@ -523,6 +561,8 @@ github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdv
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go-version v1.2.1 h1:zEfKbn2+PDgroKdiOzqiE8rsmLqU2uwi5PB5pBJ3TkI=
github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
@@ -561,6 +601,12 @@ github.com/jedib0t/go-pretty/v6 v6.0.5/go.mod h1:MTr6FgcfNdnN5wPVBzJ6mhJeDyiF0yB
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3 h1:sHsPfNMAG70QAvKbddQ0uScZCHQoZsT5NykGRCeeeIs=
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3/go.mod h1:yL958EeXv8Ylng6IfnvG4oflryUi3vgA3xPs9hmII1s=
github.com/jinzhu/gorm v0.0.0-20170222002820-5409931a1bb8 h1:CZkYfurY6KGhVtlalI4QwQ6T0Cu6iuY3e0x5RLu96WE=
github.com/jinzhu/gorm v0.0.0-20170222002820-5409931a1bb8/go.mod h1:Vla75njaFJ8clLU1W44h34PjIkijhjHIYnZxMqCdxqo=
github.com/jinzhu/inflection v0.0.0-20170102125226-1c35d901db3d h1:jRQLvyVGL+iVtDElaEIDdKwpPqUIZJfzkNLV34htpEc=
github.com/jinzhu/inflection v0.0.0-20170102125226-1c35d901db3d/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.1 h1:g39TucaRWyV3dwDO++eEc6qf8TVIQ/Da48WmqjZ3i7E=
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
@@ -580,6 +626,8 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8 h1:UUHMLvzt/31azWTN/ifGWef4WUqvXk0iRqdhdy/2uzI=
github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
@@ -612,8 +660,10 @@ github.com/kyokomi/emoji v2.1.0+incompatible h1:+DYU2RgpI6OHG4oQkM5KlqD3Wd3UPEsX
github.com/kyokomi/emoji v2.1.0+incompatible/go.mod h1:mZ6aGCD7yk8j6QY6KICwnZ2pxoszVseX1DNoGtU2tBA=
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0/go.mod h1:dXGbAdH5GtBTC4WfIxhKZfyBF/HBFgRZSWwZ9g/He9o=
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0/go.mod h1:vmVJ0l/dxyfGW6FmdpVm2joNMFikkuWg0EoCKLGUMNw=
github.com/lib/pq v0.0.0-20150723085316-0dad96c0b94f/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.7.0 h1:h93mCPfUSkaul3Ka/VG8uZdmW1uMHDGxzu0NWHuJmHY=
github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
@@ -622,6 +672,7 @@ github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e h1:yRWBTwWfMy5YPjT14Jr+p12ygqLpM9K5ojbbNPSd8hI=
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e/go.mod h1:7rIyQOR62GCctdiQpZ/zOJlFyk6y+94wXzv6RNZgaR4=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/magiconair/properties v1.5.3/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
@@ -651,13 +702,19 @@ github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzp
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-shellwords v1.0.10/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/mattn/go-sqlite3 v1.6.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.12.0 h1:u/x3mp++qUxvYfulZ4HKOvVO0JWhk7HtE8lWhbGz/Do=
github.com/mattn/go-sqlite3 v1.12.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.14.4 h1:4rQjbDxdu9fSgI/r3KN72G3c2goxknAqHHgPWWs8UlI=
github.com/mattn/go-sqlite3 v1.14.4/go.mod h1:WVKg1VTActs4Qso6iwGbiFih2UIHo0ENGwNd0Lj+XmI=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88Jz2VyhSmden33/aXg4oVIY=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/pkcs11 v1.0.2 h1:CIBkOawOtzJNE0B+EpRiUBzuVW7JEQAwdwhSS6YhIeg=
github.com/miekg/pkcs11 v1.0.2/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
@@ -671,7 +728,10 @@ github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS4
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=
github.com/mitchellh/hashstructure v1.0.0 h1:ZkRJX1CyOoTkar7p/mLS5TZU4nJ1Rn/F8u9dGS02Q3Y=
github.com/mitchellh/hashstructure v1.0.0/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=
github.com/mitchellh/hashstructure/v2 v2.0.1 h1:L60q1+q7cXE4JeEJJKMnh2brFIe3rZxCihYAB61ypAY=
github.com/mitchellh/hashstructure/v2 v2.0.1/go.mod h1:MG3aRVU/N29oo/V/IhBX8GR/zz4kQkprJgF2EVszyDE=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20150613213606-2caf8efc9366/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
@@ -700,8 +760,8 @@ github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d h1:fKh+rvw
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d/go.mod h1:puRUWSwyecW2V355tKncwPVPRAjQBduPsFjG0mrV/Nw=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87 h1:mGz7T8KvmHH0gLWPI5tQne8xl2cO3T8wrrb6Aa16Jxo=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87/go.mod h1:1w4zI1LYXDeiUXqedPcrT5eQJnmKR6dbg5iJMgSIP/Y=
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82 h1:Hkefw2tzoKATVUTFsCtDlUnY180+OE851qGbq45ATxk=
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82/go.mod h1:4P/ULate+2QxoAQtojaRjyO5VGMhV0KLnSdAS8nuBbo=
github.com/mudler/go-pluggable v0.0.0-20210513155700-54c6443073af h1:jixIxEgLSqu24eMiyzfCI+roa5IaOUhF546ePSFyHeY=
github.com/mudler/go-pluggable v0.0.0-20210513155700-54c6443073af/go.mod h1:WmKcT8ONmhDQIqQ+HxU+tkGWjzBEyY/KFO8LTGCu4AI=
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290 h1:426hFyXMpXeqIeGJn2cGAW9ogvM2Jf+Jv23gtVPvBLM=
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290/go.mod h1:uP5BBgFxq2wNWo7n1vnY5SSbgL0WDshVJrOO12tZ/lA=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
@@ -822,10 +882,12 @@ github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndr
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.0.0-20180924113449-f69c853d21c1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.0-pre1.0.20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.3.0 h1:miYCvYqFXtl/J9FIy8eNpBfYthAEFg+Ys0XyUVEcDsc=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -833,6 +895,7 @@ github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
@@ -840,6 +903,7 @@ github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7q
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20180920065004-418d78d0b9a7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -891,6 +955,8 @@ github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0 h1:UBcNElsrwanuuMsnGSlYmtmgbb23qDR5dG+6X6Oo89I=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0 h1:ShrD1U9pZB12TX0cVy0DtePoCH97K8EtX+mg7ZARUtM=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.0.0/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
github.com/smartystreets/assertions v1.0.1 h1:voD4ITNjPL5jjBfgR/r8fPIIBrliWrWHeiJApdr3r4w=
@@ -907,30 +973,39 @@ github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasO
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v0.0.0-20150508191742-4d07383ffe94/go.mod h1:r2rcYCSwa1IExKTDiTfzaxqT2FNHs8hODu4LnUfgKEg=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.1/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.1 h1:KfztREH0tPxJJ+geloSLaAkaPkr4ki2Er5quFV1TDo4=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/jwalterweatherman v0.0.0-20141219030609-3d60171a6431/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.0/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v0.0.0-20150530192845-be5ff3e4840c/go.mod h1:A8kyI5cUJhb8N+3pkfONlcEcZbueH6nhAm0Fq7SrnBM=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.6.3 h1:pDDu1OyEDTKzpJwdq4TiuLyMsUgRa/BT5cn5O62NoHs=
github.com/spf13/viper v1.6.3/go.mod h1:jUMtyi0/lB5yZH/FjyGAoH7IMNrIhlBf6pXZmbMDvzw=
github.com/spf13/viper v1.7.0 h1:xVKxvI7ouOI5I+U9s2eeiUfMaWBVoXA3AWskkrqK0VM=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/spf13/viper v1.7.1 h1:pM5oEahlgWv/WnHXpgbKz7iLIxRf65tye2Ci+XFK5sk=
github.com/spf13/viper v1.7.1/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
github.com/streadway/handy v0.0.0-20190108123426-d5acb3125c2a/go.mod h1:qNTQ5P5JnDBl6z3cMAg/SywNDC5ABu5ApDIw6lUbRmI=
@@ -953,6 +1028,8 @@ github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2 h1:b6uOv7YOFK0TYG7HtkIgExQo+2RdLuwRft63jn2HWj8=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/theupdateframework/notary v0.7.0 h1:QyagRZ7wlSpjT5N2qQAh/pN+DVqgekv4DzbAiAiEL3c=
github.com/theupdateframework/notary v0.7.0/go.mod h1:c9DRxcmhHmVLDay4/2fUYdISnHqbFDGRSlXPO0AhYWw=
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
@@ -1060,12 +1137,15 @@ golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200221231518-2aa609cf4a9d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201112155050-0c6587e931a9 h1:umElSU9WZirRdgu2yFHY0ayQkEnKiOC1TtM3fWXFnoU=
golang.org/x/crypto v0.0.0-20201112155050-0c6587e931a9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201117144127-c1f2f97bffc9 h1:phUcVbl53swtrUN8kQEXFhUxPlIlWyBfKmidCu7P95o=
golang.org/x/crypto v0.0.0-20201117144127-c1f2f97bffc9/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1197,6 +1277,7 @@ golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1221,6 +1302,8 @@ golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201113135734-0a15ea8d9b02 h1:5Ftd3YbC/kANXWCBjvppvUmv1BMakgFcBKA7MpYYp4M=
golang.org/x/sys v0.0.0-20201113135734-0a15ea8d9b02/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221 h1:/ZHdbVpdR/jk3g30/d4yUL0JU9kksj8+F/bnQUVLGDM=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1351,6 +1434,7 @@ google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEY
google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece h1:1YM0uhfumvoDu9sx8+RyWwTI63zoCQvI23IYFRlvte0=
google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.0.5/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.15.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -1383,6 +1467,8 @@ google.golang.org/protobuf v1.24.0 h1:UhZDfRO8JRQru4/+LlLE0BRKGF8L+PICnvYZmx/fEG
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/cenkalti/backoff.v2 v2.2.1 h1:eJ9UAg01/HIHG987TwxvnzK2MgxXq97YY6rYDpY9aII=
gopkg.in/cenkalti/backoff.v2 v2.2.1/go.mod h1:S0QdOvT2AlerfSBkp0O+dk+bbIMaNbEmVk876gPCthU=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -1404,6 +1490,8 @@ gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/rethinkdb/rethinkdb-go.v6 v6.2.1 h1:d4KQkxAaAiRY2h5Zqis161Pv91A37uZyJOx73duwUwM=
gopkg.in/rethinkdb/rethinkdb-go.v6 v6.2.1/go.mod h1:WbjuEoo1oadwzQ4apSDU+JTvmllEHtsNHS6y7vFc7iw=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=


@@ -23,9 +23,9 @@ import (
"strings"
"syscall"
"github.com/pkg/errors"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
helpers "github.com/mudler/luet/pkg/helpers"
"github.com/pkg/errors"
)
type Box interface {
@@ -107,7 +107,7 @@ func (b *DefaultBox) Exec() error {
func (b *DefaultBox) Run() error {
if !helpers.Exists(b.Root) {
if !fileHelper.Exists(b.Root) {
return errors.New(b.Root + " does not exist")
}


@@ -2,6 +2,7 @@ package bus
import (
"github.com/mudler/go-pluggable"
"github.com/sirupsen/logrus"
)
var (
@@ -44,24 +45,59 @@ var (
EventRepositoryPreBuild pluggable.EventType = "repository.pre.build"
// EventRepositoryPostBuild is the event fired after a repository was built
EventRepositoryPostBuild pluggable.EventType = "repository.post.build"
// Image unpack
// EventImagePreUnPack is the event fired before unpacking an image to a local dir
EventImagePreUnPack pluggable.EventType = "image.pre.unpack"
// EventImagePostUnPack is the event fired after unpacking an image to a local dir
EventImagePostUnPack pluggable.EventType = "image.post.unpack"
)
// Manager is the bus instance manager, which subscribes plugins to events emitted by Luet
var Manager *pluggable.Manager = pluggable.NewManager(
[]pluggable.EventType{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
EventRepositoryPreBuild,
EventRepositoryPostBuild,
EventImagePreBuild,
EventImagePrePull,
EventImagePrePush,
EventImagePostBuild,
EventImagePostPull,
EventImagePostPush,
},
)
var Manager *Bus = &Bus{
Manager: pluggable.NewManager(
[]pluggable.EventType{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
EventRepositoryPreBuild,
EventRepositoryPostBuild,
EventImagePreBuild,
EventImagePrePull,
EventImagePrePush,
EventImagePostBuild,
EventImagePostPull,
EventImagePostPush,
EventImagePreUnPack,
EventImagePostUnPack,
},
),
}
type Bus struct {
*pluggable.Manager
}
func (b *Bus) Initialize(plugin ...string) {
b.Manager.Load(plugin...).Register()
for _, e := range b.Manager.Events {
b.Manager.Response(e, func(p *pluggable.Plugin, r *pluggable.EventResponse) {
if r.Errored() {
logrus.Fatal("Plugin", p.Name, "at", p.Executable, "Error", r.Error)
}
logrus.Debug(
"plugin_event",
"received from",
p.Name,
"at",
p.Executable,
r,
)
})
}
}
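
For context, a minimal sketch of how a caller is expected to use the new wrapper: the event names, Initialize and Publish come from this diff and from go-pluggable, while the plugin path and the payload shape below are hypothetical.

```go
package example

import (
	bus "github.com/mudler/luet/pkg/bus"
)

func unpackWithEvents(image, destination string) {
	// Load and register configured plugins once; errored plugin responses
	// are fatal, successful ones are logged at debug level (see Initialize above).
	bus.Manager.Initialize("/usr/local/bin/luet-example-plugin") // hypothetical plugin path

	// Emit the new pre-unpack event before extracting the image...
	bus.Manager.Publish(bus.EventImagePreUnPack, map[string]string{
		"image": image, "destination": destination, // hypothetical payload shape
	})

	// ... the actual image unpack would happen here ...

	// ...and the post-unpack event once the rootfs is on disk.
	bus.Manager.Publish(bus.EventImagePostUnPack, map[string]string{
		"image": image, "destination": destination,
	})
}
```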


@@ -1,33 +1,52 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package backend
package compiler
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
. "github.com/mudler/luet/pkg/logger"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/config"
"github.com/pkg/errors"
. "github.com/mudler/luet/pkg/logger"
)
func NewBackend(s string) (CompilerBackend, error) {
var compilerBackend CompilerBackend
switch s {
case backend.ImgBackend:
compilerBackend = backend.NewSimpleImgBackend()
case backend.DockerBackend:
compilerBackend = backend.NewSimpleDockerBackend()
default:
return nil, errors.New("invalid backend. Unsupported")
}
return compilerBackend, nil
}
type CompilerBackend interface {
BuildImage(backend.Options) error
ExportImage(backend.Options) error
RemoveImage(backend.Options) error
ImageDefinitionToTar(backend.Options) error
ExtractRootfs(opts backend.Options, keepPerms bool) error
CopyImage(string, string) error
DownloadImage(opts backend.Options) error
Push(opts backend.Options) error
ImageAvailable(string) bool
ImageExists(string) bool
}
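
As a rough illustration of the refactored call site (the function name, image name and paths below are made up; the factory, constants and Options fields are the ones shown in this diff), backend selection now surfaces unsupported names as an error instead of silently returning nil:

```go
package example

import (
	"github.com/mudler/luet/pkg/compiler"
	"github.com/mudler/luet/pkg/compiler/backend"
)

// buildWithBackend is a hypothetical caller of the new factory.
func buildWithBackend(name string) error {
	b, err := compiler.NewBackend(name) // e.g. backend.DockerBackend or backend.ImgBackend
	if err != nil {
		return err // "invalid backend. Unsupported" for unknown names
	}
	return b.BuildImage(backend.Options{
		ImageName:      "luet/example:latest", // hypothetical values
		SourcePath:     "/tmp/build",
		DockerFileName: "Dockerfile",
	})
}
```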
// GenerateChanges generates changes between two images using a backend by leveraging export/extractrootfs methods
// example of json return: [
// {
@@ -54,46 +73,46 @@ import (
// }
// }
// ]
func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.CompilerBackendOptions) ([]compiler.ArtifactLayer, error) {
func GenerateChanges(b CompilerBackend, fromImage, toImage backend.Options) ([]artifact.ArtifactLayer, error) {
res := compiler.ArtifactLayer{FromImage: fromImage.ImageName, ToImage: toImage.ImageName}
res := artifact.ArtifactLayer{FromImage: fromImage.ImageName, ToImage: toImage.ImageName}
tmpdiffs, err := config.LuetCfg.GetSystem().TempDir("extraction")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tmpdiffs) // clean up
srcRootFS, err := ioutil.TempDir(tmpdiffs, "src")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(srcRootFS) // clean up
dstRootFS, err := ioutil.TempDir(tmpdiffs, "dst")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(dstRootFS) // clean up
srcImageExtract := compiler.CompilerBackendOptions{
srcImageExtract := backend.Options{
ImageName: fromImage.ImageName,
Destination: srcRootFS,
}
Debug("Extracting source image", fromImage.ImageName)
err = b.ExtractRootfs(srcImageExtract, false) // No need to keep permissions as we just collect file diffs
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+fromImage.ImageName)
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+fromImage.ImageName)
}
dstImageExtract := compiler.CompilerBackendOptions{
dstImageExtract := backend.Options{
ImageName: toImage.ImageName,
Destination: dstRootFS,
}
Debug("Extracting destination image", toImage.ImageName)
err = b.ExtractRootfs(dstImageExtract, false)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+toImage.ImageName)
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+toImage.ImageName)
}
// Get Additions/Changes. dst -> src
@@ -114,7 +133,7 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
if sizeA != sizeB {
// fmt.Println("File changed", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Changes = append(res.Diffs.Changes, compiler.ArtifactNode{
res.Diffs.Changes = append(res.Diffs.Changes, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
@@ -127,7 +146,7 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
if s, err := os.Lstat(filepath.Join(dstRootFS, realpath)); err == nil {
sizeB = s.Size()
}
res.Diffs.Additions = append(res.Diffs.Additions, compiler.ArtifactNode{
res.Diffs.Additions = append(res.Diffs.Additions, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
@@ -138,7 +157,7 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
return nil
})
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image destination")
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image destination")
}
// Get deletions. src -> dst
@@ -150,7 +169,7 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
realpath := strings.Replace(path, srcRootFS, "", -1)
if _, err = os.Lstat(filepath.Join(dstRootFS, realpath)); err != nil {
// fmt.Println("File deleted", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Deletions = append(res.Diffs.Deletions, compiler.ArtifactNode{
res.Diffs.Deletions = append(res.Diffs.Deletions, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
})
}
@@ -158,8 +177,71 @@ func GenerateChanges(b compiler.CompilerBackend, fromImage, toImage compiler.Com
return nil
})
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image source")
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image source")
}
return []compiler.ArtifactLayer{res}, nil
diffs := []artifact.ArtifactLayer{res}
if config.LuetCfg.GetGeneral().Debug {
summary := ComputeArtifactLayerSummary(diffs)
for _, l := range summary.Layers {
Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",
l.FromImage, l.ToImage,
l.AddFiles, l.AddSizes,
l.DelFiles, l.DelSizes,
l.ChangeFiles, l.ChangeSizes))
}
}
return diffs, nil
}
type ArtifactLayerSummary struct {
FromImage string `json:"image1"`
ToImage string `json:"image2"`
AddFiles int `json:"add_files"`
AddSizes int64 `json:"add_sizes"`
DelFiles int `json:"del_files"`
DelSizes int64 `json:"del_sizes"`
ChangeFiles int `json:"change_files"`
ChangeSizes int64 `json:"change_sizes"`
}
type ArtifactLayersSummary struct {
Layers []ArtifactLayerSummary `json:"summary"`
}
func ComputeArtifactLayerSummary(diffs []artifact.ArtifactLayer) ArtifactLayersSummary {
ans := ArtifactLayersSummary{
Layers: make([]ArtifactLayerSummary, 0),
}
for _, layer := range diffs {
sum := ArtifactLayerSummary{
FromImage: layer.FromImage,
ToImage: layer.ToImage,
AddFiles: 0,
AddSizes: 0,
DelFiles: 0,
DelSizes: 0,
ChangeFiles: 0,
ChangeSizes: 0,
}
for _, a := range layer.Diffs.Additions {
sum.AddFiles++
sum.AddSizes += int64(a.Size)
}
for _, d := range layer.Diffs.Deletions {
sum.DelFiles++
sum.DelSizes += int64(d.Size)
}
for _, c := range layer.Diffs.Changes {
sum.ChangeFiles++
sum.ChangeSizes += int64(c.Size)
}
ans.Layers = append(ans.Layers, sum)
}
return ans
}
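
A short usage sketch tying the relocated helper to the new summary type; the backend value and the two image names are placeholders, everything else is taken from the code above:

```go
package example

import (
	"fmt"

	"github.com/mudler/luet/pkg/compiler"
	"github.com/mudler/luet/pkg/compiler/backend"
)

// diffImages compares two images with an already-constructed backend and
// prints the per-layer totals computed by ComputeArtifactLayerSummary.
func diffImages(b compiler.CompilerBackend) error {
	layers, err := compiler.GenerateChanges(b,
		backend.Options{ImageName: "luet/base:previous"}, // placeholder images
		backend.Options{ImageName: "luet/base:current"},
	)
	if err != nil {
		return err
	}
	for _, l := range compiler.ComputeArtifactLayerSummary(layers).Layers {
		fmt.Printf("%s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)\n",
			l.FromImage, l.ToImage,
			l.AddFiles, l.AddSizes, l.DelFiles, l.DelSizes, l.ChangeFiles, l.ChangeSizes)
	}
	return nil
}
```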


@@ -18,7 +18,6 @@ package backend
import (
"os/exec"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
@@ -36,16 +35,13 @@ func imageAvailable(image string) bool {
return err == nil
}
func NewBackend(s string) compiler.CompilerBackend {
var compilerBackend compiler.CompilerBackend
switch s {
case ImgBackend:
compilerBackend = NewSimpleImgBackend()
case DockerBackend:
compilerBackend = NewSimpleDockerBackend()
}
return compilerBackend
type Options struct {
ImageName string
SourcePath string
DockerFileName string
Destination string
Context string
BackendArgs []string
}
func runCommand(cmd *exec.Cmd) error {
@@ -75,7 +71,7 @@ func runCommand(cmd *exec.Cmd) error {
return nil
}
func genBuildCommand(opts compiler.CompilerBackendOptions) []string {
func genBuildCommand(opts Options) []string {
context := opts.Context
if context == "" {


@@ -17,18 +17,17 @@ package backend
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"strings"
docker "github.com/fsouza/go-dockerclient"
bus "github.com/mudler/luet/pkg/bus"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
capi "github.com/mudler/docker-companion/api"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
@@ -37,13 +36,14 @@ import (
type SimpleDocker struct{}
func NewSimpleDockerBackend() compiler.CompilerBackend {
func NewSimpleDockerBackend() *SimpleDocker {
return &SimpleDocker{}
}
// TODO: Missing still: labels, and build args expansion
func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleDocker) BuildImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePreBuild, opts)
buildarg := genBuildCommand(opts)
Info(":whale2: Building image " + name)
@@ -56,23 +56,7 @@ func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
Info(":whale: Building image " + name + " done")
if os.Getenv("DOCKER_SQUASH") == "true" {
Info(":whale: Squashing image " + name)
var client *docker.Client
Spinner(22)
defer SpinnerStop()
client, err = docker.NewClientFromEnv()
if err != nil {
return errors.Wrap(err, "could not connect to the Docker daemon")
}
err = capi.Squash(client, name, name)
if err != nil {
return errors.Wrap(err, "Failed squashing image")
}
Info(":whale: Squashing image " + name + " done")
}
bus.Manager.Publish(bus.EventImagePostBuild, opts)
return nil
}
@@ -88,8 +72,10 @@ func (*SimpleDocker) CopyImage(src, dst string) error {
return nil
}
func (*SimpleDocker) DownloadImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleDocker) DownloadImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePull, opts)
buildarg := []string{"pull", name}
Debug(":whale: Downloading image " + name)
@@ -103,6 +89,8 @@ func (*SimpleDocker) DownloadImage(opts compiler.CompilerBackendOptions) error {
}
Info(":whale: Downloaded image:", name)
bus.Manager.Publish(bus.EventImagePostPull, opts)
return nil
}
@@ -123,7 +111,7 @@ func (*SimpleDocker) ImageAvailable(imagename string) bool {
return imageAvailable(imagename)
}
func (*SimpleDocker) RemoveImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleDocker) RemoveImage(opts Options) error {
name := opts.ImageName
buildarg := []string{"rmi", name}
out, err := exec.Command("docker", buildarg...).CombinedOutput()
@@ -135,9 +123,10 @@ func (*SimpleDocker) RemoveImage(opts compiler.CompilerBackendOptions) error {
return nil
}
func (*SimpleDocker) Push(opts compiler.CompilerBackendOptions) error {
func (*SimpleDocker) Push(opts Options) error {
name := opts.ImageName
pusharg := []string{"push", name}
bus.Manager.Publish(bus.EventImagePrePush, opts)
Spinner(22)
defer SpinnerStop()
@@ -147,11 +136,13 @@ func (*SimpleDocker) Push(opts compiler.CompilerBackendOptions) error {
return errors.Wrap(err, "Failed pushing image: "+string(out))
}
Info(":whale: Pushed image:", name)
bus.Manager.Publish(bus.EventImagePostPush, opts)
//Info(string(out))
return nil
}
func (s *SimpleDocker) ImageDefinitionToTar(opts compiler.CompilerBackendOptions) error {
func (s *SimpleDocker) ImageDefinitionToTar(opts Options) error {
if err := s.BuildImage(opts); err != nil {
return errors.Wrap(err, "Failed building image")
}
@@ -164,7 +155,7 @@ func (s *SimpleDocker) ImageDefinitionToTar(opts compiler.CompilerBackendOptions
return nil
}
func (*SimpleDocker) ExportImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleDocker) ExportImage(opts Options) error {
name := opts.ImageName
path := opts.Destination
@@ -187,10 +178,16 @@ type ManifestEntry struct {
Layers []string `json:"Layers"`
}
func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
func (b *SimpleDocker) ExtractRootfs(opts Options, keepPerms bool) error {
name := opts.ImageName
dst := opts.Destination
if !b.ImageExists(name) {
if err := b.DownloadImage(opts); err != nil {
return errors.Wrap(err, "failed pulling image "+name+" during extraction")
}
}
tempexport, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
@@ -202,7 +199,7 @@ func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepP
Spinner(22)
defer SpinnerStop()
if err := b.ExportImage(compiler.CompilerBackendOptions{ImageName: name, Destination: imageExport}); err != nil {
if err := b.ExportImage(Options{ImageName: name, Destination: imageExport}); err != nil {
return errors.Wrap(err, "failed while extracting rootfs for "+name)
}
@@ -217,7 +214,7 @@ func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepP
}
defer os.RemoveAll(tempUnpack) // clean up
imageExport := filepath.Join(tempUnpack, "image.tar")
if err := b.ExportImage(compiler.CompilerBackendOptions{ImageName: opts.ImageName, Destination: imageExport}); err != nil {
if err := b.ExportImage(Options{ImageName: opts.ImageName, Destination: imageExport}); err != nil {
return errors.Wrap(err, "while exporting image before extraction")
}
src = imageExport
@@ -241,7 +238,7 @@ func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepP
return errors.Wrap(err, "Error met while unpacking rootfs")
}
manifest, err := helpers.Read(filepath.Join(rootfs, "manifest.json"))
manifest, err := fileHelper.Read(filepath.Join(rootfs, "manifest.json"))
if err != nil {
return errors.Wrap(err, "Error met while reading image manifest")
}
@@ -269,7 +266,7 @@ func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepP
return err
}
err = export.UnPackLayers(layers_sha, dst, "")
err = export.UnPackLayers(layers_sha, dst, "containerd")
if err != nil {
return err
}
@@ -281,21 +278,3 @@ func (b *SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepP
return nil
}
// Changes retrieves changes between image layers
func (d *SimpleDocker) Changes(fromImage, toImage compiler.CompilerBackendOptions) ([]compiler.ArtifactLayer, error) {
diffs, err := GenerateChanges(d, fromImage, toImage)
if config.LuetCfg.GetGeneral().Debug {
summary := compiler.ComputeArtifactLayerSummary(diffs)
for _, l := range summary.Layers {
Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",
l.FromImage, l.ToImage,
l.AddFiles, l.AddSizes,
l.DelFiles, l.DelSizes,
l.ChangeFiles, l.ChangeSizes))
}
}
return diffs, err
}
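
A short sketch of the new ExtractRootfs behaviour shown above: when the image is missing locally it is pulled before the rootfs is unpacked. The function name and parameters are illustrative, not part of the changeset:

// Hypothetical usage, not part of the changeset.
package main

import "github.com/mudler/luet/pkg/compiler/backend"

func unpackImage(image, dst string) error {
	b := backend.NewSimpleDockerBackend()
	// dst must already exist: the backend creates its temporary export
	// directory inside it before unpacking the image layers.
	return b.ExtractRootfs(backend.Options{
		ImageName:   image, // pulled via DownloadImage when ImageExists reports false
		Destination: dst,
	}, false) // keepPerms
}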


@@ -16,15 +16,17 @@
package backend_test
import (
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/solver"
"github.com/mudler/luet/pkg/compiler/types/artifact"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"io/ioutil"
"os"
"path/filepath"
helpers "github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
@@ -41,13 +43,10 @@ var _ = Describe("Docker backend", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
cc := NewLuetCompiler(nil, generalRecipe.GetDatabase())
lspec, err := cc.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
Expect(err).ToNot(HaveOccurred())
lspec, ok := spec.(*LuetCompilationSpec)
Expect(ok).To(BeTrue())
Expect(lspec.Steps).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
Expect(lspec.Image).To(Equal("luet/base"))
Expect(lspec.Seed).To(Equal("alpine"))
@@ -61,7 +60,7 @@ var _ = Describe("Docker backend", func() {
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -71,7 +70,7 @@ ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin`))
b := NewSimpleDockerBackend()
opts := CompilerBackendOptions{
opts := backend.Options{
ImageName: "luet/base",
SourcePath: tmpdir,
DockerFileName: "Dockerfile",
@@ -80,11 +79,11 @@ ENV PACKAGE_CATEGORY=app-admin`))
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "LuetDockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
@@ -95,7 +94,7 @@ ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin
RUN echo foo > /test
RUN echo bar > /test2`))
opts2 := CompilerBackendOptions{
opts2 := backend.Options{
ImageName: "test",
SourcePath: tmpdir,
DockerFileName: "LuetDockerfile",
@@ -104,28 +103,28 @@ RUN echo bar > /test2`))
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
artifacts := []ArtifactNode{{
artifacts := []artifact.ArtifactNode{{
Name: "/luetbuild/LuetDockerfile",
Size: 175,
}}
if os.Getenv("DOCKER_BUILDKIT") == "1" {
artifacts = append(artifacts, ArtifactNode{Name: "/etc/resolv.conf", Size: 0})
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/etc/resolv.conf", Size: 0})
}
artifacts = append(artifacts, ArtifactNode{Name: "/test", Size: 4})
artifacts = append(artifacts, ArtifactNode{Name: "/test2", Size: 4})
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/test", Size: 4})
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/test2", Size: 4})
Expect(b.Changes(opts, opts2)).To(Equal(
[]ArtifactLayer{{
Expect(compiler.GenerateChanges(b, opts, opts2)).To(Equal(
[]artifact.ArtifactLayer{{
FromImage: "luet/base",
ToImage: "test",
Diffs: ArtifactDiffs{
Diffs: artifact.ArtifactDiffs{
Additions: artifacts,
},
}}))
opts2 = CompilerBackendOptions{
opts2 = backend.Options{
ImageName: "test",
SourcePath: tmpdir,
DockerFileName: "LuetDockerfile",
@@ -133,7 +132,7 @@ RUN echo bar > /test2`))
}
Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(b.ImageExists(opts2.ImageName)).To(BeFalse())
})


@@ -20,7 +20,8 @@ import (
"os/exec"
"strings"
"github.com/mudler/luet/pkg/compiler"
bus "github.com/mudler/luet/pkg/bus"
. "github.com/mudler/luet/pkg/logger"
"github.com/pkg/errors"
@@ -28,13 +29,14 @@ import (
type SimpleImg struct{}
func NewSimpleImgBackend() compiler.CompilerBackend {
func NewSimpleImgBackend() *SimpleImg {
return &SimpleImg{}
}
// TODO: Still missing: labels and build args expansion
func (*SimpleImg) BuildImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleImg) BuildImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePreBuild, opts)
buildarg := genBuildCommand(opts)
@@ -46,13 +48,14 @@ func (*SimpleImg) BuildImage(opts compiler.CompilerBackendOptions) error {
if err != nil {
return err
}
bus.Manager.Publish(bus.EventImagePostBuild, opts)
Info(":tea: Building image " + name + " done")
return nil
}
func (*SimpleImg) RemoveImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleImg) RemoveImage(opts Options) error {
name := opts.ImageName
buildarg := []string{"rm", name}
Spinner(22)
@@ -66,8 +69,10 @@ func (*SimpleImg) RemoveImage(opts compiler.CompilerBackendOptions) error {
return nil
}
func (*SimpleImg) DownloadImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleImg) DownloadImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePull, opts)
buildarg := []string{"pull", name}
Debug(":tea: Downloading image " + name)
@@ -81,6 +86,7 @@ func (*SimpleImg) DownloadImage(opts compiler.CompilerBackendOptions) error {
}
Info(":tea: Image " + name + " downloaded")
bus.Manager.Publish(bus.EventImagePostPull, opts)
return nil
}
@@ -116,7 +122,7 @@ func (*SimpleImg) ImageExists(imagename string) bool {
return false
}
func (s *SimpleImg) ImageDefinitionToTar(opts compiler.CompilerBackendOptions) error {
func (s *SimpleImg) ImageDefinitionToTar(opts Options) error {
if err := s.BuildImage(opts); err != nil {
return errors.Wrap(err, "Failed building image")
}
@@ -129,7 +135,7 @@ func (s *SimpleImg) ImageDefinitionToTar(opts compiler.CompilerBackendOptions) e
return nil
}
func (*SimpleImg) ExportImage(opts compiler.CompilerBackendOptions) error {
func (*SimpleImg) ExportImage(opts Options) error {
name := opts.ImageName
path := opts.Destination
buildarg := []string{"save", "-o", path, name}
@@ -147,7 +153,7 @@ func (*SimpleImg) ExportImage(opts compiler.CompilerBackendOptions) error {
}
// ExtractRootfs extracts the docker image content inside the destination
func (s *SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms bool) error {
func (s *SimpleImg) ExtractRootfs(opts Options, keepPerms bool) error {
name := opts.ImageName
path := opts.Destination
@@ -173,20 +179,18 @@ func (s *SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerm
return nil
}
// TODO: Use container-diff (https://github.com/GoogleContainerTools/container-diff) for checking out layer diffs
// Changes uses container-diff (https://github.com/GoogleContainerTools/container-diff) for retrieving layer diffs
func (i *SimpleImg) Changes(fromImage, toImage compiler.CompilerBackendOptions) ([]compiler.ArtifactLayer, error) {
return GenerateChanges(i, fromImage, toImage)
}
func (*SimpleImg) Push(opts compiler.CompilerBackendOptions) error {
func (*SimpleImg) Push(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePush, opts)
pusharg := []string{"push", name}
out, err := exec.Command("img", pusharg...).CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed pushing image: "+string(out))
}
Info(":tea: Pushed image:", name)
bus.Manager.Publish(bus.EventImagePostPush, opts)
//Info(string(out))
return nil
}
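
Since the img backend now publishes the same pre/post build, pull and push events as the Docker backend, callers can swap the two without losing event notifications. A hedged sketch, assuming both concrete types still satisfy the compiler.CompilerBackend interface as the former factory implied:

// Hypothetical helper, not part of the changeset.
package main

import (
	"github.com/mudler/luet/pkg/compiler"
	"github.com/mudler/luet/pkg/compiler/backend"
)

func pickBackend(useImg bool) compiler.CompilerBackend {
	if useImg {
		return backend.NewSimpleImgBackend()
	}
	return backend.NewSimpleDockerBackend()
}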


@@ -13,10 +13,10 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package backend_test
package compiler_test
import (
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
. "github.com/onsi/ginkgo"
@@ -24,7 +24,7 @@ import (
)
var _ = Describe("Docker image diffs", func() {
var b compiler.CompilerBackend
var b CompilerBackend
BeforeEach(func() {
b = NewSimpleDockerBackend()
@@ -32,7 +32,7 @@ var _ = Describe("Docker image diffs", func() {
Context("Generate diffs from docker images", func() {
It("Detect no changes", func() {
opts := compiler.CompilerBackendOptions{
opts := Options{
ImageName: "alpine:latest",
}
err := b.DownloadImage(opts)
@@ -47,18 +47,18 @@ var _ = Describe("Docker image diffs", func() {
})
It("Detects additions and changed files", func() {
err := b.DownloadImage(compiler.CompilerBackendOptions{
err := b.DownloadImage(Options{
ImageName: "quay.io/mocaccino/micro",
})
Expect(err).ToNot(HaveOccurred())
err = b.DownloadImage(compiler.CompilerBackendOptions{
err = b.DownloadImage(Options{
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(b, compiler.CompilerBackendOptions{
layers, err := GenerateChanges(b, Options{
ImageName: "quay.io/mocaccino/micro",
}, compiler.CompilerBackendOptions{
}, Options{
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())

File diff suppressed because it is too large


@@ -21,9 +21,12 @@ import (
. "github.com/mudler/luet/pkg/compiler"
sd "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -39,7 +42,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -54,19 +57,18 @@ var _ = Describe("Compiler", func() {
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
@@ -74,6 +76,68 @@ var _ = Describe("Compiler", func() {
})
})
Context("Copy and Join", func() {
It("Compiles it correctly with Copy", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/copy")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("result"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("bina/busybox"))).To(BeTrue())
})
It("Compiles it correctly with Join", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/join")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("newc"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
})
})
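
The tests above switch to the new constructor style: functional options such as options.Concurrency replace the former NewDefaultCompilerOptions/solver.Options pair, and compression is set through the exported Options field. A hedged sketch follows; the helper name, the pkg.PackageDatabase parameter type and the slice-of-errors return of CompileParallel are assumptions based on the test code:

// Hypothetical helper, not part of the changeset.
package main

import (
	"github.com/mudler/luet/pkg/compiler"
	sd "github.com/mudler/luet/pkg/compiler/backend"
	"github.com/mudler/luet/pkg/compiler/types/compression"
	"github.com/mudler/luet/pkg/compiler/types/options"
	compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
	pkg "github.com/mudler/luet/pkg/package"
)

func compileB(db pkg.PackageDatabase, outdir string) error {
	c := compiler.NewLuetCompiler(
		sd.NewSimpleDockerBackend(),
		db,                     // typically generalRecipe.GetDatabase()
		options.Concurrency(2), // knobs are passed as variadic functional options
	)
	// Direct field access replaces the old SetCompressionType setter.
	c.Options.CompressionType = compression.GZip

	spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
	if err != nil {
		return err
	}
	spec.SetOutputPath(outdir)

	_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
	if len(errs) > 0 {
		return errs[0]
	}
	return nil
}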
Context("Simple package build definition", func() {
It("Compiles it in parallel", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
@@ -83,7 +147,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(1))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -98,12 +162,11 @@ var _ = Describe("Compiler", func() {
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec, spec2))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2))
Expect(errs).To(BeNil())
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
})
@@ -118,7 +181,7 @@ var _ = Describe("Compiler", func() {
err = generalRecipe.Load("../../tests/fixtures/templates")
Expect(err).ToNot(HaveOccurred())
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
pkg, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -142,7 +205,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -159,30 +222,29 @@ var _ = Describe("Compiler", func() {
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec, spec2, spec3))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(3))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("c"))
content1, err := fileHelper.Read(spec.Rel("c"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("cd"))
content2, err := fileHelper.Read(spec.Rel("cd"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("c\n"))
Expect(content2).To(Equal("c\n"))
content1, err = helpers.Read(spec.Rel("d"))
content1, err = fileHelper.Read(spec.Rel("d"))
Expect(err).ToNot(HaveOccurred())
content2, err = helpers.Read(spec.Rel("dd"))
content2, err = fileHelper.Read(spec.Rel("dd"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("s\n"))
Expect(content2).To(Equal("dd\n"))
@@ -199,7 +261,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(1))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "extra", Category: "layer", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -207,27 +269,26 @@ var _ = Describe("Compiler", func() {
Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
artifacts2, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec2))
artifacts2, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec2))
Expect(errs).To(BeNil())
Expect(len(artifacts2)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
for _, artifact := range artifacts2 {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("etc/hosts"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test1"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("etc/hosts"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test1"))).To(BeTrue())
})
It("Compiles and includes ony wanted files", func() {
@@ -241,7 +302,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -250,19 +311,18 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
})
It("Compiles and excludes files", func() {
@@ -276,7 +336,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -285,20 +345,19 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
})
It("Compiles includes and excludes files", func() {
@@ -312,7 +371,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -321,20 +380,19 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
})
It("Compiles and excludes ony wanted files also from unpacked packages", func() {
@@ -348,7 +406,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -357,18 +415,17 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
})
It("Compiles includes and excludes ony wanted files also from unpacked packages", func() {
@@ -382,7 +439,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -391,18 +448,17 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
})
It("Compiles and includes ony wanted files also from unpacked packages", func() {
@@ -416,7 +472,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -425,22 +481,21 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Exists(spec.Rel("var/lib/udhcpd"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test5"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("test2"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("lib/firmware"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("var/lib/udhcpd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test2"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("lib/firmware"))).ToNot(BeTrue())
})
It("Compiles a more complex tree", func() {
@@ -454,7 +509,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "pkgs-checker", Category: "package", Version: "9999"})
Expect(err).ToNot(HaveOccurred())
@@ -463,25 +518,24 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("extra-layer"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("base-layer-0.1.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
})
It("Compiles with provides support", func() {
@@ -495,7 +549,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "d", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -504,27 +558,26 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("d"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("dd"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("c"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("cd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("c"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("cd"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
})
It("Compiles with provides and selectors support", func() {
@@ -539,7 +592,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "d", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -548,27 +601,26 @@ var _ = Describe("Compiler", func() {
// Expect(err).ToNot(HaveOccurred())
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("d"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("dd"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("c"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("cd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("c"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("cd"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
})
It("Compiles revdeps", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
@@ -581,7 +633,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "extra", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
@@ -591,21 +643,21 @@ var _ = Describe("Compiler", func() {
spec.SetOutputPath(tmpdir)
artifacts, errs := compiler.CompileWithReverseDeps(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileWithReverseDeps(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(2))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("extra-layer"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
})
It("Compiles complex dependencies trees with best matches", func() {
@@ -619,7 +671,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(10))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "vhba", Category: "sys-fs-5.4.2", Version: "20190410"})
Expect(err).ToNot(HaveOccurred())
@@ -629,22 +681,22 @@ var _ = Describe("Compiler", func() {
spec.SetOutputPath(tmpdir)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(6))
Expect(len(artifacts[0].Dependencies)).To(Equal(6))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("vhba-sys-fs-5.4.2-20190410.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("sabayon-build-portage-layer-0.20191126.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("build-layer-0.1.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("build-sabayon-overlay-layer-0.20191212.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("build-sabayon-overlays-layer-0.1.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("linux-sabayon-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("sabayon-sources-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("vhba"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("sabayon-build-portage-layer-0.20191126.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("build-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("build-sabayon-overlay-layer-0.20191212.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("build-sabayon-overlays-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("linux-sabayon-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("sabayon-sources-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("vhba"))).To(BeTrue())
})
It("Compiles revdeps with seeds", func() {
@@ -658,42 +710,42 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
spec.SetOutputPath(tmpdir)
artifacts, errs := compiler.CompileWithReverseDeps(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileWithReverseDeps(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(4))
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
}
// A deps on B, so A artifacts are here:
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
// B
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("artifact42"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("artifact42"))).To(BeTrue())
// C depends on B, so B is here
content1, err := helpers.Read(spec.Rel("c"))
content1, err := fileHelper.Read(spec.Rel("c"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("cd"))
content2, err := fileHelper.Read(spec.Rel("cd"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("c\n"))
Expect(content2).To(Equal("c\n"))
// D is here as it requires C, and C was recompiled
content1, err = helpers.Read(spec.Rel("d"))
content1, err = fileHelper.Read(spec.Rel("d"))
Expect(err).ToNot(HaveOccurred())
content2, err = helpers.Read(spec.Rel("dd"))
content2, err = fileHelper.Read(spec.Rel("dd"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("s\n"))
Expect(content2).To(Equal("dd\n"))
@@ -710,7 +762,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -722,24 +774,23 @@ var _ = Describe("Compiler", func() {
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
for _, artifact := range artifacts {
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
for _, d := range artifact.GetDependencies() {
Expect(helpers.Exists(d.GetPath())).To(BeTrue())
Expect(helpers.Untar(d.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
for _, d := range artifact.Dependencies {
Expect(fileHelper.Exists(d.Path)).To(BeTrue())
Expect(helpers.Untar(d.Path, tmpdir, false)).ToNot(HaveOccurred())
}
}
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
})
})
@@ -753,7 +804,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
@@ -765,15 +816,14 @@ var _ = Describe("Compiler", func() {
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
Expect(helpers.Untar(spec.Rel("runtime-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("var"))).ToNot(BeTrue())
})
})
@@ -786,7 +836,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{
Name: "dironly",
@@ -814,21 +864,19 @@ var _ = Describe("Compiler", func() {
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir2)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec, spec2))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(2))
Expect(len(artifacts[0].GetDependencies())).To(Equal(0))
Expect(len(artifacts[0].Dependencies)).To(Equal(0))
Expect(helpers.Untar(spec.Rel("dironly-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test1"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test2"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test1"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test2"))).To(BeTrue())
Expect(helpers.Untar(spec2.Rel("dironly_filter-test-1.0.package.tar"), tmpdir2, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec2.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
Expect(helpers.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
})
})
@@ -841,11 +889,11 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
compiler.SetCompressionType(GZip)
compiler.Options.CompressionType = compression.GZip
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
@@ -853,18 +901,17 @@ var _ = Describe("Compiler", func() {
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.package.tar"))).To(BeFalse())
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.package.tar.gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.package.tar"))).To(BeFalse())
Expect(artifacts[0].Unpack(tmpdir, false)).ToNot(HaveOccurred())
// Expect(helpers.Untar(spec.Rel("runtime-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("var"))).ToNot(BeTrue())
})
})
@@ -877,7 +924,7 @@ var _ = Describe("Compiler", func() {
err := generalRecipe.Load("../../tests/fixtures/includeimage")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
specs, err := compiler.FromDatabase(generalRecipe.GetDatabase(), true, "")
Expect(err).ToNot(HaveOccurred())
@@ -896,11 +943,11 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
compiler.SetCompressionType(GZip)
compiler.Options.CompressionType = compression.GZip
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
@@ -908,20 +955,19 @@ var _ = Describe("Compiler", func() {
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(artifacts[0].GetFiles()).To(ContainElement("bin/busybox"))
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
Expect(artifacts[0].Files).To(ContainElement("bin/busybox"))
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
art, err := LoadArtifactFromYaml(spec)
Expect(err).ToNot(HaveOccurred())
files := art.GetFiles()
files := art.Files
Expect(files).To(ContainElement("bin/busybox"))
})
})

View File

@@ -0,0 +1,161 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler
import (
"fmt"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
"github.com/mudler/luet/pkg/config"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/pkg/errors"
)
// ImageHashTree holds the Database and the options used to resolve
// PackageImageHashTrees for a given specfile.
// It is responsible for returning a concrete result
// which identifies a Package in a HashTree.
type ImageHashTree struct {
Database pkg.PackageDatabase
SolverOptions config.LuetSolverOptions
}
// PackageImageHashTree represents the Package within a given image hash tree.
// The hash tree is constructed from the set of images representing
// the package during its build stage. A Hash is assigned to each image
// from the package fingerprint, plus the SAT solver assertion result (which is hashed as well)
// and the specfile signatures. This guarantees that each image of the build stage
// is unique and can be identified later on.
type PackageImageHashTree struct {
Target *solver.PackageAssert
Dependencies solver.PackagesAssertions
Solution solver.PackagesAssertions
dependencyBuilderImageHashes map[string]string
SourceHash string
BuilderImageHash string
}
func NewHashTree(db pkg.PackageDatabase) *ImageHashTree {
return &ImageHashTree{
Database: db,
}
}
func (ht *PackageImageHashTree) DependencyBuildImage(p pkg.Package) (string, error) {
found, ok := ht.dependencyBuilderImageHashes[p.GetFingerPrint()]
if !ok {
return "", errors.New("package hash not found")
}
return found, nil
}
func (ht *PackageImageHashTree) String() string {
return fmt.Sprintf(
"Target buildhash: %s\nTarget packagehash: %s\nBuilder Imagehash: %s\nSource Imagehash: %s\n",
ht.Target.Hash.BuildHash,
ht.Target.Hash.PackageHash,
ht.BuilderImageHash,
ht.SourceHash,
)
}
// Query takes a compiler and a compilation spec and returns a PackageImageHashTree tied to it.
// The PackageImageHashTree contains all the information needed to resolve the spec's build images
// in order to reproducibly rebuild images from packages.
func (ht *ImageHashTree) Query(cs *LuetCompiler, p *compilerspec.LuetCompilationSpec) (*PackageImageHashTree, error) {
assertions, err := ht.resolve(cs, p)
if err != nil {
return nil, err
}
targetAssertion := assertions.Search(p.GetPackage().GetFingerPrint())
dependencies := assertions.Drop(p.GetPackage())
var sourceHash string
imageHashes := map[string]string{}
for _, assertion := range dependencies {
var depbuildImageTag string
compileSpec, err := cs.FromPackage(assertion.Package)
if err != nil {
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
}
if compileSpec.GetImage() != "" {
depbuildImageTag = assertion.Hash.BuildHash
} else {
depbuildImageTag = ht.genBuilderImageTag(compileSpec, targetAssertion.Hash.PackageHash)
}
imageHashes[assertion.Package.GetFingerPrint()] = depbuildImageTag
sourceHash = assertion.Hash.PackageHash
}
return &PackageImageHashTree{
Dependencies: dependencies,
Target: targetAssertion,
SourceHash: sourceHash,
BuilderImageHash: ht.genBuilderImageTag(p, targetAssertion.Hash.PackageHash),
dependencyBuilderImageHashes: imageHashes,
Solution: assertions,
}, nil
}
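A minimal usage sketch of the hash tree, assuming an already constructed *LuetCompiler and a spec obtained through FromPackage (the function below is illustrative and not part of this changeset):
// Illustrative only: shows how Query and DependencyBuildImage compose.
func exampleHashTreeQuery(cs *LuetCompiler, spec *compilerspec.LuetCompilationSpec, db pkg.PackageDatabase) error {
	ht := NewHashTree(db)
	packageHash, err := ht.Query(cs, spec)
	if err != nil {
		return err
	}
	// Unique tag of the builder image for the target package.
	fmt.Println("builder image:", packageHash.BuilderImageHash)
	// Each dependency resolves to its own build-image hash.
	for _, assertion := range packageHash.Dependencies {
		tag, err := packageHash.DependencyBuildImage(assertion.Package)
		if err != nil {
			return err
		}
		fmt.Println(assertion.Package.HumanReadableString(), "->", tag)
	}
	return nil
}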
func (ht *ImageHashTree) genBuilderImageTag(p *compilerspec.LuetCompilationSpec, packageImage string) string {
// Use packageImage as a salt for the fingerprint being hashed,
// so the hash stays unique even when some package dependencies
// have completely different depgraphs.
return fmt.Sprintf("builder-%s", p.GetPackage().HashFingerprint(packageImage))
}
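The effect of the salt can be pictured with a conceptual sketch; this is not the real pkg.Package.HashFingerprint implementation, only an illustration of why two identical fingerprints end up with different builder tags when their salts differ:
// Conceptual sketch only (assumes "crypto/md5" is imported; the real
// HashFingerprint may use a different scheme).
func saltedFingerprintSketch(fingerprint, salt string) string {
	sum := md5.Sum([]byte(fingerprint + salt))
	return fmt.Sprintf("%x", sum)
}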
// resolve computes the dependency tree of a compilation spec and returns solver assertions
// in order to be able to compile the spec.
func (ht *ImageHashTree) resolve(cs *LuetCompiler, p *compilerspec.LuetCompilationSpec) (solver.PackagesAssertions, error) {
dependencies, err := cs.ComputeDepTree(p)
if err != nil {
return nil, errors.Wrap(err, "While computing a solution for "+p.GetPackage().HumanReadableString())
}
// Get hashes from the buildspecs
salts := map[string]string{}
for _, assertion := range dependencies { //highly dependent on the order
if assertion.Value {
spec, err := cs.FromPackage(assertion.Package)
if err != nil {
return nil, errors.Wrap(err, "while computing hash buildspecs")
}
hash, err := spec.Hash()
if err != nil {
return nil, errors.Wrap(err, "failed computing hash")
}
salts[assertion.Package.GetFingerPrint()] = hash
}
}
assertions := solver.PackagesAssertions{}
for _, assertion := range dependencies { //highly dependent on the order
if assertion.Value {
nthsolution := dependencies.Cut(assertion.Package)
assertion.Hash = solver.PackageHash{
BuildHash: nthsolution.SaltedHashFrom(assertion.Package, salts),
PackageHash: nthsolution.SaltedAssertionHash(salts),
}
assertion.Package.SetTreeDir(p.Package.GetTreeDir())
assertions = append(assertions, assertion)
}
}
return assertions, nil
}

View File

@@ -0,0 +1,145 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler_test
import (
. "github.com/mudler/luet/pkg/compiler"
sd "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/options"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("ImageHashTree", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
hashtree := NewHashTree(generalRecipe.GetDatabase())
Context("Simple package definition", func() {
BeforeEach(func() {
generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
hashtree = NewHashTree(generalRecipe.GetDatabase())
})
It("Calculates the hash correctly", func() {
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
packageHash, err := hashtree.Query(compiler, spec)
Expect(err).ToNot(HaveOccurred())
Expect(packageHash.Target.Hash.BuildHash).To(Equal("4db24406e8db30a3310a1cf8c4d4e19597745e6d41b189dc51a73ac4a50cc9e6"))
Expect(packageHash.Target.Hash.PackageHash).To(Equal("4c867c9bab6c71d9420df75806e7a2f171dbc08487852ab4e2487bab04066cf2"))
Expect(packageHash.BuilderImageHash).To(Equal("builder-e6f9c5552a67c463215b0a9e4f7c7fc8"))
})
})
expectedPackageHash := "15811d83d0f8360318c54d91dcae3714f8efb39bf872572294834880f00ee7a8"
Context("complex package definition", func() {
BeforeEach(func() {
generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/upgrade_old_repo_revision")
Expect(err).ToNot(HaveOccurred())
compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
hashtree = NewHashTree(generalRecipe.GetDatabase())
})
It("Calculates the hash correctly", func() {
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
packageHash, err := hashtree.Query(compiler, spec)
Expect(err).ToNot(HaveOccurred())
Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).To(Equal(expectedPackageHash))
Expect(packageHash.SourceHash).To(Equal(expectedPackageHash))
Expect(packageHash.BuilderImageHash).To(Equal("builder-3d739cab442aec15a6da238481df73c5"))
//Expect(packageHash.Target.Hash.BuildHash).To(Equal("79d7107d13d578b362e6a7bf10ec850efce26316405b8d732ce8f9e004d64281"))
Expect(packageHash.Target.Hash.PackageHash).To(Equal("99c4ebb4bc4754985fcc28677badf90f525aa231b1db0fe75659f11b86dc20e8"))
a := &pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.1"}
hash, err := packageHash.DependencyBuildImage(a)
Expect(err).ToNot(HaveOccurred())
Expect(hash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
assertionA := packageHash.Dependencies.Search(a.GetFingerPrint())
Expect(assertionA.Hash.PackageHash).To(Equal(expectedPackageHash))
b := &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}
assertionB := packageHash.Dependencies.Search(b.GetFingerPrint())
Expect(assertionB.Hash.PackageHash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
hashB, err := packageHash.DependencyBuildImage(b)
Expect(err).ToNot(HaveOccurred())
Expect(hashB).To(Equal("2668e418eab6861404834ad617713e39b8e58f68016a1fbfcc9384efdd037376"))
})
})
Context("complex package definition, with small change in build.yaml", func() {
BeforeEach(func() {
generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
//Definition of A here is slightly changed in the steps build.yaml file (1 character only)
err := generalRecipe.Load("../../tests/fixtures/upgrade_old_repo_revision_content_changed")
Expect(err).ToNot(HaveOccurred())
compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
hashtree = NewHashTree(generalRecipe.GetDatabase())
})
It("Calculates the hash correctly", func() {
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
packageHash, err := hashtree.Query(compiler, spec)
Expect(err).ToNot(HaveOccurred())
Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).ToNot(Equal(expectedPackageHash))
sourceHash := "1d91b13d0246fa000085a1071c63397d21546300b17f69493f22315a64b717d4"
Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).To(Equal(sourceHash))
Expect(packageHash.SourceHash).To(Equal(sourceHash))
Expect(packageHash.SourceHash).ToNot(Equal(expectedPackageHash))
Expect(packageHash.BuilderImageHash).To(Equal("builder-03ee108a7c56b17ee568ace0800dd16d"))
//Expect(packageHash.Target.Hash.BuildHash).To(Equal("79d7107d13d578b362e6a7bf10ec850efce26316405b8d732ce8f9e004d64281"))
Expect(packageHash.Target.Hash.PackageHash).To(Equal("7677da23b2cc866c2d07aa4a58fbf703340f2f78c0efbb1ba9faf8979f250c87"))
a := &pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.1"}
hash, err := packageHash.DependencyBuildImage(a)
Expect(err).ToNot(HaveOccurred())
Expect(hash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
assertionA := packageHash.Dependencies.Search(a.GetFingerPrint())
Expect(assertionA.Hash.PackageHash).To(Equal("1d91b13d0246fa000085a1071c63397d21546300b17f69493f22315a64b717d4"))
Expect(assertionA.Hash.PackageHash).ToNot(Equal(expectedPackageHash))
b := &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}
assertionB := packageHash.Dependencies.Search(b.GetFingerPrint())
Expect(assertionB.Hash.PackageHash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
hashB, err := packageHash.DependencyBuildImage(b)
Expect(err).ToNot(HaveOccurred())
Expect(hashB).To(Equal("2668e418eab6861404834ad617713e39b8e58f68016a1fbfcc9384efdd037376"))
})
})
})

View File

@@ -1,201 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler
import (
"runtime"
"github.com/mudler/luet/pkg/config"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
)
type Compiler interface {
Compile(bool, CompilationSpec) (Artifact, error)
CompileParallel(keepPermissions bool, ps CompilationSpecs) ([]Artifact, []error)
CompileWithReverseDeps(keepPermissions bool, ps CompilationSpecs) ([]Artifact, []error)
ComputeDepTree(p CompilationSpec) (solver.PackagesAssertions, error)
ComputeMinimumCompilableSet(p ...CompilationSpec) ([]CompilationSpec, error)
SetConcurrency(i int)
FromPackage(pkg.Package) (CompilationSpec, error)
FromDatabase(db pkg.PackageDatabase, minimum bool, dst string) ([]CompilationSpec, error)
SetBackend(CompilerBackend)
GetBackend() CompilerBackend
SetBackendArgs([]string)
SetCompressionType(t CompressionImplementation)
}
type CompilerBackendOptions struct {
ImageName string
SourcePath string
DockerFileName string
Destination string
Context string
BackendArgs []string
}
type CompilerOptions struct {
ImageRepository string
PullFirst, KeepImg, Push bool
Concurrency int
CompressionType CompressionImplementation
Wait bool
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
BuildValuesFile string
PackageTargetOnly bool
}
func NewDefaultCompilerOptions() *CompilerOptions {
return &CompilerOptions{
ImageRepository: "luet/cache",
PullFirst: false,
Push: false,
CompressionType: None,
KeepImg: true,
Concurrency: runtime.NumCPU(),
OnlyDeps: false,
NoDeps: false,
}
}
type CompilerBackend interface {
BuildImage(CompilerBackendOptions) error
ExportImage(CompilerBackendOptions) error
RemoveImage(CompilerBackendOptions) error
Changes(fromImage, toImage CompilerBackendOptions) ([]ArtifactLayer, error)
ImageDefinitionToTar(CompilerBackendOptions) error
ExtractRootfs(opts CompilerBackendOptions, keepPerms bool) error
CopyImage(string, string) error
DownloadImage(opts CompilerBackendOptions) error
Push(opts CompilerBackendOptions) error
ImageAvailable(string) bool
ImageExists(string) bool
}
type Artifact interface {
GetPath() string
SetPath(string)
GetDependencies() []Artifact
SetDependencies(d []Artifact)
GetSourceAssertion() solver.PackagesAssertions
SetSourceAssertion(as solver.PackagesAssertions)
SetCompileSpec(as CompilationSpec)
GetCompileSpec() CompilationSpec
WriteYaml(dst string) error
Unpack(dst string, keepPerms bool) error
Compress(src string, concurrency int) error
SetCompressionType(t CompressionImplementation)
FileList() ([]string, error)
Hash() error
Verify() error
SetFiles(f []string)
GetFiles() []string
GetFileName() string
GetChecksums() Checksums
SetChecksums(c Checksums)
GenerateFinalImage(string, CompilerBackend, bool) (CompilerBackendOptions, error)
GetUncompressedName() string
}
type ArtifactNode struct {
Name string `json:"Name"`
Size int `json:"Size"`
}
type ArtifactDiffs struct {
Additions []ArtifactNode `json:"Adds"`
Deletions []ArtifactNode `json:"Dels"`
Changes []ArtifactNode `json:"Mods"`
}
type ArtifactLayer struct {
FromImage string `json:"Image1"`
ToImage string `json:"Image2"`
Diffs ArtifactDiffs `json:"Diff"`
}
type ArtifactLayerSummary struct {
FromImage string `json:"image1"`
ToImage string `json:"image2"`
AddFiles int `json:"add_files"`
AddSizes int64 `json:"add_sizes"`
DelFiles int `json:"del_files"`
DelSizes int64 `json:"del_sizes"`
ChangeFiles int `json:"change_files"`
ChangeSizes int64 `json:"change_sizes"`
}
type ArtifactLayersSummary struct {
Layers []ArtifactLayerSummary `json:"summary"`
}
// CompilationSpec represent a compilation specification derived from a package
type CompilationSpec interface {
ImageUnpack() bool // tells if the definition is just an image
GetIncludes() []string
GetExcludes() []string
RenderBuildImage() (string, error)
WriteBuildImageDefinition(string) error
RenderStepImage(image string) (string, error)
WriteStepImageDefinition(fromimage, path string) error
GetPackage() pkg.Package
BuildSteps() []string
GetSeedImage() string
SetSeedImage(string)
GetImage() string
SetImage(string)
SetOutputPath(string)
GetOutputPath() string
Rel(string) string
GetPreBuildSteps() []string
GetSourceAssertion() solver.PackagesAssertions
SetSourceAssertion(as solver.PackagesAssertions)
GetRetrieve() []string
CopyRetrieves(dest string) error
SetPackageDir(string)
GetPackageDir() string
EmptyPackage() bool
UnpackedPackage() bool
HasImageSource() bool
IsVirtual() bool
}
type CompilationSpecs interface {
Unique() CompilationSpecs
Len() int
All() []CompilationSpec
Add(CompilationSpec)
Remove(s CompilationSpecs) CompilationSpecs
}

View File

@@ -13,12 +13,14 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler
package artifact
import (
"archive/tar"
"bufio"
"bytes"
"crypto/sha1"
"encoding/base64"
"fmt"
"io"
"io/ioutil"
@@ -27,7 +29,6 @@ import (
"path/filepath"
"regexp"
system "github.com/docker/docker/pkg/system"
zstd "github.com/klauspost/compress/zstd"
gzip "github.com/klauspost/pgzip"
@@ -36,8 +37,12 @@ import (
"sync"
bus "github.com/mudler/luet/pkg/bus"
backend "github.com/mudler/luet/pkg/compiler/backend"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
. "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
@@ -45,54 +50,31 @@ import (
yaml "gopkg.in/yaml.v2"
)
type CompressionImplementation string
const (
None CompressionImplementation = "none" // e.g. tar for standard packages
GZip CompressionImplementation = "gzip"
Zstandard CompressionImplementation = "zstd"
)
type ArtifactIndex []Artifact
func (i ArtifactIndex) CleanPath() ArtifactIndex {
newIndex := ArtifactIndex{}
for _, n := range i {
art := n.(*PackageArtifact)
// FIXME: This is a dup and makes it difficult to add attributes to artifacts
newIndex = append(newIndex, &PackageArtifact{
Path: path.Base(n.GetPath()),
SourceAssertion: art.SourceAssertion,
CompileSpec: art.CompileSpec,
Dependencies: art.Dependencies,
CompressionType: art.CompressionType,
Checksums: art.Checksums,
Files: art.Files,
})
}
return newIndex
//Update if exists, otherwise just create
}
// When compiling, we also write a fingerprint.metadata.yaml file containing the PackageArtifact. This lets a separate command create the repository,
// which then consists of just a repository.yaml describing the repository structure along with the list of package artifacts.
// In this way a generic client can fetch the packages and, after unpacking the tree, perform queries to install packages.
type PackageArtifact struct {
Path string `json:"path"`
Dependencies []*PackageArtifact `json:"dependencies"`
CompileSpec *LuetCompilationSpec `json:"compilationspec"`
Checksums Checksums `json:"checksums"`
SourceAssertion solver.PackagesAssertions `json:"-"`
CompressionType CompressionImplementation `json:"compressiontype"`
Files []string `json:"files"`
Dependencies []*PackageArtifact `json:"dependencies"`
CompileSpec *compilerspec.LuetCompilationSpec `json:"compilationspec"`
Checksums Checksums `json:"checksums"`
SourceAssertion solver.PackagesAssertions `json:"-"`
CompressionType compression.Implementation `json:"compressiontype"`
Files []string `json:"files"`
PackageCacheImage string `json:"package_cacheimage"`
}
func NewPackageArtifact(path string) Artifact {
return &PackageArtifact{Path: path, Dependencies: []*PackageArtifact{}, Checksums: Checksums{}, CompressionType: None}
func (p *PackageArtifact) ShallowCopy() *PackageArtifact {
copy := *p
return &copy
}
func NewPackageArtifactFromYaml(data []byte) (Artifact, error) {
func NewPackageArtifact(path string) *PackageArtifact {
return &PackageArtifact{Path: path, Dependencies: []*PackageArtifact{}, Checksums: Checksums{}, CompressionType: compression.None}
}
func NewPackageArtifactFromYaml(data []byte) (*PackageArtifact, error) {
p := &PackageArtifact{Checksums: Checksums{}}
err := yaml.Unmarshal(data, &p)
if err != nil {
@@ -102,42 +84,6 @@ func NewPackageArtifactFromYaml(data []byte) (Artifact, error) {
return p, err
}
func LoadArtifactFromYaml(spec CompilationSpec) (Artifact, error) {
metaFile := spec.GetPackage().GetFingerPrint() + ".metadata.yaml"
dat, err := ioutil.ReadFile(spec.Rel(metaFile))
if err != nil {
return nil, errors.Wrap(err, "Error reading file "+metaFile)
}
art, err := NewPackageArtifactFromYaml(dat)
if err != nil {
return nil, errors.Wrap(err, "Error writing file "+metaFile)
}
// It is relative, set it back to abs
art.SetPath(spec.Rel(art.GetPath()))
return art, nil
}
func (a *PackageArtifact) SetCompressionType(t CompressionImplementation) {
a.CompressionType = t
}
func (a *PackageArtifact) GetChecksums() Checksums {
return a.Checksums
}
func (a *PackageArtifact) SetChecksums(c Checksums) {
a.Checksums = c
}
func (a *PackageArtifact) SetFiles(f []string) {
a.Files = f
}
func (a *PackageArtifact) GetFiles() []string {
return a.Files
}
func (a *PackageArtifact) Hash() error {
return a.Checksums.Generate(a)
}
@@ -181,8 +127,8 @@ func (a *PackageArtifact) WriteYaml(dst string) error {
}
//p := a.CompileSpec.GetPackage().GetPath()
mangle.GetCompileSpec().GetPackage().SetPath("")
for _, ass := range mangle.GetCompileSpec().GetSourceAssertion() {
mangle.CompileSpec.GetPackage().SetPath("")
for _, ass := range mangle.CompileSpec.GetSourceAssertion() {
ass.Package.SetPath("")
}
@@ -191,7 +137,7 @@ func (a *PackageArtifact) WriteYaml(dst string) error {
return errors.Wrap(err, "While marshalling for PackageArtifact YAML")
}
err = ioutil.WriteFile(filepath.Join(dst, a.GetCompileSpec().GetPackage().GetFingerPrint()+".metadata.yaml"), data, os.ModePerm)
err = ioutil.WriteFile(filepath.Join(dst, a.CompileSpec.GetPackage().GetMetadataFilePath()), data, os.ModePerm)
if err != nil {
return errors.Wrap(err, "While writing PackageArtifact YAML")
}
@@ -201,48 +147,8 @@ func (a *PackageArtifact) WriteYaml(dst string) error {
return nil
}
func (a *PackageArtifact) GetSourceAssertion() solver.PackagesAssertions {
return a.SourceAssertion
}
func (a *PackageArtifact) SetCompileSpec(as CompilationSpec) {
a.CompileSpec = as.(*LuetCompilationSpec)
}
func (a *PackageArtifact) GetCompileSpec() CompilationSpec {
return a.CompileSpec
}
func (a *PackageArtifact) SetSourceAssertion(as solver.PackagesAssertions) {
a.SourceAssertion = as
}
func (a *PackageArtifact) GetDependencies() []Artifact {
ret := []Artifact{}
for _, d := range a.Dependencies {
ret = append(ret, d)
}
return ret
}
func (a *PackageArtifact) SetDependencies(d []Artifact) {
ret := []*PackageArtifact{}
for _, dd := range d {
ret = append(ret, dd.(*PackageArtifact))
}
a.Dependencies = ret
}
func (a *PackageArtifact) GetPath() string {
return a.Path
}
func (a *PackageArtifact) GetFileName() string {
return path.Base(a.GetPath())
}
func (a *PackageArtifact) SetPath(p string) {
a.Path = p
return path.Base(a.Path)
}
func (a *PackageArtifact) genDockerfile() string {
@@ -253,14 +159,20 @@ COPY . /`
// CreateArtifactForFile creates a new artifact from the given file
func CreateArtifactForFile(s string, opts ...func(*PackageArtifact)) (*PackageArtifact, error) {
if _, err := os.Stat(s); os.IsNotExist(err) {
return nil, errors.Wrap(err, "artifact path doesn't exist")
}
fileName := path.Base(s)
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
}
defer os.RemoveAll(archive) // clean up
helpers.CopyFile(s, filepath.Join(archive, fileName))
dst := filepath.Join(archive, fileName)
if err := fileHelper.CopyFile(s, dst); err != nil {
return nil, errors.Wrapf(err, "error while copying %s to %s", s, dst)
}
artifact, err := LuetCfg.GetSystem().TempDir("artifact")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
@@ -274,9 +186,13 @@ func CreateArtifactForFile(s string, opts ...func(*PackageArtifact)) (*PackageAr
return a, a.Compress(archive, 1)
}
type ImageBuilder interface {
BuildImage(backend.Options) error
}
// GenerateFinalImage takes an artifact and builds a Docker image with its content
func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend, keepPerms bool) (CompilerBackendOptions, error) {
builderOpts := CompilerBackendOptions{}
func (a *PackageArtifact) GenerateFinalImage(imageName string, b ImageBuilder, keepPerms bool) (backend.Options, error) {
builderOpts := backend.Options{}
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return builderOpts, errors.Wrap(err, "error met while creating tempdir for "+a.Path)
@@ -294,7 +210,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend
return builderOpts, errors.Wrap(err, "error met while uncompressing artifact "+a.Path)
}
empty, err := helpers.DirectoryIsEmpty(uncompressedFiles)
empty, err := fileHelper.DirectoryIsEmpty(uncompressedFiles)
if err != nil {
return builderOpts, errors.Wrap(err, "error met while checking if directory is empty "+uncompressedFiles)
}
@@ -303,7 +219,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend
// We can't generate FROM scratch empty images. Docker will refuse to export them
// workaround: Inject a .virtual empty file
if empty {
helpers.Touch(filepath.Join(uncompressedFiles, ".virtual"))
fileHelper.Touch(filepath.Join(uncompressedFiles, ".virtual"))
}
data := a.genDockerfile()
@@ -311,7 +227,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend
return builderOpts, errors.Wrap(err, "error met while rendering artifact dockerfile "+a.Path)
}
builderOpts = CompilerBackendOptions{
builderOpts = backend.Options{
ImageName: imageName,
SourcePath: archive,
DockerFileName: dockerFile,
@@ -326,7 +242,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b CompilerBackend
func (a *PackageArtifact) Compress(src string, concurrency int) error {
switch a.CompressionType {
case Zstandard:
case compression.Zstandard:
err := helpers.Tar(src, a.Path)
if err != nil {
return err
@@ -364,7 +280,7 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
a.Path = zstdFile
return nil
case GZip:
case compression.GZip:
err := helpers.Tar(src, a.Path)
if err != nil {
return err
@@ -404,15 +320,14 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
default:
return helpers.Tar(src, a.getCompressedName())
}
return errors.New("Compression type must be supplied")
}
func (a *PackageArtifact) getCompressedName() string {
switch a.CompressionType {
case Zstandard:
case compression.Zstandard:
return a.Path + ".zst"
case GZip:
case compression.GZip:
return a.Path + ".gz"
}
return a.Path
@@ -421,12 +336,34 @@ func (a *PackageArtifact) getCompressedName() string {
// GetUncompressedName returns the artifact path without the extension suffix
func (a *PackageArtifact) GetUncompressedName() string {
switch a.CompressionType {
case Zstandard, GZip:
case compression.Zstandard, compression.GZip:
return strings.TrimSuffix(a.Path, filepath.Ext(a.Path))
}
return a.Path
}
func hashContent(bv []byte) string {
hasher := sha1.New()
hasher.Write(bv)
sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
return sha
}
func hashFileContent(path string) (string, error) {
f, err := os.Open(path)
if err != nil {
return "", err
}
defer f.Close()
h := sha1.New()
if _, err := io.Copy(h, f); err != nil {
return "", err
}
return base64.URLEncoding.EncodeToString(h.Sum(nil)), nil
}
func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists, rename the target file with a postfix.
var destPath string
@@ -438,6 +375,7 @@ func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Rea
return nil, nil, err
}
}
tarHash := hashContent(buffer.Bytes())
// If the file is not present in the archive but is defined in mods,
// we still receive the callback. Prevent a nil exception.
@@ -450,13 +388,26 @@ func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Rea
return header, buffer.Bytes(), nil
}
existingHash := ""
f, err := os.Lstat(destPath)
if err == nil {
Debug("File exists already, computing hash for", destPath)
hash, herr := hashFileContent(destPath)
if herr == nil {
existingHash = hash
}
}
Debug("Existing file hash: ", existingHash, "Tar file hashsum: ", tarHash)
// We want to protect the file only if the file hashes differ OR the file sizes differ
differs := (existingHash != "" && existingHash != tarHash) || (err != nil && f != nil && header.Size != f.Size())
// Check if exists
if helpers.Exists(destPath) {
if fileHelper.Exists(destPath) && differs {
for i := 1; i < 1000; i++ {
name := filepath.Join(filepath.Join(filepath.Dir(path),
fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path))))
if helpers.Exists(name) {
if fileHelper.Exists(name) {
continue
}
Info(fmt.Sprintf("Found protected file %s. Creating %s.", destPath,
@@ -509,13 +460,13 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
tarModifier := helpers.NewTarModifierWrapper(dst, tarModifierWrapperFunc)
switch a.CompressionType {
case Zstandard:
case compression.Zstandard:
// Create the uncompressed archive
archive, err := os.Create(a.GetPath() + ".uncompressed")
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return err
}
defer os.RemoveAll(a.GetPath() + ".uncompressed")
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
@@ -534,22 +485,22 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
_, err = io.Copy(archive, d)
if err != nil {
return errors.Wrap(err, "Cannot copy to "+a.GetPath()+".uncompressed")
return errors.Wrap(err, "Cannot copy to "+a.Path+".uncompressed")
}
err = helpers.UntarProtect(a.GetPath()+".uncompressed", dst,
err = helpers.UntarProtect(a.Path+".uncompressed", dst,
LuetCfg.GetGeneral().SameOwner, protectedFiles, tarModifier)
if err != nil {
return err
}
return nil
case GZip:
case compression.GZip:
// Create the uncompressed archive
archive, err := os.Create(a.GetPath() + ".uncompressed")
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return err
}
defer os.RemoveAll(a.GetPath() + ".uncompressed")
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
@@ -567,10 +518,10 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
_, err = io.Copy(archive, r)
if err != nil {
return errors.Wrap(err, "Cannot copy to "+a.GetPath()+".uncompressed")
return errors.Wrap(err, "Cannot copy to "+a.Path+".uncompressed")
}
err = helpers.UntarProtect(a.GetPath()+".uncompressed", dst,
err = helpers.UntarProtect(a.Path+".uncompressed", dst,
LuetCfg.GetGeneral().SameOwner, protectedFiles, tarModifier)
if err != nil {
return err
@@ -578,7 +529,7 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
return nil
// Defaults to tar only (covers when "none" is supplied)
default:
return helpers.UntarProtect(a.GetPath(), dst, LuetCfg.GetGeneral().SameOwner,
return helpers.UntarProtect(a.Path, dst, LuetCfg.GetGeneral().SameOwner,
protectedFiles, tarModifier)
}
return errors.New("Compression type must be supplied")
@@ -588,12 +539,12 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
func (a *PackageArtifact) FileList() ([]string, error) {
var tr *tar.Reader
switch a.CompressionType {
case Zstandard:
archive, err := os.Create(a.GetPath() + ".uncompressed")
case compression.Zstandard:
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return []string{}, err
}
defer os.RemoveAll(a.GetPath() + ".uncompressed")
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
@@ -609,13 +560,13 @@ func (a *PackageArtifact) FileList() ([]string, error) {
}
defer r.Close()
tr = tar.NewReader(r)
case GZip:
case compression.GZip:
// Create the uncompressed archive
archive, err := os.Create(a.GetPath() + ".uncompressed")
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return []string{}, err
}
defer os.RemoveAll(a.GetPath() + ".uncompressed")
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
@@ -634,7 +585,7 @@ func (a *PackageArtifact) FileList() ([]string, error) {
// Defaults to tar only (covers when "none" is supplied)
default:
tarFile, err := os.Open(a.GetPath())
tarFile, err := os.Open(a.Path)
if err != nil {
return []string{}, errors.Wrap(err, "Could not open package archive")
}
@@ -671,47 +622,16 @@ type CopyJob struct {
Artifact string
}
func copyXattr(srcPath, dstPath, attr string) error {
data, err := system.Lgetxattr(srcPath, attr)
if err != nil {
return err
}
if data != nil {
if err := system.Lsetxattr(dstPath, attr, data, 0); err != nil {
return err
}
}
return nil
}
func doCopyXattrs(srcPath, dstPath string) error {
if err := copyXattr(srcPath, dstPath, "security.capability"); err != nil {
return err
}
return copyXattr(srcPath, dstPath, "trusted.overlay.opaque")
}
func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
defer wg.Done()
for job := range s {
//Info("#"+strconv.Itoa(i), "copying", job.Src, "to", job.Dst)
// if dir, err := helpers.IsDirectory(job.Src); err == nil && dir {
// err = helpers.CopyDir(job.Src, job.Dst)
// if err != nil {
// Warning("Error copying dir", job, err)
// }
// continue
// }
_, err := os.Lstat(job.Dst)
if err != nil {
Debug("Copying ", job.Src)
if err := helpers.CopyFile(job.Src, job.Dst); err != nil {
if err := fileHelper.DeepCopyFile(job.Src, job.Dst); err != nil {
Warning("Error copying", job, err)
}
doCopyXattrs(job.Src, job.Dst)
}
}
}
@@ -729,8 +649,24 @@ func compileRegexes(regexes []string) []*regexp.Regexp {
return result
}
type ArtifactNode struct {
Name string `json:"Name"`
Size int `json:"Size"`
}
type ArtifactDiffs struct {
Additions []ArtifactNode `json:"Adds"`
Deletions []ArtifactNode `json:"Dels"`
Changes []ArtifactNode `json:"Mods"`
}
type ArtifactLayer struct {
FromImage string `json:"Image1"`
ToImage string `json:"Image2"`
Diffs ArtifactDiffs `json:"Diff"`
}
// ExtractArtifactFromDelta extracts the deltas described by the given ArtifactLayers from an image in tar format
func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, excludes []string, t CompressionImplementation) (Artifact, error) {
func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, excludes []string, t compression.Implementation) (*PackageArtifact, error) {
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
@@ -852,45 +788,10 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
wg.Wait()
a := NewPackageArtifact(dst)
a.SetCompressionType(t)
a.CompressionType = t
err = a.Compress(archive, concurrency)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
return a, nil
}
func ComputeArtifactLayerSummary(diffs []ArtifactLayer) ArtifactLayersSummary {
ans := ArtifactLayersSummary{
Layers: make([]ArtifactLayerSummary, 0),
}
for _, layer := range diffs {
sum := ArtifactLayerSummary{
FromImage: layer.FromImage,
ToImage: layer.ToImage,
AddFiles: 0,
AddSizes: 0,
DelFiles: 0,
DelSizes: 0,
ChangeFiles: 0,
ChangeSizes: 0,
}
for _, a := range layer.Diffs.Additions {
sum.AddFiles++
sum.AddSizes += int64(a.Size)
}
for _, d := range layer.Diffs.Deletions {
sum.DelFiles++
sum.DelSizes += int64(d.Size)
}
for _, c := range layer.Diffs.Changes {
sum.ChangeFiles++
sum.ChangeSizes += int64(c.Size)
}
ans.Layers = append(ans.Layers, sum)
}
return ans
}

View File

@@ -0,0 +1,32 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package artifact_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestArtifact(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Artifact Suite")
}

View File

@@ -13,7 +13,7 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler_test
package artifact_test
import (
"io/ioutil"
@@ -22,10 +22,14 @@ import (
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/solver"
backend "github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/compiler/types/artifact"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
. "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
@@ -38,18 +42,15 @@ var _ = Describe("Artifact", func() {
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/buildtree")
err := generalRecipe.Load("../../../../tests/fixtures/buildtree")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
cc := NewLuetCompiler(nil, generalRecipe.GetDatabase())
lspec, err := cc.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
Expect(err).ToNot(HaveOccurred())
lspec, ok := spec.(*LuetCompilationSpec)
Expect(ok).To(BeTrue())
Expect(lspec.Steps).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
Expect(lspec.Image).To(Equal("luet/base"))
Expect(lspec.Seed).To(Equal("alpine"))
@@ -71,7 +72,7 @@ var _ = Describe("Artifact", func() {
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -81,7 +82,7 @@ ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin`))
b := NewSimpleDockerBackend()
opts := CompilerBackendOptions{
opts := backend.Options{
ImageName: "luet/base",
SourcePath: tmpdir,
DockerFileName: "Dockerfile",
@@ -89,12 +90,12 @@ ENV PACKAGE_CATEGORY=app-admin`))
}
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "LuetDockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
@@ -105,7 +106,7 @@ ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin
RUN echo foo > /test
RUN echo bar > /test2`))
opts2 := CompilerBackendOptions{
opts2 := backend.Options{
ImageName: "test",
SourcePath: tmpdir,
DockerFileName: "LuetDockerfile",
@@ -113,8 +114,8 @@ RUN echo bar > /test2`))
}
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
diffs, err := b.Changes(opts, opts2)
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
diffs, err := compiler.GenerateChanges(b, opts, opts2)
Expect(err).ToNot(HaveOccurred())
artifacts := []ArtifactNode{{
@@ -135,30 +136,30 @@ RUN echo bar > /test2`))
Additions: artifacts,
},
}}))
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: "test", Destination: rootfs}, false)
err = b.ExtractRootfs(backend.Options{ImageName: "test", Destination: rootfs}, false)
Expect(err).ToNot(HaveOccurred())
artifact, err := ExtractArtifactFromDelta(rootfs, filepath.Join(tmpdir, "package.tar"), diffs, 2, false, []string{}, []string{}, None)
a, err := ExtractArtifactFromDelta(rootfs, filepath.Join(tmpdir, "package.tar"), diffs, 2, false, []string{}, []string{}, compression.None)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "package.tar"))).To(BeTrue())
err = helpers.Untar(artifact.GetPath(), unpacked, false)
Expect(fileHelper.Exists(filepath.Join(tmpdir, "package.tar"))).To(BeTrue())
err = helpers.Untar(a.Path, unpacked, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(unpacked, "test"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(unpacked, "test2"))).To(BeTrue())
content1, err := helpers.Read(filepath.Join(unpacked, "test"))
Expect(fileHelper.Exists(filepath.Join(unpacked, "test"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(unpacked, "test2"))).To(BeTrue())
content1, err := fileHelper.Read(filepath.Join(unpacked, "test"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("foo\n"))
content2, err := helpers.Read(filepath.Join(unpacked, "test2"))
content2, err := fileHelper.Read(filepath.Join(unpacked, "test2"))
Expect(err).ToNot(HaveOccurred())
Expect(content2).To(Equal("bar\n"))
err = artifact.Hash()
err = a.Hash()
Expect(err).ToNot(HaveOccurred())
err = artifact.Verify()
err = a.Verify()
Expect(err).ToNot(HaveOccurred())
Expect(helpers.CopyFile(filepath.Join(tmpdir, "output2.tar"), filepath.Join(tmpdir, "package.tar"))).ToNot(HaveOccurred())
Expect(fileHelper.CopyFile(filepath.Join(tmpdir, "output2.tar"), filepath.Join(tmpdir, "package.tar"))).ToNot(HaveOccurred())
err = artifact.Verify()
err = a.Verify()
Expect(err).To(HaveOccurred())
})
@@ -183,13 +184,13 @@ RUN echo bar > /test2`))
err = ioutil.WriteFile(filepath.Join(tmpdir, "foo", "bar", "test"), testString, 0644)
Expect(err).ToNot(HaveOccurred())
artifact := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
artifact.SetCompileSpec(&LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}})
a := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
a.CompileSpec = &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}}
err = artifact.Compress(tmpdir, 1)
err = a.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := artifact.GenerateFinalImage(resultingImage, b, false)
opts, err := a.GenerateFinalImage(resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
@@ -199,7 +200,7 @@ RUN echo bar > /test2`))
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: resultingImage, Destination: result}, false)
err = b.ExtractRootfs(backend.Options{ImageName: resultingImage, Destination: result}, false)
Expect(err).ToNot(HaveOccurred())
content, err := ioutil.ReadFile(filepath.Join(result, "test"))
@@ -225,13 +226,13 @@ RUN echo bar > /test2`))
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpWork) // clean up
artifact := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
artifact.SetCompileSpec(&LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}})
a := NewPackageArtifact(filepath.Join(tmpWork, "fake.tar"))
a.CompileSpec = &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Version: "1.0"}}
err = artifact.Compress(tmpdir, 1)
err = a.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := artifact.GenerateFinalImage(resultingImage, b, false)
opts, err := a.GenerateFinalImage(resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
@@ -241,10 +242,10 @@ RUN echo bar > /test2`))
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(CompilerBackendOptions{ImageName: resultingImage, Destination: result}, false)
err = b.ExtractRootfs(backend.Options{ImageName: resultingImage, Destination: result}, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.DirectoryIsEmpty(result)).To(BeFalse())
Expect(fileHelper.DirectoryIsEmpty(result)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(result, ".virtual"))
Expect(err).ToNot(HaveOccurred())
@@ -253,15 +254,15 @@ RUN echo bar > /test2`))
It("Retrieves uncompressed name", func() {
a := NewPackageArtifact("foo.tar.gz")
a.SetCompressionType(compiler.GZip)
a.CompressionType = (compression.GZip)
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
a = NewPackageArtifact("foo.tar.zst")
a.SetCompressionType(compiler.Zstandard)
a.CompressionType = compression.Zstandard
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
a = NewPackageArtifact("foo.tar")
a.SetCompressionType(compiler.None)
a.CompressionType = compression.None
Expect(a.GetUncompressedName()).To(Equal("foo.tar"))
})
})

View File

@@ -13,7 +13,7 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler
package artifact
import (
@@ -43,7 +43,7 @@ type HashOptions struct {
}
// Generate generates all Checksums supported for the artifact
func (c *Checksums) Generate(a Artifact) error {
func (c *Checksums) Generate(a *PackageArtifact) error {
return c.generateSHA256(a)
}
@@ -56,13 +56,13 @@ func (c Checksums) Compare(d Checksums) error {
return nil
}
func (c *Checksums) generateSHA256(a Artifact) error {
func (c *Checksums) generateSHA256(a *PackageArtifact) error {
return c.generateSum(a, HashOptions{Hasher: sha256.New(), Type: SHA256})
}
func (c *Checksums) generateSum(a Artifact, opts HashOptions) error {
func (c *Checksums) generateSum(a *PackageArtifact, opts HashOptions) error {
f, err := os.Open(a.GetPath())
f, err := os.Open(a.Path)
if err != nil {
return err
}

View File

@@ -13,13 +13,13 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler_test
package artifact_test
import (
"io/ioutil"
"os"
. "github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/types/artifact"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -40,13 +40,13 @@ var _ = Describe("Checksum", func() {
Expect(len(definitionsum)).To(Equal(0))
Expect(len(definitionsum2)).To(Equal(0))
err = buildsum.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/build.yaml"))
err = buildsum.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/build.yaml"))
Expect(err).ToNot(HaveOccurred())
err = definitionsum.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/definition.yaml"))
err = definitionsum.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/definition.yaml"))
Expect(err).ToNot(HaveOccurred())
err = definitionsum2.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/definition.yaml"))
err = definitionsum2.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/definition.yaml"))
Expect(err).ToNot(HaveOccurred())
Expect(len(buildsum)).To(Equal(1))

View File

@@ -0,0 +1,9 @@
package compression
type Implementation string
const (
None Implementation = "none" // e.g. tar for standard packages
GZip Implementation = "gzip"
Zstandard Implementation = "zstd"
)
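A short sketch of how the compression type is meant to be selected on an artifact, mirroring the field-based style used in the artifact tests above (paths below are hypothetical):
// Sketch (hypothetical paths): package a staged rootfs as a zstd-compressed tarball.
a := artifact.NewPackageArtifact("/tmp/foo-1.0.package.tar")
a.CompressionType = compression.Zstandard
if err := a.Compress("/tmp/foo-rootfs", 2); err != nil {
	// handle error
}
// After Compress, a.Path points at /tmp/foo-1.0.package.tar.zst.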

View File

@@ -0,0 +1,199 @@
// Copyright © 2019-2021 Ettore Di Giacinto <mudler@sabayon.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package options
import (
"runtime"
"github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/solver"
)
type Compiler struct {
PushImageRepository string
PullImageRepository []string
PullFirst, KeepImg, Push bool
Concurrency int
CompressionType compression.Implementation
Wait bool
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
BuildValuesFile []string
BuildValues []map[string]interface{}
PackageTargetOnly bool
Rebuild bool
BackendArgs []string
BackendType string
}
func NewDefaultCompiler() *Compiler {
return &Compiler{
PushImageRepository: "luet/cache",
PullFirst: false,
Push: false,
CompressionType: compression.None,
KeepImg: true,
Concurrency: runtime.NumCPU(),
OnlyDeps: false,
NoDeps: false,
SolverOptions: config.LuetSolverOptions{Options: solver.Options{Concurrency: 1, Type: solver.SingleCoreSimple}},
}
}
type Option func(cfg *Compiler) error
func (cfg *Compiler) Apply(opts ...Option) error {
for _, opt := range opts {
if opt == nil {
continue
}
if err := opt(cfg); err != nil {
return err
}
}
return nil
}
func WithOptions(opt *Compiler) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg = opt
return nil
}
}
func WithBackendType(r string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.BackendType = r
return nil
}
}
func WithBuildValues(r []string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.BuildValuesFile = r
return nil
}
}
func WithPullRepositories(r []string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.PullImageRepository = r
return nil
}
}
func WithPushRepository(r string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
if len(cfg.PullImageRepository) == 0 {
cfg.PullImageRepository = []string{cfg.PushImageRepository}
}
cfg.PushImageRepository = r
return nil
}
}
func BackendArgs(r []string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.BackendArgs = r
return nil
}
}
func PullFirst(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.PullFirst = b
return nil
}
}
func KeepImg(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.KeepImg = b
return nil
}
}
func Rebuild(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.Rebuild = b
return nil
}
}
func PushImages(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.Push = b
return nil
}
}
func Wait(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.Wait = b
return nil
}
}
func OnlyDeps(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.OnlyDeps = b
return nil
}
}
func OnlyTarget(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.PackageTargetOnly = b
return nil
}
}
func NoDeps(b bool) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.NoDeps = b
return nil
}
}
func Concurrency(i int) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
if i == 0 {
i = runtime.NumCPU()
}
cfg.Concurrency = i
return nil
}
}
func WithCompressionType(t compression.Implementation) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.CompressionType = t
return nil
}
}
func WithSolverOptions(c config.LuetSolverOptions) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.SolverOptions = c
return nil
}
}
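These options compose in the usual functional-options way; a brief sketch of building a configuration (the push repository below is a hypothetical name), which NewLuetCompiler presumably applies in the same manner when options are passed to it, as in the tests above:
// Sketch: start from defaults and apply a few options.
cfg := options.NewDefaultCompiler()
if err := cfg.Apply(
	options.Concurrency(4),
	options.WithCompressionType(compression.Zstandard),
	options.WithPushRepository("example.org/luet/cache"), // hypothetical repository
); err != nil {
	// handle error
}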

View File

@@ -13,12 +13,16 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler
package compilerspec
import (
"fmt"
"io/ioutil"
"path/filepath"
"github.com/mitchellh/hashstructure/v2"
options "github.com/mudler/luet/pkg/compiler/types/options"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/otiai10/copy"
@@ -27,7 +31,7 @@ import (
type LuetCompilationspecs []LuetCompilationSpec
func NewLuetCompilationspecs(s ...CompilationSpec) CompilationSpecs {
func NewLuetCompilationspecs(s ...*LuetCompilationSpec) *LuetCompilationspecs {
all := LuetCompilationspecs{}
for _, spec := range s {
@@ -40,7 +44,7 @@ func (specs LuetCompilationspecs) Len() int {
return len(specs)
}
func (specs *LuetCompilationspecs) Remove(s CompilationSpecs) CompilationSpecs {
func (specs *LuetCompilationspecs) Remove(s *LuetCompilationspecs) *LuetCompilationspecs {
newSpecs := LuetCompilationspecs{}
SPECS:
for _, spec := range specs.All() {
@@ -54,16 +58,12 @@ SPECS:
return &newSpecs
}
func (specs *LuetCompilationspecs) Add(s CompilationSpec) {
c, ok := s.(*LuetCompilationSpec)
if !ok {
panic("LuetCompilationspecs supports only []LuetCompilationSpec")
}
*specs = append(*specs, *c)
func (specs *LuetCompilationspecs) Add(s *LuetCompilationSpec) {
*specs = append(*specs, *s)
}
func (specs *LuetCompilationspecs) All() []CompilationSpec {
var cspecs []CompilationSpec
func (specs *LuetCompilationspecs) All() []*LuetCompilationSpec {
var cspecs []*LuetCompilationSpec
for i, _ := range *specs {
f := (*specs)[i]
cspecs = append(cspecs, &f)
@@ -72,7 +72,7 @@ func (specs *LuetCompilationspecs) All() []CompilationSpec {
return cspecs
}
func (specs *LuetCompilationspecs) Unique() CompilationSpecs {
func (specs *LuetCompilationspecs) Unique() *LuetCompilationspecs {
newSpecs := LuetCompilationspecs{}
seen := map[string]bool{}
@@ -87,6 +87,13 @@ func (specs *LuetCompilationspecs) Unique() CompilationSpecs {
return &newSpecs
}
type CopyField struct {
Package *pkg.DefaultPackage `json:"package"`
Image string `json:"image"`
Source string `json:"source"`
Destination string `json:"destination"`
}
type LuetCompilationSpec struct {
Steps []string `json:"steps"` // Are run inside a container and the result layer diff is saved
Env []string `json:"env"`
@@ -103,9 +110,48 @@ type LuetCompilationSpec struct {
Unpack bool `json:"unpack"`
Includes []string `json:"includes"`
Excludes []string `json:"excludes"`
BuildOptions *options.Compiler `json:"build_options"`
Copy []CopyField `json:"copy"`
Join pkg.DefaultPackages `json:"join"`
}
func NewLuetCompilationSpec(b []byte, p pkg.Package) (CompilationSpec, error) {
// Signature is a portion of the spec that yields a signature for the hash
type Signature struct {
Image string
Steps []string
PackageDir string
Prelude []string
Seed string
Env []string
Retrieve []string
Unpack bool
Includes []string
Excludes []string
Copy []CopyField
Join pkg.DefaultPackages
}
func (cs *LuetCompilationSpec) signature() Signature {
return Signature{
Image: cs.Image,
Steps: cs.Steps,
PackageDir: cs.PackageDir,
Prelude: cs.Prelude,
Seed: cs.Seed,
Env: cs.Env,
Retrieve: cs.Retrieve,
Unpack: cs.Unpack,
Includes: cs.Includes,
Excludes: cs.Excludes,
Copy: cs.Copy,
Join: cs.Join,
}
}
func NewLuetCompilationSpec(b []byte, p pkg.Package) (*LuetCompilationSpec, error) {
var spec LuetCompilationSpec
err := yaml.Unmarshal(b, &spec)
if err != nil {
@@ -118,6 +164,10 @@ func (cs *LuetCompilationSpec) GetSourceAssertion() solver.PackagesAssertions {
return cs.SourceAssertion
}
func (cs *LuetCompilationSpec) SetBuildOptions(b options.Compiler) {
cs.BuildOptions = &b
}
func (cs *LuetCompilationSpec) SetSourceAssertion(as solver.PackagesAssertions) {
cs.SourceAssertion = as
}
@@ -209,7 +259,14 @@ func (cs *LuetCompilationSpec) UnpackedPackage() bool {
// a compilation spec has an image source when it depends on other packages or has a source image
// explicitly supplied
func (cs *LuetCompilationSpec) HasImageSource() bool {
return (cs.Package != nil && len(cs.GetPackage().GetRequires()) != 0) || cs.GetImage() != ""
return (cs.Package != nil && len(cs.GetPackage().GetRequires()) != 0) || cs.GetImage() != "" || len(cs.Join) != 0
}
func (cs *LuetCompilationSpec) Hash() (string, error) {
// build a signature: only the fields that are relevant for build purposes should be part of the hash
signature := cs.signature()
h, err := hashstructure.Hash(signature, hashstructure.FormatV2, nil)
return fmt.Sprint(h), err
}
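
Hash reduces the spec to the Signature subset above before hashing, so fields outside it — BuildOptions or the source assertion, for instance — cannot change the result. A minimal sketch of the underlying hashstructure call, using an illustrative struct and values:

package main

import (
    "fmt"

    "github.com/mitchellh/hashstructure/v2"
)

// sig mimics a build signature carrying only build-relevant fields.
type sig struct {
    Image string
    Steps []string
}

func main() {
    a := sig{Image: "foo", Steps: []string{"echo foo > /test"}}
    b := sig{Image: "foo", Steps: []string{"echo foo > /test"}}

    ha, _ := hashstructure.Hash(a, hashstructure.FormatV2, nil)
    hb, _ := hashstructure.Hash(b, hashstructure.FormatV2, nil)

    fmt.Println(ha == hb) // true: identical signatures produce identical hashes
}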
func (cs *LuetCompilationSpec) CopyRetrieves(dest string) error {
@@ -252,6 +309,13 @@ ADD ` + s + ` /luetbuild/`
}
}
for _, c := range cs.Copy {
if c.Image != "" {
copyLine := fmt.Sprintf("\nCOPY --from=%s %s %s\n", c.Image, c.Source, c.Destination)
spec = spec + copyLine
}
}
for _, s := range cs.Env {
spec = spec + `
ENV ` + s
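
Only copy entries that name an Image produce output in this loop: each one is rendered as a multi-stage COPY --from instruction appended to the generated Dockerfile. A small sketch of that string construction, with a made-up stage name and paths:

package main

import "fmt"

// copyField mirrors the CopyField shape defined in the spec package.
type copyField struct {
    Image       string
    Source      string
    Destination string
}

func main() {
    spec := "FROM alpine" // illustrative Dockerfile prefix
    copies := []copyField{{Image: "builder", Source: "/usr/bin/tool", Destination: "/usr/bin/tool"}}
    for _, c := range copies {
        if c.Image != "" {
            spec += fmt.Sprintf("\nCOPY --from=%s %s %s\n", c.Image, c.Source, c.Destination)
        }
    }
    fmt.Println(spec)
}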

View File

@@ -0,0 +1,32 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compilerspec_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestSpec(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Spec Suite")
}

View File

@@ -13,17 +13,19 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler_test
package compilerspec_test
import (
"io/ioutil"
"os"
"path/filepath"
options "github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -32,63 +34,116 @@ import (
var _ = Describe("Spec", func() {
Context("Luet specs", func() {
It("Allows normal operations", func() {
testSpec := &LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Category: "a", Version: "0"}}
testSpec2 := &LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "bar", Category: "a", Version: "0"}}
testSpec3 := &LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "baz", Category: "a", Version: "0"}}
testSpec4 := &LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Category: "a", Version: "0"}}
testSpec := &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Category: "a", Version: "0"}}
testSpec2 := &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "bar", Category: "a", Version: "0"}}
testSpec3 := &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "baz", Category: "a", Version: "0"}}
testSpec4 := &compilerspec.LuetCompilationSpec{Package: &pkg.DefaultPackage{Name: "foo", Category: "a", Version: "0"}}
specs := NewLuetCompilationspecs(testSpec, testSpec2)
specs := compilerspec.NewLuetCompilationspecs(testSpec, testSpec2)
Expect(specs.Len()).To(Equal(2))
Expect(specs.All()).To(Equal([]CompilationSpec{testSpec, testSpec2}))
Expect(specs.All()).To(Equal([]*compilerspec.LuetCompilationSpec{testSpec, testSpec2}))
specs.Add(testSpec3)
Expect(specs.All()).To(Equal([]CompilationSpec{testSpec, testSpec2, testSpec3}))
Expect(specs.All()).To(Equal([]*compilerspec.LuetCompilationSpec{testSpec, testSpec2, testSpec3}))
specs.Add(testSpec4)
Expect(specs.All()).To(Equal([]CompilationSpec{testSpec, testSpec2, testSpec3, testSpec4}))
Expect(specs.All()).To(Equal([]*compilerspec.LuetCompilationSpec{testSpec, testSpec2, testSpec3, testSpec4}))
newSpec := specs.Unique()
Expect(newSpec.All()).To(Equal([]CompilationSpec{testSpec, testSpec2, testSpec3}))
Expect(newSpec.All()).To(Equal([]*compilerspec.LuetCompilationSpec{testSpec, testSpec2, testSpec3}))
newSpec2 := specs.Remove(NewLuetCompilationspecs(testSpec, testSpec2))
Expect(newSpec2.All()).To(Equal([]CompilationSpec{testSpec3}))
newSpec2 := specs.Remove(compilerspec.NewLuetCompilationspecs(testSpec, testSpec2))
Expect(newSpec2.All()).To(Equal([]*compilerspec.LuetCompilationSpec{testSpec3}))
})
Context("virtuals", func() {
When("is empty", func() {
It("is virtual", func() {
spec := &LuetCompilationSpec{}
spec := &compilerspec.LuetCompilationSpec{}
Expect(spec.IsVirtual()).To(BeTrue())
})
})
When("has defined steps", func() {
It("is not a virtual", func() {
spec := &LuetCompilationSpec{Steps: []string{"foo"}}
spec := &compilerspec.LuetCompilationSpec{Steps: []string{"foo"}}
Expect(spec.IsVirtual()).To(BeFalse())
})
})
When("has defined image", func() {
It("is not a virtual", func() {
spec := &LuetCompilationSpec{Image: "foo"}
spec := &compilerspec.LuetCompilationSpec{Image: "foo"}
Expect(spec.IsVirtual()).To(BeFalse())
})
})
})
})
Context("Image hashing", func() {
It("is stable", func() {
spec1 := &compilerspec.LuetCompilationSpec{
Image: "foo",
BuildOptions: &options.Compiler{BuildValues: []map[string]interface{}{{"foo": "bar", "baz": true}}},
Package: &pkg.DefaultPackage{
Name: "foo",
Category: "Bar",
Labels: map[string]string{
"foo": "bar",
"baz": "foo",
},
},
}
spec2 := &compilerspec.LuetCompilationSpec{
Image: "foo",
BuildOptions: &options.Compiler{BuildValues: []map[string]interface{}{{"foo": "bar", "baz": true}}},
Package: &pkg.DefaultPackage{
Name: "foo",
Category: "Bar",
Labels: map[string]string{
"foo": "bar",
"baz": "foo",
},
},
}
spec3 := &compilerspec.LuetCompilationSpec{
Image: "foo",
Steps: []string{"foo"},
Package: &pkg.DefaultPackage{
Name: "foo",
Category: "Bar",
Labels: map[string]string{
"foo": "bar",
"baz": "foo",
},
},
}
hash, err := spec1.Hash()
Expect(err).ToNot(HaveOccurred())
hash2, err := spec2.Hash()
Expect(err).ToNot(HaveOccurred())
hash3, err := spec3.Hash()
Expect(err).ToNot(HaveOccurred())
Expect(hash).To(Equal(hash2))
hashagain, err := spec2.Hash()
Expect(err).ToNot(HaveOccurred())
Expect(hash).ToNot(Equal(hash3))
Expect(hash).To(Equal(hashagain))
})
})
Context("Simple package build definition", func() {
It("Loads it correctly", func() {
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/buildtree")
err := generalRecipe.Load("../../../../tests/fixtures/buildtree")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase())
lspec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
Expect(err).ToNot(HaveOccurred())
lspec, ok := spec.(*LuetCompilationSpec)
Expect(ok).To(BeTrue())
Expect(lspec.Steps).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
Expect(lspec.Image).To(Equal("luet/base"))
Expect(lspec.Seed).To(Equal("alpine"))
@@ -99,7 +154,7 @@ var _ = Describe("Spec", func() {
lspec.Env = []string{"test=1"}
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -112,7 +167,7 @@ ENV test=1`))
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
@@ -132,18 +187,15 @@ RUN echo bar > /test2`))
It("Renders retrieve and env fields", func() {
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/retrieve")
err := generalRecipe.Load("../../../../tests/fixtures/retrieve")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase(), NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.0"})
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase())
lspec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
lspec, ok := spec.(*LuetCompilationSpec)
Expect(ok).To(BeTrue())
Expect(lspec.Steps).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
Expect(lspec.Image).To(Equal("luet/base"))
Expect(lspec.Seed).To(Equal("alpine"))
@@ -153,7 +205,7 @@ RUN echo bar > /test2`))
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -170,7 +222,7 @@ ENV test=1`))
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -185,7 +237,7 @@ ENV test=1`))
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`

View File

@@ -27,7 +27,6 @@ import (
"strings"
"time"
"github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
solver "github.com/mudler/luet/pkg/solver"
@@ -65,6 +64,7 @@ type LuetGeneralConfig struct {
}
type LuetSolverOptions struct {
solver.Options
Type string `mapstructure:"type"`
LearnRate float32 `mapstructure:"rate"`
Discount float32 `mapstructure:"discount"`
@@ -159,8 +159,9 @@ type LuetRepository struct {
Enable bool `json:"enable" yaml:"enable" mapstructure:"enable"`
Cached bool `json:"cached,omitempty" yaml:"cached,omitempty" mapstructure:"cached,omitempty"`
Authentication map[string]string `json:"auth,omitempty" yaml:"auth,omitempty" mapstructure:"auth,omitempty"`
TreePath string `json:"tree_path,omitempty" yaml:"tree_path,omitempty" mapstructure:"tree_path"`
MetaPath string `json:"meta_path,omitempty" yaml:"meta_path,omitempty" mapstructure:"meta_path"`
TreePath string `json:"treepath,omitempty" yaml:"treepath,omitempty" mapstructure:"treepath"`
MetaPath string `json:"metapath,omitempty" yaml:"metapath,omitempty" mapstructure:"metapath"`
Verify bool `json:"verify,omitempty" yaml:"verify,omitempty" mapstructure:"verify"`
// Serialized options not used in repository configuration
@@ -404,8 +405,13 @@ system:
}
func (c *LuetSystemConfig) InitTmpDir() error {
if !helpers.Exists(c.TmpDirBase) {
return os.MkdirAll(c.TmpDirBase, os.ModePerm)
if _, err := os.Stat(c.TmpDirBase); err != nil {
if os.IsNotExist(err) {
err = os.MkdirAll(c.TmpDirBase, os.ModePerm)
if err != nil {
return err
}
}
}
return nil
}
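
InitTmpDir now checks for the directory directly with os.Stat instead of relying on the removed helpers.Exists call. The same pattern in isolation; the path is hypothetical:

package main

import "os"

// ensureDir creates dir only when it does not exist yet, mirroring the
// os.Stat / os.IsNotExist check used by InitTmpDir above.
func ensureDir(dir string) error {
    if _, err := os.Stat(dir); err != nil {
        if os.IsNotExist(err) {
            return os.MkdirAll(dir, os.ModePerm)
        }
    }
    return nil
}

func main() {
    if err := ensureDir("/tmp/tmpluet-example"); err != nil { // illustrative directory
        panic(err)
    }
}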

View File

@@ -22,7 +22,7 @@ import (
"strings"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -38,7 +38,7 @@ var _ = Describe("Config", func() {
tmpDir, err := config.LuetCfg.GetSystem().TempDir("test1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpDir, filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpDir)).To(BeTrue())
Expect(fileHelper.Exists(tmpDir)).To(BeTrue())
defer os.RemoveAll(tmpDir)
})
@@ -49,7 +49,7 @@ var _ = Describe("Config", func() {
tmpFile, err := config.LuetCfg.GetSystem().TempFile("testfile1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpFile.Name(), filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpFile.Name())).To(BeTrue())
Expect(fileHelper.Exists(tmpFile.Name())).To(BeTrue())
defer os.Remove(tmpFile.Name())
})

View File

@@ -90,14 +90,11 @@ func UntarProtect(src, dst string, sameOwner bool, protectedFiles []string, modi
}
if sameOwner {
// PRE: we have root privileges.
// We have root permissions, so we can extract while keeping the same permissions.
replacerArchive := archive.ReplaceFileTarWrapper(in, mods)
opts := &archive.TarOptions{
// NOTE: the NoLchown boolean is used for chmod of the symlink.
// It probably needs to always be set to true.
NoLchown: true,
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
@@ -201,12 +198,8 @@ func Untar(src, dest string, sameOwner bool) error {
defer in.Close()
if sameOwner {
// PRE: we have root privileges.
opts := &archive.TarOptions{
// NOTE: the NoLchown boolean is used for chmod of the symlink.
// It probably needs to always be set to true.
NoLchown: true,
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
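
Both extraction paths build a docker/pkg/archive TarOptions and, when running with root privileges, now leave NoLchown disabled so the ownership recorded in the archive is restored. A minimal sketch of the same Untar call on a plain tar stream; the archive path and destination are illustrative:

package main

import (
    "os"

    "github.com/docker/docker/pkg/archive"
)

func main() {
    in, err := os.Open("package.tar") // illustrative archive
    if err != nil {
        panic(err)
    }
    defer in.Close()

    opts := &archive.TarOptions{
        NoLchown:        false,            // running as root: restore ownership from the tar
        ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
        ContinueOnError: true,
    }
    if err := archive.Untar(in, "/tmp/rootfs-example", opts); err != nil { // illustrative destination
        panic(err)
    }
}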

View File

@@ -25,6 +25,8 @@ import (
"os"
"path/filepath"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/docker/docker/pkg/archive"
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
@@ -128,7 +130,7 @@ var _ = Describe("Helpers Archive", func() {
err = archive.Untar(replacerArchive, targetDir, opts)
Expect(err).ToNot(HaveOccurred())
Expect(Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
Expect(fileHelper.Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
})
})
})

View File

@@ -0,0 +1,130 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package docker
import (
"context"
"encoding/hex"
"os"
"strings"
"github.com/mudler/luet/pkg/helpers/imgworker"
"github.com/docker/cli/cli/trust"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/registry"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
"github.com/theupdateframework/notary/tuf/data"
)
// See also https://github.com/docker/cli/blob/88c6089300a82d3373892adf6845a4fed1a4ba8d/cli/command/image/trust.go#L171
func verifyImage(image string, authConfig *types.AuthConfig) (string, error) {
ref, err := reference.ParseAnyReference(image)
if err != nil {
return "", errors.Wrapf(err, "invalid reference %s", image)
}
// only check if image ref doesn't contain hashes
if _, ok := ref.(reference.Digested); !ok {
namedRef, ok := ref.(reference.Named)
if !ok {
return "", errors.New("failed to resolve image digest using content trust: reference is not named")
}
namedRef = reference.TagNameOnly(namedRef)
taggedRef, ok := namedRef.(reference.NamedTagged)
if !ok {
return "", errors.New("failed to resolve image digest using content trust: reference is not tagged")
}
resolvedImage, err := trustedResolveDigest(context.Background(), taggedRef, authConfig, "luet")
if err != nil {
return "", errors.Wrap(err, "failed to resolve image digest using content trust")
}
resolvedFamiliar := reference.FamiliarString(resolvedImage)
return resolvedFamiliar, nil
}
return "", nil
}
func trustedResolveDigest(ctx context.Context, ref reference.NamedTagged, authConfig *types.AuthConfig, useragent string) (reference.Canonical, error) {
repoInfo, err := registry.ParseRepositoryInfo(ref)
if err != nil {
return nil, err
}
notaryRepo, err := trust.GetNotaryRepository(os.Stdin, os.Stdout, useragent, repoInfo, authConfig, "pull")
if err != nil {
return nil, errors.Wrap(err, "error establishing connection to trust repository")
}
t, err := notaryRepo.GetTargetByName(ref.Tag(), trust.ReleasesRole, data.CanonicalTargetsRole)
if err != nil {
return nil, trust.NotaryError(repoInfo.Name.Name(), err)
}
// Only get the tag if it's in the top level targets role or the releases delegation role
// ignore it if it's in any other delegation roles
if t.Role != trust.ReleasesRole && t.Role != data.CanonicalTargetsRole {
return nil, trust.NotaryError(repoInfo.Name.Name(), errors.Errorf("No trust data for %s", reference.FamiliarString(ref)))
}
h, ok := t.Hashes["sha256"]
if !ok {
return nil, errors.New("no valid hash, expecting sha256")
}
dgst := digest.NewDigestFromHex("sha256", hex.EncodeToString(h))
// Allow returning canonical reference with tag
return reference.WithDigest(ref, dgst)
}
// DownloadAndExtractDockerImage is a re-adaptation
// from genuinetools/img https://github.com/genuinetools/img/blob/54d0ca981c1260546d43961a538550eef55c87cf/pull.go
func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthConfig, verify bool) (*imgworker.ListedImage, error) {
if verify {
img, err := verifyImage(image, auth)
if err != nil {
return nil, errors.Wrapf(err, "failed verifying image")
}
image = img
}
defer os.RemoveAll(temp)
c, err := imgworker.New(temp, auth)
if err != nil {
return nil, errors.Wrapf(err, "failed creating client")
}
defer c.Close()
listedImage, err := c.Pull(image)
if err != nil {
return nil, errors.Wrapf(err, "failed listing images")
}
os.RemoveAll(dest)
err = c.Unpack(image, dest)
return listedImage, err
}
func StripInvalidStringsFromImage(s string) string {
return strings.ReplaceAll(s, "+", "-")
}
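
A usage sketch for the new helper: it optionally verifies the tag through content trust, pulls through the imgworker client and unpacks the rootfs into dest, removing the temporary content store itself. Unpacking may still require root privileges, as noted elsewhere in this changeset. The image name, directories and empty credentials here are illustrative.

package main

import (
    "fmt"
    "io/ioutil"

    "github.com/docker/docker/api/types"
    "github.com/mudler/luet/pkg/helpers/docker"
)

func main() {
    temp, err := ioutil.TempDir("", "contentstore")
    if err != nil {
        panic(err)
    }

    // Pull alpine:latest anonymously, without content-trust verification,
    // and unpack its rootfs into ./rootfs.
    img, err := docker.DownloadAndExtractDockerImage(temp, "alpine:latest", "./rootfs", &types.AuthConfig{}, false)
    if err != nil {
        panic(err)
    }
    fmt.Println("pulled", img.Target.Digest, "content size", img.ContentSize)
}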

View File

@@ -0,0 +1,30 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@sabayon.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package docker_test
import (
"github.com/mudler/luet/pkg/helpers/docker"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("StripInvalidStringsFromImage", func() {
Context("Image names", func() {
It("strips invalid chars", func() {
Expect(docker.StripInvalidStringsFromImage("foo+bar")).To(Equal("foo-bar"))
})
})
})

View File

@@ -13,12 +13,13 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
package file
import (
"fmt"
"io"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"sort"
@@ -26,10 +27,42 @@ import (
"syscall"
"time"
"github.com/docker/docker/pkg/system"
"github.com/google/renameio"
copy "github.com/otiai10/copy"
"github.com/pkg/errors"
)
var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
func RandStringRunes(n int) string {
b := make([]rune, n)
for i := range b {
b[i] = letterRunes[rand.Intn(len(letterRunes))]
}
return string(b)
}
func Move(src, dst string) error {
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
t, err := renameio.TempFile("", dst)
if err != nil {
return err
}
defer t.Cleanup()
_, err = io.Copy(t, f)
if err != nil {
return err
}
return t.CloseAtomicallyReplace()
}
func OrderFiles(target string, files []string) ([]string, []string) {
var newFiles []string
@@ -135,9 +168,27 @@ func Read(file string) (string, error) {
return string(dat), nil
}
func EnsureDirPerm(src, dst string) {
if info, err := os.Lstat(filepath.Dir(src)); err == nil {
if _, err := os.Lstat(filepath.Dir(dst)); os.IsNotExist(err) {
err := os.MkdirAll(filepath.Dir(dst), info.Mode().Perm())
if err != nil {
fmt.Println("warning: failed creating", filepath.Dir(dst), err.Error())
}
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
if err := os.Lchown(filepath.Dir(dst), int(stat.Uid), int(stat.Gid)); err != nil {
fmt.Println("warning: failed chowning", filepath.Dir(dst), err.Error())
}
}
}
} else {
EnsureDir(dst)
}
}
func EnsureDir(fileName string) error {
dirName := filepath.Dir(fileName)
if _, serr := os.Stat(dirName); serr != nil {
if _, serr := os.Stat(dirName); os.IsNotExist(serr) {
merr := os.MkdirAll(dirName, os.ModePerm) // FIXME: It should preserve permissions from src to dst instead
if merr != nil {
return merr
@@ -146,12 +197,39 @@ func EnsureDir(fileName string) error {
return nil
}
// CopyFile copies the contents of the file named src to the file named
func CopyFile(src, dst string) (err error) {
return copy.Copy(src, dst, copy.Options{
Sync: true,
OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
}
func copyXattr(srcPath, dstPath, attr string) error {
data, err := system.Lgetxattr(srcPath, attr)
if err != nil {
return err
}
if data != nil {
if err := system.Lsetxattr(dstPath, attr, data, 0); err != nil {
return err
}
}
return nil
}
func doCopyXattrs(srcPath, dstPath string) error {
if err := copyXattr(srcPath, dstPath, "security.capability"); err != nil {
return err
}
return copyXattr(srcPath, dstPath, "trusted.overlay.opaque")
}
// DeepCopyFile copies the contents of the file named src to the file named
// by dst. The file will be created if it does not already exist. If the
// destination file exists, all its contents will be replaced by the contents
// of the source file. The file mode will be copied from the source and
// the copied data is synced/flushed to stable storage.
func CopyFile(src, dst string) (err error) {
func DeepCopyFile(src, dst string) (err error) {
// Workaround for https://github.com/otiai10/copy/issues/47
fi, err := os.Lstat(src)
if err != nil {
@@ -161,7 +239,7 @@ func CopyFile(src, dst string) (err error) {
fm := fi.Mode()
switch {
case fm&os.ModeNamedPipe != 0:
EnsureDir(dst)
EnsureDirPerm(src, dst)
if err := syscall.Mkfifo(dst, uint32(fi.Mode())); err != nil {
return errors.Wrap(err, "failed creating pipe")
}
@@ -173,6 +251,9 @@ func CopyFile(src, dst string) (err error) {
return nil
}
//filepath.Dir(src)
EnsureDirPerm(src, dst)
err = copy.Copy(src, dst, copy.Options{
Sync: true,
OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
@@ -180,11 +261,12 @@ func CopyFile(src, dst string) (err error) {
return err
}
if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
if err := os.Chown(dst, int(stat.Uid), int(stat.Gid)); err != nil {
fmt.Println("failed chowning", dst, err.Error())
if err := os.Lchown(dst, int(stat.Uid), int(stat.Gid)); err != nil {
fmt.Println("warning: failed chowning", dst, err.Error())
}
}
return err
return doCopyXattrs(src, dst)
}
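
With this split, CopyFile is a thin shallow copy built on otiai10/copy, while DeepCopyFile additionally recreates the destination's parent directory with the source's permissions, restores ownership via Lchown and copies the security.capability and trusted.overlay.opaque xattrs. A usage sketch, assuming the package's new github.com/mudler/luet/pkg/helpers/file import path used elsewhere in this changeset; the paths are illustrative and the chown/xattr steps generally need root:

package main

import (
    fileHelper "github.com/mudler/luet/pkg/helpers/file"
)

func main() {
    // Shallow copy: file contents and symlinks only.
    if err := fileHelper.CopyFile("/tmp/src.txt", "/tmp/dst.txt"); err != nil { // illustrative paths
        panic(err)
    }

    // Deep copy: also ownership, parent-directory permissions and capability/overlay xattrs.
    if err := fileHelper.DeepCopyFile("/tmp/src.txt", "/tmp/dst-deep.txt"); err != nil {
        panic(err)
    }
}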
func IsDirectory(path string) (bool, error) {

View File

@@ -13,14 +13,15 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers_test
package file_test
import (
"io/ioutil"
"os"
"path/filepath"
. "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
@@ -28,8 +29,8 @@ import (
var _ = Describe("Helpers", func() {
Context("Exists", func() {
It("Detect existing and not-existing files", func() {
Expect(Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml")).To(BeTrue())
Expect(Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml.not.exists")).To(BeFalse())
Expect(fileHelper.Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml")).To(BeTrue())
Expect(fileHelper.Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml.not.exists")).To(BeFalse())
})
})
@@ -38,15 +39,15 @@ var _ = Describe("Helpers", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
Expect(DirectoryIsEmpty(testDir)).To(BeTrue())
Expect(fileHelper.DirectoryIsEmpty(testDir)).To(BeTrue())
})
It("Detects directory with files", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
err = Touch(filepath.Join(testDir, "foo"))
err = fileHelper.Touch(filepath.Join(testDir, "foo"))
Expect(err).ToNot(HaveOccurred())
Expect(DirectoryIsEmpty(testDir)).To(BeFalse())
Expect(fileHelper.DirectoryIsEmpty(testDir)).To(BeFalse())
})
})
@@ -72,7 +73,7 @@ var _ = Describe("Helpers", func() {
err = ioutil.WriteFile(filepath.Join(testDir, "baz2", "foo"), []byte("test\n"), 0644)
Expect(err).ToNot(HaveOccurred())
ordered, notExisting := OrderFiles(testDir, []string{"bar", "baz", "bar/foo", "baz2", "foo", "baz2/foo", "notexisting"})
ordered, notExisting := fileHelper.OrderFiles(testDir, []string{"bar", "baz", "bar/foo", "baz2", "foo", "baz2/foo", "notexisting"})
Expect(ordered).To(Equal([]string{"baz", "bar/foo", "foo", "baz2/foo", "bar", "baz2"}))
Expect(notExisting).To(Equal([]string{"notexisting"}))
@@ -96,7 +97,7 @@ var _ = Describe("Helpers", func() {
err = os.MkdirAll(filepath.Join(testDir, "foo", "baz", "fa"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
ordered, _ := OrderFiles(testDir, []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"})
ordered, _ := fileHelper.OrderFiles(testDir, []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"})
Expect(ordered).To(Equal([]string{"foo/baz/fa", "foo/bar", "foo/baz", "foo", "bar"}))
})
})

View File

@@ -3,6 +3,9 @@ package helpers
import (
"io/ioutil"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/imdario/mergo"
"github.com/pkg/errors"
"gopkg.in/yaml.v2"
"helm.sh/helm/v3/pkg/chart"
@@ -37,13 +40,44 @@ func RenderHelm(template string, values, d map[string]interface{}) (string, erro
type templatedata map[string]interface{}
func RenderFiles(toTemplate, valuesFile string, defaultFile string) (string, error) {
// UnMarshalValues unmarshals values files and joins them into a single templatedata.
// The merge happens from right to left, so the rightmost values file overrides the content of the ones before it.
func UnMarshalValues(values []string) (templatedata, error) {
dst := templatedata{}
if len(values) > 0 {
for _, bv := range reverse(values) {
current := templatedata{}
defBuild, err := ioutil.ReadFile(bv)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+bv)
}
err = yaml.Unmarshal(defBuild, &current)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+bv)
}
if err := mergo.Merge(&dst, current); err != nil {
return nil, errors.Wrap(err, "merging values file "+bv)
}
}
}
return dst, nil
}
func reverse(s []string) []string {
for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
s[i], s[j] = s[j], s[i]
}
return s
}
func RenderFiles(toTemplate, valuesFile string, defaultFile ...string) (string, error) {
raw, err := ioutil.ReadFile(toTemplate)
if err != nil {
return "", errors.Wrap(err, "reading file "+toTemplate)
}
if !Exists(valuesFile) {
if !fileHelper.Exists(valuesFile) {
return "", errors.Wrap(err, "file not existing "+valuesFile)
}
val, err := ioutil.ReadFile(valuesFile)
@@ -52,20 +86,14 @@ func RenderFiles(toTemplate, valuesFile string, defaultFile string) (string, err
}
var values templatedata
d := templatedata{}
if len(defaultFile) > 0 {
def, err := ioutil.ReadFile(defaultFile)
if err != nil {
return "", errors.Wrap(err, "reading file "+valuesFile)
}
if err = yaml.Unmarshal(def, &d); err != nil {
return "", errors.Wrap(err, "unmarshalling file "+toTemplate)
}
}
if err = yaml.Unmarshal(val, &values); err != nil {
return "", errors.Wrap(err, "unmarshalling file "+toTemplate)
}
return RenderHelm(string(raw), values, d)
dst, err := UnMarshalValues(defaultFile)
if err != nil {
return "", errors.Wrap(err, "unmarshalling values")
}
return RenderHelm(string(raw), values, dst)
}
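
RenderFiles now accepts any number of default-values files; UnMarshalValues walks them right to left with mergo, so on conflicting keys the rightmost file wins. A small sketch of calling it, assuming the helpers package keeps its github.com/mudler/luet/pkg/helpers import path; the file contents are illustrative:

package main

import (
    "fmt"
    "io/ioutil"

    "github.com/mudler/luet/pkg/helpers"
)

func write(path, content string) {
    if err := ioutil.WriteFile(path, []byte(content), 0644); err != nil {
        panic(err)
    }
}

func main() {
    dir, err := ioutil.TempDir("", "render")
    if err != nil {
        panic(err)
    }

    tpl, vals, def1, def2 := dir+"/t.yaml", dir+"/v.yaml", dir+"/d1.yaml", dir+"/d2.yaml"
    write(tpl, `{{.Values.foo}}`)
    write(vals, "bar: 1\n")        // values file without foo
    write(def1, "foo: \"left\"\n")
    write(def2, "foo: \"right\"\n")

    // def2 is the rightmost default, so its foo wins over def1's.
    out, err := helpers.RenderFiles(tpl, vals, def1, def2)
    if err != nil {
        panic(err)
    }
    fmt.Println(out) // right
}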

View File

@@ -114,11 +114,46 @@ foo: "bar"
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, "")
res, err := RenderFiles(toTemplate, values)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("bar"))
})
It("Render files merging defaults", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
toTemplate := filepath.Join(testDir, "totemplate.yaml")
values := filepath.Join(testDir, "values.yaml")
d := filepath.Join(testDir, "default.yaml")
d2 := filepath.Join(testDir, "default2.yaml")
writeFile(toTemplate, `{{.Values.foo}}{{.Values.bar}}{{.Values.b}}`)
writeFile(values, `
foo: "bar"
b: "f"
`)
writeFile(d, `
foo: "baz"
`)
writeFile(d2, `
foo: "do"
bar: "nei"
`)
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, d2, d)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("bazneif"))
res, err = RenderFiles(toTemplate, values, d, d2)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("doneif"))
})
It("doesn't interpolate if no one provides the values", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())

View File

@@ -0,0 +1,36 @@
package imgworker
import (
"context"
"github.com/docker/docker/api/types"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/auth"
"google.golang.org/grpc"
)
func NewDockerAuthProvider(auth *types.AuthConfig) session.Attachable {
return &authProvider{
config: auth,
}
}
type authProvider struct {
config *types.AuthConfig
}
func (ap *authProvider) Register(server *grpc.Server) {
// no-op
}
func (ap *authProvider) Credentials(ctx context.Context, req *auth.CredentialsRequest) (*auth.CredentialsResponse, error) {
res := &auth.CredentialsResponse{}
if ap.config.IdentityToken != "" {
res.Secret = ap.config.IdentityToken
} else {
res.Username = ap.config.Username
res.Secret = ap.config.Password
}
return res, nil
}
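
This provider replaces buildkit's stock Docker-config credential helper with one backed by the client's in-memory AuthConfig; the session code later in this changeset attaches it with s.Allow. A minimal wiring sketch, assuming the package lives at the new github.com/mudler/luet/pkg/helpers/imgworker path; the session name and credentials are made up:

package main

import (
    "context"

    "github.com/docker/docker/api/types"
    "github.com/moby/buildkit/session"

    "github.com/mudler/luet/pkg/helpers/imgworker"
)

func main() {
    s, err := session.NewSession(context.Background(), "luet", "")
    if err != nil {
        panic(err)
    }

    // Serve registry credentials from the in-memory AuthConfig
    // instead of reading ~/.docker/config.json.
    s.Allow(imgworker.NewDockerAuthProvider(&types.AuthConfig{
        Username: "user", // illustrative credentials
        Password: "secret",
    }))
}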

View File

@@ -8,6 +8,7 @@ import (
"path/filepath"
"github.com/containerd/containerd/namespaces"
dockertypes "github.com/docker/docker/api/types"
"github.com/genuinetools/img/types"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/session"
@@ -29,10 +30,11 @@ type Client struct {
sess *session.Session
ctx context.Context
auth *dockertypes.AuthConfig
}
// New returns a new client for communicating with the buildkit controller.
func New(root string) (*Client, error) {
func New(root string, auth *dockertypes.AuthConfig) (*Client, error) {
// Native backend is fine, our images have just one layer. No need to depend on anything
backend := types.NativeBackend
@@ -45,6 +47,7 @@ func New(root string) (*Client, error) {
backend: types.NativeBackend,
root: root,
localDirs: nil,
auth: auth,
}
if err := c.prepare(); err != nil {

View File

@@ -31,6 +31,7 @@ func (c *Client) Pull(image string) (*ListedImage, error) {
if err != nil {
return nil, err
}
// Parse the image name and tag.
named, err := reference.ParseNormalizedNamed(image)
if err != nil {
@@ -114,7 +115,6 @@ func (c *Client) Pull(image string) (*ListedImage, error) {
if _, err := e.Export(ctx, exporter.Source{Ref: ref}); err != nil {
return nil, err
}
// Get the image.
img, err := opt.ImageStore.Get(ctx, image)
if err != nil {

View File

@@ -4,10 +4,8 @@ package imgworker
import (
"context"
"os"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/moby/buildkit/session/filesync"
"github.com/moby/buildkit/session/testutil"
"github.com/pkg/errors"
@@ -31,7 +29,7 @@ func (c *Client) Session(ctx context.Context) (*session.Session, session.Dialer,
if err != nil {
return nil, nil, errors.Wrap(err, "failed to create session manager")
}
sessionName := "img"
sessionName := "luet"
s, err := session.NewSession(ctx, sessionName, "")
if err != nil {
return nil, nil, errors.Wrap(err, "failed to create session")
@@ -41,7 +39,7 @@ func (c *Client) Session(ctx context.Context) (*session.Session, session.Dialer,
syncedDirs = append(syncedDirs, filesync.SyncedDir{Name: name, Dir: d})
}
s.Allow(filesync.NewFSSyncProvider(syncedDirs))
s.Allow(authprovider.NewDockerAuthProvider(os.Stderr))
s.Allow(NewDockerAuthProvider(c.auth))
return s, sessionDialer(s, m), err
}

View File

@@ -5,6 +5,7 @@ package imgworker
import (
"errors"
"fmt"
"github.com/mudler/luet/pkg/bus"
"os"
"github.com/containerd/containerd/content"
@@ -18,6 +19,12 @@ import (
// TODO: this requires root permissions to mount/unmount layers, although it shouldn't be required.
// See how backends are unpacking images without asking for root permissions.
// UnpackEventData is the data structure to pass for the bus events
type UnpackEventData struct {
Image string
Dest string
}
// Unpack exports an image to a rootfs destination directory.
func (c *Client) Unpack(image, dest string) error {
@@ -59,6 +66,8 @@ func (c *Client) Unpack(image, dest string) error {
return fmt.Errorf("getting image manifest failed: %v", err)
}
_,_ = bus.Manager.Publish(bus.EventImagePreUnPack, UnpackEventData{Image: image, Dest: dest})
for _, desc := range manifest.Layers {
logrus.Debugf("Unpacking layer %s", desc.Digest.String())
@@ -71,12 +80,14 @@ func (c *Client) Unpack(image, dest string) error {
// Unpack the tarfile to the rootfs path.
// FROM: https://godoc.org/github.com/moby/moby/pkg/archive#TarOptions
if err := archive.Untar(content.NewReader(layer), dest, &archive.TarOptions{
NoLchown: true,
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
}); err != nil {
return fmt.Errorf("extracting tar for %s to directory %s failed: %v", desc.Digest.String(), dest, err)
}
}
_, _ = bus.Manager.Publish(bus.EventImagePostUnPack, UnpackEventData{Image: image, Dest: dest})
return nil
}

View File

@@ -14,12 +14,21 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
package match
import (
"reflect"
"regexp"
)
func ReverseAny(s interface{}) {
n := reflect.ValueOf(s).Len()
swap := reflect.Swapper(s)
for i, j := 0, n-1; i < j; i, j = i+1, j-1 {
swap(i, j)
}
}
func MapMatchRegex(m *map[string]string, r *regexp.Regexp) bool {
ans := false

View File

@@ -16,58 +16,44 @@
package client
import (
"encoding/json"
"fmt"
"os"
"path"
"path/filepath"
"github.com/docker/docker/api/types"
"github.com/docker/go-units"
"github.com/pkg/errors"
imgworker "github.com/mudler/luet/pkg/installer/client/imgworker"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/helpers/docker"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/mudler/luet/pkg/helpers/imgworker"
. "github.com/mudler/luet/pkg/logger"
)
const (
errImageDownloadMsg = "failed downloading image %s: %s"
)
type DockerClient struct {
RepoData RepoData
auth *types.AuthConfig
verify bool
}
func NewDockerClient(r RepoData) *DockerClient {
return &DockerClient{RepoData: r}
auth := &types.AuthConfig{}
dat, _ := json.Marshal(r.Authentication)
json.Unmarshal(dat, auth)
return &DockerClient{RepoData: r, auth: auth}
}
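
NewDockerClient turns the repository's free-form Authentication map into a types.AuthConfig by round-tripping it through JSON, so map keys such as username and password land on the matching struct fields. The same trick in isolation, with made-up credentials:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/docker/docker/api/types"
)

func main() {
    authentication := map[string]string{
        "username": "user", // illustrative credentials
        "password": "secret",
    }

    auth := &types.AuthConfig{}
    dat, _ := json.Marshal(authentication)
    json.Unmarshal(dat, auth) // the JSON keys line up with AuthConfig's json tags

    fmt.Println(auth.Username, auth.Password)
}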
func downloadAndExtractDockerImage(image, dest string) error {
temp, err := config.LuetCfg.GetSystem().TempDir("contentstore")
if err != nil {
return err
}
defer os.RemoveAll(temp)
Debug("Temporary directory", temp)
c, err := imgworker.New(temp)
if err != nil {
return errors.Wrapf(err, "failed creating client")
}
defer c.Close()
// FROM Slightly adapted from genuinetools/img https://github.com/genuinetools/img/blob/54d0ca981c1260546d43961a538550eef55c87cf/pull.go
Debug("Pulling image", image)
listedImage, err := c.Pull(image)
if err != nil {
return errors.Wrapf(err, "failed listing images")
}
Debug("Pulled:", listedImage.Target.Digest)
Debug("Size:", units.BytesSize(float64(listedImage.ContentSize)))
Debug("Unpacking", image, "to", dest)
os.RemoveAll(dest)
return c.Unpack(image, dest)
}
func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
func (c *DockerClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.PackageArtifact, error) {
//var u *url.URL = nil
var err error
var temp string
@@ -75,11 +61,11 @@ func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Ar
Spinner(22)
defer SpinnerStop()
var resultingArtifact compiler.Artifact
artifactName := path.Base(artifact.GetPath())
var resultingArtifact *artifact.PackageArtifact
artifactName := path.Base(a.Path)
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
Debug("Cache file", cacheFile)
if err := helpers.EnsureDir(cacheFile); err != nil {
if err := fileHelper.EnsureDir(cacheFile); err != nil {
return nil, errors.Wrapf(err, "could not create cache folder %s for %s", config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), cacheFile)
}
ok := false
@@ -92,11 +78,11 @@ func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Ar
// is done in such cases (see repository.go)
// Check if file is already in cache
if helpers.Exists(cacheFile) {
if fileHelper.Exists(cacheFile) {
Debug("Cache hit for artifact", artifactName)
resultingArtifact = artifact
resultingArtifact.SetPath(cacheFile)
resultingArtifact.SetChecksums(compiler.Checksums{})
resultingArtifact = a
resultingArtifact.Path = cacheFile
resultingArtifact.Checksums = artifact.Checksums{}
} else {
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
@@ -107,22 +93,31 @@ func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Ar
for _, uri := range c.RepoData.Urls {
imageName := fmt.Sprintf("%s:%s", uri, artifact.GetCompileSpec().GetPackage().GetFingerPrint())
imageName := fmt.Sprintf("%s:%s", uri, a.CompileSpec.GetPackage().ImageID())
Info("Downloading image", imageName)
// imageName := fmt.Sprintf("%s/%s", uri, artifact.GetCompileSpec().GetPackage().GetPackageImageName())
err = downloadAndExtractDockerImage(imageName, temp)
contentstore, err := config.LuetCfg.GetSystem().TempDir("contentstore")
if err != nil {
Debug("Failed download of image", imageName)
Warning("Cannot create contentstore", err.Error())
continue
}
// imageName := fmt.Sprintf("%s/%s", uri, artifact.GetCompileSpec().GetPackage().GetPackageImageName())
info, err := docker.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
if err != nil {
Warning(fmt.Sprintf(errImageDownloadMsg, imageName, err.Error()))
continue
}
Info(fmt.Sprintf("Pulled: %s", info.Target.Digest))
Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.ContentSize))))
Debug("\nCompressing result ", filepath.Join(temp), "to", cacheFile)
newart := artifact
newart := a
// We discard checksums, as they are checked during pull and unpack
newart.SetChecksums(compiler.Checksums{})
newart.SetPath(cacheFile) // First set to cache file
newart.SetPath(newart.GetUncompressedName()) // Calculate the real path from cacheFile
newart.Checksums = artifact.Checksums{}
newart.Path = cacheFile // First set to cache file
newart.Path = newart.GetUncompressedName() // Calculate the real path from cacheFile
err = newart.Compress(temp, 1)
if err != nil {
Error(fmt.Sprintf("Failed compressing package %s: %s", imageName, err.Error()))
@@ -145,8 +140,8 @@ func (c *DockerClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Ar
func (c *DockerClient) DownloadFile(name string) (string, error) {
var file *os.File = nil
var err error
var temp string
var temp, contentstore string
var info *imgworker.ListedImage
// Files should be in URI/repository:<file>
ok := false
@@ -156,25 +151,31 @@ func (c *DockerClient) DownloadFile(name string) (string, error) {
}
for _, uri := range c.RepoData.Urls {
file, err = config.LuetCfg.GetSystem().TempFile("DockerClient")
if err != nil {
continue
}
Debug("Downloading file", name, "from", uri)
imageName := fmt.Sprintf("%s:%s", uri, name)
//imageName := fmt.Sprintf("%s/%s:%s", uri, "repository", name)
err = downloadAndExtractDockerImage(imageName, temp)
contentstore, err = config.LuetCfg.GetSystem().TempDir("contentstore")
if err != nil {
Debug("Failed download of image", imageName)
Warning("Cannot create contentstore", err.Error())
continue
}
Debug("\nCopying file ", filepath.Join(temp, name), "to", file.Name())
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
imageName := fmt.Sprintf("%s:%s", uri, docker.StripInvalidStringsFromImage(name))
Info("Downloading", imageName)
info, err = docker.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
if err != nil {
Warning(fmt.Sprintf(errImageDownloadMsg, imageName, err.Error()))
continue
}
Info(fmt.Sprintf("Pulled: %s", info.Target.Digest))
Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.ContentSize))))
Debug("\nCopying file ", filepath.Join(temp, name), "to", file.Name())
err = fileHelper.CopyFile(filepath.Join(temp, name), file.Name())
if err != nil {
continue
}

View File

@@ -20,8 +20,10 @@ import (
"os"
"path/filepath"
compiler "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/compiler/types/artifact"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/installer/client"
@@ -30,7 +32,7 @@ import (
)
// This test expects that the repository defined in UNIT_TEST_DOCKER_IMAGE is in zstd format.
// the repository is built by the 01_simple_docker.sh integration test file.
// the repository is built by the 01_simple_docker.sh integration test fileHelper.
// This test also requires root. At the moment, unpacking docker images with 'img' requires root permission to
// mount/unmount layers.
var _ = Describe("Docker client", func() {
@@ -49,14 +51,14 @@ var _ = Describe("Docker client", func() {
It("Downloads single files", func() {
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("Test Repo"))
Expect(fileHelper.Read(f)).To(ContainSubstring("Test Repo"))
os.RemoveAll(f)
})
It("Downloads artifacts", func() {
f, err := c.DownloadArtifact(&compiler.PackageArtifact{
f, err := c.DownloadArtifact(&artifact.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "c",
Category: "test",
@@ -69,9 +71,9 @@ var _ = Describe("Docker client", func() {
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(f.Unpack(tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Read(filepath.Join(tmpdir, "c"))).To(Equal("c\n"))
Expect(helpers.Read(filepath.Join(tmpdir, "cd"))).To(Equal("c\n"))
os.RemoveAll(f.GetPath())
Expect(fileHelper.Read(filepath.Join(tmpdir, "c"))).To(Equal("c\n"))
Expect(fileHelper.Read(filepath.Join(tmpdir, "cd"))).To(Equal("c\n"))
os.RemoveAll(f.Path)
})
})
})

View File

@@ -24,13 +24,12 @@ import (
"path/filepath"
"time"
"github.com/mudler/luet/pkg/compiler/types/artifact"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/cavaliercoder/grab"
"github.com/mudler/luet/pkg/config"
"github.com/schollz/progressbar/v3"
)
@@ -66,18 +65,18 @@ func Round(input float64) float64 {
return math.Floor(input + 0.5)
}
func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
func (c *HttpClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.PackageArtifact, error) {
var u *url.URL = nil
var err error
var req *grab.Request
var temp string
artifactName := path.Base(artifact.GetPath())
artifactName := path.Base(a.Path)
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
ok := false
// Check if file is already in cache
if helpers.Exists(cacheFile) {
if fileHelper.Exists(cacheFile) {
Debug("Use artifact", artifactName, "from cache.")
} else {
@@ -156,7 +155,7 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
Debug("\nCopying file ", filepath.Join(temp, artifactName), "to", cacheFile)
err = helpers.CopyFile(filepath.Join(temp, artifactName), cacheFile)
err = fileHelper.CopyFile(filepath.Join(temp, artifactName), cacheFile)
bar.Finish()
ok = true
@@ -168,8 +167,8 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
}
}
newart := artifact
newart.SetPath(cacheFile)
newart := a
newart.Path = cacheFile
return newart, nil
}
@@ -218,7 +217,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
err = fileHelper.CopyFile(filepath.Join(temp, name), file.Name())
if err != nil {
continue
}

View File

@@ -22,9 +22,8 @@ import (
"os"
"path/filepath"
compiler "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/compiler/types/artifact"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/installer/client"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -47,7 +46,7 @@ var _ = Describe("Http client", func() {
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
path, err := c.DownloadFile("test.txt")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(path)).To(Equal("test"))
Expect(fileHelper.Read(path)).To(Equal("test"))
os.RemoveAll(path)
})
@@ -63,10 +62,10 @@ var _ = Describe("Http client", func() {
Expect(err).ToNot(HaveOccurred())
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
path, err := c.DownloadArtifact(&compiler.PackageArtifact{Path: "test.txt"})
path, err := c.DownloadArtifact(&artifact.PackageArtifact{Path: "test.txt"})
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(path.GetPath())).To(Equal("test"))
os.RemoveAll(path.GetPath())
Expect(fileHelper.Read(path.Path)).To(Equal("test"))
os.RemoveAll(path.Path)
})
})

View File

@@ -18,4 +18,5 @@ package client
type RepoData struct {
Urls []string
Authentication map[string]string
Verify bool
}

View File

@@ -20,11 +20,10 @@ import (
"path"
"path/filepath"
"github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/config"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/helpers"
)
type LocalClient struct {
@@ -35,11 +34,11 @@ func NewLocalClient(r RepoData) *LocalClient {
return &LocalClient{RepoData: r}
}
func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
func (c *LocalClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.PackageArtifact, error) {
var err error
rootfs := ""
artifactName := path.Base(artifact.GetPath())
artifactName := path.Base(a.Path)
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
if !config.LuetCfg.ConfigFromHost {
@@ -50,7 +49,7 @@ func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Art
}
// Check if file is already in cache
if helpers.Exists(cacheFile) {
if fileHelper.Exists(cacheFile) {
Debug("Use artifact", artifactName, "from cache.")
} else {
ok := false
@@ -61,7 +60,7 @@ func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Art
Info("Downloading artifact", artifactName, "from", uri)
//defer os.Remove(file.Name())
err = helpers.CopyFile(filepath.Join(uri, artifactName), cacheFile)
err = fileHelper.CopyFile(filepath.Join(uri, artifactName), cacheFile)
if err != nil {
continue
}
@@ -74,8 +73,8 @@ func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Art
}
}
newart := artifact
newart.SetPath(cacheFile)
newart := a
newart.Path = cacheFile
return newart, nil
}
@@ -104,7 +103,7 @@ func (c *LocalClient) DownloadFile(name string) (string, error) {
}
//defer os.Remove(file.Name())
err = helpers.CopyFile(filepath.Join(uri, name), file.Name())
err = fileHelper.CopyFile(filepath.Join(uri, name), file.Name())
if err != nil {
continue
}

View File

@@ -20,9 +20,8 @@ import (
"os"
"path/filepath"
compiler "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/compiler/types/artifact"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/installer/client"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -42,7 +41,7 @@ var _ = Describe("Local client", func() {
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
path, err := c.DownloadFile("test.txt")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(path)).To(Equal("test"))
Expect(fileHelper.Read(path)).To(Equal("test"))
os.RemoveAll(path)
})
@@ -56,10 +55,10 @@ var _ = Describe("Local client", func() {
Expect(err).ToNot(HaveOccurred())
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
path, err := c.DownloadArtifact(&compiler.PackageArtifact{Path: "test.txt"})
path, err := c.DownloadArtifact(&artifact.PackageArtifact{Path: "test.txt"})
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(path.GetPath())).To(Equal("test"))
os.RemoveAll(path.GetPath())
Expect(fileHelper.Read(path.Path)).To(Equal("test"))
os.RemoveAll(path.Path)
})
})

View File

@@ -16,6 +16,7 @@
package installer
import (
"os"
"os/exec"
"github.com/ghodss/yaml"
@@ -48,7 +49,7 @@ func (f *LuetFinalizer) RunInstall(s *System) error {
for _, c := range f.Install {
toRun := append(args, c)
Info(":shell: Executing finalizer on ", s.Target, cmd, toRun)
if s.Target == "/" {
if s.Target == string(os.PathSeparator) {
cmd := exec.Command(cmd, toRun...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {

View File

@@ -24,15 +24,16 @@ import (
"strings"
"sync"
. "github.com/logrusorgru/aurora"
"github.com/mudler/luet/pkg/bus"
compiler "github.com/mudler/luet/pkg/compiler"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/mudler/luet/pkg/helpers/match"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
. "github.com/logrusorgru/aurora"
"github.com/pkg/errors"
)
@@ -47,6 +48,7 @@ type LuetInstallerOptions struct {
CheckConflicts bool
SolverUpgrade, RemoveUnavailableOnUpgrade, UpgradeNewRevisions bool
Ask bool
DownloadOnly bool
}
type LuetInstaller struct {
@@ -57,11 +59,11 @@ type LuetInstaller struct {
type ArtifactMatch struct {
Package pkg.Package
Artifact compiler.Artifact
Repository Repository
Artifact *artifact.PackageArtifact
Repository *LuetSystemRepository
}
func NewLuetInstaller(opts LuetInstallerOptions) Installer {
func NewLuetInstaller(opts LuetInstallerOptions) *LuetInstaller {
return &LuetInstaller{Options: opts}
}
@@ -105,11 +107,11 @@ func (l *LuetInstaller) computeUpgrade(syncedRepos Repositories, s *System) (pkg
continue
}
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
if artefact.CompileSpec.GetPackage() == nil {
return uninstall, toInstall, errors.New("Package in compilespec empty")
}
if artefact.GetCompileSpec().GetPackage().Matches(p) && artefact.GetCompileSpec().GetPackage().GetBuildTimestamp() != p.GetBuildTimestamp() {
if artefact.CompileSpec.GetPackage().Matches(p) && artefact.CompileSpec.GetPackage().GetBuildTimestamp() != p.GetBuildTimestamp() {
toInstall = append(toInstall, matches[0].Package).Unique()
uninstall = append(uninstall, p).Unique()
}
@@ -126,6 +128,8 @@ func packsToList(p pkg.Packages) string {
for _, pp := range p {
packs = append(packs, pp.HumanReadableString())
}
sort.Strings(packs)
return strings.Join(packs, " ")
}
@@ -135,6 +139,7 @@ func matchesToList(artefacts map[string]ArtifactMatch) string {
for fingerprint, match := range artefacts {
packs = append(packs, fmt.Sprintf("%s (%s)", fingerprint, match.Repository.GetName()))
}
sort.Strings(packs)
return strings.Join(packs, " ")
}
@@ -151,39 +156,7 @@ func (l *LuetInstaller) Upgrade(s *System) error {
Info(":memo: note: will consider new build revisions while upgrading")
}
Spinner(32)
uninstall, toInstall, err := l.computeUpgrade(syncedRepos, s)
if err != nil {
return errors.Wrap(err, "failed computing upgrade")
}
SpinnerStop()
if len(uninstall) > 0 {
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(uninstall)).BgBlack().String())
}
if len(toInstall) > 0 {
Info(":zap:Packages that are going to be installed in the system:\n ", Green(packsToList(toInstall)).BgBlack().String())
}
if len(toInstall) == 0 && len(uninstall) == 0 {
Info("Nothing to do")
return nil
}
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(syncedRepos, uninstall, toInstall, s, true)
} else {
return errors.New("Aborted by user")
}
}
Spinner(32)
defer SpinnerStop()
return l.swap(syncedRepos, uninstall, toInstall, s, true)
return l.checkAndUpgrade(syncedRepos, s)
}
func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
@@ -224,11 +197,19 @@ func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *S
toRemoveFinal = append(toRemoveFinal, pp)
}
}
o := Option{
FullUninstall: false,
Force: true,
CheckConflicts: false,
FullCleanUninstall: false,
NoDeps: l.Options.NoDeps,
OnlyDeps: false,
}
return l.swap(syncedRepos, toRemoveFinal, toInstall, s, false)
return l.swap(o, syncedRepos, toRemoveFinal, toInstall, s)
}
func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
func (l *LuetInstaller) computeSwap(o Option, syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
@@ -243,8 +224,8 @@ func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packa
systemAfterChanges := &System{Database: installedtmp}
packs, err := l.computeUninstall(systemAfterChanges, toRemove...)
if err != nil && !l.Options.Force {
packs, err := l.computeUninstall(o, systemAfterChanges, toRemove...)
if err != nil && !o.Force {
Error("Failed computing uninstall for ", packsToList(toRemove))
return nil, nil, nil, nil, errors.Wrap(err, "computing uninstall "+packsToList(toRemove))
}
@@ -255,30 +236,16 @@ func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packa
}
}
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, toInstall, systemAfterChanges)
match, packages, assertions, allRepos, err := l.computeInstall(o, syncedRepos, toInstall, systemAfterChanges)
for _, p := range toInstall {
assertions = append(assertions, solver.PackageAssert{Package: p.(*pkg.DefaultPackage), Value: true})
}
return match, packages, assertions, allRepos, err
}
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System, forceNodeps bool) error {
forced := l.Options.Force
nodeps := l.Options.NoDeps
func (l *LuetInstaller) swap(o Option, syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
// We don't want any conflict with the installed packages to be raised during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion.
// E.g. you can't check for conflicts for an upgrade to a new version of A
// if the old A is still installed in the system. This is due to the fact that
// the solver now enforces the constraints and explicitly denies two packages
// of the same version being installed.
l.Options.Force = true
if forceNodeps {
l.Options.NoDeps = true
}
match, packages, assertions, allRepos, err := l.computeSwap(syncedRepos, toRemove, toInstall, s)
match, packages, assertions, allRepos, err := l.computeSwap(o, syncedRepos, toRemove, toInstall, s)
if err != nil {
return errors.Wrap(err, "failed computing package replacement")
}
@@ -304,16 +271,227 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, to
if err := l.download(syncedRepos, match); err != nil {
return errors.Wrap(err, "Pre-downloading packages")
}
err = l.Uninstall(s, toRemove...)
if err != nil && !l.Options.Force {
Error("Failed uninstall for ", packsToList(toRemove))
return errors.Wrap(err, "uninstalling "+packsToList(toRemove))
if l.Options.DownloadOnly {
return nil
}
l.Options.Force = forced
l.Options.NoDeps = nodeps
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
ops := l.getOpsWithOptions(toRemove, match, Option{
Force: o.Force,
NoDeps: false,
OnlyDeps: o.OnlyDeps,
RunFinalizers: false,
}, o, syncedRepos, packages, assertions, allRepos)
err = l.runOps(ops, s)
if err != nil {
return errors.Wrap(err, "failed running installer options")
}
toFinalize, err := l.getFinalizers(allRepos, assertions, match, o.NoDeps)
if err != nil {
return errors.Wrap(err, "failed getting package to finalize")
}
return s.ExecuteFinalizers(toFinalize)
}
type Option struct {
Force bool
NoDeps bool
CheckConflicts bool
FullUninstall bool
FullCleanUninstall bool
OnlyDeps bool
RunFinalizers bool
}
type operation struct {
Option Option
Package pkg.Package
}
type installOperation struct {
operation
Reposiories Repositories
Packages pkg.Packages
Assertions solver.PackagesAssertions
Database pkg.PackageDatabase
Matches map[string]ArtifactMatch
}
// installerOp is the operation that is sent to the
// upgradeWorker's channel (todo)
type installerOp struct {
Uninstall operation
Install installOperation
}
func (l *LuetInstaller) runOps(ops []installerOp, s *System) error {
all := make(chan installerOp)
wg := new(sync.WaitGroup)
// Do the real install
for i := 0; i < l.Options.Concurrency; i++ {
wg.Add(1)
go l.installerOpWorker(i, wg, all, s)
}
for _, c := range ops {
all <- c
}
close(all)
wg.Wait()
return nil
}
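runOps above fans the operations out to Options.Concurrency workers over an unbuffered channel and waits on a WaitGroup; failures inside a worker are logged and swallowed, which is why runOps itself always returns nil. A standalone sketch of the same fan-out pattern, standard library only, with illustrative names:
package main
import (
	"fmt"
	"sync"
)
type op struct{ name string }
func worker(id int, wg *sync.WaitGroup, jobs <-chan op) {
	defer wg.Done()
	for j := range jobs {
		// the real worker uninstalls/installs here and only logs failures
		fmt.Printf("worker %d: processing %s\n", id, j.name)
	}
}
func main() {
	concurrency := 2
	jobs := make(chan op)
	wg := new(sync.WaitGroup)
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go worker(i, wg, jobs)
	}
	for _, o := range []op{{"uninstall b-1.0"}, {"install b-1.1"}, {"uninstall c-1.0"}} {
		jobs <- o // blocks until a worker is free to pick it up
	}
	close(jobs) // terminates the workers' range loops
	wg.Wait()   // wait for in-flight operations to complete
}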
// TODO: use installerOpWorker in place of all the other workers.
// This one is general enough to read a list of operations and execute them.
func (l *LuetInstaller) installerOpWorker(i int, wg *sync.WaitGroup, c <-chan installerOp, s *System) error {
defer wg.Done()
for p := range c {
if p.Uninstall.Package != nil {
Debug("Replacing package inplace")
toUninstall, uninstall, err := l.generateUninstallFn(p.Uninstall.Option, s, p.Uninstall.Package)
if err != nil {
Error("Failed to generate Uninstall function for" + err.Error())
continue
//return errors.Wrap(err, "while computing uninstall")
}
err = uninstall()
if err != nil {
Error("Failed uninstall for ", packsToList(toUninstall))
continue
//return errors.Wrap(err, "uninstalling "+packsToList(toUninstall))
}
}
if p.Install.Package != nil {
artMatch := p.Install.Matches[p.Install.Package.GetFingerPrint()]
ass := p.Install.Assertions.Search(p.Install.Package.GetFingerPrint())
packageToInstall, _ := p.Install.Packages.Find(p.Install.Package.GetPackageName())
err := l.install(
p.Install.Option,
p.Install.Reposiories,
map[string]ArtifactMatch{p.Install.Package.GetFingerPrint(): artMatch},
pkg.Packages{packageToInstall},
solver.PackagesAssertions{*ass},
p.Install.Database,
s,
)
if err != nil {
Error(err)
}
}
}
return nil
}
// checks whether we can uninstall and install in place and composes installer worker ops
func (l *LuetInstaller) getOpsWithOptions(
toUninstall pkg.Packages, installMatch map[string]ArtifactMatch, installOpt, uninstallOpt Option,
syncedRepos Repositories, toInstall pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase) []installerOp {
resOps := []installerOp{}
for _, match := range installMatch {
if pack, err := toUninstall.Find(match.Package.GetPackageName()); err == nil {
resOps = append(resOps, installerOp{
Uninstall: operation{Package: pack, Option: uninstallOpt},
Install: installOperation{
operation: operation{
Package: match.Package,
Option: installOpt,
},
Matches: installMatch,
Packages: toInstall,
Reposiories: syncedRepos,
Assertions: solution,
Database: allRepos,
},
})
} else {
resOps = append(resOps, installerOp{
Install: installOperation{
operation: operation{Package: match.Package, Option: installOpt},
Matches: installMatch,
Reposiories: syncedRepos,
Packages: toInstall,
Assertions: solution,
Database: allRepos,
},
})
}
}
for _, p := range toUninstall {
found := false
for _, match := range installMatch {
if match.Package.GetPackageName() == p.GetPackageName() {
found = true
}
}
if !found {
resOps = append(resOps, installerOp{
Uninstall: operation{Package: p, Option: uninstallOpt},
})
}
}
return resOps
}
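Concretely: every install match whose package name also appears in the uninstall list becomes a single in-place replacement op (uninstall first, then install), while leftover removals become uninstall-only ops. A standalone sketch of that pairing rule with plain strings standing in for packages (the "@version" convention here is illustrative, not luet syntax):
package main
import (
	"fmt"
	"strings"
)
// name strips the version so "test/b@1.1" pairs with "test/b@1.0",
// mimicking the GetPackageName() comparison above.
func name(p string) string { return strings.SplitN(p, "@", 2)[0] }
func main() {
	installs := []string{"test/b@1.1", "test/d@2.0"} // matches to install
	removals := []string{"test/b@1.0", "test/c@1.0"} // packages scheduled for removal
	paired := map[string]bool{}
	for _, in := range installs {
		replaced := ""
		for _, rm := range removals {
			if name(rm) == name(in) {
				replaced = rm
				paired[rm] = true
			}
		}
		if replaced != "" {
			// in-place replacement: one op carrying both the uninstall and the install
			fmt.Printf("op: uninstall %s, then install %s\n", replaced, in)
		} else {
			fmt.Printf("op: install %s\n", in)
		}
	}
	// removals that nothing replaces become uninstall-only ops
	for _, rm := range removals {
		if !paired[rm] {
			fmt.Printf("op: uninstall %s\n", rm)
		}
	}
}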
func (l *LuetInstaller) checkAndUpgrade(r Repositories, s *System) error {
Spinner(32)
uninstall, toInstall, err := l.computeUpgrade(r, s)
if err != nil {
return errors.Wrap(err, "failed computing upgrade")
}
SpinnerStop()
if len(uninstall) > 0 {
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(uninstall)).BgBlack().String())
}
if len(toInstall) > 0 {
Info(":zap:Packages that are going to be installed in the system:\n ", Green(packsToList(toInstall)).BgBlack().String())
}
if len(toInstall) == 0 && len(uninstall) == 0 {
Info("Nothing to do")
return nil
}
// We don't want any conflict with the installed packages to be raised during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion.
// E.g. you can't check for conflicts for an upgrade to a new version of A
// if the old A is still installed in the system. This is due to the fact that
// the solver now enforces the constraints and explicitly denies two packages
// of the same version being installed.
o := Option{
FullUninstall: false,
Force: true,
CheckConflicts: false,
FullCleanUninstall: false,
NoDeps: true,
OnlyDeps: false,
}
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(o, r, uninstall, toInstall, s)
} else {
return errors.New("Aborted by user")
}
}
return l.swap(o, r, uninstall, toInstall, s)
}
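Since Install now calls checkAndUpgrade whenever the system database is non-empty, a plain install transparently applies pending upgrades first. A short sketch of driving the upgrade path directly, where repo and system stand in for the repository and System built earlier in the tests:
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 2, Ask: false})
inst.Repositories([]*LuetSystemRepository{repo})
err := inst.Upgrade(system)
// With Ask left false the call never prompts; "Aborted by user" is only
// returned on the interactive path, otherwise err wraps a solver or IO failure.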
func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
@@ -322,7 +500,20 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
return err
}
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, cp, s)
if len(s.Database.World()) > 0 {
Info(":thinking: Checking for available upgrades")
if err := l.checkAndUpgrade(syncedRepos, s); err != nil {
return errors.Wrap(err, "while checking upgrades before install")
}
}
o := Option{
NoDeps: l.Options.NoDeps,
Force: l.Options.Force,
OnlyDeps: l.Options.OnlyDeps,
RunFinalizers: true,
}
match, packages, assertions, allRepos, err := l.computeInstall(o, syncedRepos, cp, s)
if err != nil {
return err
}
@@ -359,12 +550,12 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
return l.install(o, syncedRepos, match, packages, assertions, allRepos, s)
} else {
return errors.New("Aborted by user")
}
}
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
return l.install(o, syncedRepos, match, packages, assertions, allRepos, s)
}
func (l *LuetInstaller) download(syncedRepos Repositories, toDownload map[string]ArtifactMatch) error {
@@ -402,12 +593,12 @@ func (l *LuetInstaller) Reclaim(s *System) error {
for _, repo := range syncedRepos {
for _, artefact := range repo.GetIndex() {
Debug("Checking if",
artefact.GetCompileSpec().GetPackage().HumanReadableString(),
artefact.CompileSpec.GetPackage().HumanReadableString(),
"from", repo.GetName(), "is installed")
FILES:
for _, f := range artefact.GetFiles() {
if helpers.Exists(filepath.Join(s.Target, f)) {
p, err := repo.GetTree().GetDatabase().FindPackage(artefact.GetCompileSpec().GetPackage())
for _, f := range artefact.Files {
if fileHelper.Exists(filepath.Join(s.Target, f)) {
p, err := repo.GetTree().GetDatabase().FindPackage(artefact.CompileSpec.GetPackage())
if err != nil {
return err
}
@@ -431,7 +622,7 @@ func (l *LuetInstaller) Reclaim(s *System) error {
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed creating package")
}
s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: pack.GetFingerPrint(), Files: match.Artifact.GetFiles()})
s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: pack.GetFingerPrint(), Files: match.Artifact.Files})
Info(":zap:Reclaimed package:", pack.HumanReadableString())
}
Info("Done!")
@@ -439,7 +630,7 @@ func (l *LuetInstaller) Reclaim(s *System) error {
return nil
}
func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
func (l *LuetInstaller) computeInstall(o Option, syncedRepos Repositories, cp pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
var p pkg.Packages
toInstall := map[string]ArtifactMatch{}
allRepos := pkg.NewInMemoryDatabase(false)
@@ -472,11 +663,11 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
var packagesToInstall pkg.Packages
var err error
if !l.Options.NoDeps {
if !o.NoDeps {
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err = solv.Install(p)
/// TODO: PackageAssertions needs to be a map[fingerprint]pack so lookup is in O(1)
if err != nil && !l.Options.Force {
if err != nil && !o.Force {
return toInstall, p, solution, allRepos, errors.Wrap(err, "Failed solving solution for package")
}
// Gathers things to install
@@ -489,7 +680,7 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
packagesToInstall = append(packagesToInstall, assertion.Package)
}
}
} else if !l.Options.OnlyDeps {
} else if !o.OnlyDeps {
for _, currentPack := range p {
if _, err := s.Database.FindPackage(currentPack); err == nil {
// skip matching if it is installed already
@@ -508,11 +699,11 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
if artefact.CompileSpec.GetPackage() == nil {
return toInstall, p, solution, allRepos, errors.New("Package in compilespec empty")
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
currentPack.SetBuildTimestamp(artefact.GetCompileSpec().GetPackage().GetBuildTimestamp())
if matches[0].Package.Matches(artefact.CompileSpec.GetPackage()) {
currentPack.SetBuildTimestamp(artefact.CompileSpec.GetPackage().GetBuildTimestamp())
// Filter out already installed
if _, err := s.Database.FindPackage(currentPack); err != nil {
toInstall[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
@@ -524,12 +715,56 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
return toInstall, p, solution, allRepos, nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]ArtifactMatch, p pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase, s *System) error {
func (l *LuetInstaller) getFinalizers(allRepos pkg.PackageDatabase, solution solver.PackagesAssertions, toInstall map[string]ArtifactMatch, nodeps bool) ([]pkg.Package, error) {
var toFinalize []pkg.Package
if !nodeps {
// TODO: Lower those errors to warnings
for _, w := range toInstall {
// Finalizers need to run in order and in sequence.
ordered, err := solution.Order(allRepos, w.Package.GetFingerPrint())
if err != nil {
return toFinalize, errors.Wrap(err, "While order a solution for "+w.Package.HumanReadableString())
}
ORDER:
for _, ass := range ordered {
if ass.Value {
installed, ok := toInstall[ass.Package.GetFingerPrint()]
if !ok {
// It was a dep already installed in the system, so we can skip it safely
continue ORDER
}
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
if err != nil {
return toFinalize, errors.Wrap(err, "Error getting package "+ass.Package.HumanReadableString())
}
toFinalize = append(toFinalize, treePackage)
}
}
}
} else {
for _, c := range toInstall {
treePackage, err := c.Repository.GetTree().GetDatabase().FindPackage(c.Package)
if err != nil {
return toFinalize, errors.Wrap(err, "Error getting package "+c.Package.HumanReadableString())
}
toFinalize = append(toFinalize, treePackage)
}
}
return toFinalize, nil
}
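In other words, the ordered assertions are walked dependency-first and only the packages that were actually installed in this run keep their finalizers; anything already present on the system is skipped. A standalone sketch of that selection, with plain strings standing in for packages:
package main
import "fmt"
func main() {
	// Assertions come back from the solver in dependency order (deps first).
	ordered := []string{"test/a@1.0", "test/b@1.0", "test/c@1.0"}
	// Only these packages were part of this install run.
	justInstalled := map[string]bool{"test/b@1.0": true, "test/c@1.0": true}
	var toFinalize []string
	for _, p := range ordered {
		if !justInstalled[p] {
			continue // dependency already on the system: skip its finalizer
		}
		toFinalize = append(toFinalize, p)
	}
	fmt.Println(toFinalize) // [test/b@1.0 test/c@1.0] — finalizers run in this order
}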
func (l *LuetInstaller) install(o Option, syncedRepos Repositories, toInstall map[string]ArtifactMatch, p pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase, s *System) error {
// Install packages into rootfs in parallel.
if err := l.download(syncedRepos, toInstall); err != nil {
return errors.Wrap(err, "Downloading packages")
}
if l.Options.DownloadOnly {
return nil
}
all := make(chan ArtifactMatch)
wg := new(sync.WaitGroup)
@@ -549,52 +784,25 @@ func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]A
for _, c := range toInstall {
// Annotate to the system that the package was installed
_, err := s.Database.CreatePackage(c.Package)
if err != nil && !l.Options.Force {
if err != nil && !o.Force {
return errors.Wrap(err, "Failed creating package")
}
bus.Manager.Publish(bus.EventPackageInstall, c)
}
var toFinalize []pkg.Package
if !l.Options.NoDeps {
// TODO: Lower those errors as warning
for _, w := range p {
// Finalizers needs to run in order and in sequence.
ordered, err := solution.Order(allRepos, w.GetFingerPrint())
if err != nil {
return errors.Wrap(err, "While order a solution for "+w.HumanReadableString())
}
ORDER:
for _, ass := range ordered {
if ass.Value {
installed, ok := toInstall[ass.Package.GetFingerPrint()]
if !ok {
// It was a dep already installed in the system, so we can skip it safely
continue ORDER
}
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
if err != nil {
return errors.Wrap(err, "Error getting package "+ass.Package.HumanReadableString())
}
toFinalize = append(toFinalize, treePackage)
}
}
if !o.RunFinalizers {
return nil
}
}
} else {
for _, c := range toInstall {
treePackage, err := c.Repository.GetTree().GetDatabase().FindPackage(c.Package)
if err != nil {
return errors.Wrap(err, "Error getting package "+c.Package.HumanReadableString())
}
toFinalize = append(toFinalize, treePackage)
}
toFinalize, err := l.getFinalizers(allRepos, solution, toInstall, o.NoDeps)
if err != nil {
return errors.Wrap(err, "failed getting package to finalize")
}
return s.ExecuteFinalizers(toFinalize)
}
func (l *LuetInstaller) downloadPackage(a ArtifactMatch) (compiler.Artifact, error) {
func (l *LuetInstaller) downloadPackage(a ArtifactMatch) (*artifact.PackageArtifact, error) {
artifact, err := a.Repository.Client().DownloadArtifact(a.Artifact)
if err != nil {
@@ -608,26 +816,26 @@ func (l *LuetInstaller) downloadPackage(a ArtifactMatch) (compiler.Artifact, err
return artifact, nil
}
func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
func (l *LuetInstaller) installPackage(m ArtifactMatch, s *System) error {
artifact, err := l.downloadPackage(a)
a, err := l.downloadPackage(m)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed downloading package")
}
files, err := artifact.FileList()
files, err := a.FileList()
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not open package archive")
}
err = artifact.Unpack(s.Target, true)
err = a.Unpack(s.Target, true)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error met while unpacking rootfs")
}
// First create client and download
// Then unpack to system
return s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: a.Package.GetFingerPrint(), Files: files})
return s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: m.Package.GetFingerPrint(), Files: files})
}
func (l *LuetInstaller) downloadWorker(i int, wg *sync.WaitGroup, c <-chan ArtifactMatch) error {
@@ -668,6 +876,51 @@ func (l *LuetInstaller) installerWorker(i int, wg *sync.WaitGroup, c <-chan Arti
return nil
}
func checkAndPrunePath(path string) {
// check if now the target path is empty
targetPath := filepath.Dir(path)
fi, err := os.Lstat(targetPath)
if err != nil {
// Warning("Dir not found (it was before?) ", err.Error())
return
}
switch mode := fi.Mode(); {
case mode.IsDir():
files, err := ioutil.ReadDir(targetPath)
if err != nil {
Warning("Failed reading folder", targetPath, err.Error())
}
if len(files) != 0 {
Debug("Preserving not-empty folder", targetPath)
return
}
}
if err = os.Remove(targetPath); err != nil {
Warning("Failed removing file (maybe not present in the system target anymore ?)", targetPath, err.Error())
}
}
// We will try to clean up every path component of the removed file, pruning the folders left behind if they are empty
func pruneEmptyFilePath(path string) {
checkAndPrunePath(path)
// For a path such as /usr/bin/bar
// we want to build an array like "/usr", "/usr/bin", "/usr/bin/bar"
paths := strings.Split(path, string(os.PathSeparator))
currentPath := filepath.Join(string(os.PathSeparator), paths[0])
allPaths := []string{currentPath}
for _, p := range paths[1:] {
currentPath = filepath.Join(currentPath, p)
allPaths = append(allPaths, currentPath)
}
match.ReverseAny(allPaths)
for _, p := range allPaths {
checkAndPrunePath(p)
}
}
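Following the /usr/bin/bar example in the comment above, the split produces a leading empty element (the path is absolute), so the generated list also ends with "/" once reversed; checkAndPrunePath then removes the parent directory of each entry when it turns out to be empty. A standalone sketch of the list construction:
package main
import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)
func main() {
	path := "/usr/bin/bar" // same example as in the comment above
	parts := strings.Split(path, string(os.PathSeparator)) // "", "usr", "bin", "bar"
	current := filepath.Join(string(os.PathSeparator), parts[0])
	all := []string{current}
	for _, p := range parts[1:] {
		current = filepath.Join(current, p)
		all = append(all, current)
	}
	// reverse so the deepest directory is considered (and possibly removed) first
	for i, j := 0, len(all)-1; i < j; i, j = i+1, j-1 {
		all[i], all[j] = all[j], all[i]
	}
	fmt.Println(all) // [/usr/bin/bar /usr/bin /usr /]
}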
func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
var cp *config.ConfigProtect
annotationDir := ""
@@ -690,7 +943,7 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
cp.Map(files)
}
toRemove, notPresent := helpers.OrderFiles(s.Target, files)
toRemove, notPresent := fileHelper.OrderFiles(s.Target, files)
// Remove from target
for _, f := range toRemove {
@@ -729,6 +982,8 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
if err = os.Remove(target); err != nil {
Warning("Failed removing file (maybe not present in the system target anymore ?)", target, err.Error())
}
pruneEmptyFilePath(target)
}
for _, f := range notPresent {
@@ -742,6 +997,8 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
if err = os.Remove(target); err != nil {
Debug("Failed removing file (not present in the system target)", target, err.Error())
}
pruneEmptyFilePath(target)
}
err = s.Database.RemovePackageFiles(p)
@@ -759,17 +1016,17 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
return nil
}
func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.Packages, error) {
func (l *LuetInstaller) computeUninstall(o Option, s *System, packs ...pkg.Package) (pkg.Packages, error) {
var toUninstall pkg.Packages
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) TODO - mark the uninstallation in db
// Get installed definition
checkConflicts := l.Options.CheckConflicts
full := l.Options.FullUninstall
if l.Options.Force == true { // IF forced, we want to remove the package and all its requires
checkConflicts = false
full = false
}
checkConflicts := o.CheckConflicts
full := o.FullUninstall
// if o.Force == true { // IF forced, we want to remove the package and all its requires
// checkConflicts = false
// full = false
// }
// Create a temporary DB with the installed packages
// so the solver is much faster finding the deptree
@@ -779,11 +1036,11 @@ func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.P
return toUninstall, errors.Wrap(err, "Failed create temporary in-memory db")
}
if !l.Options.NoDeps {
if !o.NoDeps {
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, installedtmp, installedtmp, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
var solution pkg.Packages
var err error
if l.Options.FullCleanUninstall {
if o.FullCleanUninstall {
solution, err = solv.UninstallUniverse(packs)
if err != nil {
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
@@ -804,32 +1061,47 @@ func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.P
return toUninstall, nil
}
func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
func (l *LuetInstaller) generateUninstallFn(o Option, s *System, packs ...pkg.Package) (pkg.Packages, func() error, error) {
for _, p := range packs {
if packs, _ := s.Database.FindPackages(p); len(packs) == 0 {
return errors.New("Package not found in the system")
return nil, nil, errors.New("Package not found in the system")
}
}
Spinner(32)
toUninstall, err := l.computeUninstall(s, packs...)
toUninstall, err := l.computeUninstall(o, s, packs...)
if err != nil {
return errors.Wrap(err, "while computing uninstall")
return nil, nil, errors.Wrap(err, "while computing uninstall")
}
SpinnerStop()
uninstall := func() error {
for _, p := range toUninstall {
err := l.uninstall(p, s)
if err != nil && !l.Options.Force {
if err != nil && !o.Force {
return errors.Wrap(err, "Uninstall failed")
}
}
return nil
}
return toUninstall, uninstall, nil
}
func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
Spinner(32)
o := Option{
FullUninstall: l.Options.FullUninstall,
Force: l.Options.Force,
CheckConflicts: l.Options.CheckConflicts,
FullCleanUninstall: l.Options.FullCleanUninstall,
}
toUninstall, uninstall, err := l.generateUninstallFn(o, s, packs...)
if err != nil {
return errors.Wrap(err, "while computing uninstall")
}
SpinnerStop()
if len(toUninstall) == 0 {
Info("Nothing to do")
return nil
@@ -847,4 +1119,4 @@ func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
return uninstall()
}
func (l *LuetInstaller) Repositories(r []Repository) { l.PackageRepositories = r }
func (l *LuetInstaller) Repositories(r []*LuetSystemRepository) { l.PackageRepositories = r }

View File

@@ -23,9 +23,11 @@ import (
// . "github.com/mudler/luet/pkg/installer"
compiler "github.com/mudler/luet/pkg/compiler"
backend "github.com/mudler/luet/pkg/compiler/backend"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/installer"
solver "github.com/mudler/luet/pkg/solver"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
@@ -34,7 +36,7 @@ import (
. "github.com/onsi/gomega"
)
func stubRepo(tmpdir, tree string) (installer.Repository, error) {
func stubRepo(tmpdir, tree string) (*LuetSystemRepository, error) {
return GenerateRepository(
"test",
"description",
@@ -43,10 +45,11 @@ func stubRepo(tmpdir, tree string) (installer.Repository, error) {
1,
tmpdir,
[]string{tree},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
pkg.NewInMemoryDatabase(false), nil, "", false, false, false, nil)
}
var _ = Describe("Installer", func() {
Context("Writes a repository definition", func() {
It("Writes a repo and can install packages from it", func() {
//repo:=NewLuetSystemRepository()
@@ -62,7 +65,9 @@ var _ = Describe("Installer", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(),
generalRecipe.GetDatabase(),
options.Concurrency(2))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -77,38 +82,37 @@ var _ = Describe("Installer", func() {
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
a, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -133,8 +137,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -151,8 +155,8 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
@@ -178,7 +182,7 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(),
generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -193,43 +197,42 @@ urls:
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
treeFile := NewDefaultTreeRepositoryFile()
treeFile.SetCompressionType(compiler.None)
treeFile.SetCompressionType(compression.None)
repo.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -254,8 +257,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -272,8 +275,8 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
@@ -298,7 +301,8 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(),
options.Concurrency(2))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -313,25 +317,24 @@ urls:
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository(
"test",
@@ -340,18 +343,18 @@ urls:
[]string{tmpdir}, 1,
tmpdir,
[]string{"../../tests/fixtures/buildable"},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
pkg.NewInMemoryDatabase(false), nil, "", false, false, false, nil)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -381,8 +384,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -399,8 +402,8 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
@@ -423,7 +426,8 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(),
options.Concurrency(2))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -438,25 +442,24 @@ urls:
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository(
"test",
@@ -466,18 +469,18 @@ urls:
1,
tmpdir,
[]string{"../../tests/fixtures/buildable"},
pkg.NewInMemoryDatabase(false), nil, "", false, false)
pkg.NewInMemoryDatabase(false), nil, "", false, false, false, nil)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -518,7 +521,7 @@ urls:
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), options.Concurrency(2))
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -526,11 +529,10 @@ urls:
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
spec.SetOutputPath(tmpdir2)
c.SetConcurrency(2)
artifact, err = c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
repo, err = stubRepo(tmpdir2, "../../tests/fixtures/alpine")
Expect(err).ToNot(HaveOccurred())
@@ -577,7 +579,7 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -595,24 +597,23 @@ urls:
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec2, spec3))
_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeEmpty())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -642,8 +643,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -660,11 +661,11 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -695,8 +696,8 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
Expect(len(generalRecipeNewRepo.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipeNewRepo.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
c2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipeNewRepo.GetDatabase())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -718,21 +719,20 @@ urls:
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdirnewrepo)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
_, errs = c2.CompileParallel(false, compiler.NewLuetCompilationspecs(spec2))
_, errs = c2.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec2))
Expect(errs).To(BeEmpty())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
@@ -775,8 +775,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -794,11 +794,11 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -827,7 +827,12 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(
backend.NewSimpleDockerBackend(),
generalRecipe.GetDatabase(),
options.Concurrency(2),
options.WithCompressionType(compression.GZip),
)
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -844,26 +849,25 @@ urls:
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec2, spec3))
_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeEmpty())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -893,8 +897,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -911,11 +915,11 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -985,7 +989,9 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(),
options.Concurrency(2),
options.WithCompressionType(compression.GZip))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -1002,26 +1008,24 @@ urls:
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec2, spec3))
_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeEmpty())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -1056,13 +1060,13 @@ urls:
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
@@ -1090,7 +1094,8 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(),
options.WithCompressionType(compression.GZip))
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -1104,24 +1109,22 @@ urls:
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(1)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
_, errs := c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -1156,13 +1159,13 @@ urls:
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
@@ -1182,7 +1185,7 @@ urls:
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(3))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase())
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
@@ -1192,7 +1195,7 @@ urls:
defer os.RemoveAll(tmpdir2) // clean up
spec.SetOutputPath(tmpdir2)
_, errs = c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec))
_, errs = c.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeEmpty())
@@ -1216,11 +1219,11 @@ urls:
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})


@@ -16,64 +16,13 @@
package installer
import (
compiler "github.com/mudler/luet/pkg/compiler"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
//"github.com/mudler/luet/pkg/solver"
)
type Installer interface {
Install(pkg.Packages, *System) error
Uninstall(*System, ...pkg.Package) error
Upgrade(s *System) error
Reclaim(s *System) error
Repositories([]Repository)
SyncRepositories(bool) (Repositories, error)
Swap(pkg.Packages, pkg.Packages, *System) error
}
type Client interface {
DownloadArtifact(compiler.Artifact) (compiler.Artifact, error)
DownloadArtifact(*artifact.PackageArtifact) (*artifact.PackageArtifact, error)
DownloadFile(string) (string, error)
}
type Repositories []Repository
type Repository interface {
GetName() string
GetDescription() string
GetUrls() []string
SetUrls([]string)
AddUrl(string)
GetPriority() int
GetIndex() compiler.ArtifactIndex
SetIndex(i compiler.ArtifactIndex)
GetTree() tree.Builder
SetTree(tree.Builder)
Write(path string, resetRevision, force bool) error
Sync(bool) (Repository, error)
GetTreePath() string
SetTreePath(string)
GetMetaPath() string
SetMetaPath(string)
GetType() string
SetType(string)
SetAuthentication(map[string]string)
GetAuthentication() map[string]string
GetRevision() int
IncrementRevision()
GetLastUpdate() string
SetLastUpdate(string)
Client() Client
SetPriority(int)
GetRepositoryFile(string) (LuetRepositoryFile, error)
SetRepositoryFile(string, LuetRepositoryFile)
SetName(p string)
Serialize() (*LuetSystemRepositoryMetadata, LuetSystemRepositorySerialized)
GetBackend() compiler.CompilerBackend
SetBackend(b compiler.CompilerBackend)
FileSearch(pattern string) (pkg.Packages, error)
SearchArtefact(p pkg.Package) (compiler.Artifact, error)
}
type Repositories []*LuetSystemRepository
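The Client interface above now exchanges *artifact.PackageArtifact values instead of the old compiler.Artifact interface. As an illustration of what a conforming client now has to provide, here is a minimal no-op sketch (hypothetical code, not part of the changeset, assuming it sits in the installer package next to the interface):

package installer

import (
	artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
)

// noopClient is a hypothetical stand-in that satisfies the new Client
// signatures: artifacts are handed back untouched and "downloaded" files
// resolve to their own name.
type noopClient struct{}

func (noopClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.PackageArtifact, error) {
	// A real client would fetch a.Path from the repository here.
	return a, nil
}

func (noopClient) DownloadFile(name string) (string, error) {
	// A real client would return the local path of the fetched file.
	return name, nil
}

// Compile-time check that the sketch matches the interface.
var _ Client = noopClient{}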

File diff suppressed because it is too large.


@@ -0,0 +1,274 @@
// Copyright © 2019-2021 Ettore Di Giacinto <mudler@sabayon.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package installer
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/mudler/luet/pkg/bus"
compiler "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/helpers/docker"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/pkg/errors"
)
type dockerRepositoryGenerator struct {
b compiler.CompilerBackend
imagePrefix string
imagePush, force bool
}
func (l *dockerRepositoryGenerator) Initialize(path string, db pkg.PackageDatabase) ([]*artifact.PackageArtifact, error) {
Info("Generating docker images for packages in", l.imagePrefix)
var art []*artifact.PackageArtifact
var ff = func(currentpath string, info os.FileInfo, err error) error {
if err != nil {
Debug("Skipping", info.Name(), err.Error())
return nil
}
if info.IsDir() {
Debug("Skipping directories")
return nil
}
if !strings.HasSuffix(info.Name(), ".metadata.yaml") {
return nil
}
if err := l.pushImageFromArtifact(artifact.NewPackageArtifact(currentpath), l.b, true); err != nil {
return errors.Wrap(err, "while pushing metadata file associated to the artifact")
}
dat, err := ioutil.ReadFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
a, err := artifact.NewPackageArtifactFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// Set the path relative to the file.
// The metadata contains the full path where the file was located during buildtime.
a.Path = filepath.Join(filepath.Dir(currentpath), filepath.Base(a.Path))
// We want to include packages that are ONLY referenced in the tree.
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(a.CompileSpec.Package); notfound != nil {
Debug(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
a.CompileSpec.Package.HumanReadableString()))
return nil
}
packageImage := fmt.Sprintf("%s:%s", l.imagePrefix, a.CompileSpec.GetPackage().ImageID())
if l.imagePush && l.b.ImageAvailable(packageImage) && !l.force {
Info("Image", packageImage, "already present, skipping. use --force-push to override")
} else {
Info("Generating final image", packageImage,
"for package ", a.CompileSpec.GetPackage().HumanReadableString())
if opts, err := a.GenerateFinalImage(packageImage, l.b, true); err != nil {
return errors.Wrap(err, "Failed generating metadata tree"+opts.ImageName)
}
}
if l.imagePush {
if err := pushImage(l.b, packageImage, l.force); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", packageImage)
}
}
art = append(art, a)
return nil
}
err := filepath.Walk(path, ff)
if err != nil {
return nil, err
}
return art, nil
}
func pushImage(b compiler.CompilerBackend, image string, force bool) error {
if b.ImageAvailable(image) && !force {
Debug("Image", image, "already present, skipping")
return nil
}
return b.Push(backend.Options{ImageName: image})
}
func (d *dockerRepositoryGenerator) pushFileFromArtifact(a *artifact.PackageArtifact, imageTree string) error {
Debug("Generating image", imageTree)
if opts, err := a.GenerateFinalImage(imageTree, d.b, false); err != nil {
return errors.Wrap(err, "Failed generating metadata tree "+opts.ImageName)
}
if d.imagePush {
if err := pushImage(d.b, imageTree, true); err != nil {
return errors.Wrapf(err, "Failed while pushing image: '%s'", imageTree)
}
}
return nil
}
func (d *dockerRepositoryGenerator) pushRepoMetadata(repospec string, r *LuetSystemRepository) error {
// create temp dir for metafile
metaDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for metadata")
}
defer os.RemoveAll(metaDir) // clean up
tempRepoFile := filepath.Join(metaDir, REPOSITORY_SPECFILE+".tar")
if err := helpers.Tar(repospec, tempRepoFile); err != nil {
return errors.Wrap(err, "Error met while archiving repository file")
}
a := artifact.NewPackageArtifact(tempRepoFile)
imageRepo := fmt.Sprintf("%s:%s", d.imagePrefix, REPOSITORY_SPECFILE)
if err := d.pushFileFromArtifact(a, imageRepo); err != nil {
return errors.Wrap(err, "while pushing file from artifact")
}
return nil
}
func (d *dockerRepositoryGenerator) pushImageFromArtifact(a *artifact.PackageArtifact, b compiler.CompilerBackend, checkIfExists bool) error {
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
treeArchive, err := artifact.CreateArtifactForFile(a.Path)
if err != nil {
return errors.Wrap(err, "failed generating checksums for tree")
}
imageTree := fmt.Sprintf("%s:%s", d.imagePrefix, docker.StripInvalidStringsFromImage(a.GetFileName()))
if checkIfExists && d.imagePush && d.b.ImageAvailable(imageTree) && !d.force {
Info("Image", imageTree, "already present, skipping. use --force-push to override")
return nil
} else {
return d.pushFileFromArtifact(treeArchive, imageTree)
}
}
// Generate creates a Docker luet repository
func (d *dockerRepositoryGenerator) Generate(r *LuetSystemRepository, imagePrefix string, resetRevision bool) error {
// - Iterate over meta, build final images, push them if necessary
// - while pushing, check if the image already exists, and if it does, push it only when --force is supplied
// - Generate final images for metadata and push
imageRepository := fmt.Sprintf("%s:%s", imagePrefix, REPOSITORY_SPECFILE)
r.LastUpdate = strconv.FormatInt(time.Now().Unix(), 10)
repoTemp, err := config.LuetCfg.GetSystem().TempDir("repo")
if err != nil {
return errors.Wrap(err, "error met while creating tempdir for repository")
}
defer os.RemoveAll(repoTemp) // clean up
if r.GetBackend().ImageAvailable(imageRepository) {
if err := r.GetBackend().DownloadImage(backend.Options{ImageName: imageRepository}); err != nil {
return errors.Wrapf(err, "while downloading '%s'", imageRepository)
}
if err := r.GetBackend().ExtractRootfs(backend.Options{ImageName: imageRepository, Destination: repoTemp}, false); err != nil {
return errors.Wrapf(err, "while extracting '%s'", imageRepository)
}
}
repospec := filepath.Join(repoTemp, REPOSITORY_SPECFILE)
// Increment the internal revision version by reading the one which is already available (if any)
if err := r.BumpRevision(repospec, resetRevision); err != nil {
return err
}
Info(fmt.Sprintf(
"For repository %s creating revision %d and last update %s...",
r.Name, r.Revision, r.LastUpdate,
))
bus.Manager.Publish(bus.EventRepositoryPreBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: imageRepository,
})
// Create tree and repository file
a, err := r.AddTree(r.GetTree(), repoTemp, REPOFILE_TREE_KEY, NewDefaultTreeRepositoryFile())
if err != nil {
return errors.Wrap(err, "error met while adding runtime tree to repository")
}
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing runtime tree")
}
a, err = r.AddTree(r.BuildTree, repoTemp, REPOFILE_COMPILER_TREE_KEY, NewDefaultCompilerTreeRepositoryFile())
if err != nil {
return errors.Wrap(err, "error met while adding compiler tree to repository")
}
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing compiler tree")
}
// create temp dir for metafile
metaDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
if err != nil {
return errors.Wrap(err, "error met while creating tempdir for metadata")
}
defer os.RemoveAll(metaDir) // clean up
a, err = r.AddMetadata(repospec, metaDir)
if err != nil {
return errors.Wrap(err, "failed adding Metadata file to repository")
}
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing docker image from artifact")
}
if err := d.pushRepoMetadata(repospec, r); err != nil {
return errors.Wrap(err, "while pushing repository metadata tree")
}
bus.Manager.Publish(bus.EventRepositoryPostBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: imagePrefix,
})
return nil
}
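Everything the docker generator publishes shares one tagging convention: the configured image prefix, a colon, and a suffix derived from what is being pushed (the package's ImageID, the sanitized file name for tree/metadata archives, or REPOSITORY_SPECFILE for the repository spec). A standalone sketch of the resulting tags, with hypothetical values and assuming REPOSITORY_SPECFILE resolves to repository.yaml as the tests in this diff suggest:

package main

import "fmt"

func main() {
	// Hypothetical values: the real ones come from the generator's
	// imagePrefix field and the package/artifact being published.
	imagePrefix := "quay.io/example/myrepo"
	packageID := "b-test-1.0"               // Package.ImageID()
	fileName := "b-test-1.0.package.tar.gz" // extra file, already a valid tag here
	specFile := "repository.yaml"           // assumed value of REPOSITORY_SPECFILE

	fmt.Printf("%s:%s\n", imagePrefix, packageID) // final package image
	fmt.Printf("%s:%s\n", imagePrefix, fileName)  // tree/metadata file image
	fmt.Printf("%s:%s\n", imagePrefix, specFile)  // repository spec image
}

In Generate above, the repository spec image is pushed last, via pushRepoMetadata, after the runtime tree, compiler tree, and metadata archives.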


@@ -0,0 +1,132 @@
// Copyright © 2019-2021 Ettore Di Giacinto <mudler@sabayon.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package installer
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"time"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/bus"
"github.com/pkg/errors"
)
type localRepositoryGenerator struct{}
func (l *localRepositoryGenerator) Initialize(path string, db pkg.PackageDatabase) ([]*artifact.PackageArtifact, error) {
return buildPackageIndex(path, db)
}
func buildPackageIndex(path string, db pkg.PackageDatabase) ([]*artifact.PackageArtifact, error) {
var art []*artifact.PackageArtifact
var ff = func(currentpath string, info os.FileInfo, err error) error {
if err != nil {
Debug("Failed walking", err.Error())
return err
}
if !strings.HasSuffix(info.Name(), ".metadata.yaml") {
return nil // Skip with no errors
}
dat, err := ioutil.ReadFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
a, err := artifact.NewPackageArtifactFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// We want to include packages that are ONLY referenced in the tree.
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(a.CompileSpec.GetPackage()); notfound != nil {
Debug(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
a.CompileSpec.GetPackage().HumanReadableString()))
return nil
}
art = append(art, a)
return nil
}
err := filepath.Walk(path, ff)
if err != nil {
return nil, err
}
return art, nil
}
// Generate creates a Local luet repository
func (*localRepositoryGenerator) Generate(r *LuetSystemRepository, dst string, resetRevision bool) error {
err := os.MkdirAll(dst, os.ModePerm)
if err != nil {
return err
}
r.LastUpdate = strconv.FormatInt(time.Now().Unix(), 10)
repospec := filepath.Join(dst, REPOSITORY_SPECFILE)
// Increment the internal revision version by reading the one which is already available (if any)
if err := r.BumpRevision(repospec, resetRevision); err != nil {
return err
}
Info(fmt.Sprintf(
"Repository %s: creating revision %d and last update %s...",
r.Name, r.Revision, r.LastUpdate,
))
bus.Manager.Publish(bus.EventRepositoryPreBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: dst,
})
if _, err := r.AddTree(r.GetTree(), dst, REPOFILE_TREE_KEY, NewDefaultTreeRepositoryFile()); err != nil {
return errors.Wrap(err, "error met while adding runtime tree to repository")
}
if _, err := r.AddTree(r.BuildTree, dst, REPOFILE_COMPILER_TREE_KEY, NewDefaultCompilerTreeRepositoryFile()); err != nil {
return errors.Wrap(err, "error met while adding compiler tree to repository")
}
if _, err := r.AddMetadata(repospec, dst); err != nil {
return errors.Wrap(err, "failed adding Metadata file to repository")
}
bus.Manager.Publish(bus.EventRepositoryPostBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: dst,
})
return nil
}
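Both repository generators start from the same indexing pattern: walk the output directory, pick up every *.metadata.yaml, decode it into a PackageArtifact, and keep only the artifacts whose package is still referenced in the tree. Stripped of the luet-specific types, the walk reduces to roughly the following (standard library only; the YAML decoding and the database lookup are left out):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// collectMetadataFiles walks root and returns every *.metadata.yaml it finds,
// mirroring the filepath.Walk callback used by buildPackageIndex.
func collectMetadataFiles(root string) ([]string, error) {
	var found []string
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err // buildPackageIndex logs and propagates walk errors
		}
		if info.IsDir() || !strings.HasSuffix(info.Name(), ".metadata.yaml") {
			return nil // skip anything that is not a package metadata file
		}
		found = append(found, path)
		return nil
	})
	return found, err
}

func main() {
	files, err := collectMetadataFiles(".")
	if err != nil {
		fmt.Println("walk failed:", err)
		return
	}
	for _, f := range files {
		fmt.Println(f)
	}
}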


@@ -1,4 +1,4 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Copyright © 2019 Ettore Di Giacinto <mudler@sabayon.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
@@ -26,18 +26,20 @@ import (
"github.com/mudler/luet/pkg/compiler"
backend "github.com/mudler/luet/pkg/compiler/backend"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/installer"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func dockerStubRepo(tmpdir, tree, image string, push, force bool) (installer.Repository, error) {
func dockerStubRepo(tmpdir, tree, image string, push, force bool) (*LuetSystemRepository, error) {
return GenerateRepository(
"test",
"description",
@@ -46,7 +48,7 @@ func dockerStubRepo(tmpdir, tree, image string, push, force bool) (installer.Rep
1,
tmpdir,
[]string{tree},
pkg.NewInMemoryDatabase(false), backend.NewSimpleDockerBackend(), image, push, force)
pkg.NewInMemoryDatabase(false), backend.NewSimpleDockerBackend(), image, push, force, false, nil)
}
var _ = Describe("Repository", func() {
@@ -64,7 +66,7 @@ var _ = Describe("Repository", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -79,38 +81,37 @@ var _ = Describe("Repository", func() {
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
})
It("Generate repository metadata of files ONLY referenced in a tree", func() {
@@ -132,11 +133,11 @@ var _ = Describe("Repository", func() {
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase())
spec2, err := compiler2.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -153,46 +154,44 @@ var _ = Describe("Repository", func() {
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
compiler2.SetConcurrency(1)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
artifact2, err := compiler2.Compile(false, spec2)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact2.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact2.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact2.Path)).To(BeTrue())
Expect(helpers.Untar(artifact2.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
// We check now that the artifact not referenced in the tree
// (spec2) is not indexed in the repository
@@ -254,7 +253,7 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := localcompiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -266,22 +265,21 @@ urls:
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
localcompiler.SetConcurrency(1)
artifact, err := localcompiler.Compile(false, spec)
a, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/buildable", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())
@@ -298,11 +296,11 @@ urls:
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
Expect(fileHelper.Read(f)).To(ContainSubstring("name: test"))
a, err := c.DownloadArtifact(&compiler.PackageArtifact{
a, err = c.DownloadArtifact(&artifact.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "b",
Category: "test",
@@ -313,7 +311,7 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())
Expect(helpers.Read(filepath.Join(extracted, "test6"))).To(Equal("artifact6\n"))
Expect(fileHelper.Read(filepath.Join(extracted, "test6"))).To(Equal("artifact6\n"))
})
It("generates images of virtual packages", func() {
@@ -329,7 +327,7 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(5))
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions(), solver.Options{Type: solver.SingleCoreSimple})
localcompiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase())
spec, err := localcompiler.FromPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.99"})
Expect(err).ToNot(HaveOccurred())
@@ -341,19 +339,18 @@ urls:
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
localcompiler.SetConcurrency(1)
artifact, err := localcompiler.Compile(false, spec)
a, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())
repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/virtuals", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())
@@ -370,11 +367,11 @@ urls:
f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
Expect(fileHelper.Read(f)).To(ContainSubstring("name: test"))
a, err := c.DownloadArtifact(&compiler.PackageArtifact{
a, err = c.DownloadArtifact(&artifact.PackageArtifact{
Path: "test.tar",
CompileSpec: &compiler.LuetCompilationSpec{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "a",
Category: "test",
@@ -386,7 +383,7 @@ urls:
Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())
Expect(helpers.DirectoryIsEmpty(extracted)).To(BeFalse())
Expect(fileHelper.DirectoryIsEmpty(extracted)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(extracted, ".virtual"))
Expect(err).ToNot(HaveOccurred())
@@ -397,14 +394,14 @@ urls:
repos := Repositories{
&LuetSystemRepository{
Index: compiler.ArtifactIndex{
&compiler.PackageArtifact{
CompileSpec: &compiler.LuetCompilationSpec{
&artifact.PackageArtifact{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{},
},
Path: "bar",
Files: []string{"boo"},
},
&compiler.PackageArtifact{
&artifact.PackageArtifact{
Path: "d",
Files: []string{"baz"},
},
@@ -414,15 +411,15 @@ urls:
matches := repos.SearchPackages("bo", FileSearch)
Expect(len(matches)).To(Equal(1))
Expect(matches[0].Artifact.GetPath()).To(Equal("bar"))
Expect(matches[0].Artifact.Path).To(Equal("bar"))
})
It("Searches packages", func() {
repo := &LuetSystemRepository{
Index: compiler.ArtifactIndex{
&compiler.PackageArtifact{
&artifact.PackageArtifact{
Path: "foo",
CompileSpec: &compiler.LuetCompilationSpec{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "foo",
Category: "bar",
@@ -430,9 +427,9 @@ urls:
},
},
},
&compiler.PackageArtifact{
&artifact.PackageArtifact{
Path: "baz",
CompileSpec: &compiler.LuetCompilationSpec{
CompileSpec: &compilerspec.LuetCompilationSpec{
Package: &pkg.DefaultPackage{
Name: "foo",
Category: "baz",
@@ -449,7 +446,7 @@ urls:
Version: "1.0",
})
Expect(err).ToNot(HaveOccurred())
Expect(a.GetPath()).To(Equal("baz"))
Expect(a.Path).To(Equal("baz"))
a, err = repo.SearchArtefact(&pkg.DefaultPackage{
Name: "foo",
@@ -457,7 +454,7 @@ urls:
Version: "1.0",
})
Expect(err).ToNot(HaveOccurred())
Expect(a.GetPath()).To(Equal("foo"))
Expect(a.Path).To(Equal("foo"))
// Doesn't exist, so it must fail
_, err = repo.SearchArtefact(&pkg.DefaultPackage{


@@ -3,6 +3,7 @@ package installer
import (
"github.com/hashicorp/go-multierror"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
@@ -23,8 +24,8 @@ func (s *System) ExecuteFinalizers(packs []pkg.Package) error {
var errs error
executedFinalizer := map[string]bool{}
for _, p := range packs {
if helpers.Exists(p.Rel(tree.FinalizerFile)) {
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile), "")
if fileHelper.Exists(p.Rel(tree.FinalizerFile)) {
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile))
if err != nil {
Warning("Failed rendering finalizer for ", p.HumanReadableString(), err.Error())
errs = multierror.Append(errs, err)


@@ -26,7 +26,8 @@ import (
"strconv"
"strings"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/helpers/docker"
"github.com/mudler/luet/pkg/helpers/match"
version "github.com/mudler/luet/pkg/versioner"
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
@@ -46,7 +47,7 @@ type Package interface {
GetFingerPrint() string
GetPackageName() string
GetPackageImageName() string
ImageID() string
Requires([]*DefaultPackage) Package
Conflicts([]*DefaultPackage) Package
Revdeps(PackageDatabase) Packages
@@ -113,6 +114,14 @@ type Package interface {
GetBuildTimestamp() string
Clone() Package
GetMetadataFilePath() string
SetTreeDir(s string)
GetTreeDir() string
Mark() Package
JSON() ([]byte, error)
}
type Tree interface {
@@ -125,6 +134,19 @@ type Tree interface {
type Packages []Package
type DefaultPackages []*DefaultPackage
func (d DefaultPackages) Hash(salt string) string {
overallFp := ""
for _, c := range d {
overallFp = overallFp + c.HashFingerprint("join")
}
h := md5.New()
io.WriteString(h, fmt.Sprintf("%s-%s", overallFp, salt))
return fmt.Sprintf("%x", h.Sum(nil))
}
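Hash concatenates the per-package fingerprints (each salted with the fixed string "join") and digests the result together with the caller-supplied salt, so two package sets only collide when both the members and the salt match. The same shape over plain strings, with hypothetical fingerprints standing in for HashFingerprint:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
)

// hashSet reproduces the shape of DefaultPackages.Hash: concatenate the
// member fingerprints, then digest them together with a salt.
func hashSet(fingerprints []string, salt string) string {
	overall := ""
	for _, fp := range fingerprints {
		overall += fp
	}
	h := md5.New()
	io.WriteString(h, fmt.Sprintf("%s-%s", overall, salt))
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	a := hashSet([]string{"cat/a-1.0", "cat/b-1.1"}, "join")
	b := hashSet([]string{"cat/a-1.0", "cat/b-1.1"}, "other-salt")
	fmt.Println(a == b) // false: the salt is part of the digest
}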
// >> Unmarshallers
// DefaultPackageFromYaml decodes a package from yaml bytes
func DefaultPackageFromYaml(yml []byte) (DefaultPackage, error) {
@@ -210,6 +232,11 @@ func (t *DefaultPackage) JSON() ([]byte, error) {
return buffer.Bytes(), err
}
// GetMetadataFilePath returns the canonical name of an artifact metadata file
func (d *DefaultPackage) GetMetadataFilePath() string {
return d.GetFingerPrint() + ".metadata.yaml"
}
// DefaultPackage represent a standard package definition
type DefaultPackage struct {
ID int `storm:"id,increment" json:"id"` // primary key with auto increment
@@ -235,6 +262,8 @@ type DefaultPackage struct {
BuildTimestamp string `json:"buildtimestamp,omitempty"`
Labels map[string]string `json:"labels,omitempty"` // Affects YAML field names too.
TreeDir string `json:"treedir,omitempty"`
}
// State represent the package state
@@ -251,6 +280,12 @@ func NewPackage(name, version string, requires []*DefaultPackage, conflicts []*D
}
}
func (p *DefaultPackage) SetTreeDir(s string) {
p.TreeDir = s
}
func (p *DefaultPackage) GetTreeDir() string {
return p.TreeDir
}
func (p *DefaultPackage) String() string {
b, err := p.JSON()
if err != nil {
@@ -289,8 +324,8 @@ func (p *DefaultPackage) GetPackageName() string {
return fmt.Sprintf("%s-%s", p.Name, p.Category)
}
func (p *DefaultPackage) GetPackageImageName() string {
return fmt.Sprintf("%s-%s:%s", p.Name, p.Category, p.Version)
func (p *DefaultPackage) ImageID() string {
return docker.StripInvalidStringsFromImage(p.GetFingerPrint())
}
// GetBuildTimestamp returns the package build timestamp
@@ -325,19 +360,19 @@ func (p *DefaultPackage) IsHidden() bool {
}
func (p *DefaultPackage) HasLabel(label string) bool {
return helpers.MapHasKey(&p.Labels, label)
return match.MapHasKey(&p.Labels, label)
}
func (p *DefaultPackage) MatchLabel(r *regexp.Regexp) bool {
return helpers.MapMatchRegex(&p.Labels, r)
return match.MapMatchRegex(&p.Labels, r)
}
func (p *DefaultPackage) HasAnnotation(label string) bool {
return helpers.MapHasKey(&p.Annotations, label)
return match.MapHasKey(&p.Annotations, label)
}
func (p *DefaultPackage) MatchAnnotation(r *regexp.Regexp) bool {
return helpers.MapMatchRegex(&p.Annotations, r)
return match.MapMatchRegex(&p.Annotations, r)
}
// AddUse adds a use to a package
@@ -473,6 +508,12 @@ func (p *DefaultPackage) Matches(m Package) bool {
return false
}
func (p *DefaultPackage) Mark() Package {
marked := p.Clone()
marked.SetName("@@" + marked.GetName())
return marked
}
func (p *DefaultPackage) Expand(definitiondb PackageDatabase) (Packages, error) {
var versionsInWorld Packages
@@ -626,6 +667,16 @@ func (set Packages) Best(v version.Versioner) Package {
return versionsMap[sorted[len(sorted)-1]]
}
func (set Packages) Find(packageName string) (Package, error) {
for _, p := range set {
if p.GetPackageName() == packageName {
return p, nil
}
}
return &DefaultPackage{}, errors.New("package not found")
}
func (set Packages) Unique() Packages {
var result Packages
uniq := make(map[string]Package)


@@ -66,6 +66,17 @@ var _ = Describe("Package", func() {
})
})
Context("ImageID", func() {
It("Returns a correct ImageID escaping unsupported chars", func() {
p := NewPackage("A", "1.0+p1", []*DefaultPackage{}, []*DefaultPackage{})
Expect(p.ImageID()).To(Equal("A--1.0-p1"))
})
It("Returns a correct ImageID", func() {
p := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
Expect(p.ImageID()).To(Equal("A--1.0"))
})
})
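These two cases pin down the new ImageID behavior: the package fingerprint (rendered as A--1.0 here because the category is empty) becomes a usable image tag, with characters Docker rejects, such as +, replaced by -. The real sanitization lives in helpers/docker.StripInvalidStringsFromImage; a minimal stand-in covering only what these tests exercise might look like:

package main

import (
	"fmt"
	"strings"
)

// sanitizeTag is a hypothetical stand-in for the '+' handling exercised by
// the ImageID tests; the real StripInvalidStringsFromImage may cover more
// characters than this.
func sanitizeTag(fingerprint string) string {
	return strings.ReplaceAll(fingerprint, "+", "-")
}

func main() {
	fmt.Println(sanitizeTag("A--1.0+p1")) // A--1.0-p1, as the test expects
	fmt.Println(sanitizeTag("A--1.0"))    // unchanged
}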
Context("Find label on packages", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.AddLabel("project1", "test1")


@@ -260,24 +260,42 @@ func (a PackagesAssertions) TrueLen() int {
// and checks it's not the only one. if it's unique it marks it specially - so the hash
// which is generated is unique for the selected package
func (assertions PackagesAssertions) HashFrom(p pkg.Package) string {
return assertions.SaltedHashFrom(p, map[string]string{})
}
func (assertions PackagesAssertions) AssertionHash() string {
return assertions.SaltedAssertionHash(map[string]string{})
}
func (assertions PackagesAssertions) SaltedHashFrom(p pkg.Package, salts map[string]string) string {
var assertionhash string
// When we don't have any solution to hash for, we need to generate a UUID by ourselves
latestsolution := assertions.Drop(p)
if latestsolution.TrueLen() == 0 {
assertionhash = assertions.Mark(p).AssertionHash()
// Preserve the hash if supplied of marked packages
marked := p.Mark()
if markedHash, exists := salts[p.GetFingerPrint()]; exists {
salts[marked.GetFingerPrint()] = markedHash
}
assertionhash = assertions.Mark(p).SaltedAssertionHash(salts)
} else {
assertionhash = latestsolution.AssertionHash()
assertionhash = latestsolution.SaltedAssertionHash(salts)
}
return assertionhash
}
func (assertions PackagesAssertions) AssertionHash() string {
func (assertions PackagesAssertions) SaltedAssertionHash(salts map[string]string) string {
var fingerprint string
for _, assertion := range assertions { // Note: Always order them first!
if assertion.Value { // Take into account only dependencies installed (get fingerprint of subgraph)
fingerprint += assertion.ToString() + "\n"
salt, exists := salts[assertion.Package.GetFingerPrint()]
if exists {
fingerprint += assertion.ToString() + salt + "\n"
} else {
fingerprint += assertion.ToString() + "\n"
}
}
}
hash := sha256.Sum256([]byte(fingerprint))
@@ -316,8 +334,7 @@ func (assertions PackagesAssertions) Mark(p pkg.Package) PackagesAssertions {
for _, a := range assertions {
if a.Package.Matches(p) {
marked := a.Package.Clone()
marked.SetName("@@" + marked.GetName())
marked := a.Package.Mark()
a = PackageAssert{Package: marked.(*pkg.DefaultPackage), Value: a.Value, Hash: a.Hash}
}
ass = append(ass, a)
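SaltedAssertionHash degrades to the old AssertionHash when no salts are supplied: every installed assertion contributes its string form to the fingerprint, a per-package salt, when present, is appended to that line, and the whole fingerprint is then SHA-256 hashed. A standalone sketch of that accumulation (hypothetical assertion values; luet's PackageAssert type is elided):

package main

import (
	"crypto/sha256"
	"fmt"
)

// assertion is a stripped-down stand-in for a solver assertion: a stable
// string form, the package fingerprint, and whether it is installed.
type assertion struct {
	text        string
	fingerprint string
	installed   bool
}

// saltedHash mirrors SaltedAssertionHash: only installed assertions count,
// and a per-package salt, if present, is appended to that assertion's line.
func saltedHash(assertions []assertion, salts map[string]string) string {
	fingerprint := ""
	for _, a := range assertions { // assumed already ordered, as the comment in the diff requires
		if !a.installed {
			continue
		}
		if salt, ok := salts[a.fingerprint]; ok {
			fingerprint += a.text + salt + "\n"
		} else {
			fingerprint += a.text + "\n"
		}
	}
	return fmt.Sprintf("%x", sha256.Sum256([]byte(fingerprint)))
}

func main() {
	as := []assertion{{text: "a/b-1.0: true", fingerprint: "a/b-1.0", installed: true}}
	fmt.Println(saltedHash(as, nil) == saltedHash(as, map[string]string{"a/b-1.0": "foo"})) // false
}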


@@ -382,6 +382,9 @@ var _ = Describe("Decoder", func() {
Expect(solution.HashFrom(X)).ToNot(Equal(solution2.HashFrom(F)))
Expect(solution3.HashFrom(D)).To(Equal(solution.HashFrom(X)))
Expect(solution3.SaltedHashFrom(D, map[string]string{D.GetFingerPrint(): "foo"})).ToNot(Equal(solution3.HashFrom(D)))
Expect(solution4.SaltedHashFrom(Y, map[string]string{X.GetFingerPrint(): "foo"})).ToNot(Equal(solution4.HashFrom(Y)))
Expect(empty.AssertionHash()).ToNot(Equal(solution3.HashFrom(D)))
Expect(empty.AssertionHash()).ToNot(Equal(solution2.HashFrom(F)))


@@ -26,6 +26,7 @@ import (
"path/filepath"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/pkg/errors"
)
@@ -57,6 +58,17 @@ type CompilerRecipe struct {
Recipe
}
// Save copies the source trees 1:1, as they contain the specs
// and the build context required for reproducible builds
func (r *CompilerRecipe) Save(path string) error {
for _, p := range r.SourcePath {
if err := fileHelper.CopyDir(p, filepath.Join(path, filepath.Base(p))); err != nil {
return errors.Wrap(err, "while copying source tree")
}
}
return nil
}
func (r *CompilerRecipe) Load(path string) error {
r.SourcePath = append(r.SourcePath, path)
@@ -87,12 +99,13 @@ func (r *CompilerRecipe) Load(path string) error {
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
pack.SetTreeDir(path)
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
if fileHelper.Exists(compileDefPath) {
dat, err := helpers.RenderFiles(compileDefPath, currentpath, "")
dat, err := helpers.RenderFiles(compileDefPath, currentpath)
if err != nil {
return errors.Wrap(err,
"Error templating file "+CompilerDefinitionFile+" from "+
@@ -115,22 +128,29 @@ func (r *CompilerRecipe) Load(path string) error {
}
case CollectionFile:
dat, err := ioutil.ReadFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
packs, err := pkg.DefaultPackagesFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
packsRaw, err := pkg.GetRawPackages(dat)
if err != nil {
return errors.Wrap(err, "Error reading raw packages from "+currentpath)
}
for _, pack := range packs {
pack.SetPath(filepath.Dir(currentpath))
pack.SetTreeDir(path)
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
if fileHelper.Exists(compileDefPath) {
raw := packsRaw.Find(pack.GetName(), pack.GetCategory(), pack.GetVersion())
buildyaml, err := ioutil.ReadFile(compileDefPath)
@@ -159,9 +179,7 @@ func (r *CompilerRecipe) Load(path string) error {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
}
}
return nil
}


@@ -26,8 +26,9 @@ import (
"os"
"path/filepath"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/pkg/errors"
)
@@ -61,8 +62,8 @@ func (r *InstallerRecipe) Save(path string) error {
}
// Instead of rdeps, have a different tree for build deps.
finalizerPath := p.Rel(FinalizerFile)
if helpers.Exists(finalizerPath) { // copy finalizer file from the source tree
helpers.CopyFile(finalizerPath, filepath.Join(dir, FinalizerFile))
if fileHelper.Exists(finalizerPath) { // copy finalizer file from the source tree
fileHelper.CopyFile(finalizerPath, filepath.Join(dir, FinalizerFile))
}
}
@@ -71,7 +72,7 @@ func (r *InstallerRecipe) Save(path string) error {
func (r *InstallerRecipe) Load(path string) error {
if !helpers.Exists(path) {
if !fileHelper.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exit.", path,
))


@@ -26,9 +26,10 @@ import (
"os"
"path/filepath"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
spectooling "github.com/mudler/luet/pkg/spectooling"
"github.com/pkg/errors"
)
@@ -77,7 +78,7 @@ func (r *Recipe) Load(path string) error {
// if err != nil {
// return err
// }
if !helpers.Exists(path) {
if !fileHelper.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exit.", path,
))

tests/fixtures/copy/c/a/build.yaml (new file)

@@ -0,0 +1,17 @@
image: "alpine"
copy:
- package:
name: "a"
category: "test"
version: ">=0"
source: /test3
destination: /test3
- image: "busybox"
source: /bin/busybox
destination: /busybox
steps:
- mkdir /bina
- cp /test3 /result
- cp -rf /busybox /bina/busybox


@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.2"


@@ -0,0 +1,7 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4


@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.2"


@@ -0,0 +1,13 @@
requires:
- category: "test"
name: "a"
version: ">=0"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /newc
- echo artifact6 > /newnewc
- chmod +x generate.sh
- ./generate.sh


@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.1"


@@ -0,0 +1 @@
echo generated > /sonewc

tests/fixtures/docker_repo/c/build.yaml (new file)

@@ -0,0 +1,11 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo c > /c
- echo c > /cd
requires:
- category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.0"


@@ -0,0 +1,9 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /test5
- echo artifact6 > /test6
- chmod +x generate.sh
- ./generate.sh


@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1 @@
echo generated > /artifact42


@@ -0,0 +1,10 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4
requires:
- category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1,8 @@
category: "test"
name: "a"
version: "1.0"
requires:
- category: "test"
name: "b"
version: "1.0"

tests/fixtures/docker_repo/d/build.yaml (new file)

@@ -0,0 +1,11 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo s > /d
- echo dd > /dd
requires:
- category: "test"
name: "c"
version: "1.0"


@@ -0,0 +1,3 @@
category: "test"
name: "d"
version: "1.0"

Some files were not shown because too many files have changed in this diff.