Compare commits


214 Commits
0.6.1 ... 0.7.6

Author SHA1 Message Date
Ettore Di Giacinto
1b6ffac7bd Tag 0.7.6 2020-05-02 14:50:29 +02:00
Ettore Di Giacinto
faef3d093a Add test fixtures 2020-05-02 14:03:34 +02:00
Ettore Di Giacinto
d4c25d74f5 Remove focus from test 2020-05-02 13:27:15 +02:00
Ettore Di Giacinto
2ed9781c88 Don't try to match against repo already installed packages 2020-05-02 13:24:46 +02:00
Ettore Di Giacinto
9d6d6bc0c8 Add more upgrade and reclaim tests scenarios 2020-05-02 12:18:19 +02:00
Ettore Di Giacinto
f8b2837741 Enhance reclaim output 2020-05-02 12:17:54 +02:00
geaaru
878e6d7b9c Merge pull request #103 from mudler/tmpdir-cleanup
Tmpdir cleanup
2020-05-02 09:20:01 +02:00
Daniele Rondina
c5b41946dc Add integration test for tmpdir cleanup 2020-05-02 08:43:25 +02:00
Daniele Rondina
a89c0af2f8 config/config_test: Fix typo 2020-05-01 15:34:17 +02:00
Daniele Rondina
60d9017952 CleanupTmpDir() is not Fatal 2020-05-01 15:27:19 +02:00
Daniele Rondina
1f99fde1c5 Add config test suite 2020-05-01 15:26:53 +02:00
Ettore Di Giacinto
4b63c9eaf9 Add test to be sure we don't index packages not in the tree 2020-05-01 11:48:05 +02:00
Ettore Di Giacinto
286d0fba2c Fix typo 2020-05-01 11:07:14 +02:00
Ettore Di Giacinto
624518bf77 Don't add packages to the repository that aren't referenced by the tree 2020-05-01 10:52:40 +02:00
Daniele Rondina
51a4037b1b config: Initialize luet tmp basedir if it doesn't exist 2020-05-01 08:18:18 +02:00
Ettore Di Giacinto
b13828c883 Add development version 2020-04-30 22:50:37 +02:00
Ettore Di Giacinto
886cbd0036 Tag 0.7.5 2020-04-30 22:50:12 +02:00
Ettore Di Giacinto
b91288153a Make tree validation cmd concurrent 2020-04-30 21:48:46 +02:00
Ettore Di Giacinto
9cb290b484 Improve database errors 2020-04-30 20:44:34 +02:00
Daniele Rondina
11944873ea Integrate tmpdir_base params and tmpdirs cleanup 2020-04-30 20:29:28 +02:00
Ettore Di Giacinto
322ac99f17 Annotate package runtime definition when reclaiming
Previously this was an issue, as we were copying the buildspec instead of the runtime definition
2020-04-30 18:56:50 +02:00
Ettore Di Giacinto
6acc5fc97e Make BuildFormula support dependency cycles 2020-04-30 18:56:35 +02:00
Ettore Di Giacinto
6a4557a3b3 Make RequireContains support dependency cycles 2020-04-30 18:56:06 +02:00
Ettore Di Giacinto
fb3c568051 Add development version 2020-04-24 19:46:35 +02:00
Ettore Di Giacinto
14bc26ee22 Tag 0.7.4 2020-04-24 19:46:14 +02:00
Ettore Di Giacinto
18ccdbd6a9 Add --revdep to luet pkglist
Include also the path in the results
2020-04-24 19:05:47 +02:00
Ettore Di Giacinto
7d960b733d Add --revdep to luet search
Fixes #52
2020-04-24 19:05:24 +02:00
Ettore Di Giacinto
919b2c3cfc Don't print misleading messages to the user 2020-04-24 19:04:21 +02:00
Ettore Di Giacinto
a457c53824 Fixups to ExpandedRevdeps 2020-04-24 00:15:18 +02:00
Ettore Di Giacinto
8305d01e76 Walk requires in ExpandedRevdeps
This way we annotate the visited packages and avoid cycles that can be
generated by revdeps
2020-04-23 23:44:42 +02:00
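
A minimal sketch of the visited-map walk described above (type and function names are illustrative, not luet's actual API):

package main

import "fmt"

// Pkg is a simplified stand-in for a package that knows its reverse dependencies.
type Pkg struct {
	Name    string
	Revdeps []*Pkg
}

// expandedRevdeps walks reverse dependencies recursively, recording visited
// packages so that cycles introduced by revdeps cannot loop forever.
func expandedRevdeps(p *Pkg, visited map[string]bool) []*Pkg {
	var out []*Pkg
	for _, r := range p.Revdeps {
		if visited[r.Name] {
			continue
		}
		visited[r.Name] = true
		out = append(out, r)
		out = append(out, expandedRevdeps(r, visited)...)
	}
	return out
}

func main() {
	a := &Pkg{Name: "A"}
	b := &Pkg{Name: "B", Revdeps: []*Pkg{a}}
	a.Revdeps = []*Pkg{b} // artificial cycle: A <-> B
	for _, p := range expandedRevdeps(a, map[string]bool{}) {
		fmt.Println(p.Name) // prints B, A and terminates despite the cycle
	}
}
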
Ettore Di Giacinto
ec32677dc1 Merge branch 'master' into develop 2020-04-22 22:17:49 +02:00
Ettore Di Giacinto
60619c36a3 Merge pull request #95 from mudler/develop (#96) 2020-04-22 22:16:58 +02:00
Ettore Di Giacinto
141f1bf773 Merge pull request #95 from mudler/develop
Merge develop
2020-04-22 22:15:34 +02:00
Ettore Di Giacinto
9321a310a5 Support mount with target folder in box exec
Also bind-mount before pivot_root; this fixes bind mounting in general
2020-04-21 17:30:27 +02:00
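
For context, the ordering the commit refers to works roughly like this: bind mounts of host paths are set up first, while they are still reachable, and only then does pivot_root switch the root filesystem. A Linux-only sketch, assuming a prepared rootfs, root privileges and a fresh mount namespace; paths are placeholders, not luet's actual box code:

package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	rootfs := "/tmp/box-rootfs" // placeholder rootfs prepared beforehand
	hostDir := "/srv/data"      // placeholder host path to expose
	target := rootfs + "/data"  // where it appears inside the box

	// pivot_root requires the new root to be a mount point: bind it onto itself.
	if err := syscall.Mount(rootfs, rootfs, "", syscall.MS_BIND|syscall.MS_REC, ""); err != nil {
		log.Fatal(err)
	}
	// Bind mounts of host paths must happen now, before pivot_root,
	// while the host paths are still reachable.
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := syscall.Mount(hostDir, target, "", syscall.MS_BIND, ""); err != nil {
		log.Fatal(err)
	}

	putOld := rootfs + "/.pivot_old"
	if err := os.MkdirAll(putOld, 0o700); err != nil {
		log.Fatal(err)
	}
	if err := syscall.PivotRoot(rootfs, putOld); err != nil {
		log.Fatal(err)
	}
	if err := os.Chdir("/"); err != nil {
		log.Fatal(err)
	}
	// From here the process sees rootfs as "/", with /data bind-mounted from the host.
}
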
Ettore Di Giacinto
4cee27da7f Fixup box CLI params and consume new box options 2020-04-20 23:03:03 +02:00
Ettore Di Giacinto
98876c8f20 Add support for env and hostmounts in box 2020-04-20 23:01:49 +02:00
Ettore Di Giacinto
4c2d38be59 Add package ExpandedRevdeps 2020-04-19 11:55:13 +02:00
Ettore Di Giacinto
20aa7c89f8 Use different result structs in pkglist 2020-04-19 11:37:23 +02:00
Ettore Di Giacinto
937609a9f4 Add machine readable output to pkglist 2020-04-19 11:24:41 +02:00
Ettore Di Giacinto
a3f0d848c9 Add builtime search to tree pkglist 2020-04-19 10:53:05 +02:00
Ettore Di Giacinto
8e91c255a3 Make FindPackageMatch match packages HumanReadableString 2020-04-19 10:47:55 +02:00
Ettore Di Giacinto
62381ed46d Structure search machine readable output 2020-04-19 10:31:07 +02:00
Ettore Di Giacinto
1171987ed3 Adapt integration test to search output change 2020-04-19 10:24:39 +02:00
Ettore Di Giacinto
ac871cb0a3 Add --output option to search
This way the output can be parsed by scripts more easily.
It also disables the spinner based on the log level

Fixes #92
2020-04-18 23:22:00 +02:00
Ettore Di Giacinto
82cd0e17b6 Enhance search output 2020-04-18 22:51:27 +02:00
Ettore Di Giacinto
c972795a30 Tweak upgrade integration test
This way we also test that the user configuration takes precedence
over the one advertised remotely
2020-04-18 17:35:58 +02:00
Ettore Di Giacinto
4d466d0330 Sync the repository data with user configured data
Fixes a regression: this way we respect the user configuration for
a consumed repository (priority and repository type), regardless of
the one advertised remotely.

Also add an informative message about priority and type.
2020-04-18 17:33:46 +02:00
Ettore Di Giacinto
84407e5ae7 Add SetPriority to repository 2020-04-18 17:33:34 +02:00
Ettore Di Giacinto
845797a155 Merge pull request #93 from mudler/develop
Merge develop
2020-04-18 14:45:20 +02:00
Ettore Di Giacinto
83e19359a9 Update bbolt to 1.3.4
Includes fixes for Go 1.14

See: https://github.com/etcd-io/bbolt/issues/187
2020-04-18 13:58:14 +02:00
Ettore Di Giacinto
d77f875a8a Bump go version in travis 2020-04-18 12:41:23 +02:00
Ettore Di Giacinto
3963fb64e5 Update vendor 2020-04-18 11:49:28 +02:00
Ettore Di Giacinto
1d5ce53443 Add luet box command
It adds only the 'exec' subcommand to spawn processes in custom boxes

Relates to #45
2020-04-18 11:43:14 +02:00
Ettore Di Giacinto
a14f0abb5c Update vendor 2020-04-18 11:42:34 +02:00
Ettore Di Giacinto
64bac0823c Consume our moby fork
It handles #91
2020-04-18 11:41:34 +02:00
Ettore Di Giacinto
ee0fe1a86a Allow finalizers to specify the entrypoint command
This way, finalizers in strict environments can override the default
shell used to run commands.

The shell keyword is a list, as it needs to contain the full command plus
its arguments.
2020-04-14 17:46:39 +02:00
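
How such a list-valued shell keyword could be consumed is sketched below; this is an assumption for illustration, not the actual finalizer code: the first element is the binary, the rest are its arguments, and the finalizer step is appended last.

package main

import (
	"fmt"
	"os/exec"
)

// runFinalizerStep turns a list-valued "shell" keyword into a command
// invocation. Purely illustrative; the real finalizer lives in luet.
func runFinalizerStep(shell []string, step string) *exec.Cmd {
	args := append(append([]string{}, shell[1:]...), step)
	return exec.Command(shell[0], args...)
}

func main() {
	cmd := runFinalizerStep([]string{"/bin/sh", "-c"}, "echo configured")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
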
Ettore Di Giacinto
333b9cc023 Add reclaim to CLI
Relates to #86
2020-04-13 19:33:30 +02:00
Ettore Di Giacinto
538ba8f5df Add luet reclaim
Reclaim allows migrating between different system layouts. It is still an
experimental feature and might be revisited in the future.

This change:

- Adds Reclaim(system) to Installer
- Adds unit tests

Relates to #86
2020-04-13 19:32:51 +02:00
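
Conceptually, reclaim scans the target root for files belonging to packages available in the repositories and registers the matching packages in the system database. A hedged sketch with made-up types, not the real Installer.Reclaim implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// candidate pairs a package identifier with the files it is known to ship.
type candidate struct {
	Name  string
	Files []string
}

// reclaim records a package as installed when every file it ships already
// exists under the target rootfs.
func reclaim(target string, available []candidate, db map[string]bool) {
	for _, c := range available {
		present := len(c.Files) > 0
		for _, f := range c.Files {
			if _, err := os.Stat(filepath.Join(target, f)); err != nil {
				present = false
				break
			}
		}
		if present {
			db[c.Name] = true
		}
	}
}

func main() {
	db := map[string]bool{}
	reclaim("/", []candidate{{Name: "system/hostname-cfg", Files: []string{"etc/hostname"}}}, db)
	fmt.Println(db)
}
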
Ettore Di Giacinto
eb56956c65 Add Touch file helper 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
32cf132a0c Drop downloadOnly bool from Installer cli options 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
4e2d42e397 Drop downloadOnly bool from installer.Install 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
c2eed1d999 Add quay.io badge 2020-04-13 18:11:02 +02:00
Daniele Rondina
1b8176777a cmd/search: Add filter to result 2020-04-13 15:50:45 +02:00
Ettore Di Giacinto
528f481a7b Annotate package files in metadata
Fixes #87
2020-04-13 10:52:41 +02:00
Ettore Di Giacinto
2f3eabc3ca Annotate git hash and time in displayed version 2020-04-12 14:37:49 +02:00
Ettore Di Giacinto
579a4f20fc Add docker fixture for integration tests 2020-04-11 20:12:43 +02:00
Ettore Di Giacinto
dd45a46ed0 Add docker from scratch integration test 2020-04-11 20:07:10 +02:00
Ettore Di Giacinto
f987ee9fa0 Handle errors from os/user
We use it to generate defaults; if an error occurs, we fall back to
preserving permissions
2020-04-11 20:05:56 +02:00
Ettore Di Giacinto
0fe2a8c950 Add development version 2020-04-11 18:30:11 +02:00
Ettore Di Giacinto
9704992c43 Tag 0.7.3 2020-04-11 18:29:46 +02:00
Ettore Di Giacinto
c2beab64cb Merge pull request #84 from mudler/develop
Merge develop
2020-04-11 18:24:02 +02:00
Ettore Di Giacinto
d8919f7250 Check if dependencies are selectors in validate 2020-04-10 20:00:37 +02:00
Daniele Rondina
42c380029c cmd/tree/validate: Fix order analysis 2020-04-10 17:41:27 +02:00
Daniele Rondina
69a82a1ca5 simpledocker: Use debug option for print container-diff results 2020-04-10 09:32:12 +02:00
Daniele Rondina
02c33896d5 simpledocker: Move warning before return 2020-04-10 09:31:08 +02:00
Daniele Rondina
c2e9176ab2 solver.Order now return error 2020-04-09 17:07:57 +02:00
Daniele Rondina
726a749b4b cmd/tree/validate: Integrate order of the solution 2020-04-09 15:58:05 +02:00
Ettore Di Giacinto
0d2668e452 Print warning if container-diffs return errors to stderr 2020-04-08 18:31:40 +02:00
Ettore Di Giacinto
77ba4193aa Add ValidateSelector to versioner interface and consume it
We can refactor further by dropping the package methods, as we can now
consume a versioner in all the places that require it
2020-04-05 15:52:16 +02:00
Ettore Di Giacinto
5a5e7f1dfa Move version logic inside versioner
WIP: it still needs to be moved under the interface implementation 2020-04-05 11:16:14 +02:00
2020-04-05 11:16:14 +02:00
Ettore Di Giacinto
cc0999b4c4 Merge pull request #83 from auto-maintainers/branch-be6698fa706d6ccfc3734b805b212f75175fd557
Update dependencies
2020-04-05 09:00:51 +02:00
MarvinHatesOceans
be6698fa70 Update dependencies 2020-04-04 23:56:14 +00:00
Ettore Di Giacinto
6540dcc99d Merge pull request #82 from mudler/packages_type
Add Packages type and Versioner interface
2020-04-04 17:28:49 +02:00
Ettore Di Giacinto
cf8091a7fd Re-enable debian versioning test, add more cases 2020-04-04 16:49:41 +02:00
Ettore Di Giacinto
3caaa01eb8 Add semver detection to versioner
This way we use semver comparison only when all versions are semver-compliant;
otherwise we fall back to Debian sorting.
2020-04-04 16:48:48 +02:00
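
A hedged sketch of that fallback, with a simplified semver check and a lexical stand-in for the Debian sorter (the real logic lives in luet's versioner and the pkgs-checker parsers; every name below is made up):

package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

var semverRe = regexp.MustCompile(`^(\d+)\.(\d+)\.(\d+)(?:[-+].*)?$`)

func isSemver(v string) bool { return semverRe.MatchString(v) }

// semverLess compares major.minor.patch numerically, ignoring pre-release
// and build metadata to keep the sketch short.
func semverLess(a, b string) bool {
	ma, mb := semverRe.FindStringSubmatch(a), semverRe.FindStringSubmatch(b)
	for i := 1; i <= 3; i++ {
		x, _ := strconv.Atoi(ma[i])
		y, _ := strconv.Atoi(mb[i])
		if x != y {
			return x < y
		}
	}
	return false
}

// debianLess stands in for the Debian-style fallback sorter; a plain lexical
// comparison keeps the example self-contained.
func debianLess(a, b string) bool { return a < b }

// sortVersions compares as semver only when both versions are compliant,
// otherwise it falls back to the Debian-style ordering.
func sortVersions(vs []string) {
	sort.Slice(vs, func(i, j int) bool {
		if isSemver(vs[i]) && isSemver(vs[j]) {
			return semverLess(vs[i], vs[j])
		}
		return debianLess(vs[i], vs[j])
	})
}

func main() {
	vs := []string{"1.10.0", "1.2.0", "0.20200401_1"}
	sortVersions(vs)
	fmt.Println(vs)
}
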
Ettore Di Giacinto
4eae0a851c Add versioner unit tests 2020-04-04 16:33:08 +02:00
Ettore Di Giacinto
626419f955 Fix original version mapping in versioner 2020-04-04 16:23:43 +02:00
Ettore Di Giacinto
5bf60deffc Update vendor 2020-04-04 15:35:45 +02:00
Ettore Di Giacinto
84625be9ac Adapt package.Best to take a Versioner interface 2020-04-04 15:33:14 +02:00
Ettore Di Giacinto
aaa8d8b7d6 Update gomod 2020-04-04 15:32:13 +02:00
Ettore Di Giacinto
04f9a88a5f Add versioner interface to wrap versioning operations
This scopes responsibilities and enables switching to different
ordering algorithms
2020-04-04 15:29:20 +02:00
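
A minimal sketch of what wrapping versioning behind an interface enables (method names here are hypothetical, not the actual luet API):

package main

import (
	"fmt"
	"sort"
)

// Versioner wraps version handling so the ordering algorithm can be swapped
// without touching the code that consumes it.
type Versioner interface {
	Validate(version string) error
	Sort(versions []string) []string
}

// lexicalVersioner is a trivial implementation used only for illustration.
type lexicalVersioner struct{}

func (lexicalVersioner) Validate(v string) error { return nil }
func (lexicalVersioner) Sort(vs []string) []string {
	out := append([]string(nil), vs...)
	sort.Strings(out)
	return out
}

// best returns the highest version according to whichever Versioner is plugged in.
func best(v Versioner, versions []string) string {
	s := v.Sort(versions)
	return s[len(s)-1]
}

func main() {
	fmt.Println(best(lexicalVersioner{}, []string{"0.6.1", "0.7.6", "0.7.5"}))
}
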
Ettore Di Giacinto
9d58b0a0cc Adapt tests to refactor 2020-04-04 14:41:58 +02:00
Ettore Di Giacinto
5e31d940f0 Introduce Packages for []Package 2020-04-04 14:29:08 +02:00
Daniele Rondina
42d1ec5585 cmd/tree/bump: Add --to-stdout option 2020-04-04 13:42:25 +02:00
Daniele Rondina
07e78dd89b Update vendor github.com/Sabayon/pkgs-checker@076438c31739 2020-04-04 11:40:36 +02:00
geaaru
625a6ce773 Merge pull request #80 from mudler/versioning_integration
Versioning integration
2020-04-03 19:24:43 +02:00
Daniele Rondina
d6faa3335f tests/integration/08_versioning.sh: Add version with _ 2020-04-03 18:33:01 +02:00
Daniele Rondina
129ca8da55 Add version sanitized to Best() 2020-04-03 18:32:26 +02:00
Daniele Rondina
9e95d7d2a9 tests/integration/08_versioning.sh: Drop wrong assert 2020-04-03 15:24:56 +02:00
Daniele Rondina
e94674fca3 tests/integration/08_versioning.sh: Simplify test 2020-04-03 15:06:27 +02:00
Daniele Rondina
730b9a087a tests/integration/08_versioning.sh: Fix test 2020-04-02 19:39:31 +02:00
Ettore Di Giacinto
9cf6330146 Build empty packages in versioning fixture 2020-03-29 15:11:20 +02:00
Ettore Di Giacinto
daa0b815c1 Fixup assertEquals on versioning integration test 2020-03-29 14:52:14 +02:00
Ettore Di Giacinto
1779382811 Extend tree validate checks 2020-03-29 14:52:13 +02:00
Ettore Di Giacinto
c705728a2a Build specific package, it triggers version search 2020-03-29 14:52:12 +02:00
Ettore Di Giacinto
15e772d96d Fixup versioning of fixture 2020-03-29 14:52:10 +02:00
Ettore Di Giacinto
0d6dc49a95 Add versioning integration test 2020-03-29 14:52:00 +02:00
Daniele Rondina
8df76c8941 Update vendor github.com/Sabayon/pkgs-checker@fd529912510d 2020-03-29 14:04:55 +02:00
Ettore Di Giacinto
b1ce8d2eba Merge pull request #77 from auto-maintainers/branch-830fdb5bf299f959bcd02fe577c19471daa229cd
Update dependencies
2020-03-29 12:15:14 +02:00
Daniele Rondina
ee66839bd3 pkg/package/version: Better search for last + 2020-03-29 09:10:52 +02:00
Daniele Rondina
9b61ed9e1d Fix panic if luet is executed without args 2020-03-29 08:48:51 +02:00
MarvinHatesOceans
d59fff0740 Update dependencies 2020-03-28 23:19:43 +00:00
Ettore Di Giacinto
aeb10338f6 Add development version 2020-03-28 17:32:06 +01:00
Ettore Di Giacinto
02653e03d8 Tag 0.7.2 2020-03-28 17:31:35 +01:00
Ettore Di Giacinto
2c48fe0524 Merge pull request #76 from mudler/box_finalizers
Add Box finalizers
2020-03-28 17:30:09 +01:00
Ettore Di Giacinto
a0113dcd13 Drop verbose output 2020-03-28 17:05:23 +01:00
Ettore Di Giacinto
d45536505b Add integration test for box finalizers 2020-03-28 16:59:37 +01:00
Ettore Di Giacinto
92d335d5d1 Add exec command to cli
It is the entrypoint for box, to run commands with unshare
2020-03-28 16:58:59 +01:00
Ettore Di Giacinto
49d7b4e2bf Add box structure to handle containerized executions 2020-03-28 16:58:46 +01:00
Ettore Di Giacinto
f15ed3fda1 Move finalizer into its own struct
Also distinguish between running in a root target dir ("/") and running
the commands in a separate folder
2020-03-28 16:57:40 +01:00
Ettore Di Giacinto
2ad16fa875 Add ListDir helper 2020-03-28 16:55:16 +01:00
Ettore Di Giacinto
416be23a46 Don't always remove unpacked repository content 2020-03-28 12:07:40 +01:00
Daniele Rondina
98c7d5c450 cmd/create-repo: Fix default value of tree-compression 2020-03-27 08:59:52 +01:00
Daniele Rondina
50091b2a4b pkg/installer: Add test for uncompress tree and minor cleanup 2020-03-25 00:19:52 +01:00
Ettore Di Giacinto
0067fa82a5 Merge pull request #75 from mudler/fixup_upgrade
Don't attempt to upgrade packages already in system
2020-03-24 21:14:53 +01:00
Ettore Di Giacinto
117554792d Don't attempt to upgrade packages already in system 2020-03-24 20:30:59 +01:00
Daniele Rondina
d0b7552aca tests: Align test to new args 2020-03-24 20:01:22 +01:00
Daniele Rondina
454e9d934e Use filename instead of name on repo specs 2020-03-24 19:40:11 +01:00
Ettore Di Giacinto
dd91a61caf Merge pull request #74 from mudler/split_download
Split download
2020-03-24 19:02:16 +01:00
Ettore Di Giacinto
bc5d01c3df Add integration test for search of installed packages 2020-03-24 18:19:09 +01:00
Ettore Di Giacinto
af7f1de9f1 Split download and installer worker
Also, drop downloadOnly from install
2020-03-24 18:19:09 +01:00
Ettore Di Giacinto
3f5aa7db22 Merge pull request #73 from mudler/refactor-repository
Refactor repository
2020-03-24 17:44:37 +01:00
Daniele Rondina
a0f9222068 Add check of repository metafile archive in tests 2020-03-24 14:01:22 +01:00
Daniele Rondina
520768d0ca pkg/installer: Align tests with new default 2020-03-24 12:52:45 +01:00
Daniele Rondina
4f002ab40f pkg/installer/repository: The check for repo_files is not needed for parsing normal repo 2020-03-24 12:52:24 +01:00
Daniele Rondina
60635a03eb Fix propagation of repo file name and add test for meta compressed 2020-03-24 00:31:41 +01:00
Daniele Rondina
202ed2651a 💥 Refactor and split repository.yaml file 2020-03-24 00:05:16 +01:00
Daniele Rondina
4e461fd6be cmd/repo/update: Update only enabled repo 2020-03-23 23:40:27 +01:00
Daniele Rondina
6bd8fe6789 Add omitempty to DefaultPackage fields 2020-03-22 22:23:11 +01:00
Ettore Di Giacinto
7cf6d51355 Tweak ginkgo parameters
- Add flakeAttempts
- Add race detection to CI runs
2020-03-22 22:07:46 +01:00
Ettore Di Giacinto
d5166c55ab Lock only on commands that aren't meant to run in parallel 2020-03-22 15:19:08 +01:00
Ettore Di Giacinto
7de5f6656d Merge branch 'develop' 2020-03-22 14:42:51 +01:00
Ettore Di Giacinto
9e62111e1a Add development version 2020-03-22 14:42:35 +01:00
Ettore Di Giacinto
83cfcd878d Tag 0.7.1 2020-03-22 14:42:04 +01:00
Ettore Di Giacinto
578b323e1b Merge pull request #72 from mudler/speedup_travis
Speedup travis
2020-03-22 14:41:19 +01:00
Ettore Di Giacinto
d7b503d4a5 Drop verbosity in ginkgo params 2020-03-22 14:41:19 +01:00
Ettore Di Giacinto
fcdaf9338e Drop unneeded dependencies 2020-03-22 14:41:19 +01:00
Ettore Di Giacinto
adeb973c31 Make test-coverage in one step, deploy release only on tags 2020-03-22 14:41:15 +01:00
Ettore Di Giacinto
655da7e883 Merge pull request #72 from mudler/speedup_travis
Speedup travis
2020-03-22 14:40:34 +01:00
Ettore Di Giacinto
8dbc266b39 Drop verbosity in ginkgo params 2020-03-22 13:52:07 +01:00
Ettore Di Giacinto
5942e2f20c Drop unneeded dependencies 2020-03-22 11:00:30 +01:00
Ettore Di Giacinto
1a8fb77771 Make test-coverage in one step, deploy release only on tags 2020-03-22 10:55:04 +01:00
Daniele Rondina
40830ecf42 installer/repository.go: Use switch with search options 2020-03-22 10:46:51 +01:00
Ettore Di Giacinto
b2ba17f7f7 Add development version 2020-03-21 21:43:21 +01:00
Ettore Di Giacinto
3c5e144b8d Tag 0.7 2020-03-21 21:39:12 +01:00
Ettore Di Giacinto
44aa69928a Merge pull request #69 from mudler/add-pkg-labels
Added `labels` to package definition
2020-03-21 21:37:42 +01:00
Daniele Rondina
c7652c8a70 Added labels to package definition
* cmd/search: Add support for searching packages
  with a specific label.

* Rework the Search method of Repositories to permit
  different search modes.

* Labels are key/value attributes and can be matched
  by label key (with the HasLabel method) or through a
  regex applied to "$key" + "=" + "$value" (see the sketch below)
2020-03-21 19:24:27 +01:00
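
A hedged sketch of those label semantics (struct and method names are illustrative): labels are a key/value map, HasLabel matches on the key, and the regex form matches against a "key=value" rendering.

package main

import (
	"fmt"
	"regexp"
)

type pkgWithLabels struct {
	Name   string
	Labels map[string]string
}

// HasLabel reports whether a label with the given key is present.
func (p pkgWithLabels) HasLabel(key string) bool {
	_, ok := p.Labels[key]
	return ok
}

// MatchLabel matches the regex against every label rendered as "key=value".
func (p pkgWithLabels) MatchLabel(re *regexp.Regexp) bool {
	for k, v := range p.Labels {
		if re.MatchString(k + "=" + v) {
			return true
		}
	}
	return false
}

func main() {
	p := pkgWithLabels{Name: "cat/foo", Labels: map[string]string{"group": "core"}}
	fmt.Println(p.HasLabel("group"))                              // true
	fmt.Println(p.MatchLabel(regexp.MustCompile(`^group=co.*$`))) // true
}
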
Ettore Di Giacinto
fab2d21ac1 Drop unused function 2020-03-19 22:39:22 +01:00
Ettore Di Giacinto
487239334f Treat base case with same algorithm of dependencies 2020-03-19 22:37:39 +01:00
Ettore Di Giacinto
69477a0c36 Make uniform build of seeds and packages 2020-03-19 22:34:26 +01:00
Ettore Di Giacinto
67362a3bc1 Drop unused condition 2020-03-19 22:29:34 +01:00
Ettore Di Giacinto
302d18e749 Tag the target image with the computed hash 2020-03-19 22:28:33 +01:00
geaaru
9047e13d21 Merge pull request #68 from mudler/tree-bump
Tree bump
2020-03-16 22:57:14 +01:00
Daniele Rondina
0249c0fa4a version: Add temporary workaround for handle build string with _ 2020-03-16 22:18:48 +01:00
Daniele Rondina
fc40c770ab Add 'tree bump' command 2020-03-16 22:18:48 +01:00
Ettore Di Giacinto
4357ee45e9 Drop unused functions in package 2020-03-16 18:08:46 +01:00
Daniele Rondina
6ce9a86245 Update vendor github.com/Sabayon/pkgs-checker@b6efed54b4b1 2020-03-16 00:37:31 +01:00
Daniele Rondina
853a1995b4 cmd/tree/validate: Now support multiple tree paths 2020-03-15 19:16:22 +01:00
Daniele Rondina
039b77fdfd cmd/tree/validate: Add counter of broken deps 2020-03-15 17:40:20 +01:00
Ettore Di Giacinto
050d9b3095 Adapt to image based on debian instead of alpine for fixture 2020-03-15 15:30:27 +01:00
Ettore Di Giacinto
4985ff7b5a Use standard golang image for fixture (alpine doesn't contain gcc) 2020-03-15 15:42:24 +01:00
Ettore Di Giacinto
6f138811dd Make recipes idempotent, allowing to load multiple trees 2020-03-15 13:31:12 +01:00
Daniele Rondina
f02c54a274 cmd/tree/validate: Improve scripting integration 2020-03-14 17:38:57 +01:00
Daniele Rondina
d6182cba3b Add cli command 'tree validate' 2020-03-14 11:14:14 +01:00
Daniele Rondina
bc5c0fa0cf cmd/tree/pkglist: Cleanup code 2020-03-14 11:14:14 +01:00
Ettore Di Giacinto
dc64dbff75 Don't try to solve finalizers with --nodeps 2020-03-13 19:32:00 +01:00
Daniele Rondina
a94e430a3b cmd/tree/pkglist: Add --matches/-m flag 2020-03-13 18:08:49 +01:00
Daniele Rondina
93f8f0dd0f solver: Add category to tests 2020-03-12 00:26:53 +01:00
Daniele Rondina
7bdcc72dd3 Update vendor github.com/Sabayon/pkgs-checker@f4e0aec412f0 2020-03-11 08:38:15 +01:00
Daniele Rondina
e47c07d438 Update vendor github.com/Sabayon/pkgs-checker@935615ba9d27 2020-03-10 09:12:22 +01:00
Daniele Rondina
6fc5d1a97b Upgrade vendor github.com/Sabayon/pkgs-checker@v0.6.1 2020-03-09 23:35:01 +01:00
Daniele Rondina
f4a5a97cff tree/compiler_recipe: Return right error message 2020-03-08 16:28:12 +01:00
Daniele Rondina
8c972403a3 tree/compiler_recipe: Handle invalid tree path
If bash notation is used instead of an absolute path, an error about the
wrong directory being supplied is now returned.
2020-03-08 15:47:40 +01:00
Daniele Rondina
82a17a1a22 Drop cfgFile print to improve bash integration 2020-03-08 00:13:45 +01:00
Daniele Rondina
17238d187a Change 'Skip dir' message to debug 2020-03-08 00:07:20 +01:00
Daniele Rondina
ad011e937f cmd/tree/pkglist: Fix init of regExcludes 2020-03-06 17:54:26 +01:00
Daniele Rondina
25c796c430 Add 'tree pkglist' command cli 2020-03-06 17:51:04 +01:00
Daniele Rondina
df5499393f --verbose/-v is now --debug/-d 2020-03-06 16:53:36 +01:00
Ettore Di Giacinto
026ddf6fc9 Add dev version tag 2020-03-05 18:37:16 +01:00
Ettore Di Giacinto
ec0b83a811 Prepare for 0.6.4 tag 2020-03-05 18:36:51 +01:00
Ettore Di Giacinto
765261f233 Add Swap() to Installer. Refactor code in Upgrade() to consume it 2020-02-27 23:48:14 +01:00
Ettore Di Giacinto
6c7e24fadf Sync repositories once (and at beginning) during upgrade 2020-02-27 23:25:29 +01:00
Ettore Di Giacinto
de23e0d5b1 Download dependencies before installing
This enables smoother upgrades and uses the cache to pre-download
packages before uninstall/install.

It also allows downloading only the dependencies from the CLI
2020-02-27 23:15:31 +01:00
Ettore Di Giacinto
8572aa5222 Preserve cache data from deletion during uninstall 2020-02-27 23:14:36 +01:00
Ettore Di Giacinto
0e0c2f21a6 Add dev version tag 2020-02-27 18:57:41 +01:00
Ettore Di Giacinto
575079bb77 Prepare for 0.6.3 tag 2020-02-27 18:57:13 +01:00
Ettore Di Giacinto
5bcc8d112a Enforce solver constraints
- Don't sign installed packages during finalizer execution
- Enforce solver constraints: build ALO and AMO rules (see the sketch below), taking into
  account that the current package might not be selected at all.
- Force uninstalls on upgrade
- Enable an option to tell uninstall to ignore conflicts with the analyzed system state,
  as we don't want any conflict with installed packages to be raised during the upgrade.
  This way we both force uninstalls and avoid checking for conflicts
  against the current system state, which is pending deletion.
  This is needed because the solver now enforces the constraints
  and explicitly denies two packages of the same version being installed.
- Adapt tests, as we now generate more constraints, which makes the solver more
  verbose about which packages are explicitly selected or not
2020-02-27 18:38:31 +01:00
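
For readers unfamiliar with the acronyms, ALO/AMO are the standard at-least-one / at-most-one encodings. A hedged sketch of a pairwise encoding over the candidate versions of one package, with clauses as plain string slices rather than the solver's real representation:

package main

import "fmt"

// aloAmoClauses builds at-least-one and pairwise at-most-one clauses over the
// candidate versions of a single package. A leading "!" marks a negated literal.
func aloAmoClauses(candidates []string) [][]string {
	var clauses [][]string
	// ALO: at least one candidate is selected. In the real encoding this has to
	// account for the case where the package is not selected at all.
	clauses = append(clauses, append([]string(nil), candidates...))
	// AMO: no two candidates are selected together (pairwise encoding).
	for i := 0; i < len(candidates); i++ {
		for j := i + 1; j < len(candidates); j++ {
			clauses = append(clauses, []string{"!" + candidates[i], "!" + candidates[j]})
		}
	}
	return clauses
}

func main() {
	for _, c := range aloAmoClauses([]string{"foo-1.0", "foo-1.1", "foo-1.2"}) {
		fmt.Println(c)
	}
}
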
Ettore Di Giacinto
65b17f5283 Add simple upgrade integration test 2020-02-26 23:59:48 +01:00
Ettore Di Giacinto
a0618107a8 Adapt tests 2020-02-26 23:56:21 +01:00
Ettore Di Giacinto
25f2abf103 Do conflict with same package versions while building formula 2020-02-26 23:01:08 +01:00
Ettore Di Giacinto
e3ebfd6bfe Respect ImageRepository with build --nodeps 2020-02-26 19:00:22 +01:00
Ettore Di Giacinto
48df98bcae Add dev version tag 2020-02-23 22:42:02 +01:00
Ettore Di Giacinto
0ce1baa448 Prepare for 0.6.2 tag 2020-02-23 22:41:33 +01:00
Ettore Di Giacinto
07610bc216 Add build option to remove backend artifacts
Add the keep-exported-images option to build, to control whether backend artifacts (Docker images) are kept or removed
2020-02-21 22:33:18 +01:00
Daniele Rondina
a04dadf100 Add support for parsing version with build 2020-02-21 21:56:32 +01:00
Daniele Rondina
2f4ce42472 Update vendor github.com/Sabayon/pkgs-checker 2020-02-21 21:55:42 +01:00
Daniele Rondina
def04724d4 Add support for build identifier on version 2020-02-20 17:46:47 +01:00
Ettore Di Giacinto
38296bc5d7 Add test for HashFingerprint() 2020-02-19 18:33:14 +01:00
Ettore Di Giacinto
7e16e0cdb6 Add HashFingerprint() to Package and consume it while generating image names
This way we don't depend on the package fingerprint characters when generating image names, allowing
us to fully support semver (e.g. the + character is not allowed in Docker image tags).

This change is not backward compatible with already-built images and will invalidate the cache
2020-02-19 18:28:29 +01:00
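
A hedged sketch of the idea (hash choice and naming are illustrative, not necessarily what luet uses): hashing the fingerprint yields a fixed, hex-only string, so versions containing '+' and similar characters still produce valid Docker tags.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashFingerprint turns an arbitrary package fingerprint into a string that
// is safe inside a Docker image tag (hex characters only).
func hashFingerprint(fingerprint string) string {
	sum := sha256.Sum256([]byte(fingerprint))
	return hex.EncodeToString(sum[:])
}

func main() {
	// "+" is not allowed in a Docker image tag, but the hash of it is fine.
	fmt.Println("repo/cache:" + hashFingerprint("cat/foo-1.0+r1"))
}
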
Ettore Di Giacinto
647ea35983 Add a way to extend the CLI with external binaries
This lets luet be extended like git (just by having luet-<subcommand> binaries under $PATH),
following the UNIX philosophy. The downside is that we silence the errors from cobra and handle
them ourselves.

Also, the Exec syscall is not portable and should have an implementation for each platform (hence it lives in helpers)
2020-02-18 23:08:14 +01:00
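
The mechanism is the same one git uses: when no built-in command matches, look for an executable named luet-<subcommand> on $PATH and replace the current process with it. A hedged, Unix-only sketch (the real helper lives in luet's helpers package):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// runExternal replaces the current process with "luet-<name>" found on $PATH,
// mirroring the git-style plugin convention described above.
func runExternal(name string, args []string) error {
	bin, err := exec.LookPath("luet-" + name)
	if err != nil {
		return err
	}
	// syscall.Exec never returns on success; as the commit notes, this is
	// not portable and needs per-platform implementations.
	return syscall.Exec(bin, append([]string{bin}, args...), os.Environ())
}

func main() {
	if len(os.Args) > 1 {
		if err := runExternal(os.Args[1], os.Args[2:]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
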
Ettore Di Giacinto
4648dedb55 Add --nodeps, --onlydeps and --force to Installer
This also allows using them on upgrade and uninstall, as it exposes the Installer options to the CLI.

See #48, #49
2020-02-18 18:37:56 +01:00
Ettore Di Giacinto
2ce427d601 Add Nodeps, Onlydeps and Force options to Compiler
See #41
2020-02-18 18:36:44 +01:00
Ettore Di Giacinto
b01c017507 Add HumanReadableString() to Package 2020-02-18 18:20:45 +01:00
Ettore Di Giacinto
189c042fea Add dev version tag 2020-02-17 18:04:36 +01:00
695 changed files with 35740 additions and 21387 deletions

View File

@@ -2,17 +2,18 @@ language: go
services:
- docker
go:
- "1.12"
- "1.14"
env:
- "GO15VENDOREXPERIMENT=1"
before_install:
- make deps
- curl -LO https://storage.googleapis.com/container-diff/latest/container-diff-linux-amd64 && chmod +x container-diff-linux-amd64 && mkdir -p $HOME/bin && export PATH=$PATH:$HOME/bin && mv container-diff-linux-amd64 $HOME/bin/container-diff
script:
- make multiarch-build test test-integration
- make multiarch-build test-integration test-coverage
after_success:
- make coverage
- bash <(curl -s https://codecov.io/bash)
- git config --global user.name "Deployer" && git config --global user.email foo@bar.com
- go get github.com/tcnksm/ghr
- ghr -u mudler -r luet --replace $TRAVIS_TAG release/
- |
if [ -n "$TRAVIS_TAG" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
git config --global user.name "Deployer" && git config --global user.email foo@bar.com
go get github.com/tcnksm/ghr
ghr -u mudler -r luet --replace $TRAVIS_TAG release/
fi

View File

@@ -1,3 +1,8 @@
# go tool nm ./luet | grep Commit
LDFLAGS += -X "github.com/mudler/luet/cmd.BuildTime=$(shell date -u '+%Y-%m-%d %I:%M:%S %Z')"
LDFLAGS += -X "github.com/mudler/luet/cmd.BuildCommit=$(shell git rev-parse HEAD)"
NAME ?= luet
PACKAGE_NAME ?= $(NAME)
PACKAGE_CONFLICT ?= $(PACKAGE_NAME)-beta
@@ -19,7 +24,7 @@ fmt:
test:
GO111MODULE=off go get github.com/onsi/ginkgo/ginkgo
GO111MODULE=off go get github.com/onsi/gomega/...
ginkgo -race -r ./...
ginkgo -race -r -flakeAttempts 3 ./...
.PHONY: test-integration
test-integration:
@@ -29,6 +34,10 @@ test-integration:
coverage:
go test ./... -race -coverprofile=coverage.txt -covermode=atomic
.PHONY: test-coverage
test-coverage:
scripts/ginkgo.coverage.sh --codecov
.PHONY: help
help:
# make all => deps test lint build
@@ -55,7 +64,7 @@ deps:
.PHONY: build
build:
CGO_ENABLED=0 go build
CGO_ENABLED=0 go build -ldflags '$(LDFLAGS)'
.PHONY: image
image:
@@ -63,7 +72,7 @@ image:
.PHONY: gox-build
gox-build:
CGO_ENABLED=0 gox $(BUILD_PLATFORMS) -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
CGO_ENABLED=0 gox $(BUILD_PLATFORMS) -ldflags '$(LDFLAGS)' -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
.PHONY: lint
lint:
@@ -81,4 +90,4 @@ test-docker:
.PHONY: multiarch-build
multiarch-build:
gox $(BUILD_PLATFORMS) -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
gox $(BUILD_PLATFORMS) -ldflags '$(LDFLAGS)' -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"

View File

@@ -1,4 +1,5 @@
# luet - Container-based Package manager
[![Docker Repository on Quay](https://quay.io/repository/luet/base/status "Docker Repository on Quay")](https://quay.io/repository/luet/base)
[![Go Report Card](https://goreportcard.com/badge/github.com/mudler/luet)](https://goreportcard.com/report/github.com/mudler/luet)
[![Build Status](https://travis-ci.org/mudler/luet.svg?branch=master)](https://travis-ci.org/mudler/luet)
[![GoDoc](https://godoc.org/github.com/mudler/luet?status.svg)](https://godoc.org/github.com/mudler/luet)

36
cmd/box.go Normal file
View File

@@ -0,0 +1,36 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
. "github.com/mudler/luet/cmd/box"
"github.com/spf13/cobra"
)
var boxGroupCmd = &cobra.Command{
Use: "box [command] [OPTIONS]",
Short: "Manage luet boxes",
}
func init() {
RootCmd.AddCommand(boxGroupCmd)
boxGroupCmd.AddCommand(
NewBoxExecCommand(),
)
}

81
cmd/box/exec.go Normal file
View File

@@ -0,0 +1,81 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_box
import (
"os"
b64 "encoding/base64"
"github.com/mudler/luet/pkg/box"
. "github.com/mudler/luet/pkg/logger"
"github.com/spf13/cobra"
)
func NewBoxExecCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "exec [OPTIONS]",
Short: "Execute a binary in a box",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
},
Run: func(cmd *cobra.Command, args []string) {
stdin, _ := cmd.Flags().GetBool("stdin")
stdout, _ := cmd.Flags().GetBool("stdout")
stderr, _ := cmd.Flags().GetBool("stderr")
rootfs, _ := cmd.Flags().GetString("rootfs")
base, _ := cmd.Flags().GetBool("decode")
entrypoint, _ := cmd.Flags().GetString("entrypoint")
envs, _ := cmd.Flags().GetStringArray("env")
mounts, _ := cmd.Flags().GetStringArray("mount")
if base {
var ss []string
for _, a := range args {
sDec, _ := b64.StdEncoding.DecodeString(a)
ss = append(ss, string(sDec))
}
//If the command to run is complex,using base64 to avoid bad input
args = ss
}
Info("Executing", args, "in", rootfs)
b := box.NewBox(entrypoint, args, mounts, envs, rootfs, stdin, stdout, stderr)
err := b.Run()
if err != nil {
Fatal(err)
}
},
}
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
ans.Flags().String("rootfs", path, "Rootfs path")
ans.Flags().Bool("stdin", false, "Attach to stdin")
ans.Flags().Bool("stdout", true, "Attach to stdout")
ans.Flags().Bool("stderr", true, "Attach to stderr")
ans.Flags().Bool("decode", false, "Base64 decode")
ans.Flags().StringArrayP("env", "e", []string{}, "Environment settings")
ans.Flags().StringArrayP("mount", "m", []string{}, "List of paths to bind-mount from the host")
ans.Flags().String("entrypoint", "/bin/sh", "Entrypoint command (/bin/sh)")
return ans
}

View File

@@ -15,18 +15,17 @@
package cmd
import (
"fmt"
"io/ioutil"
"os"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
@@ -45,12 +44,16 @@ var buildCmd = &cobra.Command{
viper.BindPFlag("revdeps", cmd.Flags().Lookup("revdeps"))
viper.BindPFlag("all", cmd.Flags().Lookup("all"))
viper.BindPFlag("compression", cmd.Flags().Lookup("compression"))
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("image-repository", cmd.Flags().Lookup("image-repository"))
viper.BindPFlag("push", cmd.Flags().Lookup("push"))
viper.BindPFlag("pull", cmd.Flags().Lookup("pull"))
viper.BindPFlag("keep-images", cmd.Flags().Lookup("keep-images"))
LuetCfg.Viper.BindPFlag("keep-exported-images", cmd.Flags().Lookup("keep-exported-images"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
@@ -72,6 +75,9 @@ var buildCmd = &cobra.Command{
push := viper.GetBool("push")
pull := viper.GetBool("pull")
keepImages := viper.GetBool("keep-images")
nodeps := viper.GetBool("nodeps")
onlydeps := viper.GetBool("onlydeps")
keepExportedImages := viper.GetBool("keep-exported-images")
compilerSpecs := compiler.NewLuetCompilationspecs()
var compilerBackend compiler.CompilerBackend
@@ -126,31 +132,21 @@ var buildCmd = &cobra.Command{
opts.PullFirst = pull
opts.KeepImg = keepImages
opts.Push = push
opts.OnlyDeps = onlydeps
opts.NoDeps = nodeps
opts.KeepImageExport = keepExportedImages
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(), opts)
luetCompiler.SetConcurrency(concurrency)
luetCompiler.SetCompressionType(compiler.CompressionImplementation(compressionType))
if !all {
for _, a := range args {
gp, err := _gentoo.ParsePackageStr(a)
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}
spec, err := luetCompiler.FromPackage(pack)
if err != nil {
Fatal("Error: " + err.Error())
@@ -212,6 +208,9 @@ func init() {
buildCmd.Flags().Bool("push", false, "Push images to a hub")
buildCmd.Flags().Bool("pull", false, "Pull images from a hub")
buildCmd.Flags().Bool("keep-images", true, "Keep built docker images in the host")
buildCmd.Flags().Bool("nodeps", false, "Build only the target packages, skipping deps (it works only if you already built the deps locally, or by using --pull) ")
buildCmd.Flags().Bool("onlydeps", false, "Build only package dependencies")
buildCmd.Flags().Bool("keep-exported-images", false, "Keep exported images used during building")
buildCmd.Flags().String("solver-type", "", "Solver strategy")
buildCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")

View File

@@ -40,7 +40,9 @@ var createrepoCmd = &cobra.Command{
viper.BindPFlag("urls", cmd.Flags().Lookup("urls"))
viper.BindPFlag("type", cmd.Flags().Lookup("type"))
viper.BindPFlag("tree-compression", cmd.Flags().Lookup("tree-compression"))
viper.BindPFlag("tree-path", cmd.Flags().Lookup("tree-path"))
viper.BindPFlag("tree-filename", cmd.Flags().Lookup("tree-filename"))
viper.BindPFlag("meta-compression", cmd.Flags().Lookup("meta-compression"))
viper.BindPFlag("meta-filename", cmd.Flags().Lookup("meta-filename"))
viper.BindPFlag("reset-revision", cmd.Flags().Lookup("reset-revision"))
viper.BindPFlag("repo", cmd.Flags().Lookup("repo"))
},
@@ -57,9 +59,14 @@ var createrepoCmd = &cobra.Command{
t := viper.GetString("type")
reset := viper.GetBool("reset-revision")
treetype := viper.GetString("tree-compression")
treepath := viper.GetString("tree-path")
treeName := viper.GetString("tree-filename")
metatype := viper.GetString("meta-compression")
metaName := viper.GetString("meta-filename")
source_repo := viper.GetString("repo")
treeFile := installer.NewDefaultTreeRepositoryFile()
metaFile := installer.NewDefaultMetaRepositoryFile()
if source_repo != "" {
// Search for system repository
lrepo, err := LuetCfg.GetSystemRepository(source_repo)
@@ -93,13 +100,24 @@ var createrepoCmd = &cobra.Command{
}
if treetype != "" {
repo.SetTreeCompressionType(compiler.CompressionImplementation(treetype))
treeFile.SetCompressionType(compiler.CompressionImplementation(treetype))
}
if treepath != "" {
repo.SetTreePath(treepath)
if treeName != "" {
treeFile.SetFileName(treeName)
}
if metatype != "" {
metaFile.SetCompressionType(compiler.CompressionImplementation(metatype))
}
if metaName != "" {
metaFile.SetFileName(metaName)
}
repo.SetRepositoryFile(installer.REPOFILE_TREE_KEY, treeFile)
repo.SetRepositoryFile(installer.REPOFILE_META_KEY, metaFile)
err = repo.Write(dst, reset)
if err != nil {
Fatal("Error: " + err.Error())
@@ -122,8 +140,10 @@ func init() {
createrepoCmd.Flags().Bool("reset-revision", false, "Reset repository revision.")
createrepoCmd.Flags().String("repo", "", "Use repository defined in configuration.")
createrepoCmd.Flags().String("tree-compression", "none", "Compression alg: none, gzip")
createrepoCmd.Flags().String("tree-path", installer.TREE_TARBALL, "Repository tree filename")
createrepoCmd.Flags().String("tree-compression", "gzip", "Compression alg: none, gzip")
createrepoCmd.Flags().String("tree-filename", installer.TREE_TARBALL, "Repository tree filename")
createrepoCmd.Flags().String("meta-compression", "none", "Compression alg: none, gzip")
createrepoCmd.Flags().String("meta-filename", installer.REPOSITORY_METAFILE+".tar", "Repository metadata filename")
RootCmd.AddCommand(createrepoCmd)
}

86
cmd/exec.go Normal file
View File

@@ -0,0 +1,86 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"fmt"
"os"
b64 "encoding/base64"
"github.com/mudler/luet/pkg/box"
. "github.com/mudler/luet/pkg/logger"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
var execCmd = &cobra.Command{
Use: "exec --rootfs /path [command]",
Short: "Execute a command in the rootfs context",
Long: `Uses unshare technique and pivot root to execute a command inside a folder containing a valid rootfs`,
PreRun: func(cmd *cobra.Command, args []string) {
},
// If you change this, look at pkg/box/exec that runs this command and adapt
Run: func(cmd *cobra.Command, args []string) {
stdin, _ := cmd.Flags().GetBool("stdin")
stdout, _ := cmd.Flags().GetBool("stdout")
stderr, _ := cmd.Flags().GetBool("stderr")
rootfs, _ := cmd.Flags().GetString("rootfs")
base, _ := cmd.Flags().GetBool("decode")
entrypoint, _ := cmd.Flags().GetString("entrypoint")
envs, _ := cmd.Flags().GetStringArray("env")
mounts, _ := cmd.Flags().GetStringArray("mount")
if base {
var ss []string
for _, a := range args {
sDec, _ := b64.StdEncoding.DecodeString(a)
ss = append(ss, string(sDec))
}
//If the command to run is complex,using base64 to avoid bad input
args = ss
}
Info("Executing", args, "in", rootfs)
b := box.NewBox(entrypoint, args, mounts, envs, rootfs, stdin, stdout, stderr)
err := b.Exec()
if err != nil {
Fatal(errors.Wrap(err, fmt.Sprintf("entrypoint: %s rootfs: %s", entrypoint, rootfs)))
}
},
}
func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
execCmd.Hidden = true
execCmd.Flags().String("rootfs", path, "Rootfs path")
execCmd.Flags().Bool("stdin", false, "Attach to stdin")
execCmd.Flags().Bool("stdout", false, "Attach to stdout")
execCmd.Flags().Bool("stderr", false, "Attach to stderr")
execCmd.Flags().Bool("decode", false, "Base64 decode")
execCmd.Flags().StringArrayP("env", "e", []string{}, "Environment settings")
execCmd.Flags().StringArrayP("mount", "m", []string{}, "List of paths to bind-mount from the host")
execCmd.Flags().String("entrypoint", "/bin/sh", "Entrypoint command (/bin/sh)")
RootCmd.AddCommand(execCmd)
}

View File

@@ -15,17 +15,16 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
)
@@ -39,33 +38,20 @@ var installCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Install packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
var toInstall []pkg.Package
var toInstall pkg.Packages
var systemDB pkg.PackageDatabase
for _, a := range args {
gp, err := _gentoo.ParsePackageStr(a)
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}
toInstall = append(toInstall, pack)
}
@@ -83,7 +69,9 @@ var installCmd = &cobra.Command{
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps := LuetCfg.Viper.GetBool("nodeps")
onlydeps := LuetCfg.Viper.GetBool("onlydeps")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
@@ -91,7 +79,14 @@ var installCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
@@ -119,6 +114,9 @@ func init() {
installCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
installCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
installCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
installCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
installCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
installCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
RootCmd.AddCommand(installCmd)
}

88
cmd/reclaim.go Normal file
View File

@@ -0,0 +1,88 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
var reclaimCmd = &cobra.Command{
Use: "reclaim",
Short: "Reclaim packages to Luet database from available repositories",
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Add packages to the systemdb if files belonging to packages
in available repositories exists in the target root.`,
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}
force := LuetCfg.Viper.GetBool("force")
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
Force: force,
PreserveSystemEssentialData: true,
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err := inst.Reclaim(system)
if err != nil {
Fatal("Error: " + err.Error())
}
},
}
func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
reclaimCmd.Flags().String("system-dbpath", path, "System db path")
reclaimCmd.Flags().String("system-target", path, "System rootpath")
reclaimCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
RootCmd.AddCommand(reclaimCmd)
}

View File

@@ -62,7 +62,7 @@ $> luet repo update repo1 repo2
} else {
for _, repo := range LuetCfg.SystemRepositories {
if repo.Cached {
if repo.Cached && repo.Enable {
r := installer.NewSystemRepository(repo)
Spinner(32)
_, err := r.Sync(force)

View File

@@ -16,6 +16,7 @@
package cmd
import (
"fmt"
"os"
"os/user"
"path/filepath"
@@ -24,6 +25,7 @@ import (
"github.com/marcsauter/single"
config "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
repo "github.com/mudler/luet/pkg/repository"
@@ -33,31 +35,53 @@ import (
var cfgFile string
var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}
const (
LuetCLIVersion = "0.6.1"
LuetCLIVersion = "0.7.6"
LuetEnvPrefix = "LUET"
)
// Build time and commit information.
//
// ⚠️ WARNING: should only be set by "-ldflags".
var (
BuildTime string
BuildCommit string
)
// RootCmd represents the base command when called without any subcommands
var RootCmd = &cobra.Command{
Use: "luet",
Short: "Package manager for the XXth century!",
Long: `Package manager which uses containers to build packages`,
Version: LuetCLIVersion,
Version: fmt.Sprintf("%s-g%s %s", LuetCLIVersion, BuildCommit, BuildTime),
PersistentPreRun: func(cmd *cobra.Command, args []string) {
err := LoadConfig(config.LuetCfg)
if err != nil {
Fatal("failed to load configuration:", err.Error())
}
// Initialize tmpdir prefix. TODO: Move this with LoadConfig
// directly on sub command to ensure the creation only when it's
// needed.
err = config.LuetCfg.GetSystem().InitTmpDir()
if err != nil {
Fatal("failed on init tmp basedir:", err.Error())
}
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
// Cleanup all tmp directories used by luet
err := config.LuetCfg.GetSystem().CleanupTmpDir()
if err != nil {
Warning("failed on cleanup tmpdir:", err.Error())
}
},
SilenceErrors: true,
}
func LoadConfig(c *config.LuetConfig) error {
// If a config file is found, read it in.
if err := c.Viper.ReadInConfig(); err != nil {
Warning(err)
}
c.Viper.ReadInConfig()
err := c.Viper.Unmarshal(&config.LuetCfg)
if err != nil {
@@ -88,19 +112,36 @@ func LoadConfig(c *config.LuetConfig) error {
// Execute adds all child commands to the root command sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the rootCmd.
func Execute() {
// XXX: This is mostly from scratch images.
if os.Getenv("LUET_NOLOCK") != "true" {
s := single.New("luet")
if err := s.CheckLock(); err != nil && err == single.ErrAlreadyRunning {
Fatal("another instance of the app is already running, exiting")
} else if err != nil {
// Another error occurred, might be worth handling it as well
Fatal("failed to acquire exclusive app lock:", err.Error())
if len(os.Args) > 1 {
for _, lockedCmd := range LockedCommands {
if os.Args[1] == lockedCmd {
s := single.New("luet")
if err := s.CheckLock(); err != nil && err == single.ErrAlreadyRunning {
Fatal("another instance of the app is already running, exiting")
} else if err != nil {
// Another error occurred, might be worth handling it as well
Fatal("failed to acquire exclusive app lock:", err.Error())
}
defer s.TryUnlock()
break
}
}
}
defer s.TryUnlock()
}
if err := RootCmd.Execute(); err != nil {
Error(err)
if len(os.Args) > 0 {
for _, c := range RootCmd.Commands() {
if c.Name() == os.Args[1] {
os.Exit(-1) // Something failed
}
}
// Try to load a bin from path.
helpers.Exec("luet-"+os.Args[1], os.Args[1:], os.Environ())
}
fmt.Println(err)
os.Exit(-1)
}
}
@@ -109,22 +150,24 @@ func init() {
cobra.OnInitialize(initConfig)
pflags := RootCmd.PersistentFlags()
pflags.StringVar(&cfgFile, "config", "", "config file (default is $HOME/.luet.yaml)")
pflags.BoolVarP(&Verbose, "verbose", "v", false, "verbose output")
pflags.BoolP("debug", "d", false, "verbose output")
pflags.Bool("fatal", false, "Enables Warnings to exit")
u, err := user.Current()
if err != nil {
Fatal("failed to retrieve user identity:", err.Error())
}
sameOwner := false
if u.Uid == "0" {
u, err := user.Current()
// os/user doesn't work in from scratch environments
if err != nil {
Warning("failed to retrieve user identity:", err.Error())
sameOwner = true
}
if u != nil && u.Uid == "0" {
sameOwner = true
}
pflags.Bool("same-owner", sameOwner, "Maintain same owner on uncompress.")
pflags.Int("concurrency", runtime.NumCPU(), "Concurrency")
config.LuetCfg.Viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("verbose"))
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("debug"))
config.LuetCfg.Viper.BindPFlag("general.concurrency", pflags.Lookup("concurrency"))
config.LuetCfg.Viper.BindPFlag("general.fatal_warnings", pflags.Lookup("fatal"))
}
@@ -141,7 +184,6 @@ func initConfig() {
viper.SetConfigType("yaml")
viper.SetConfigName(".luet") // name of config file (without extension)
if cfgFile != "" { // enable ability to specify config file via flag
Info(">>> cfgFile: ", cfgFile)
viper.SetConfigFile(cfgFile)
} else {
viper.AddConfigPath(dir)

View File

@@ -15,18 +15,29 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"regexp"
"github.com/ghodss/yaml"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
type PackageResult struct {
Name string `json:"name"`
Category string `json:"category"`
Version string `json:"version"`
Repository string `json:"repository"`
}
type Results struct {
Packages []PackageResult `json:"packages"`
}
var searchCmd = &cobra.Command{
Use: "search <term>",
Short: "Search packages",
@@ -42,7 +53,7 @@ var searchCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
var results Results
if len(args) != 1 {
Fatal("Wrong number of arguments (expected 1)")
}
@@ -51,6 +62,14 @@ var searchCmd = &cobra.Command{
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
searchWithLabel, _ := cmd.Flags().GetBool("by-label")
searchWithLabelMatch, _ := cmd.Flags().GetBool("by-label-regex")
revdeps, _ := cmd.Flags().GetBool("revdeps")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
LuetCfg.GetLogging().SetLogLevel("error")
}
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -70,20 +89,51 @@ var searchCmd = &cobra.Command{
repos = append(repos, r)
}
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
inst := installer.NewLuetInstaller(
installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
},
)
inst.Repositories(repos)
synced, err := inst.SyncRepositories(false)
if err != nil {
Fatal("Error: " + err.Error())
}
Info("--- Search results: ---")
Info("--- Search results (" + args[0] + "): ---")
matches := synced.Search(args[0])
matches := []installer.PackageMatch{}
if searchWithLabel {
matches = synced.SearchLabel(args[0])
} else if searchWithLabelMatch {
matches = synced.SearchLabelMatch(args[0])
} else {
matches = synced.Search(args[0])
}
for _, m := range matches {
Info(":package:", m.Package.GetCategory(), m.Package.GetName(),
m.Package.GetVersion(), "repository:", m.Repo.GetName())
if !revdeps {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", m.Package.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: m.Package.GetName(),
Version: m.Package.GetVersion(),
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
})
} else {
visited := make(map[string]interface{})
for _, revdep := range m.Package.ExpandedRevdeps(m.Repo.GetTree().GetDatabase(), visited) {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", revdep.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: m.Repo.GetName(),
})
}
}
}
} else {
@@ -94,16 +144,65 @@ var searchCmd = &cobra.Command{
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
var term = regexp.MustCompile(args[0])
for _, k := range system.Database.GetPackages() {
pack, err := system.Database.GetPackage(k)
if err == nil && term.MatchString(pack.GetName()) {
Info(":package:", pack.GetCategory(), pack.GetName(), pack.GetVersion())
var err error
iMatches := pkg.Packages{}
if searchWithLabel {
iMatches, err = system.Database.FindPackageLabel(args[0])
} else if searchWithLabelMatch {
iMatches, err = system.Database.FindPackageLabelMatch(args[0])
} else {
iMatches, err = system.Database.FindPackageMatch(args[0])
}
if err != nil {
Fatal("Error: " + err.Error())
}
for _, pack := range iMatches {
if !revdeps {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: pack.GetName(),
Version: pack.GetVersion(),
Category: pack.GetCategory(),
Repository: "system",
})
} else {
visited := make(map[string]interface{})
for _, revdep := range pack.ExpandedRevdeps(system.Database, visited) {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: "system",
})
}
}
}
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
switch out {
case "yaml":
fmt.Println(string(y))
case "json":
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
}
},
}
@@ -116,8 +215,12 @@ func init() {
searchCmd.Flags().String("system-target", path, "System rootpath")
searchCmd.Flags().Bool("installed", false, "Search between system packages")
searchCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
searchCmd.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
searchCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
searchCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
searchCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
searchCmd.Flags().Bool("by-label", false, "Search packages through label")
searchCmd.Flags().Bool("by-label-regex", false, "Search packages through label regex")
searchCmd.Flags().Bool("revdeps", false, "Search package reverse dependencies")
RootCmd.AddCommand(searchCmd)
}
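
The --output handling introduced here (and reused by cmd/tree/pkglist.go below) marshals the collected results to YAML once and derives the JSON form from those YAML bytes via yaml.YAMLToJSON. A minimal, self-contained sketch of that pattern using the same github.com/ghodss/yaml package the command imports; the SearchResults wrapper name is assumed here, while PackageResult mirrors the fields populated above:

package main

import (
    "fmt"

    "github.com/ghodss/yaml"
)

// Trimmed stand-ins for the result types the search command fills in.
type PackageResult struct {
    Name       string `json:"name"`
    Category   string `json:"category"`
    Version    string `json:"version"`
    Repository string `json:"repository"`
}

type SearchResults struct {
    Packages []PackageResult `json:"packages"`
}

func main() {
    results := SearchResults{Packages: []PackageResult{
        {Name: "foo", Category: "app", Version: "1.0", Repository: "system"},
    }}

    // Marshal once to YAML; the JSON output is derived from the YAML bytes.
    y, err := yaml.Marshal(results)
    if err != nil {
        fmt.Printf("err: %v\n", err)
        return
    }
    fmt.Println(string(y)) // --output yaml

    j, err := yaml.YAMLToJSON(y)
    if err != nil {
        fmt.Printf("err: %v\n", err)
        return
    }
    fmt.Println(string(j)) // --output json
}

Because ghodss/yaml round-trips through the json struct tags, the YAML and JSON outputs stay field-for-field consistent.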

cmd/tree.go (new file, 38 lines)

@@ -0,0 +1,38 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
. "github.com/mudler/luet/cmd/tree"
"github.com/spf13/cobra"
)
var treeGroupCmd = &cobra.Command{
Use: "tree [command] [OPTIONS]",
Short: "Tree operations",
}
func init() {
RootCmd.AddCommand(treeGroupCmd)
treeGroupCmd.AddCommand(
NewTreePkglistCommand(),
NewTreeValidateCommand(),
NewTreeBumpCommand(),
)
}

cmd/tree/bump.go (new file, 78 lines)

@@ -0,0 +1,78 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_tree
import (
"fmt"
//"os"
//"sort"
. "github.com/mudler/luet/pkg/logger"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
)
func NewTreeBumpCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "bump [OPTIONS]",
Short: "Bump a new package build version.",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
df, _ := cmd.Flags().GetString("definition-file")
if df == "" {
Fatal("Mandatory definition.yaml path missing.")
}
},
Run: func(cmd *cobra.Command, args []string) {
spec, _ := cmd.Flags().GetString("definition-file")
toStdout, _ := cmd.Flags().GetBool("to-stdout")
pack, err := tree.ReadDefinitionFile(spec)
if err != nil {
Fatal(err.Error())
}
// Retrieve version build section with Gentoo parser
err = pack.BumpBuildVersion()
if err != nil {
Fatal("Error on increment build version: " + err.Error())
}
if toStdout {
data, err := pack.Yaml()
if err != nil {
Fatal("Error on yaml conversion: " + err.Error())
}
fmt.Println(string(data))
} else {
err = tree.WriteDefinitionFile(&pack, spec)
if err != nil {
Fatal("Error on write definition file: " + err.Error())
}
fmt.Printf("Bumped package %s/%s-%s.\n", pack.Category, pack.Name, pack.Version)
}
},
}
ans.Flags().StringP("definition-file", "f", "", "Path of the definition to bump.")
ans.Flags().BoolP("to-stdout", "o", false, "Bump package to output.")
return ans
}
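
For callers that want the same behaviour outside the CLI, the Run function above reduces to a read → bump → write (or print) pipeline over the pkg/tree helpers. A minimal sketch, assuming exactly the ReadDefinitionFile / BumpBuildVersion / Yaml / WriteDefinitionFile signatures consumed above (the definition path is a hypothetical example):

package main

import (
    "fmt"
    "log"

    tree "github.com/mudler/luet/pkg/tree"
)

func main() {
    spec := "packages/foo/definition.yaml" // hypothetical path, for illustration only

    // Read the definition, exactly as the command's Run function does.
    pack, err := tree.ReadDefinitionFile(spec)
    if err != nil {
        log.Fatal(err)
    }

    // Increment the build portion of the version (Gentoo-style parsing).
    if err := pack.BumpBuildVersion(); err != nil {
        log.Fatal(err)
    }

    // Print the bumped definition (the --to-stdout path) ...
    data, err := pack.Yaml()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))

    // ... or write it back in place, as `luet tree bump -f <file>` does by default.
    if err := tree.WriteDefinitionFile(&pack, spec); err != nil {
        log.Fatal(err)
    }
}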

cmd/tree/pkglist.go (new file, 217 lines)

@@ -0,0 +1,217 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_tree
import (
"fmt"
"sort"
//. "github.com/mudler/luet/pkg/config"
"github.com/ghodss/yaml"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
)
type TreePackageResult struct {
Name string `json:"name"`
Category string `json:"category"`
Version string `json:"version"`
Path string `json:"path"`
}
type TreeResults struct {
Packages []TreePackageResult `json:"packages"`
}
func pkgDetail(pkg pkg.Package) string {
ans := fmt.Sprintf(`
@@ Package: %s/%s-%s
Description: %s
License: %s`,
pkg.GetCategory(), pkg.GetName(), pkg.GetVersion(),
pkg.GetDescription(), pkg.GetLicense())
for idx, u := range pkg.GetURI() {
if idx == 0 {
ans += fmt.Sprintf(" URLs: %s", u)
} else {
ans += fmt.Sprintf(" %s", u)
}
}
return ans
}
func NewTreePkglistCommand() *cobra.Command {
var excludes []string
var matches []string
var ans = &cobra.Command{
Use: "pkglist [OPTIONS]",
Short: "List of the packages found in tree.",
Args: cobra.NoArgs,
PreRun: func(cmd *cobra.Command, args []string) {
t, _ := cmd.Flags().GetStringArray("tree")
if len(t) == 0 {
Fatal("Mandatory tree param missing.")
}
},
Run: func(cmd *cobra.Command, args []string) {
var results TreeResults
treePath, _ := cmd.Flags().GetStringArray("tree")
verbose, _ := cmd.Flags().GetBool("verbose")
buildtime, _ := cmd.Flags().GetBool("buildtime")
full, _ := cmd.Flags().GetBool("full")
revdeps, _ := cmd.Flags().GetBool("revdeps")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
LuetCfg.GetLogging().SetLogLevel("error")
}
var reciper tree.Builder
if buildtime {
reciper = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
} else {
reciper = tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
}
for _, t := range treePath {
err := reciper.Load(t)
if err != nil {
Fatal("Error on load tree ", err)
}
}
regExcludes, err := helpers.CreateRegexArray(excludes)
if err != nil {
Fatal(err.Error())
}
regMatches, err := helpers.CreateRegexArray(matches)
if err != nil {
Fatal(err.Error())
}
plist := make([]string, 0)
for _, p := range reciper.GetDatabase().World() {
pkgstr := ""
addPkg := true
if full {
pkgstr = pkgDetail(p)
} else if verbose {
pkgstr = p.HumanReadableString()
} else {
pkgstr = fmt.Sprintf("%s/%s", p.GetCategory(), p.GetName())
}
if len(matches) > 0 {
matched := false
for _, rgx := range regMatches {
if rgx.MatchString(pkgstr) {
matched = true
break
}
}
if !matched {
addPkg = false
}
}
if len(excludes) > 0 && addPkg {
for _, rgx := range regExcludes {
if rgx.MatchString(pkgstr) {
addPkg = false
break
}
}
}
if addPkg {
if !revdeps {
plist = append(plist, pkgstr)
results.Packages = append(results.Packages, TreePackageResult{
Name: p.GetName(),
Version: p.GetVersion(),
Category: p.GetCategory(),
Path: p.GetPath(),
})
} else {
visited := make(map[string]interface{})
for _, revdep := range p.ExpandedRevdeps(reciper.GetDatabase(), visited) {
if full {
pkgstr = pkgDetail(revdep)
} else if verbose {
pkgstr = revdep.HumanReadableString()
} else {
pkgstr = fmt.Sprintf("%s/%s", revdep.GetCategory(), revdep.GetName())
}
plist = append(plist, pkgstr)
results.Packages = append(results.Packages, TreePackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Path: revdep.GetPath(),
})
}
}
}
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
switch out {
case "yaml":
fmt.Println(string(y))
case "json":
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
default:
sort.Strings(plist)
for _, p := range plist {
fmt.Println(p)
}
}
},
}
ans.Flags().BoolP("buildtime", "b", false, "Build time match")
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().Bool("revdeps", false, "Search package reverse dependencies")
ans.Flags().BoolP("verbose", "v", false, "Add package version")
ans.Flags().BoolP("full", "f", false, "Show package detail")
ans.Flags().StringArrayP("tree", "t", []string{}, "Path of the tree to use.")
ans.Flags().StringSliceVarP(&matches, "matches", "m", []string{},
"Include only matched packages from list. (Use string as regex).")
ans.Flags().StringSliceVarP(&excludes, "exclude", "e", []string{},
"Exclude matched packages from list. (Use string as regex).")
return ans
}
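
The --matches/--exclude handling above is a plain include/exclude filter over regular expressions that helpers.CreateRegexArray compiles once up front. A self-contained sketch of the same filtering logic, inlined against the standard regexp package (the sample package names are made up):

package main

import (
    "fmt"
    "regexp"
)

// keep reports whether a package string passes the include/exclude filters:
// it must match at least one include pattern (if any are given) and no exclude pattern.
func keep(pkgstr string, includes, excludes []*regexp.Regexp) bool {
    if len(includes) > 0 {
        matched := false
        for _, rgx := range includes {
            if rgx.MatchString(pkgstr) {
                matched = true
                break
            }
        }
        if !matched {
            return false
        }
    }
    for _, rgx := range excludes {
        if rgx.MatchString(pkgstr) {
            return false
        }
    }
    return true
}

func main() {
    includes := []*regexp.Regexp{regexp.MustCompile("^app/")}
    excludes := []*regexp.Regexp{regexp.MustCompile("-dev$")}

    for _, p := range []string{"app/foo", "app/bar-dev", "sys/baz"} {
        fmt.Println(p, keep(p, includes, excludes))
    }
}

A package string is kept only if it matches at least one include pattern (when any are given) and no exclude pattern, which is exactly the addPkg bookkeeping in the loop above.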

cmd/tree/validate.go (new file, 295 lines)

@@ -0,0 +1,295 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_tree
import (
"errors"
"fmt"
"os"
"regexp"
"sort"
"strconv"
"sync"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
)
func validateWorker(i int,
wg *sync.WaitGroup,
c <-chan pkg.Package,
reciper tree.Builder,
withSolver bool,
regExcludes, regMatches []*regexp.Regexp,
excludes, matches []string,
errs chan error) {
defer wg.Done()
var depSolver solver.PackageSolver
brokenPkgs := 0
brokenDeps := 0
var errstr string
emptyInstallationDb := pkg.NewInMemoryDatabase(false)
if withSolver {
depSolver = solver.NewSolver(pkg.NewInMemoryDatabase(false),
reciper.GetDatabase(),
emptyInstallationDb)
}
for p := range c {
found, err := reciper.GetDatabase().FindPackages(
&pkg.DefaultPackage{
Name: p.GetName(),
Category: p.GetCategory(),
Version: ">=0",
},
)
if err != nil || len(found) < 1 {
if err != nil {
errstr = err.Error()
} else {
errstr = "No packages"
}
Error(fmt.Sprintf("%s/%s-%s: Broken. No versions could be found by database %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
errstr,
))
errs <- errors.New(
fmt.Sprintf("%s/%s-%s: Broken. No versions could be found by database %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
errstr,
))
brokenPkgs++
}
pkgstr := fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(),
p.GetVersion())
validpkg := true
if len(matches) > 0 {
matched := false
for _, rgx := range regMatches {
if rgx.MatchString(pkgstr) {
matched = true
break
}
}
if !matched {
continue
}
}
if len(excludes) > 0 {
excluded := false
for _, rgx := range regExcludes {
if rgx.MatchString(pkgstr) {
excluded = true
break
}
}
if excluded {
continue
}
}
Info("Checking package "+fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(), p.GetVersion()), "with", len(p.GetRequires()), "dependencies.")
all := p.GetRequires()
all = append(all, p.GetConflicts()...)
for _, r := range all {
var deps pkg.Packages
var err error
if r.IsSelector() {
deps, err = reciper.GetDatabase().FindPackages(
&pkg.DefaultPackage{
Name: r.GetName(),
Category: r.GetCategory(),
Version: r.GetVersion(),
},
)
} else {
deps = append(deps, r)
}
if err != nil || len(deps) < 1 {
if err != nil {
errstr = err.Error()
} else {
errstr = "No packages"
}
Error(fmt.Sprintf("%s/%s-%s: Broken Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr,
))
errs <- errors.New(
fmt.Sprintf("%s/%s-%s: Broken Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr))
brokenDeps++
validpkg = false
} else {
Debug("Find packages for dep",
fmt.Sprintf("%s/%s-%s", r.GetCategory(), r.GetName(), r.GetVersion()))
if withSolver {
Spinner(32)
solution, err := depSolver.Install(pkg.Packages{r})
ass := solution.SearchByName(r.GetPackageName())
if err == nil {
_, err = solution.Order(reciper.GetDatabase(), ass.Package.GetFingerPrint())
}
SpinnerStop()
if err != nil {
Error(fmt.Sprintf("%s/%s-%s: solver broken for dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error(),
))
errs <- errors.New(
fmt.Sprintf("%s/%s-%s: solver broken for Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error()))
brokenDeps++
validpkg = false
}
}
}
}
if !validpkg {
brokenPkgs++
}
}
}
func NewTreeValidateCommand() *cobra.Command {
var excludes []string
var matches []string
var treePaths []string
var ans = &cobra.Command{
Use: "validate [OPTIONS]",
Short: "Validate a tree or a list of packages",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
if len(treePaths) < 1 {
Fatal("Mandatory tree param missing.")
}
},
Run: func(cmd *cobra.Command, args []string) {
concurrency := LuetCfg.GetGeneral().Concurrency
withSolver, _ := cmd.Flags().GetBool("with-solver")
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
for _, treePath := range treePaths {
err := reciper.Load(treePath)
if err != nil {
Fatal("Error on load tree ", err)
}
}
regExcludes, err := helpers.CreateRegexArray(excludes)
if err != nil {
Fatal(err.Error())
}
regMatches, err := helpers.CreateRegexArray(matches)
if err != nil {
Fatal(err.Error())
}
all := make(chan pkg.Package)
errs := make(chan error)
var wg = new(sync.WaitGroup)
for i := 0; i < concurrency; i++ {
wg.Add(1)
go validateWorker(i, wg, all,
reciper, withSolver, regExcludes, regMatches, excludes, matches,
errs)
}
for _, p := range reciper.GetDatabase().World() {
all <- p
}
close(all)
// Wait separately and once done close the channel
go func() {
wg.Wait()
close(errs)
}()
stringerrs := []string{}
for e := range errs {
stringerrs = append(stringerrs, e.Error())
}
sort.Strings(stringerrs)
for _, e := range stringerrs {
fmt.Println(e)
}
// fmt.Println("Broken packages:", brokenPkgs, "(", brokenDeps, "deps ).")
if len(stringerrs) != 0 {
Fatal("Errors: " + strconv.Itoa(len(stringerrs)))
// if brokenPkgs > 0 {
//os.Exit(1)
} else {
Info("All good! :white_check_mark:")
os.Exit(0)
}
},
}
ans.Flags().BoolP("with-solver", "s", false,
"Enable check of requires also with solver.")
ans.Flags().StringSliceVarP(&treePaths, "tree", "t", []string{},
"Path of the tree to use.")
ans.Flags().StringSliceVarP(&excludes, "exclude", "e", []string{},
"Exclude matched packages from analysis. (Use string as regex).")
ans.Flags().StringSliceVarP(&matches, "matches", "m", []string{},
"Analyze only matched packages. (Use string as regex).")
return ans
}
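
Structurally, NewTreeValidateCommand fans the whole package world out to a fixed pool of validateWorker goroutines over an unbuffered channel and funnels failures back through a single errs channel that is closed only once every worker has returned. A stripped-down sketch of that fan-out/fan-in skeleton, with the feeding loop moved into its own goroutine so that error reporting can never stall the feeder (the validation itself is only a stub here):

package main

import (
    "fmt"
    "sync"
)

func worker(id int, wg *sync.WaitGroup, in <-chan string, errs chan<- error) {
    defer wg.Done()
    for item := range in {
        // Validation stub: report odd-length names as "broken".
        if len(item)%2 != 0 {
            errs <- fmt.Errorf("%s: broken", item)
        }
    }
}

func main() {
    concurrency := 4
    in := make(chan string)
    errs := make(chan error)

    var wg sync.WaitGroup
    for i := 0; i < concurrency; i++ {
        wg.Add(1)
        go worker(i, &wg, in, errs)
    }

    // Feed the work from its own goroutine so that draining errs below
    // can proceed even while workers are still reporting failures.
    go func() {
        for _, p := range []string{"app/foo", "sys/bar", "dev-lang/go"} {
            in <- p
        }
        close(in)
    }()

    // Close the error channel only after every worker has returned,
    // so the range below terminates.
    go func() {
        wg.Wait()
        close(errs)
    }()

    for err := range errs {
        fmt.Println(err)
    }
}

Closing errs from a goroutine that waits on the WaitGroup is what lets the final collection loop terminate cleanly, mirroring the "wait separately and once done close the channel" step above.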

cmd/uninstall.go

@@ -15,16 +15,15 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
)
@@ -39,34 +38,25 @@ var uninstallCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
for _, a := range args {
gp, err := _gentoo.ParsePackageStr(a)
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps := LuetCfg.Viper.GetBool("nodeps")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -75,7 +65,12 @@ var uninstallCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
})
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
@@ -103,5 +98,8 @@ func init() {
uninstallCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
uninstallCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
uninstallCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
uninstallCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
uninstallCmd.Flags().Bool("force", false, "Force uninstall")
RootCmd.AddCommand(uninstallCmd)
}

cmd/upgrade.go

@@ -36,6 +36,7 @@ var upgradeCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Upgrades packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
@@ -55,6 +56,7 @@ var upgradeCmd = &cobra.Command{
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -63,7 +65,11 @@ var upgradeCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().String())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
Force: force,
})
inst.Repositories(repos)
_, err := inst.SyncRepositories(false)
if err != nil {
@@ -95,5 +101,7 @@ func init() {
upgradeCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
upgradeCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
upgradeCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
upgradeCmd.Flags().Bool("force", false, "Force upgrade by ignoring errors")
RootCmd.AddCommand(upgradeCmd)
}


@@ -56,6 +56,10 @@
# The path is append to rootfs option path.
# database_path: "/var/cache/luet"
#
# Define the tmpdir base directory where luet store temporary files.
# Default $TMPDIR/tmpluet
# tmpdir_base: "/tmp/tmpluet"
#
# ---------------------------------------------
# Repositories configurations directories.
# ---------------------------------------------
@@ -96,8 +100,11 @@
# packages. By default caching is disable.
# cached: false
#
# Path where store tree of the specifications. Default path is $database_path/repos/$repo_name
# tree_path: "/var/cache/luet/repos/local"
# Path where store tree of the specifications. Default path is $database_path/repos/$repo_name/treefs
# tree_path: "/var/cache/luet/repos/local/treefs"
#
# Path where store repository metadata. Default path is $database_path/repos/$repo_name/meta
# meta_path: "/var/cache/luet/repos/local/meta"
#
# Define the list of the URL where retrieve tree and packages.
# urls:
@@ -125,4 +132,4 @@
#
# Number of overall attempts that the solver has available before bailing out.
# max_attempts: 9000
#
#
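
The commented defaults above tie the new per-repository paths to database_path and the repository name (with distinct treefs and meta leaf directories), and temporary files to a tmpluet directory under $TMPDIR. A small illustrative sketch of how those documented defaults compose; the helper names are hypothetical and not luet's actual configuration code:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// defaultTreePath and defaultMetaPath mirror the documented defaults:
// $database_path/repos/$repo_name/treefs and $database_path/repos/$repo_name/meta.
func defaultTreePath(databasePath, repoName string) string {
    return filepath.Join(databasePath, "repos", repoName, "treefs")
}

func defaultMetaPath(databasePath, repoName string) string {
    return filepath.Join(databasePath, "repos", repoName, "meta")
}

// defaultTmpdirBase mirrors the documented default of $TMPDIR/tmpluet.
func defaultTmpdirBase() string {
    return filepath.Join(os.TempDir(), "tmpluet")
}

func main() {
    fmt.Println(defaultTreePath("/var/cache/luet", "local")) // /var/cache/luet/repos/local/treefs
    fmt.Println(defaultMetaPath("/var/cache/luet", "local")) // /var/cache/luet/repos/local/meta
    fmt.Println(defaultTmpdirBase())
}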

go.mod (14 changed lines)

@@ -4,23 +4,25 @@ go 1.12
require (
github.com/DataDog/zstd v1.4.4 // indirect
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7
github.com/Sabayon/pkgs-checker v0.6.2-0.20200404093625-076438c31739
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/briandowns/spinner v1.7.0
github.com/cavaliercoder/grab v2.0.0+incompatible
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3
github.com/docker/docker v0.7.3-0.20180827131323-0c5f8d2b9b23
github.com/docker/docker v17.12.0-ce-rc1.0.20200417035958-130b0bc6032c+incompatible
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/ghodss/yaml v1.0.0
github.com/hashicorp/go-version v1.2.0
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
github.com/klauspost/pgzip v1.2.1
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d
github.com/kr/pretty v0.2.0 // indirect
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/mattn/go-isatty v0.0.10 // indirect
github.com/mudler/docker-companion v0.4.6-0.20191110154655-b8b364100616
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 // indirect
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/onsi/ginkgo v1.10.1
github.com/onsi/gomega v1.7.0
github.com/otiai10/copy v1.0.2
@@ -32,16 +34,16 @@ require (
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.5.0
github.com/stevenle/topsort v0.0.0-20130922064739-8130c1d7596b
go.etcd.io/bbolt v1.3.3
go.etcd.io/bbolt v1.3.4
go.uber.org/atomic v1.5.1 // indirect
go.uber.org/multierr v1.4.0 // indirect
go.uber.org/zap v1.13.0
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876 // indirect
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f // indirect
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 // indirect
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c // indirect
golang.org/x/tools v0.0.0-20200102200121-6de373a2766c // indirect
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 // indirect
gopkg.in/yaml.v2 v2.2.7
mvdan.cc/sh/v3 v3.0.0-beta1
)
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200418093736-b2b0766ef22c+incompatible

go.sum (126 changed lines)

@@ -1,3 +1,4 @@
bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
@@ -5,15 +6,24 @@ github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/DataDog/zstd v1.4.4 h1:+IawcoXhCBylN7ccwdwf8LOH2jKq7NavGpEPanrlTzE=
github.com/DataDog/zstd v1.4.4/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200418091245-6e0951be974d+incompatible h1:QseqqJ8Os1VK8f4h5lRkah3nOL9o9Ph5CQH/5jOU8U0=
github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200418091245-6e0951be974d+incompatible/go.mod h1:/XyFFC7lL96pE2kKmar2jd4LKxWzy1MmbiDHV0nK3bU=
github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200418093736-b2b0766ef22c+incompatible h1:0cMtxRtHURUYiWHyJ7FOwd+wH5kPNGjXlDf48ovuLlw=
github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200418093736-b2b0766ef22c+incompatible/go.mod h1:/XyFFC7lL96pE2kKmar2jd4LKxWzy1MmbiDHV0nK3bU=
github.com/Microsoft/go-winio v0.4.11 h1:zoIOcVf0xPN1tnMVbTtEdI+P8OofVk3NObnwOQ6nK2Q=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873 h1:93nQ7k53GjoMQ07HVP8g6Zj1fQZDDj7Xy2VkNNtvX8o=
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
github.com/Microsoft/hcsshim v0.8.6 h1:ZfF0+zZeYdzMIVMZHKtDKJvLHj76XCuVae/jNkjj0IA=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.7 h1:ptnOoufxGSzauVTsdE+wMYnCWA301PdoN4xg5oRdZpg=
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7 h1:Vf80sSLu1ZWjjMmUKhw0FqM43lEOvT8O5B22NaHB6AQ=
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
github.com/Sabayon/pkgs-checker v0.6.2-0.20200404093625-076438c31739 h1:cWiphrLut8a+RNwosFre5j+zWDe1m6y1OpkSzn5bu1w=
github.com/Sabayon/pkgs-checker v0.6.2-0.20200404093625-076438c31739/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3 h1:Xu7z47ZiE/J+sKXHZMGxEor/oY2q6dq51fkO0JqdSwY=
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -31,17 +41,32 @@ github.com/aws/aws-sdk-go v1.23.21/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpi
github.com/aybabtme/rgbterm v0.0.0-20170906152045-cc83f3b3ce59/go.mod h1:q/89r3U2H7sSsE2t6Kca0lfwTK8JdoNGS/yzM/4iH5I=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/briandowns/spinner v1.7.0 h1:aan1hBBOoscry2TXAkgtxkJiq7Se0+9pt+TUWaPrB4g=
github.com/briandowns/spinner v1.7.0/go.mod h1://Zf9tMcxfRUA36V23M6YGEAv+kECGfvpnLTnb8n4XQ=
github.com/cavaliercoder/grab v2.0.0+incompatible h1:wZHbBQx56+Yxjx2TCGDcenhh3cJn7cCLMfkEPmySTSE=
github.com/cavaliercoder/grab v2.0.0+incompatible/go.mod h1:tTBkfNqSBfuMmMBFaO2phgyhdYhiZQ/+iXCZDzcDsMI=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/containerd v1.2.4 h1:qN8LCvw+KA5wVCOnHspD/n2K9cJ34+YOs05qBBWhHiw=
github.com/containerd/containerd v1.2.4/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0 h1:xjvXQWABwS2uiv3TWgQt5Uth60Gu86LTGZXMJkjc7rY=
github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/continuity v0.0.0-20180814194400-c7c5070e6f6e/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20181203112020-004b46473808 h1:4BX8f882bXEDKfWIf0wa8HRvpnBoPszJJXL+TVbBw4M=
github.com/containerd/continuity v0.0.0-20181203112020-004b46473808/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c h1:8ahmSVELW1wghbjerVAyuEYD5+Dio66RYvSS0iGfL1M=
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
@@ -53,8 +78,6 @@ github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwc
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d h1:U+s90UTSYgptZMwQh2aRr3LuazLJIa+Pg3Kc1ylSYVY=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/crillab/gophersat v1.1.7 h1:f2Phe0W9jGyN1OefygKdcTdNM99q/goSjbWrFRjZGWc=
github.com/crillab/gophersat v1.1.7/go.mod h1:S91tHga1PCZzYhCkStwZAhvp1rCc+zqtSi55I+vDWGc=
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3 h1:HO63LCf9kTXQgUnlvFeS2qSDQhZ/cLP8DAJO89CythY=
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3/go.mod h1:S91tHga1PCZzYhCkStwZAhvp1rCc+zqtSi55I+vDWGc=
github.com/cyphar/filepath-securejoin v0.2.2 h1:jCwT2GTP+PY5nBz3c/YL5PAIbusElVrPujOBSCj8xRg=
@@ -66,6 +89,8 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/distribution v2.7.0+incompatible h1:neUDAlf3wX6Ml4HdqTrbcOHXtfRN0TFIwt6YFL7N9RU=
github.com/docker/distribution v2.7.0+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.7.3-0.20180827131323-0c5f8d2b9b23 h1:mJtkfC9RUrUWHMk0cFDNhVoc9U3k2FRAzEZ+5pqSIHo=
github.com/docker/docker v0.7.3-0.20180827131323-0c5f8d2b9b23/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@@ -77,24 +102,32 @@ github.com/docker/libnetwork v0.8.0-dev.2.0.20180608203834-19279f049241 h1:+ebE/
github.com/docker/libnetwork v0.8.0-dev.2.0.20180608203834-19279f049241/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 h1:UhxFibDNY/bfvqU5CAUmr9zpesgbU6SWc8/B4mflAE4=
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd h1:JWEotl+g5uCCn37eVAYLF3UjBqO5HJ0ezZ5Zgnsdoqc=
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd/go.mod h1:y0+kb0ORo7mC8lQbUzC4oa7ufu565J6SyUgWd39Z1Ic=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsouza/go-dockerclient v1.3.1 h1:h0SaeiAGihssk+aZeKohbubHYKroCBlC7uuUyNhORI4=
github.com/fsouza/go-dockerclient v1.3.1/go.mod h1:IN9UPc4/w7cXiARH2Yg99XxUHbAM+6rAi9hzBVbkWRU=
github.com/fsouza/go-dockerclient v1.6.4 h1:B+L+1lz1LUrNgEUUh8PSG76s70EYC49ssv2xvTefTMM=
github.com/fsouza/go-dockerclient v1.6.4/go.mod h1:GOdftxWLWIbIWKbIMDroKFJzPdg6Iw7r+jX1DDZdVsA=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -107,18 +140,26 @@ github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8l
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2 h1:Pgr17XVTNXAk3q/r4CpKzC5xBM/qW1uVLV+IhRZpIIk=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/hcl v1.0.0 h1:0Anlzjpi4vEasTeNFn2mLJgTSwt0+6sfsiTG8qcWGx4=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/heroku/docker-registry-client v0.0.0-20181004091502-47ecf50fd8d4 h1:44WMsEqwiYnpHA3E4Rg1K379MH5iZllp2sO5nzXARI0=
@@ -134,6 +175,7 @@ github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22
github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7/go.mod h1:2iMrUgbbvHEiQClaW2NsSzMyGHqN+rDFqY705q49KG0=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.8.3 h1:CkLseiEYMM/fRb0RIg9mXB+Iwgmle+U9KGFu+JCO4Ec=
github.com/klauspost/compress v1.8.3/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
@@ -141,6 +183,8 @@ github.com/klauspost/cpuid v1.2.1 h1:vJi+O/nMdFt0vqm8NZBI6wzALWdA2X+egi0ogNyrC/w
github.com/klauspost/cpuid v1.2.1/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
github.com/klauspost/pgzip v1.2.1 h1:oIPZROsWuPHpOdMVWLuJZXwgjhrW8r1yEX8UqMyeNHM=
github.com/klauspost/pgzip v1.2.1/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d h1:X4cedH4Kn3JPupAwwWuo4AzYp16P0OyLO9d7OnMZc/c=
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d/go.mod h1:o8sgWoz3JADecfc/cTYD92/Et1yMqMy0utV1z+VaZao=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2 h1:DB17ag19krx9CFsz4o3enTrPXyIXCl+2iCXH/aMAp9s=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -175,10 +219,28 @@ github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyex
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/sys v0.0.0-20200320164225-6154f11e6840 h1:JG4pF+44/QLorxR8IyPXDwnhmfrinii0qaPsjmb4v4M=
github.com/moby/sys/mount v0.1.0 h1:Ytx78EatgFKtrqZ0BvJ0UtJE472ZvawVmil6pIfuCCU=
github.com/moby/sys/mount v0.1.0/go.mod h1:FVQFLDRWwyBjDTBNQXDlWnSFREqOo3OKX9aqhmeoo74=
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 h1:8oWZ4vNoz5UeQc0l0lJMTfIjsEP5fyUI+TuK0gbrm0o=
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840/go.mod h1:FVQFLDRWwyBjDTBNQXDlWnSFREqOo3OKX9aqhmeoo74=
github.com/moby/sys/mountinfo v0.1.0 h1:r8vMRbMAFEAfiNptYVokP+nfxPJzvRuia5e2vzXtENo=
github.com/moby/sys/mountinfo v0.1.0/go.mod h1:w2t2Avltqx8vE7gX5l+QiBKxODu2TX0+Syr3h52Tw4o=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 h1:RWengNIwukTxcDr9M+97sNutRR1RKhG96O6jWumTTnw=
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/mudler/docker-companion v0.4.6-0.20191110154655-b8b364100616 h1:JwaU8XCtGoY43bo15sKp7qpJ5eaa5vFtRW/oE8yUY/4=
github.com/mudler/docker-companion v0.4.6-0.20191110154655-b8b364100616/go.mod h1:5uUdicxxQ0H591kxsvbZtgjY11OWVUuDShh08gg60sI=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87 h1:mGz7T8KvmHH0gLWPI5tQne8xl2cO3T8wrrb6Aa16Jxo=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87/go.mod h1:1w4zI1LYXDeiUXqedPcrT5eQJnmKR6dbg5iJMgSIP/Y=
github.com/mudler/moby v1.13.1 h1:5cBPERK0rJxQrZNUAwJplBnAx82lblDQOSW97XiErck=
github.com/mudler/moby v1.13.1/go.mod h1:PyWxMK6EBkgxr56aaA8QboVb2zeLae6KieMajAkgdbI=
github.com/mudler/moby v17.12.0-ce-rc1.0.20200414193041-3d17d54c7bb2+incompatible h1:NhzknS6LX7aVdEf/OX3qwa99tIEYHPKNcBwRUqQFE2Y=
github.com/mudler/moby v17.12.0-ce-rc1.0.20200414193041-3d17d54c7bb2+incompatible/go.mod h1:PyWxMK6EBkgxr56aaA8QboVb2zeLae6KieMajAkgdbI=
github.com/mudler/moby v17.12.0-ce-rc1.0.20200417164438-1380785c5bb0+incompatible h1:9yBoQCRzpBKiPa+PgB/ayONrYh8hImL9TnBiAxKuiHg=
github.com/mudler/moby v17.12.0-ce-rc1.0.20200417164438-1380785c5bb0+incompatible/go.mod h1:PyWxMK6EBkgxr56aaA8QboVb2zeLae6KieMajAkgdbI=
github.com/mudler/moby v17.12.0-ce-rc1.0.20200418083242-8931a73b51b2+incompatible/go.mod h1:PyWxMK6EBkgxr56aaA8QboVb2zeLae6KieMajAkgdbI=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
@@ -190,14 +252,18 @@ github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/openSUSE/umoci v0.1.1-0.20191030112807-c0dd46ae078f h1:G9hyzNrFbTgp9KEoGRcNYxAT41lo7hDy9oxXT1Y7WHI=
github.com/openSUSE/umoci v0.1.1-0.20191030112807-c0dd46ae078f/go.mod h1:3p4KA5nwyY65lVmQZxv7tm0YEylJ+t1fY91ORsVXv58=
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v0.1.1 h1:GlxAyO6x8rfZYN9Tt0Kti5a/cP41iuiO2yYT0IJGY8Y=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.1 h1:wY4pOY8fBdSIvs9+IDHC55thBuEulhzfSgKeC1yFvzQ=
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
github.com/otiai10/copy v1.0.2 h1:DDNipYy6RkIkjMwy+AWzgKiNTyj2RUI9yEMeETEpVyc=
github.com/otiai10/copy v1.0.2/go.mod h1:c7RpqBkwMom4bYTSkLSym4VSJz/XtncWRAj/J4PEIMY=
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95 h1:+OLn68pqasWca0z5ryit9KGfp3sUsW4Lqg32iRMJyzs=
@@ -211,6 +277,7 @@ github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f h1:WyCn68lTiy
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f/go.mod h1:/iRjX3DdSK956SzsUdV55J+wIsQ+2IBWmBrB4RvZfk4=
github.com/pkg/diff v0.0.0-20190930165518-531926345625/go.mod h1:kFj35MyHn14a6pIgWhm46KVjJr5CHys3eEYxkuKD1EI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -219,10 +286,12 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.1.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
@@ -238,10 +307,14 @@ github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.5.0 h1:1N5EYkVAPEywqZRJd7cwnRtCb6xJx7NH3T3WUTF980Q=
github.com/sirupsen/logrus v1.5.0/go.mod h1:+F7Ogzej0PZc/94MaYx/nvG9jOFMD2osvC3s+Squfpo=
github.com/smartystreets/assertions v1.0.0/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
github.com/smartystreets/assertions v1.0.1/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
github.com/smartystreets/go-aws-auth v0.0.0-20180515143844-0c1422d1fdb9/go.mod h1:SnhjPscd9TpLiy1LpzGSKh3bXCfxxXuqd9xmQJy3slM=
@@ -254,11 +327,13 @@ github.com/spf13/afero v1.2.2 h1:5jhuqJyZCZf2JRofRvN/nIFgIWNzPa3/Vz8mYylgbWc=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5 h1:f0B+LkLX6DtmRH1isoNA9VTtNUK9K8xYd28JNNfOv/s=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -276,6 +351,7 @@ github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJy
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
github.com/tj/go-elastic v0.0.0-20171221160941-36157cbbebc2/go.mod h1:WjeM0Oo1eNAjXGDx2yma7uG2XoyRZTq1uv3M/o7imD0=
github.com/tj/go-kinesis v0.0.0-20171128231115-08b17f58cb1b/go.mod h1:/yhzCV0xPfx6jb1bBgRFjl5lytqVqZXEaeqWP8lTEao=
@@ -283,6 +359,7 @@ github.com/tj/go-spin v1.1.0/go.mod h1:Mg1mzmePZm4dva8Qz60H2lHwmJ2loum4VIrLgVnKw
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1 h1:+mkCCcOFKPnCmVYVcURKps1Xe+3zP90gSYGNfRkjoIY=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/vbatts/go-mtree v0.4.4 h1:+CncqETnSpxBCCUhRnNQBvxhsjWXNuc+ExZsLSNaj5o=
@@ -293,12 +370,19 @@ github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc h1:R83G5ikgLMxrB
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/vmihailenco/msgpack v4.0.1+incompatible h1:RMF1enSPeKTlXrXdOcqjFUElywVZjjC6pqse21bKbEU=
github.com/vmihailenco/msgpack v4.0.1+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.0/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.4 h1:hi1bXHMVrlQh6WwxAy+qZCV/SYIlqo+Ushwdpa4tAKg=
go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.5.1 h1:rsqfU5vBkVknbhUGbAUwQKR2H4ItV8tjJ+6kJX4cxHM=
@@ -312,6 +396,7 @@ go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9E
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0 h1:nR6NoDBgAf67s68NhaXbsojM+2gxp3S1hWkHDl27pVU=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180820150726-614d502a4dac/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -323,19 +408,26 @@ golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876 h1:sKJQZMuxjOAR/Uo2LBfU90onWEf1dF4C+0hPJCc9Mpc=
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNTEKDvWeuc1yieZ8qUzUE=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2 h1:4dVFTC832rPn4pomLSz1vA+are2+dU19w1H8OngV7nc=
@@ -346,6 +438,7 @@ golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAG
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -360,21 +453,31 @@ golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c h1:OYFUffxXPezb7BVTx9AaD4Vl0qtxmklBIkwCKH1YwDY=
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae h1:/WDfKMnPU+m5M4xB+6x4kaepxRw6jWvR5iDRdvjHgy8=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190913181337-0240832f5c3d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -388,9 +491,19 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IV
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0 h1:igQkv0AAhEIvTEpD5LIpAfav2eeVO9HBTjvKHVJPRSs=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0 h1:G+97AoqBnmZIT91cLG/EkCoK9NSelj64P8bOHHNmGn0=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.1 h1:zvIju4sqAGvwKspUQOhwnpcqSbzi7/H6QomNNjTL4sk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -413,9 +526,14 @@ gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gotest.tools v2.1.0+incompatible h1:5USw7CrJBYKqjg9R7QlA6jzqZKEAtvW82aNmsxxGPxw=
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2 h1:kG1BFyqVHuQoVQiR1bWGnfz/fmHvvuiSPIV7rvl360E=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
mvdan.cc/editorconfig v0.1.1-0.20191109213504-890940e3f00e/go.mod h1:Ge4atmRUYqueGppvJ7JNrtqpqokoJEFxYbP0Z+WeKS8=
mvdan.cc/sh/v3 v3.0.0-beta1 h1:UqiwBEXEPzelaGxuvixaOtzc7WzKtrElePJ8HqvW7K8=
mvdan.cc/sh/v3 v3.0.0-beta1/go.mod h1:rBIndNJFYPp8xSppiZcGIk6B5d1g3OEARxEaXjPxwVI=

pkg/box/exec.go (new file)

@@ -0,0 +1,185 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package box
import (
b64 "encoding/base64"
"fmt"
"os"
"os/exec"
"strings"
"syscall"
"github.com/pkg/errors"
helpers "github.com/mudler/luet/pkg/helpers"
)
type Box interface {
Run() error
Exec() error
}
type DefaultBox struct {
Name string
Root string
Env []string
Cmd string
Args []string
HostMounts []string
Stdin, Stdout, Stderr bool
}
func NewBox(cmd string, args, hostmounts, env []string, rootfs string, stdin, stdout, stderr bool) Box {
return &DefaultBox{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
Cmd: cmd,
Args: args,
Root: rootfs,
HostMounts: hostmounts,
Env: env,
}
}
func (b *DefaultBox) Exec() error {
if err := mountProc(b.Root); err != nil {
return errors.Wrap(err, "Failed mounting proc on rootfs")
}
if err := mountDev(b.Root); err != nil {
return errors.Wrap(err, "Failed mounting dev on rootfs")
}
for _, hostMount := range b.HostMounts {
target := hostMount
if strings.Contains(hostMount, ":") {
dest := strings.Split(hostMount, ":")
if len(dest) != 2 {
return errors.New("Invalid arguments for mount, it can be: fullpath, or source:target")
}
hostMount = dest[0]
target = dest[1]
}
if err := mountBind(hostMount, b.Root, target); err != nil {
return errors.Wrap(err, fmt.Sprintf("Failed mounting %s on rootfs", hostMount))
}
}
if err := PivotRoot(b.Root); err != nil {
return errors.Wrap(err, "Failed switching pivot on rootfs")
}
cmd := exec.Command(b.Cmd, b.Args...)
if b.Stdin {
cmd.Stdin = os.Stdin
}
if b.Stderr {
cmd.Stderr = os.Stderr
}
if b.Stdout {
cmd.Stdout = os.Stdout
}
cmd.Env = b.Env
if err := cmd.Run(); err != nil {
return errors.Wrap(err, fmt.Sprintf("Error running the %s command in box.Exec", b.Cmd))
}
return nil
}
func (b *DefaultBox) Run() error {
if !helpers.Exists(b.Root) {
return errors.New(b.Root + " does not exist")
}
// This matches the exec CLI command in luet
// TODO: Pass by env var as well
execCmd := []string{"exec", "--rootfs", b.Root, "--entrypoint", b.Cmd}
if b.Stdin {
execCmd = append(execCmd, "--stdin")
}
if b.Stderr {
execCmd = append(execCmd, "--stderr")
}
if b.Stdout {
execCmd = append(execCmd, "--stdout")
}
// Encode the command arguments in base64 to avoid bad input from the args given; --decode tells the exec subcommand to decode them
execCmd = append(execCmd, "--decode")
for _, m := range b.HostMounts {
execCmd = append(execCmd, "--mount")
execCmd = append(execCmd, m)
}
for _, e := range b.Env {
execCmd = append(execCmd, "--env")
execCmd = append(execCmd, e)
}
for _, a := range b.Args {
execCmd = append(execCmd, b64.StdEncoding.EncodeToString([]byte(a)))
}
cmd := exec.Command("/proc/self/exe", execCmd...)
if b.Stdin {
cmd.Stdin = os.Stdin
}
if b.Stderr {
cmd.Stderr = os.Stderr
}
if b.Stdout {
cmd.Stdout = os.Stdout
}
cmd.SysProcAttr = &syscall.SysProcAttr{
Cloneflags: syscall.CLONE_NEWNS |
syscall.CLONE_NEWUTS |
syscall.CLONE_NEWIPC |
syscall.CLONE_NEWPID |
syscall.CLONE_NEWNET |
syscall.CLONE_NEWUSER,
UidMappings: []syscall.SysProcIDMap{
{
ContainerID: 0,
HostID: os.Getuid(),
Size: 1,
},
},
GidMappings: []syscall.SysProcIDMap{
{
ContainerID: 0,
HostID: os.Getgid(),
Size: 1,
},
},
}
if err := cmd.Run(); err != nil {
return errors.Wrap(err, "Failed running Box command in box.Run")
}
return nil
}
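
For orientation, a minimal sketch of how the new Box API is consumed (the finalizer code later in this diff passes the same stdin/stdout/stderr flags). Run() re-executes /proc/self/exe with the exec subcommand, so this pattern only behaves as intended when compiled into the luet binary itself; the rootfs path below is illustrative only.

package main

import (
	"log"

	box "github.com/mudler/luet/pkg/box"
)

func main() {
	// Argument order: cmd, args, hostmounts, env, rootfs, stdin, stdout, stderr.
	b := box.NewBox("sh", []string{"-c", "echo hello from the box"}, []string{}, []string{}, "/tmp/rootfs", false, true, true)
	if err := b.Run(); err != nil {
		log.Fatal(err)
	}
}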

pkg/box/rootfs.go (new file)

@@ -0,0 +1,94 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package box
import (
"os"
"path/filepath"
"syscall"
)
func PivotRoot(newroot string) error {
putold := filepath.Join(newroot, "/.pivot_root")
// bind mount newroot to itself - this is a slight hack needed to satisfy the
// pivot_root requirement that newroot and putold must not be on the same
// filesystem as the current root
if err := syscall.Mount(newroot, newroot, "", syscall.MS_BIND|syscall.MS_REC, ""); err != nil {
return err
}
// create putold directory
if err := os.MkdirAll(putold, 0700); err != nil {
return err
}
// call pivot_root
if err := syscall.PivotRoot(newroot, putold); err != nil {
return err
}
// ensure current working directory is set to new root
if err := os.Chdir("/"); err != nil {
return err
}
// umount putold, which now lives at /.pivot_root
putold = "/.pivot_root"
if err := syscall.Unmount(putold, syscall.MNT_DETACH); err != nil {
return err
}
// remove putold
if err := os.RemoveAll(putold); err != nil {
return err
}
return nil
}
func mountProc(newroot string) error {
source := "proc"
target := filepath.Join(newroot, "/proc")
fstype := "proc"
flags := 0
data := ""
os.MkdirAll(target, 0755)
if err := syscall.Mount(source, target, fstype, uintptr(flags), data); err != nil {
return err
}
return nil
}
func mountBind(hostfolder, newroot, dst string) error {
source := hostfolder
target := filepath.Join(newroot, dst)
fstype := "bind"
data := ""
os.MkdirAll(target, 0755)
if err := syscall.Mount(source, target, fstype, syscall.MS_BIND|syscall.MS_REC, data); err != nil {
return err
}
return nil
}
func mountDev(newroot string) error {
return mountBind("/dev", newroot, "/dev")
}


@@ -60,6 +60,7 @@ func (i ArtifactIndex) CleanPath() ArtifactIndex {
Dependencies: art.Dependencies,
CompressionType: art.CompressionType,
Checksums: art.Checksums,
Files: art.Files,
})
}
return newIndex
@@ -77,6 +78,7 @@ type PackageArtifact struct {
Checksums Checksums `json:"checksums"`
SourceAssertion solver.PackagesAssertions `json:"-"`
CompressionType CompressionImplementation `json:"compressiontype"`
Files []string `json:"files"`
}
func NewPackageArtifact(path string) Artifact {
@@ -116,9 +118,19 @@ func (a *PackageArtifact) SetCompressionType(t CompressionImplementation) {
func (a *PackageArtifact) GetChecksums() Checksums {
return a.Checksums
}
func (a *PackageArtifact) SetChecksums(c Checksums) {
a.Checksums = c
}
func (a *PackageArtifact) SetFiles(f []string) {
a.Files = f
}
func (a *PackageArtifact) GetFiles() []string {
return a.Files
}
func (a *PackageArtifact) Hash() error {
return a.Checksums.Generate(a)
}
@@ -308,6 +320,7 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
return errors.New("Compression type must be supplied")
}
// FileList generates the list of files of a package from the local archive
func (a *PackageArtifact) FileList() ([]string, error) {
var tr *tar.Reader
switch a.CompressionType {
@@ -397,14 +410,14 @@ func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
// ExtractArtifactFromDelta extracts deltas from ArtifactLayer from an image in tar format
func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, t CompressionImplementation) (Artifact, error) {
archive, err := ioutil.TempDir(os.TempDir(), "archive")
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for archive")
}
defer os.RemoveAll(archive) // clean up
if strings.HasSuffix(src, ".tar") {
rootfs, err := ioutil.TempDir(os.TempDir(), "rootfs")
rootfs, err := LuetCfg.GetSystem().TempDir("rootfs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}


@@ -16,6 +16,7 @@
package backend
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
@@ -236,14 +237,22 @@ func (*SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPer
// ]
// Changes uses container-diff (https://github.com/GoogleContainerTools/container-diff) for retrieving out layer diffs
func (*SimpleDocker) Changes(fromImage, toImage string) ([]compiler.ArtifactLayer, error) {
tmpdiffs, err := ioutil.TempDir(os.TempDir(), "tmpdiffs")
tmpdiffs, err := config.LuetCfg.GetSystem().TempDir("tmpdiffs")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tmpdiffs) // clean up
var errorBuffer bytes.Buffer
diffargs := []string{"diff", fromImage, toImage, "-v", "error", "-q", "--type=file", "-j", "-n", "-c", tmpdiffs}
out, err := exec.Command("container-diff", diffargs...).Output()
cmd := exec.Command("container-diff", diffargs...)
cmd.Stderr = &errorBuffer
out, err := cmd.Output()
if string(errorBuffer.Bytes()) != "" {
Warning("container-diff errored with: " + string(errorBuffer.Bytes()))
}
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Failed Resolving layer diffs: "+string(out))
}
@@ -259,7 +268,7 @@ func (*SimpleDocker) Changes(fromImage, toImage string) ([]compiler.ArtifactLaye
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Failed unmarshalling json response: "+string(out))
}
if config.LuetCfg.GetLogging().Level == "debug" {
if config.LuetCfg.GetGeneral().Debug {
summary := compiler.ComputeArtifactLayerSummary(diffs)
for _, l := range summary.Layers {
Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",


@@ -265,11 +265,12 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
}
}
fp := p.GetPackage().HashFingerprint()
if buildertaggedImage == "" {
buildertaggedImage = cs.ImageRepository + "-" + p.GetPackage().GetFingerPrint() + "-builder"
buildertaggedImage = cs.ImageRepository + "-" + fp + "-builder"
}
if packageImage == "" {
packageImage = cs.ImageRepository + "-" + p.GetPackage().GetFingerPrint()
packageImage = cs.ImageRepository + "-" + fp
}
Info(pkgTag, "Generating :whale: definition for builder image from", image)
@@ -300,6 +301,10 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
return nil, errors.Wrap(err, "Could not export image")
}
if !cs.Options.KeepImageExport {
defer os.Remove(builderOpts.Destination)
}
if cs.Options.Push && buildBuilderImage {
if err = cs.Backend.Push(builderOpts); err != nil {
return nil, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
@@ -338,6 +343,10 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
return nil, errors.Wrap(err, "Failed exporting image")
}
if !cs.Options.KeepImageExport {
defer os.Remove(runnerOpts.Destination)
}
if cs.Options.Push && buildPackageImage {
err = cs.Backend.Push(runnerOpts)
if err != nil {
@@ -411,84 +420,19 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
artifact.SetCompileSpec(p)
}
err = artifact.WriteYaml(p.GetOutputPath())
filelist, err := artifact.FileList()
if err != nil {
return artifact, err
}
Info(pkgTag, " :white_check_mark: Done")
return artifact, nil
}
func (cs *LuetCompiler) packageFromImage(p CompilationSpec, tag string, keepPermissions, keepImg bool, concurrency int) (Artifact, error) {
if !cs.Clean {
if art, err := LoadArtifactFromYaml(p); err == nil {
Debug("Artifact reloaded. Skipping build")
return art, err
}
}
pkgTag := ":package: " + p.GetPackage().GetName()
Info(pkgTag, " 🍩 Build starts 🔨 🔨 🔨 ")
builderOpts := CompilerBackendOptions{
ImageName: p.GetImage(),
Destination: p.Rel(p.GetPackage().GetFingerPrint() + ".image.tar"),
}
err := cs.Backend.DownloadImage(builderOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not download image")
return artifact, errors.Wrap(err, "Failed getting package list")
}
if tag != "" {
err = cs.Backend.CopyImage(p.GetImage(), tag)
if err != nil {
return nil, errors.Wrap(err, "Could not download image")
}
}
err = cs.Backend.ExportImage(builderOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not export image")
}
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
// TODO: Compression and such
err = cs.Backend.ExtractRootfs(CompilerBackendOptions{
ImageName: p.GetImage(),
SourcePath: builderOpts.Destination, Destination: rootfs}, keepPermissions)
if err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
}
artifact := NewPackageArtifact(p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar"))
artifact.SetCompileSpec(p)
artifact.SetCompressionType(cs.CompressionType)
err = artifact.Compress(rootfs, concurrency)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
if !keepImg {
// We keep them around, so to not reload them from the tar (which should be the "correct way") and we automatically share the same layers
// TODO: Handle caching and optionally do not remove things
err = cs.Backend.RemoveImage(builderOpts)
if err != nil {
Warning("Could not remove image ", builderOpts.ImageName)
// return nil, errors.Wrap(err, "Could not remove image")
}
}
Info(pkgTag, " :white_check_mark: Done")
artifact.SetFiles(filelist)
err = artifact.WriteYaml(p.GetOutputPath())
if err != nil {
return artifact, err
return artifact, errors.Wrap(err, "Failed while writing metadata file")
}
Info(pkgTag, " :white_check_mark: Done")
return artifact, nil
}
@@ -496,12 +440,15 @@ func (cs *LuetCompiler) ComputeDepTree(p CompilationSpec) (solver.PackagesAssert
s := solver.NewResolver(pkg.NewInMemoryDatabase(false), cs.Database, pkg.NewInMemoryDatabase(false), cs.Options.SolverOptions.Resolver())
solution, err := s.Install([]pkg.Package{p.GetPackage()})
solution, err := s.Install(pkg.Packages{p.GetPackage()})
if err != nil {
return nil, errors.Wrap(err, "While computing a solution for "+p.GetPackage().GetName())
return nil, errors.Wrap(err, "While computing a solution for "+p.GetPackage().HumanReadableString())
}
dependencies := solution.Order(cs.Database, p.GetPackage().GetFingerPrint())
dependencies, err := solution.Order(cs.Database, p.GetPackage().GetFingerPrint())
if err != nil {
return nil, errors.Wrap(err, "While order a solution for "+p.GetPackage().HumanReadableString())
}
assertions := solver.PackagesAssertions{}
for _, assertion := range dependencies { //highly dependent on the order
@@ -530,21 +477,20 @@ func (cs *LuetCompiler) Compile(keepPermissions bool, p CompilationSpec) (Artifa
}
func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p CompilationSpec) (Artifact, error) {
Info(":package: Compiling", p.GetPackage().GetName(), "version", p.GetPackage().GetVersion(), ".... :coffee:")
Info(":package: Compiling", p.GetPackage().HumanReadableString(), ".... :coffee:")
if len(p.GetPackage().GetRequires()) == 0 && p.GetImage() == "" {
Error("Package with no deps and no seed image supplied, bailing out")
return nil, errors.New("Package " + p.GetPackage().GetFingerPrint() + "with no deps and no seed image supplied, bailing out")
}
targetAssertion := p.GetSourceAssertion().Search(p.GetPackage().GetFingerPrint())
targetPackageHash := cs.ImageRepository + ":" + targetAssertion.Hash.PackageHash
// - If image is set we just generate a plain dockerfile
// Treat last case (easier) first. The image is provided and we just compute a plain dockerfile with the images listed as above
if p.GetImage() != "" {
if p.ImageUnpack() { // If it is just an entire image, create a package from it
return cs.packageFromImage(p, "", keepPermissions, cs.KeepImg, concurrency)
}
return cs.compileWithImage(p.GetImage(), "", "", concurrency, keepPermissions, cs.KeepImg, p)
return cs.compileWithImage(p.GetImage(), "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p)
}
// - If image is not set, we read a base_image. Then we will build one image from it to kick-off our build based
@@ -558,74 +504,68 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
depsN := 0
currentN := 0
Info(":deciduous_tree: Build dependencies for " + p.GetPackage().GetName())
for _, assertion := range dependencies { //highly dependent on the order
depsN++
Info(" :arrow_right_hook:", assertion.Package.GetName(), ":leaves:", assertion.Package.GetVersion(), "(", assertion.Package.GetCategory(), ")")
}
for _, assertion := range dependencies { //highly dependent on the order
currentN++
pkgTag := fmt.Sprintf(":package: %d/%d %s ⤑ %s", currentN, depsN, p.GetPackage().GetName(), assertion.Package.GetName())
Info(pkgTag, " :zap: Building dependency")
compileSpec, err := cs.FromPackage(assertion.Package)
if err != nil {
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
if !cs.Options.NoDeps {
Info(":deciduous_tree: Build dependencies for " + p.GetPackage().HumanReadableString())
for _, assertion := range dependencies { //highly dependent on the order
depsN++
Info(" :arrow_right_hook:", assertion.Package.HumanReadableString(), ":leaves:")
}
compileSpec.SetOutputPath(p.GetOutputPath())
buildImageHash := cs.ImageRepository + ":" + assertion.Hash.BuildHash
currentPackageImageHash := cs.ImageRepository + ":" + assertion.Hash.PackageHash
Debug(pkgTag, " :arrow_right_hook: :whale: Builder image from", buildImageHash)
Debug(pkgTag, " :arrow_right_hook: :whale: Package image name", currentPackageImageHash)
for _, assertion := range dependencies { //highly dependent on the order
currentN++
pkgTag := fmt.Sprintf(":package: %d/%d %s ⤑ %s", currentN, depsN, p.GetPackage().HumanReadableString(), assertion.Package.HumanReadableString())
Info(pkgTag, " :zap: Building dependency")
compileSpec, err := cs.FromPackage(assertion.Package)
if err != nil {
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
}
compileSpec.SetOutputPath(p.GetOutputPath())
lastHash = currentPackageImageHash
if compileSpec.GetImage() != "" {
// TODO: Refactor this
if compileSpec.ImageUnpack() { // If it is just an entire image, create a package from it
if compileSpec.GetImage() == "" {
return nil, errors.New("No image defined for package: " + assertion.Package.GetName())
}
Info(pkgTag, ":whale: Sourcing package from image", compileSpec.GetImage())
artifact, err := cs.packageFromImage(compileSpec, currentPackageImageHash, keepPermissions, cs.KeepImg, concurrency)
buildImageHash := cs.ImageRepository + ":" + assertion.Hash.BuildHash
currentPackageImageHash := cs.ImageRepository + ":" + assertion.Hash.PackageHash
Debug(pkgTag, " :arrow_right_hook: :whale: Builder image from", buildImageHash)
Debug(pkgTag, " :arrow_right_hook: :whale: Package image name", currentPackageImageHash)
lastHash = currentPackageImageHash
if compileSpec.GetImage() != "" {
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from image")
artifact, err := cs.compileWithImage(compileSpec.GetImage(), buildImageHash, currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
if err != nil {
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().GetName())
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
}
departifacts = append(departifacts, artifact)
Info(pkgTag, ":white_check_mark: Done")
continue
}
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().GetFingerPrint()+" from image")
artifact, err := cs.compileWithImage(compileSpec.GetImage(), buildImageHash, currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from tree")
artifact, err := cs.compileWithImage(buildImageHash, "", currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
if err != nil {
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().GetName())
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
// deperrs = append(deperrs, err)
// break // stop at first error
}
departifacts = append(departifacts, artifact)
Info(pkgTag, ":white_check_mark: Done")
continue
Info(pkgTag, ":collision: Done")
}
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().GetFingerPrint()+" from tree")
artifact, err := cs.compileWithImage(buildImageHash, "", currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
} else if len(dependencies) > 0 {
lastHash = cs.ImageRepository + ":" + dependencies[len(dependencies)-1].Hash.PackageHash
}
if !cs.Options.OnlyDeps {
Info(":package:", p.GetPackage().HumanReadableString(), ":cyclone: Building package target from:", lastHash)
artifact, err := cs.compileWithImage(lastHash, "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p)
if err != nil {
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().GetName())
// deperrs = append(deperrs, err)
// break // stop at first error
return artifact, err
}
departifacts = append(departifacts, artifact)
Info(pkgTag, ":collision: Done")
}
artifact.SetDependencies(departifacts)
artifact.SetSourceAssertion(p.GetSourceAssertion())
Info(":package:", p.GetPackage().GetName(), ":cyclone: Building package target from:", lastHash)
artifact, err := cs.compileWithImage(lastHash, "", "", concurrency, keepPermissions, cs.KeepImg, p)
if err != nil {
return artifact, err
} else {
return departifacts[len(departifacts)-1], nil
}
artifact.SetDependencies(departifacts)
artifact.SetSourceAssertion(p.GetSourceAssertion())
return artifact, err
}
func (cs *LuetCompiler) FromPackage(p pkg.Package) (CompilationSpec, error) {


@@ -650,4 +650,43 @@ var _ = Describe("Compiler", func() {
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
})
})
Context("File list", func() {
It("is generated after the compilation process and annotated in the metadata", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/packagelayers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
compiler.SetCompressionType(GZip)
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(artifacts[0].GetFiles()).To(ContainElement("bin/busybox"))
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
art, err := LoadArtifactFromYaml(spec)
Expect(err).ToNot(HaveOccurred())
files := art.GetFiles()
Expect(files).To(ContainElement("bin/busybox"))
})
})
})


@@ -49,7 +49,10 @@ type CompilerOptions struct {
Concurrency int
CompressionType CompressionImplementation
Clean bool
KeepImageExport bool
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
}
@@ -62,6 +65,8 @@ func NewDefaultCompilerOptions() *CompilerOptions {
KeepImg: true,
Concurrency: runtime.NumCPU(),
Clean: true,
OnlyDeps: false,
NoDeps: false,
}
}
@@ -97,6 +102,9 @@ type Artifact interface {
Hash() error
Verify() error
SetFiles(f []string)
GetFiles() []string
GetChecksums() Checksums
SetChecksums(c Checksums)
}
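
A hedged sketch of how a caller might toggle the new NoDeps/KeepImageExport flags; the constructor call mirrors the compiler test shown further down, and generalRecipe stands in for a tree recipe loaded as in that test.

opts := NewDefaultCompilerOptions()
opts.NoDeps = true          // build only the requested target, skip its dependencies
opts.KeepImageExport = true // keep the exported image tarballs instead of removing them
c := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), opts)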


@@ -27,7 +27,9 @@ import (
"strings"
"time"
"github.com/mudler/luet/pkg/helpers"
solver "github.com/mudler/luet/pkg/solver"
v "github.com/spf13/viper"
)
@@ -80,6 +82,7 @@ type LuetSystemConfig struct {
DatabasePath string `yaml:"database_path" mapstructure:"database_path"`
Rootfs string `yaml:"rootfs" mapstructure:"rootfs"`
PkgsCachePath string `yaml:"pkgs_cache_path" mapstructure:"pkgs_cache_path"`
TmpDirBase string `yaml:"tmpdir_base" mapstructure:"tmpdir_base"`
}
func (sc LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
@@ -131,6 +134,7 @@ type LuetRepository struct {
Cached bool `json:"cached,omitempty" yaml:"cached,omitempty" mapstructure:"cached,omitempty"`
Authentication map[string]string `json:"auth,omitempty" yaml:"auth,omitempty" mapstructure:"auth,omitempty"`
TreePath string `json:"tree_path,omitempty" yaml:"tree_path,omitempty" mapstructure:"tree_path"`
MetaPath string `json:"meta_path,omitempty" yaml:"meta_path,omitempty" mapstructure:"meta_path"`
// Serialized options not used in repository configuration
@@ -153,6 +157,7 @@ func NewLuetRepository(name, t, descr string, urls []string, priority int, enabl
Cached: cached,
Authentication: make(map[string]string, 0),
TreePath: "",
MetaPath: "",
}
}
@@ -164,6 +169,7 @@ func NewEmptyLuetRepository() *LuetRepository {
Type: "",
Priority: 9999,
TreePath: "",
MetaPath: "",
Enable: false,
Cached: false,
Authentication: make(map[string]string, 0),
@@ -209,8 +215,9 @@ func GenDefault(viper *v.Viper) {
viper.SetDefault("general.spinner_charset", 22)
viper.SetDefault("general.fatal_warnings", false)
u, _ := user.Current()
if u.Uid == "0" {
u, err := user.Current()
// os/user doesn't work in from scratch environments
if err != nil || u.Uid == "0" {
viper.SetDefault("general.same_owner", true)
} else {
viper.SetDefault("general.same_owner", false)
@@ -219,6 +226,7 @@ func GenDefault(viper *v.Viper) {
viper.SetDefault("system.database_engine", "boltdb")
viper.SetDefault("system.database_path", "/var/cache/luet")
viper.SetDefault("system.rootfs", "/")
viper.SetDefault("system.tmpdir_base", filepath.Join(os.TempDir(), "tmpluet"))
viper.SetDefault("system.pkgs_cache_path", "packages")
viper.SetDefault("repos_confdir", []string{"/etc/luet/repos.conf.d"})
@@ -303,6 +311,10 @@ func (c *LuetGeneralConfig) GetSpinnerMs() time.Duration {
return duration
}
func (c *LuetLoggingConfig) SetLogLevel(s string) {
c.Level = s
}
func (c *LuetLoggingConfig) String() string {
ans := fmt.Sprintf(`
logging:
@@ -319,8 +331,37 @@ system:
database_engine: %s
database_path: %s
pkgs_cache_path: %s
tmpdir_base: %s
rootfs: %s`,
c.DatabaseEngine, c.DatabasePath, c.PkgsCachePath, c.Rootfs)
c.DatabaseEngine, c.DatabasePath, c.PkgsCachePath,
c.TmpDirBase, c.Rootfs)
return ans
}
func (c *LuetSystemConfig) InitTmpDir() error {
if !helpers.Exists(c.TmpDirBase) {
return os.MkdirAll(c.TmpDirBase, os.ModePerm)
}
return nil
}
func (c *LuetSystemConfig) CleanupTmpDir() error {
return os.RemoveAll(c.TmpDirBase)
}
func (c *LuetSystemConfig) TempDir(pattern string) (string, error) {
err := c.InitTmpDir()
if err != nil {
return "", err
}
return ioutil.TempDir(c.TmpDirBase, pattern)
}
func (c *LuetSystemConfig) TempFile(pattern string) (*os.File, error) {
err := c.InitTmpDir()
if err != nil {
return nil, err
}
return ioutil.TempFile(c.TmpDirBase, pattern)
}


@@ -0,0 +1,33 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package config_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestSolver(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Config Suite")
}

pkg/config/config_test.go (new file)

@@ -0,0 +1,59 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package config_test
import (
"os"
"path/filepath"
"strings"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Config", func() {
Context("Simple temporary directory creation", func() {
It("Create Temporary directory", func() {
// PRE: tmpdir_base contains default value.
tmpDir, err := config.LuetCfg.GetSystem().TempDir("test1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpDir, filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpDir)).To(BeTrue())
defer os.RemoveAll(tmpDir)
})
It("Create Temporary file", func() {
// PRE: tmpdir_base contains default value.
tmpFile, err := config.LuetCfg.GetSystem().TempFile("testfile1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpFile.Name(), filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpFile.Name())).To(BeTrue())
defer os.Remove(tmpFile.Name())
})
})
})


@@ -67,11 +67,13 @@ func Untar(src, dest string, sameOwner bool) error {
// Probably it's needed to set this always to true.
NoLchown: true,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
ans = archive.Untar(in, dest, opts)
} else {
// TODO: replace with https://github.com/mholt/archiver ?
var fileReader io.ReadCloser = in
tr := tar.NewReader(fileReader)

pkg/helpers/cli.go (new file)

@@ -0,0 +1,78 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
import (
"errors"
"fmt"
"regexp"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
pkg "github.com/mudler/luet/pkg/package"
version "github.com/mudler/luet/pkg/versioner"
)
func CreateRegexArray(rgx []string) ([]*regexp.Regexp, error) {
ans := make([]*regexp.Regexp, len(rgx))
if len(rgx) > 0 {
for idx, reg := range rgx {
re, err := regexp.Compile(reg)
if err != nil {
return nil, errors.New("Invalid regex " + reg + "!")
}
ans[idx] = re
}
}
return ans, nil
}
func ParsePackageStr(p string) (*pkg.DefaultPackage, error) {
gp, err := _gentoo.ParsePackageStr(p)
if err != nil {
return nil, err
}
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}
pkgVersion := ""
if gp.VersionBuild != "" {
pkgVersion = fmt.Sprintf("%s%s%s+%s",
version.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
gp.VersionBuild,
)
} else {
pkgVersion = fmt.Sprintf("%s%s%s",
version.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
)
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Category: gp.Category,
Version: pkgVersion,
Uri: make([]string, 0),
}
return pack, nil
}
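
A hedged usage sketch for the new ParsePackageStr helper: with no version in the input it falls back to ">=0", assuming the selector condition renders as ">=", matching the fallback in the code above.

import (
	"fmt"

	helpers "github.com/mudler/luet/pkg/helpers"
)

func exampleParse() error {
	p, err := helpers.ParsePackageStr("system/luet")
	if err != nil {
		return err
	}
	// Expected roughly: category "system", name "luet", version ">=0"
	fmt.Println(p.GetCategory(), p.GetName(), p.GetVersion())
	return nil
}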


@@ -19,10 +19,46 @@ import (
"io/ioutil"
"os"
"path/filepath"
"time"
copy "github.com/otiai10/copy"
)
func ListDir(dir string) ([]string, error) {
content := []string{}
err := filepath.Walk(dir,
func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
content = append(content, path)
return nil
})
return content, err
}
// Touch creates an empty file, or updates its modification time if it already exists
func Touch(f string) error {
_, err := os.Stat(f)
if os.IsNotExist(err) {
file, err := os.Create(f)
if err != nil {
return err
}
defer file.Close()
} else {
currentTime := time.Now().Local()
err = os.Chtimes(f, currentTime, currentTime)
if err != nil {
return err
}
}
return nil
}
// Exists reports whether the named file or directory exists.
func Exists(name string) bool {
if _, err := os.Stat(name); err != nil {

pkg/helpers/sys.go (new file)

@@ -0,0 +1,32 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
import (
"os/exec"
"syscall"
"github.com/pkg/errors"
)
// This allows a multi-platform switch in the future
func Exec(cmd string, args []string, env []string) error {
path, err := exec.LookPath(cmd)
if err != nil {
return errors.Wrap(err, "Could not find binary in path: "+cmd)
}
return syscall.Exec(path, args, env)
}
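
A hedged sketch of the new helpers.Exec wrapper: syscall.Exec replaces the current process image, so the call only returns on failure, and by exec convention args[0] is the program name.

import (
	"os"

	helpers "github.com/mudler/luet/pkg/helpers"
)

func replaceWithShell() error {
	// Only reached if the exec itself fails; on success the process is replaced.
	return helpers.Exec("sh", []string{"sh", "-c", "echo hello"}, os.Environ())
}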


@@ -17,7 +17,6 @@ package client
import (
"fmt"
"io/ioutil"
"math"
"net/url"
"os"
@@ -79,7 +78,7 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
Info("Use artifact", artifactName, "from cache.")
} else {
temp, err = ioutil.TempDir(os.TempDir(), "tree")
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return nil, err
}
@@ -139,7 +138,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
ok := false
temp, err = ioutil.TempDir(os.TempDir(), "tree")
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return "", err
}
@@ -148,7 +147,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
for _, uri := range c.RepoData.Urls {
file, err = ioutil.TempFile(os.TempDir(), "HttpClient")
file, err = config.LuetCfg.GetSystem().TempFile("HttpClient")
if err != nil {
continue
}


@@ -16,7 +16,6 @@
package client
import (
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -76,7 +75,7 @@ func (c *LocalClient) DownloadFile(name string) (string, error) {
ok := false
for _, uri := range c.RepoData.Urls {
Info("Downloading file", name, "from", uri)
file, err = ioutil.TempFile(os.TempDir(), "localclient")
file, err = config.LuetCfg.GetSystem().TempFile("localclient")
if err != nil {
continue
}


@@ -0,0 +1,90 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package installer
import (
"os/exec"
"github.com/ghodss/yaml"
box "github.com/mudler/luet/pkg/box"
. "github.com/mudler/luet/pkg/logger"
"github.com/pkg/errors"
)
type LuetFinalizer struct {
Shell []string `json:"shell"`
Install []string `json:"install"`
Uninstall []string `json:"uninstall"` // TODO: Where to store?
}
func (f *LuetFinalizer) RunInstall(s *System) error {
var cmd string
var args []string
if len(f.Shell) == 0 {
// Default to sh otherwise
cmd = "sh"
args = []string{"-c"}
} else {
cmd = f.Shell[0]
if len(f.Shell) > 1 {
args = f.Shell[1:]
}
}
for _, c := range f.Install {
toRun := append(args, c)
Info("Executing finalizer on ", s.Target, cmd, toRun)
if s.Target == "/" {
cmd := exec.Command(cmd, toRun...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed running command: "+string(stdoutStderr))
}
Info(string(stdoutStderr))
} else {
b := box.NewBox(cmd, toRun, []string{}, []string{}, s.Target, false, true, true)
err := b.Run()
if err != nil {
return errors.Wrap(err, "Failed running command ")
}
}
}
return nil
}
// TODO: We don't store uninstall finalizers ?!
func (f *LuetFinalizer) RunUnInstall() error {
for _, c := range f.Uninstall {
Debug("finalizer:", "sh", "-c", c)
cmd := exec.Command("sh", "-c", c)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed running command: "+string(stdoutStderr))
}
Info(string(stdoutStderr))
}
return nil
}
func NewLuetFinalizerFromYaml(data []byte) (*LuetFinalizer, error) {
var p LuetFinalizer
err := yaml.Unmarshal(data, &p)
if err != nil {
return &p, err
}
return &p, err
}
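
A hedged example of driving the new finalizer type from within the installer package: the YAML keys map to the struct tags above (ghodss/yaml routes them through the json tags), while the command and the system argument (a *System pointing at the install target) are illustrative only.

func runExampleFinalizer(system *System) error {
	data := []byte(`
install:
- echo "configuring foo"
`)
	f, err := NewLuetFinalizerFromYaml(data)
	if err != nil {
		return err
	}
	// With a target other than "/" each command runs inside a box (new namespaces);
	// on "/" it runs directly via sh -c.
	return f.RunInstall(system)
}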


@@ -18,12 +18,11 @@ package installer
import (
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
"sync"
"github.com/ghodss/yaml"
compiler "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
@@ -36,8 +35,12 @@ import (
)
type LuetInstallerOptions struct {
SolverOptions config.LuetSolverOptions
Concurrency int
SolverOptions config.LuetSolverOptions
Concurrency int
NoDeps bool
OnlyDeps bool
Force bool
PreserveSystemEssentialData bool
}
type LuetInstaller struct {
@@ -52,47 +55,6 @@ type ArtifactMatch struct {
Repository Repository
}
type LuetFinalizer struct {
Install []string `json:"install"`
Uninstall []string `json:"uninstall"` // TODO: Where to store?
}
func (f *LuetFinalizer) RunInstall() error {
for _, c := range f.Install {
Debug("finalizer:", "sh", "-c", c)
cmd := exec.Command("sh", "-c", c)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed running command: "+string(stdoutStderr))
}
Info(string(stdoutStderr))
}
return nil
}
// TODO: We don't store uninstall finalizers ?!
func (f *LuetFinalizer) RunUnInstall() error {
for _, c := range f.Install {
Debug("finalizer:", "sh", "-c", c)
cmd := exec.Command("sh", "-c", c)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed running command: "+string(stdoutStderr))
}
Info(string(stdoutStderr))
}
return nil
}
func NewLuetFinalizerFromYaml(data []byte) (*LuetFinalizer, error) {
var p LuetFinalizer
err := yaml.Unmarshal(data, &p)
if err != nil {
return &p, err
}
return &p, err
}
func NewLuetInstaller(opts LuetInstallerOptions) Installer {
return &LuetInstaller{Options: opts}
}
@@ -103,31 +65,24 @@ func (l *LuetInstaller) Upgrade(s *System) error {
return err
}
// First match packages against repositories by priority
// matches := syncedRepos.PackageMatches(p)
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
// compute a "big" world
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
uninstall, solution, err := solv.Upgrade()
uninstall, solution, err := solv.Upgrade(false)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
}
for _, u := range uninstall {
err := l.Uninstall(u, s)
if err != nil {
Warning("Failed uninstall for ", u.GetFingerPrint())
}
}
toInstall := []pkg.Package{}
toInstall := pkg.Packages{}
for _, assertion := range solution {
if assertion.Value {
// Be sure to filter from solutions packages already installed in the system
if _, err := s.Database.FindPackage(assertion.Package); err != nil && assertion.Value {
toInstall = append(toInstall, assertion.Package)
}
}
return l.Install(toInstall, s)
return l.swap(syncedRepos, uninstall, toInstall, s)
}
func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
@@ -152,8 +107,158 @@ func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
return syncedRepos, nil
}
func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
var p []pkg.Package
func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
}
return l.swap(syncedRepos, toRemove, toInstall, s)
}
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
toInstall = syncedRepos.ResolveSelectors(toInstall)
if err := l.download(syncedRepos, toInstall); err != nil {
return errors.Wrap(err, "Pre-downloading packages")
}
// We don't want any conflict with the installed packages to be raised during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion.
// E.g. you can't check for conflicts for an upgrade to a new version of A
// if the old A is still installed in the system. This is due to the fact that
// the solver now enforces the constraints and explicitly denies having two packages
// of the same version installed.
forced := false
if l.Options.Force {
forced = true
}
l.Options.Force = true
for _, u := range toRemove {
Info(":package: Marked for deletion", u.HumanReadableString())
err := l.Uninstall(u, s)
if err != nil && !l.Options.Force {
Error("Failed uninstall for ", u.HumanReadableString())
return errors.Wrap(err, "uninstalling "+u.HumanReadableString())
}
}
l.Options.Force = forced
return l.install(syncedRepos, toInstall, s)
}
func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
}
return l.install(syncedRepos, cp, s)
}
func (l *LuetInstaller) download(syncedRepos Repositories, cp pkg.Packages) error {
toDownload := map[string]ArtifactMatch{}
// FIXME: This can be optimized. We don't need to re-match this to the repository
// But we could just do it once
// Gathers things to download
for _, currentPack := range cp {
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
toDownload[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
break A
}
}
}
// Download packages into cache in parallel.
all := make(chan ArtifactMatch)
var wg = new(sync.WaitGroup)
// Download
for i := 0; i < l.Options.Concurrency; i++ {
wg.Add(1)
go l.downloadWorker(i, wg, all)
}
for _, c := range toDownload {
all <- c
}
close(all)
wg.Wait()
return nil
}
// Reclaim adds packages to the system database
// if files from artifacts in the repositories are found
// in the system target
func (l *LuetInstaller) Reclaim(s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
}
var toMerge []ArtifactMatch = []ArtifactMatch{}
for _, repo := range syncedRepos {
for _, artefact := range repo.GetIndex() {
Debug("Checking if",
artefact.GetCompileSpec().GetPackage().HumanReadableString(),
"from", repo.GetName(), "is installed")
FILES:
for _, f := range artefact.GetFiles() {
if helpers.Exists(filepath.Join(s.Target, f)) {
p, err := repo.GetTree().GetDatabase().FindPackage(artefact.GetCompileSpec().GetPackage())
if err != nil {
return err
}
Info("Found package:", p.HumanReadableString())
toMerge = append(toMerge, ArtifactMatch{Artifact: artefact, Package: p})
break FILES
}
}
}
}
for _, match := range toMerge {
pack := match.Package
vers, _ := s.Database.FindPackageVersions(pack)
if len(vers) >= 1 {
Warning("Filtering out package " + pack.HumanReadableString() + ", already reclaimed")
continue
}
_, err := s.Database.CreatePackage(pack)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed creating package")
}
s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: pack.GetFingerPrint(), Files: match.Artifact.GetFiles()})
Info("Reclaimed package:", pack.HumanReadableString())
}
Info("Done!")
return nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *System) error {
var p pkg.Packages
// Check if the package is installed first
for _, pi := range cp {
@@ -161,7 +266,7 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
vers, _ := s.Database.FindPackageVersions(pi)
if len(vers) >= 1 {
Warning("Filtering out package " + pi.GetFingerPrint() + ", it has other versions already installed. Uninstall one of them first ")
Warning("Filtering out package " + pi.HumanReadableString() + ", it has other versions already installed. Uninstall one of them first ")
continue
//return errors.New("Package " + pi.GetFingerPrint() + " has other versions already installed. Uninstall one of them first: " + strings.Join(vers, " "))
@@ -175,10 +280,6 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
}
// First get metas from all repos (and decodes trees)
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
}
// First match packages against repositories by priority
// matches := syncedRepos.PackageMatches(p)
@@ -186,42 +287,77 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
p = syncedRepos.ResolveSelectors(p)
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err := solv.Install(p)
if err != nil {
return errors.Wrap(err, "Failed solving solution for package")
}
// Gathers things to install
toInstall := map[string]ArtifactMatch{}
for _, assertion := range solution {
if assertion.Value {
matches := syncedRepos.PackageMatches([]pkg.Package{assertion.Package})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository - where are definitions coming from?!")
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
var packagesToInstall pkg.Packages
var err error
var solution solver.PackagesAssertions
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
// Filter out already installed
if _, err := s.Database.FindPackage(assertion.Package); err != nil {
toInstall[assertion.Package.GetFingerPrint()] = ArtifactMatch{Package: assertion.Package, Artifact: artefact, Repository: matches[0].Repo}
}
break A
}
if !l.Options.NoDeps {
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err = solv.Install(p)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed solving solution for package")
}
// Gathers things to install
for _, assertion := range solution {
if assertion.Value {
packagesToInstall = append(packagesToInstall, assertion.Package)
}
}
} else if !l.Options.OnlyDeps {
for _, currentPack := range p {
packagesToInstall = append(packagesToInstall, currentPack)
}
}
// Gathers things to install
for _, currentPack := range packagesToInstall {
// Check if package is already installed.
if _, err := s.Database.FindPackage(currentPack); err == nil {
// skip matching if it is installed already
continue
}
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
// Filter out already installed
if _, err := s.Database.FindPackage(currentPack); err != nil {
toInstall[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
}
break A
}
}
}
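// Installation happens in two passes over the same ArtifactMatch set:
// a pool of downloadWorker goroutines fetches every artifact first, then a
// second pool of installerWorker goroutines unpacks them into the target rootfs.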
// Install packages into rootfs in parallel.
all := make(chan ArtifactMatch)
var wg = new(sync.WaitGroup)
// Download first
for i := 0; i < l.Options.Concurrency; i++ {
wg.Add(1)
go l.downloadWorker(i, wg, all)
}
for _, c := range toInstall {
all <- c
}
close(all)
wg.Wait()
all = make(chan ArtifactMatch)
wg = new(sync.WaitGroup)
// Do the real install
for i := 0; i < l.Options.Concurrency; i++ {
wg.Add(1)
go l.installerWorker(i, wg, all, s)
@@ -233,82 +369,119 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
close(all)
wg.Wait()
for _, c := range toInstall {
// Annotate to the system that the package was installed
_, err := s.Database.CreatePackage(c.Package)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed creating package")
}
}
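// Finalizers run after all packages are unpacked and registered. The
// executedFinalizer map ensures each package's finalizer runs at most once,
// even if it appears in the ordered assertions of several requested packages.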
executedFinalizer := map[string]bool{}
// TODO: Lower those errors as warning
for _, w := range p {
// Finalizers need to run in order and in sequence.
ordered := solution.Order(allRepos, w.GetFingerPrint())
for _, ass := range ordered {
if ass.Value {
// Annotate to the system that the package was installed
// TODO: Annotate also files that belong to the package, somewhere to uninstall
if _, err := s.Database.FindPackage(ass.Package); err == nil {
err := s.Database.UpdatePackage(ass.Package)
if err != nil {
return errors.Wrap(err, "Failed updating package")
}
} else {
_, err := s.Database.CreatePackage(ass.Package)
if err != nil {
return errors.Wrap(err, "Failed creating package")
}
}
installed, ok := toInstall[ass.Package.GetFingerPrint()]
if !ok {
return errors.New("Couldn't find ArtifactMatch for " + ass.Package.GetFingerPrint())
}
if !l.Options.NoDeps {
// TODO: Lower those errors as warning
for _, w := range p {
// Finalizers need to run in order and in sequence.
ordered, err := solution.Order(allRepos, w.GetFingerPrint())
if err != nil {
return errors.Wrap(err, "While order a solution for "+w.HumanReadableString())
}
ORDER:
for _, ass := range ordered {
if ass.Value {
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
if err != nil {
return errors.Wrap(err, "Error getting package "+ass.Package.GetFingerPrint())
}
if helpers.Exists(treePackage.Rel(tree.FinalizerFile)) {
Info("Executing finalizer for " + ass.Package.GetName())
finalizerRaw, err := ioutil.ReadFile(treePackage.Rel(tree.FinalizerFile))
installed, ok := toInstall[ass.Package.GetFingerPrint()]
if !ok {
// It was a dep already installed in the system, so we can skip it safely
continue ORDER
}
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
if err != nil {
return errors.Wrap(err, "Error reading file "+treePackage.Rel(tree.FinalizerFile))
return errors.Wrap(err, "Error getting package "+ass.Package.HumanReadableString())
}
if _, exists := executedFinalizer[ass.Package.GetFingerPrint()]; !exists {
finalizer, err := NewLuetFinalizerFromYaml(finalizerRaw)
if err != nil {
return errors.Wrap(err, "Error reading finalizer "+treePackage.Rel(tree.FinalizerFile))
if helpers.Exists(treePackage.Rel(tree.FinalizerFile)) {
Info("Executing finalizer for " + ass.Package.HumanReadableString())
finalizerRaw, err := ioutil.ReadFile(treePackage.Rel(tree.FinalizerFile))
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error reading file "+treePackage.Rel(tree.FinalizerFile))
}
err = finalizer.RunInstall()
if err != nil {
return errors.Wrap(err, "Error executing install finalizer "+treePackage.Rel(tree.FinalizerFile))
if _, exists := executedFinalizer[ass.Package.GetFingerPrint()]; !exists {
finalizer, err := NewLuetFinalizerFromYaml(finalizerRaw)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error reading finalizer "+treePackage.Rel(tree.FinalizerFile))
}
err = finalizer.RunInstall(s)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error executing install finalizer "+treePackage.Rel(tree.FinalizerFile))
}
executedFinalizer[ass.Package.GetFingerPrint()] = true
}
executedFinalizer[ass.Package.GetFingerPrint()] = true
}
}
}
}
} else {
for _, c := range toInstall {
treePackage, err := c.Repository.GetTree().GetDatabase().FindPackage(c.Package)
if err != nil {
return errors.Wrap(err, "Error getting package "+c.Package.HumanReadableString())
}
if helpers.Exists(treePackage.Rel(tree.FinalizerFile)) {
Info("Executing finalizer for " + c.Package.HumanReadableString())
finalizerRaw, err := ioutil.ReadFile(treePackage.Rel(tree.FinalizerFile))
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error reading file "+treePackage.Rel(tree.FinalizerFile))
}
if _, exists := executedFinalizer[c.Package.GetFingerPrint()]; !exists {
finalizer, err := NewLuetFinalizerFromYaml(finalizerRaw)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error reading finalizer "+treePackage.Rel(tree.FinalizerFile))
}
err = finalizer.RunInstall(s)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error executing install finalizer "+treePackage.Rel(tree.FinalizerFile))
}
executedFinalizer[c.Package.GetFingerPrint()] = true
}
}
}
}
return nil
}
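// downloadPackage is split out of installPackage: it fetches the artifact
// through the repository client and verifies its checksums; with --force a
// failed verification is tolerated.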
func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
func (l *LuetInstaller) downloadPackage(a ArtifactMatch) (compiler.Artifact, error) {
artifact, err := a.Repository.Client().DownloadArtifact(a.Artifact)
if err != nil {
return errors.Wrap(err, "Error on download artifact")
return nil, errors.Wrap(err, "Error on download artifact")
}
err = artifact.Verify()
if err != nil {
return errors.Wrap(err, "Artifact integrity check failure")
if err != nil && !l.Options.Force {
return nil, errors.Wrap(err, "Artifact integrity check failure")
}
return artifact, nil
}
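// installPackage downloads the artifact, unpacks it into the target rootfs
// and records the shipped file list in the system database for later
// uninstall/reclaim operations.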
func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
artifact, err := l.downloadPackage(a)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed downloading package")
}
files, err := artifact.FileList()
if err != nil {
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not open package archive")
}
err = artifact.Unpack(s.Target, true)
if err != nil {
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Error met while unpacking rootfs")
}
@@ -317,17 +490,43 @@ func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
return s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: a.Package.GetFingerPrint(), Files: files})
}
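// Both worker types drain the shared channel until it is closed; with --force
// a failing package is logged and skipped instead of aborting the whole run.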
func (l *LuetInstaller) downloadWorker(i int, wg *sync.WaitGroup, c <-chan ArtifactMatch) error {
defer wg.Done()
for p := range c {
// TODO: Keep trace of what was added from the tar, and save it into system
_, err := l.downloadPackage(p)
if err != nil && !l.Options.Force {
//TODO: Uninstall, rollback.
Fatal("Failed installing package "+p.Package.GetName(), err.Error())
return errors.Wrap(err, "Failed installing package "+p.Package.GetName())
}
if err == nil {
Info(":package: ", p.Package.HumanReadableString(), "downloaded")
} else if err != nil && l.Options.Force {
Info(":package: ", p.Package.HumanReadableString(), "downloaded with failures (force download)")
}
}
return nil
}
func (l *LuetInstaller) installerWorker(i int, wg *sync.WaitGroup, c <-chan ArtifactMatch, s *System) error {
defer wg.Done()
for p := range c {
// TODO: Keep trace of what was added from the tar, and save it into system
err := l.installPackage(p, s)
if err != nil {
if err != nil && !l.Options.Force {
//TODO: Uninstall, rollback.
Fatal("Failed installing package "+p.Package.GetName(), err.Error())
return errors.Wrap(err, "Failed installing package "+p.Package.GetName())
}
if err == nil {
Info(":package: ", p.Package.HumanReadableString(), "installed")
} else if err != nil && l.Options.Force {
Info(":package: ", p.Package.HumanReadableString(), "installed with failures (force install)")
}
}
return nil
@@ -342,7 +541,15 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
// Remove from target
for _, f := range files {
target := filepath.Join(s.Target, f)
Info("Removing", target)
Debug("Removing", target)
if l.Options.PreserveSystemEssentialData &&
strings.HasPrefix(f, config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) ||
strings.HasPrefix(f, config.LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath()) {
Warning("Preserve ", f, " which is required by luet ( you have to delete it manually if you really need to)")
continue
}
err := os.Remove(target)
if err != nil {
Warning("Failed removing file (not present in the system target ?)", target)
@@ -362,19 +569,34 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
}
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) - mark the uninstallation in db
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) TODO - mark the uninstallation in db
// Get installed definition
solv := solver.NewResolver(s.Database, s.Database, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err := solv.Uninstall(p)
if err != nil {
return errors.Wrap(err, "Uninstall failed")
checkConflicts := true
if l.Options.Force {
checkConflicts = false
}
for _, p := range solution {
Info("Uninstalling", p.GetFingerPrint())
if !l.Options.NoDeps {
solv := solver.NewResolver(s.Database, s.Database, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err := solv.Uninstall(p, checkConflicts)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
for _, p := range solution {
Info("Uninstalling", p.HumanReadableString())
err := l.uninstall(p, s)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Uninstall failed")
}
}
} else {
Info("Uninstalling", p.HumanReadableString(), "without deps")
err := l.uninstall(p, s)
if err != nil {
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Uninstall failed")
}
Info(":package: ", p.HumanReadableString(), "uninstalled")
}
return nil

View File

@@ -47,9 +47,9 @@ var _ = Describe("Installer", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
@@ -62,9 +62,9 @@ var _ = Describe("Installer", func() {
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
c.SetConcurrency(2)
artifact, err := compiler.Compile(false, spec)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
@@ -86,12 +86,133 @@ var _ = Describe("Installer", func() {
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
systemDB := pkg.NewInMemoryDatabase(false)
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
files, err := systemDB.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(files).To(Equal([]string{"artifact42", "test5", "test6"}))
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.GetPackages())).To(Equal(1))
p, err := system.Database.GetPackage(system.Database.GetPackages()[0])
Expect(err).ToNot(HaveOccurred())
Expect(p.GetName()).To(Equal("b"))
err = inst.Uninstall(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}, system)
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = systemDB.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
})
})
Context("Writes a repository definition without compression", func() {
It("Writes a repo and can install packages from it", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(),
generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(spec.BuildSteps()).To(Equal([]string{"echo artifact5 > /test5", "echo artifact6 > /test6", "./generate.sh"}))
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
treeFile := NewDefaultTreeRepositoryFile()
treeFile.SetCompressionType(compiler.None)
repo.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -160,9 +281,9 @@ urls:
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
@@ -175,9 +296,9 @@ urls:
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(2)
c.SetConcurrency(2)
artifact, err := compiler.Compile(false, spec)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
@@ -199,12 +320,14 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -262,6 +385,149 @@ urls:
})
It("Installs new packages from a syste with others installed", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(spec.BuildSteps()).To(Equal([]string{"echo artifact5 > /test5", "echo artifact6 > /test6", "./generate.sh"}))
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
tmpdir2, err := ioutil.TempDir("", "tree2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/alpine")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
spec.SetOutputPath(tmpdir2)
c.SetConcurrency(2)
artifact, err = c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, "../../tests/fixtures/alpine", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
err = repo.Write(tmpdir2, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err = ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst = NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err = NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir2+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir2))
Expect(repo.GetType()).To(Equal("disk"))
system.Target = fakeroot
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
})
})
Context("Simple upgrades", func() {
@@ -307,12 +573,14 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -376,6 +644,140 @@ urls:
})
It("Handles package drops", func() {
//repo:=NewLuetSystemRepository()
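// Scenario sketch: build b-1.0 and c-1.0 from the old repository fixtures and
// b-1.1 from the new ones, install b-1.0, then point the installer at the new
// repository and expect Upgrade to drop the old files and bring in the new ones.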
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
generalRecipeNewRepo := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
err = generalRecipeNewRepo.Load("../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
Expect(len(generalRecipeNewRepo.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
c2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipeNewRepo.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec2, err := c2.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
tmpdirnewrepo, err := ioutil.TempDir("", "tree2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdirnewrepo) // clean up
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdirnewrepo)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
_, errs = c2.CompileParallel(false, compiler.NewLuetCompilationspecs(spec2))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade_old_repo", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
repoupgrade, err := GenerateRepository("test", "description", "disk", []string{tmpdirnewrepo}, 1, tmpdirnewrepo, "../../tests/fixtures/upgrade_new_repo", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
err = repoupgrade.Write(tmpdirnewrepo, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
repoupgrade2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdirnewrepo+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.GetPackages())).To(Equal(1))
p, err := system.Database.GetPackage(system.Database.GetPackages()[0])
Expect(err).ToNot(HaveOccurred())
Expect(p.GetName()).To(Equal("b"))
files, err := systemDB.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(files).To(Equal([]string{"artifact42", "test5", "test6"}))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repoupgrade2})
err = inst.Upgrade(system)
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
// New package should be there
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
})
})
Context("Compressed packages", func() {
@@ -420,14 +822,16 @@ urls:
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
@@ -493,4 +897,267 @@ urls:
})
Context("Existing files", func() {
It("Reclaims them", func() {
//repo:=NewLuetSystemRepository()
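// Scenario sketch: publish b-1.0 and c-1.0 to a local repository, pre-create
// their files in an otherwise empty fakeroot, then expect Reclaim to register
// both packages in the system database based on the files already present.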
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec2, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(2))
})
It("Upgrades reclaimed packages", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(1)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade_old_repo", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(2))
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(3))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
tmpdir2, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir2) // clean up
spec.SetOutputPath(tmpdir2)
_, errs = c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec))
Expect(errs).To(BeEmpty())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, "../../tests/fixtures/upgrade_new_repo", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
err = repo.Write(tmpdir2, false)
Expect(err).ToNot(HaveOccurred())
inst = NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err = NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir2+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
err = inst.Upgrade(system)
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
// New package should be there
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
})
})
})

View File

@@ -23,11 +23,14 @@ import (
)
type Installer interface {
Install([]pkg.Package, *System) error
Install(pkg.Packages, *System) error
Uninstall(pkg.Package, *System) error
Upgrade(s *System) error
Reclaim(s *System) error
Repositories([]Repository)
SyncRepositories(bool) (Repositories, error)
Swap(pkg.Packages, pkg.Packages, *System) error
}
type Client interface {
@@ -45,12 +48,15 @@ type Repository interface {
AddUrl(string)
GetPriority() int
GetIndex() compiler.ArtifactIndex
SetIndex(i compiler.ArtifactIndex)
GetTree() tree.Builder
SetTree(tree.Builder)
Write(path string, resetRevision bool) error
Sync(bool) (Repository, error)
GetTreePath() string
SetTreePath(string)
GetMetaPath() string
SetMetaPath(string)
GetType() string
SetType(string)
SetAuthentication(map[string]string)
@@ -61,8 +67,9 @@ type Repository interface {
SetLastUpdate(string)
Client() Client
GetTreeChecksums() compiler.Checksums
GetTreeCompressionType() compiler.CompressionImplementation
SetTreeCompressionType(c compiler.CompressionImplementation)
SetTreeChecksums(c compiler.Checksums)
SetPriority(int)
GetRepositoryFile(string) (LuetRepositoryFile, error)
SetRepositoryFile(string, LuetRepositoryFile)
Serialize() (*LuetSystemRepositoryMetadata, LuetSystemRepositorySerialized)
}

View File

@@ -21,7 +21,6 @@ import (
"os"
"path"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
@@ -41,42 +40,152 @@ import (
)
const (
REPOSITORY_METAFILE = "repository.meta.yaml"
REPOSITORY_SPECFILE = "repository.yaml"
TREE_TARBALL = "tree.tar"
REPOFILE_TREE_KEY = "tree"
REPOFILE_META_KEY = "meta"
)
type LuetRepositoryFile struct {
FileName string `json:"filename"`
CompressionType compiler.CompressionImplementation `json:"compressiontype,omitempty"`
Checksums compiler.Checksums `json:"checksums,omitempty"`
}
type LuetSystemRepository struct {
*config.LuetRepository
Index compiler.ArtifactIndex `json:"index"`
Tree tree.Builder `json:"-"`
TreePath string `json:"treepath"`
TreeCompressionType compiler.CompressionImplementation `json:"treecompressiontype"`
TreeChecksums compiler.Checksums `json:"treechecksums"`
Index compiler.ArtifactIndex `json:"index"`
Tree tree.Builder `json:"-"`
RepositoryFiles map[string]LuetRepositoryFile `json:"repo_files"`
}
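// RepositoryFiles replaces the old per-tree fields (TreePath, TreeCompressionType,
// TreeChecksums): each generated archive is tracked under a key such as
// REPOFILE_TREE_KEY or REPOFILE_META_KEY together with its name, compression and checksums.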
type LuetSystemRepositorySerialized struct {
Name string `json:"name"`
Description string `json:"description,omitempty"`
Urls []string `json:"urls"`
Priority int `json:"priority"`
Index []*compiler.PackageArtifact `json:"index"`
Type string `json:"type"`
Revision int `json:"revision,omitempty"`
LastUpdate string `json:"last_update,omitempty"`
TreePath string `json:"treepath"`
TreeCompressionType compiler.CompressionImplementation `json:"treecompressiontype"`
TreeChecksums compiler.Checksums `json:"treechecksums"`
Name string `json:"name"`
Description string `json:"description,omitempty"`
Urls []string `json:"urls"`
Priority int `json:"priority"`
Type string `json:"type"`
Revision int `json:"revision,omitempty"`
LastUpdate string `json:"last_update,omitempty"`
TreePath string `json:"treepath"`
MetaPath string `json:"metapath"`
RepositoryFiles map[string]LuetRepositoryFile `json:"repo_files"`
}
type LuetSystemRepositoryMetadata struct {
Index []*compiler.PackageArtifact `json:"index,omitempty"`
}
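// The artifact index now lives in a separate repository.meta.yaml archive
// instead of being embedded in repository.yaml, keeping the spec file itself small.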
type LuetSearchModeType string
const (
SLabel LuetSearchModeType = "label"
SRegexPkg LuetSearchModeType = "regexPkg"
SRegexLabel LuetSearchModeType = "regexLabel"
)
type LuetSearchOpts struct {
Pattern string
Mode LuetSearchModeType
}
func NewLuetSystemRepositoryMetadata(file string, removeFile bool) (*LuetSystemRepositoryMetadata, error) {
ans := &LuetSystemRepositoryMetadata{}
err := ans.ReadFile(file, removeFile)
if err != nil {
return nil, err
}
return ans, nil
}
func (m *LuetSystemRepositoryMetadata) WriteFile(path string) error {
data, err := yaml.Marshal(m)
if err != nil {
return err
}
err = ioutil.WriteFile(path, data, os.ModePerm)
if err != nil {
return err
}
return nil
}
func (m *LuetSystemRepositoryMetadata) ReadFile(file string, removeFile bool) error {
if file == "" {
return errors.New("Invalid path for repository metadata")
}
dat, err := ioutil.ReadFile(file)
if err != nil {
return err
}
if removeFile {
defer os.Remove(file)
}
err = yaml.Unmarshal(dat, m)
if err != nil {
return err
}
return nil
}
func (m *LuetSystemRepositoryMetadata) ToArtifactIndex() (ans compiler.ArtifactIndex) {
for _, a := range m.Index {
ans = append(ans, a)
}
return
}
func NewDefaultTreeRepositoryFile() LuetRepositoryFile {
return LuetRepositoryFile{
FileName: TREE_TARBALL,
CompressionType: compiler.GZip,
}
}
func NewDefaultMetaRepositoryFile() LuetRepositoryFile {
return LuetRepositoryFile{
FileName: REPOSITORY_METAFILE + ".tar",
CompressionType: compiler.None,
}
}
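// Defaults: the tree is shipped as a gzip-compressed tree.tar, while the
// metadata index is shipped as a plain repository.meta.yaml.tar.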
func (f *LuetRepositoryFile) SetFileName(n string) {
f.FileName = n
}
func (f *LuetRepositoryFile) GetFileName() string {
return f.FileName
}
func (f *LuetRepositoryFile) SetCompressionType(c compiler.CompressionImplementation) {
f.CompressionType = c
}
func (f *LuetRepositoryFile) GetCompressionType() compiler.CompressionImplementation {
return f.CompressionType
}
func (f *LuetRepositoryFile) SetChecksums(c compiler.Checksums) {
f.Checksums = c
}
func (f *LuetRepositoryFile) GetChecksums() compiler.Checksums {
return f.Checksums
}
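// GenerateRepository loads the installer tree first so that buildPackageIndex
// can filter out any artifact whose package is not referenced in the tree.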
func GenerateRepository(name, descr, t string, urls []string, priority int, src, treeDir string, db pkg.PackageDatabase) (Repository, error) {
art, err := buildPackageIndex(src)
tr := tree.NewInstallerRecipe(db)
err := tr.Load(treeDir)
if err != nil {
return nil, err
}
tr := tree.NewInstallerRecipe(db)
err = tr.Load(treeDir)
art, err := buildPackageIndex(src, tr.GetDatabase())
if err != nil {
return nil, err
}
@@ -88,15 +197,17 @@ func GenerateRepository(name, descr, t string, urls []string, priority int, src,
func NewSystemRepository(repo config.LuetRepository) Repository {
return &LuetSystemRepository{
LuetRepository: &repo,
LuetRepository: &repo,
RepositoryFiles: map[string]LuetRepositoryFile{},
}
}
func NewLuetSystemRepository(repo *config.LuetRepository, art []compiler.Artifact, builder tree.Builder) Repository {
return &LuetSystemRepository{
LuetRepository: repo,
Index: art,
Tree: builder,
LuetRepository: repo,
Index: art,
Tree: builder,
RepositoryFiles: map[string]LuetRepositoryFile{},
}
}
@@ -106,6 +217,7 @@ func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repos
if err != nil {
return nil, err
}
r := &LuetSystemRepository{
LuetRepository: config.NewLuetRepository(
p.Name,
@@ -116,9 +228,7 @@ func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repos
true,
false,
),
TreeCompressionType: p.TreeCompressionType,
TreeChecksums: p.TreeChecksums,
TreePath: p.TreePath,
RepositoryFiles: p.RepositoryFiles,
}
if p.Revision > 0 {
r.Revision = p.Revision
@@ -126,17 +236,12 @@ func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repos
if p.LastUpdate != "" {
r.LastUpdate = p.LastUpdate
}
i := compiler.ArtifactIndex{}
for _, ii := range p.Index {
i = append(i, ii)
}
r.Index = i
r.Tree = tree.NewInstallerRecipe(db)
return r, err
}
func buildPackageIndex(path string) ([]compiler.Artifact, error) {
func buildPackageIndex(path string, db pkg.PackageDatabase) ([]compiler.Artifact, error) {
var art []compiler.Artifact
var ff = func(currentpath string, info os.FileInfo, err error) error {
@@ -154,6 +259,13 @@ func buildPackageIndex(path string) ([]compiler.Artifact, error) {
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// We want to include only packages that are referenced in the tree;
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(artifact.GetCompileSpec().GetPackage()); notfound != nil {
return nil
}
art = append(art, artifact)
return nil
@@ -167,6 +279,10 @@ func buildPackageIndex(path string) ([]compiler.Artifact, error) {
return art, nil
}
func (r *LuetSystemRepository) SetPriority(n int) {
r.LuetRepository.Priority = n
}
func (r *LuetSystemRepository) GetName() string {
return r.LuetRepository.Name
}
@@ -178,22 +294,6 @@ func (r *LuetSystemRepository) GetAuthentication() map[string]string {
return r.LuetRepository.Authentication
}
func (r *LuetSystemRepository) GetTreeCompressionType() compiler.CompressionImplementation {
return r.TreeCompressionType
}
func (r *LuetSystemRepository) GetTreeChecksums() compiler.Checksums {
return r.TreeChecksums
}
func (r *LuetSystemRepository) SetTreeCompressionType(c compiler.CompressionImplementation) {
r.TreeCompressionType = c
}
func (r *LuetSystemRepository) SetTreeChecksums(c compiler.Checksums) {
r.TreeChecksums = c
}
func (r *LuetSystemRepository) GetType() string {
return r.LuetRepository.Type
}
@@ -218,12 +318,21 @@ func (r *LuetSystemRepository) GetTreePath() string {
func (r *LuetSystemRepository) SetTreePath(p string) {
r.TreePath = p
}
func (r *LuetSystemRepository) GetMetaPath() string {
return r.MetaPath
}
func (r *LuetSystemRepository) SetMetaPath(p string) {
r.MetaPath = p
}
func (r *LuetSystemRepository) SetTree(b tree.Builder) {
r.Tree = b
}
func (r *LuetSystemRepository) GetIndex() compiler.ArtifactIndex {
return r.Index
}
func (r *LuetSystemRepository) SetIndex(i compiler.ArtifactIndex) {
r.Index = i
}
func (r *LuetSystemRepository) GetTree() tree.Builder {
return r.Tree
}
@@ -239,10 +348,19 @@ func (r *LuetSystemRepository) SetLastUpdate(u string) {
func (r *LuetSystemRepository) IncrementRevision() {
r.LuetRepository.Revision++
}
func (r *LuetSystemRepository) SetAuthentication(auth map[string]string) {
r.LuetRepository.Authentication = auth
}
func (r *LuetSystemRepository) GetRepositoryFile(name string) (LuetRepositoryFile, error) {
ans, ok := r.RepositoryFiles[name]
if ok {
return ans, nil
}
return ans, errors.New("Repository file " + name + " not found!")
}
func (r *LuetSystemRepository) SetRepositoryFile(name string, f LuetRepositoryFile) {
r.RepositoryFiles[name] = f
}
func (r *LuetSystemRepository) ReadSpecFile(file string, removeFile bool) (Repository, error) {
dat, err := ioutil.ReadFile(file)
@@ -259,6 +377,16 @@ func (r *LuetSystemRepository) ReadSpecFile(file string, removeFile bool) (Repos
return nil, errors.Wrap(err, "Error reading repository from file "+file)
}
// Check if mandatory key are present
_, err = repo.GetRepositoryFile(REPOFILE_TREE_KEY)
if err != nil {
return nil, errors.New("Invalid repository without the " + REPOFILE_TREE_KEY + " key file.")
}
_, err = repo.GetRepositoryFile(REPOFILE_META_KEY)
if err != nil {
return nil, errors.New("Invalid repository without the " + REPOFILE_META_KEY + " key file.")
}
return repo, err
}
@@ -267,7 +395,6 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
if err != nil {
return err
}
r.Index = r.Index.CleanPath()
r.LastUpdate = strconv.FormatInt(time.Now().Unix(), 10)
repospec := filepath.Join(dst, REPOSITORY_SPECFILE)
@@ -290,7 +417,8 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
r.Name, r.Revision, r.LastUpdate,
))
archive, err := ioutil.TempDir(os.TempDir(), "archive")
// Create tree and repository file
archive, err := config.LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for archive")
}
@@ -299,26 +427,68 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
if err != nil {
return errors.Wrap(err, "Error met while saving the tree")
}
tpath := r.GetTreePath()
if tpath == "" {
tpath = TREE_TARBALL
treeFile, err := r.GetRepositoryFile(REPOFILE_TREE_KEY)
if err != nil {
treeFile = NewDefaultTreeRepositoryFile()
r.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
}
a := compiler.NewPackageArtifact(filepath.Join(dst, tpath))
a.SetCompressionType(r.TreeCompressionType)
a := compiler.NewPackageArtifact(filepath.Join(dst, treeFile.GetFileName()))
a.SetCompressionType(treeFile.GetCompressionType())
err = a.Compress(archive, 1)
if err != nil {
return errors.Wrap(err, "Error met while creating package archive")
}
r.TreePath = path.Base(a.GetPath())
// Update the tree name with the name created by compression selected.
treeFile.SetFileName(path.Base(a.GetPath()))
err = a.Hash()
if err != nil {
return errors.Wrap(err, "Failed generating checksums for tree")
}
r.TreeChecksums = a.GetChecksums()
treeFile.SetChecksums(a.GetChecksums())
r.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
data, err := yaml.Marshal(r)
// Create Metadata struct and serialized repository
meta, serialized := r.Serialize()
// Create metadata file and repository file
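// Flow: serialize the repository, write the artifact index to repository.meta.yaml
// inside a temporary dir, archive it as the REPOFILE_META_KEY file next to the
// tree tarball, and finally emit repository.yaml referencing both archives.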
metaTmpDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
defer os.RemoveAll(metaTmpDir) // clean up
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for metadata")
}
metaFile, err := r.GetRepositoryFile(REPOFILE_META_KEY)
if err != nil {
metaFile = NewDefaultMetaRepositoryFile()
r.SetRepositoryFile(REPOFILE_META_KEY, metaFile)
}
repoMetaSpec := filepath.Join(metaTmpDir, REPOSITORY_METAFILE)
// Create repository.meta.yaml file
err = meta.WriteFile(repoMetaSpec)
if err != nil {
return err
}
a = compiler.NewPackageArtifact(filepath.Join(dst, metaFile.GetFileName()))
a.SetCompressionType(metaFile.GetCompressionType())
err = a.Compress(metaTmpDir, 1)
if err != nil {
return errors.Wrap(err, "Error met while archiving repository metadata")
}
metaFile.SetFileName(path.Base(a.GetPath()))
r.SetRepositoryFile(REPOFILE_META_KEY, metaFile)
err = a.Hash()
if err != nil {
return errors.Wrap(err, "Failed generating checksums for metadata")
}
metaFile.SetChecksums(a.GetChecksums())
data, err := yaml.Marshal(serialized)
if err != nil {
return err
}
@@ -346,7 +516,7 @@ func (r *LuetSystemRepository) Client() Client {
}
func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
var repoUpdated bool = false
var treefs string
var treefs, metafs string
Debug("Sync of the repository", r.Name, "in progress...")
c := r.Client()
@@ -384,37 +554,71 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
} else {
treefs = r.GetTreePath()
}
if r.GetMetaPath() == "" {
metafs = filepath.Join(repobasedir, "metafs")
} else {
metafs = r.GetMetaPath()
}
} else {
treefs, err = ioutil.TempDir(os.TempDir(), "treefs")
treefs, err = config.LuetCfg.GetSystem().TempDir("treefs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
// Note: if we always removed these, no other component could later access
// the tree, e.g. to retrieve finalizers
metafs, err = config.LuetCfg.GetSystem().TempDir("metafs")
if err != nil {
return nil, errors.Wrap(err, "Error met whilte creating tempdir for metafs")
}
//defer os.RemoveAll(metafs)
}
// POST: treeFile and metaFile are present. This is checked inside
// ReadSpecFile and NewLuetSystemRepositoryFromYaml
treeFile, _ := repo.GetRepositoryFile(REPOFILE_TREE_KEY)
metaFile, _ := repo.GetRepositoryFile(REPOFILE_META_KEY)
if !repoUpdated {
tpath := repo.GetTreePath()
if tpath == "" {
tpath = TREE_TARBALL
}
a := compiler.NewPackageArtifact(tpath)
artifact, err := c.DownloadArtifact(a)
// Get Tree
a := compiler.NewPackageArtifact(treeFile.GetFileName())
artifactTree, err := c.DownloadArtifact(a)
if err != nil {
return nil, errors.Wrap(err, "While downloading "+tpath)
return nil, errors.Wrap(err, "While downloading "+treeFile.GetFileName())
}
defer os.Remove(artifact.GetPath())
defer os.Remove(artifactTree.GetPath())
artifact.SetChecksums(repo.GetTreeChecksums())
artifact.SetCompressionType(repo.GetTreeCompressionType())
artifactTree.SetChecksums(treeFile.GetChecksums())
artifactTree.SetCompressionType(treeFile.GetCompressionType())
err = artifact.Verify()
err = artifactTree.Verify()
if err != nil {
return nil, errors.Wrap(err, "Tree integrity check failure")
}
Debug("Tree tarball for the repository " + r.GetName() + " downloaded correctly.")
// Get Repository Metadata
a = compiler.NewPackageArtifact(metaFile.GetFileName())
artifactMeta, err := c.DownloadArtifact(a)
if err != nil {
return nil, errors.Wrap(err, "While downloading "+metaFile.GetFileName())
}
defer os.Remove(artifactMeta.GetPath())
artifactMeta.SetChecksums(metaFile.GetChecksums())
artifactMeta.SetCompressionType(metaFile.GetCompressionType())
err = artifactMeta.Verify()
if err != nil {
return nil, errors.Wrap(err, "Metadata integrity check failure")
}
Debug("Metadata tarball for the repository " + r.GetName() + " downloaded correctly.")
if r.Cached {
// Copy updated repository.yaml file to repo dir now that the tree is synced.
err = helpers.CopyFile(file, filepath.Join(repobasedir, REPOSITORY_SPECFILE))
@@ -423,26 +627,44 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
}
// Remove previous tree
os.RemoveAll(treefs)
// Remove previous meta dir
os.RemoveAll(metafs)
}
Debug("Decompress tree of the repository " + r.Name + "...")
err = artifact.Unpack(treefs, true)
err = artifactTree.Unpack(treefs, true)
if err != nil {
return nil, errors.Wrap(err, "Error met while unpacking tree")
}
// FIXME: It seems that a tar archive with only one file doesn't create the destination
// directory, so create it explicitly for now.
os.MkdirAll(metafs, os.ModePerm)
err = artifactMeta.Unpack(metafs, true)
if err != nil {
return nil, errors.Wrap(err, "Error met while unpacking metadata")
}
tsec, _ := strconv.ParseInt(repo.GetLastUpdate(), 10, 64)
InfoC(
Bold(Red(":house: Repository "+r.GetName()+" revision: ")).String() +
Bold(Red(":house: Repository "+repo.GetName()+" revision: ")).String() +
Bold(Green(repo.GetRevision())).String() + " - " +
Bold(Green(time.Unix(tsec, 0).String())).String(),
)
} else {
Info("Repository", r.GetName(), "is already up to date.")
Info("Repository", repo.GetName(), "is already up to date.")
}
meta, err := NewLuetSystemRepositoryMetadata(
filepath.Join(metafs, REPOSITORY_METAFILE), false,
)
if err != nil {
return nil, errors.Wrap(err, "While processing "+REPOSITORY_METAFILE)
}
repo.SetIndex(meta.ToArtifactIndex())
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
err = reciper.Load(treefs)
if err != nil {
@@ -451,21 +673,59 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
repo.SetTree(reciper)
repo.SetTreePath(treefs)
// Copy the locally available data to the synced repository:
// e.g. locally we can override the type (disk) or the priority,
// while remotely it could be advertised differently
repo.SetUrls(r.GetUrls())
repo.SetAuthentication(r.GetAuthentication())
repo.SetType(r.GetType())
repo.SetPriority(r.GetPriority())
InfoC(
Bold(Yellow(":information_source: Repository "+repo.GetName()+" priority: ")).String() +
Bold(Green(repo.GetPriority())).String() + " - type " +
Bold(Green(repo.GetType())).String(),
)
return repo, nil
}
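For orientation, here is a minimal sketch of the write/sync round trip that the repository tests below exercise: generate a repository from a compiled tree, write the spec plus the tree and metadata tarballs, then consume them from the client side. The output directory variable and the fixture path are illustrative assumptions, not part of this change.

// Server side: index the artifacts referenced by the tree and write
// repository.yaml, the tree tarball and the metadata tarball to outDir.
repo, err := GenerateRepository("test", "description", "disk", []string{outDir}, 1, outDir,
	"../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
if err != nil {
	return err
}
if err := repo.Write(outDir, false); err != nil {
	return err
}

// Client side: parse a repository spec and sync the tree and metadata tarballs.
client, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
  - "`+outDir+`"
`), pkg.NewInMemoryDatabase(false))
if err != nil {
	return err
}
synced, err := client.Sync(false)
if err != nil {
	return err
}
_ = synced // synced.GetTree() and the artifact index are now populated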
func (r *LuetSystemRepository) Serialize() (*LuetSystemRepositoryMetadata, LuetSystemRepositorySerialized) {
serialized := LuetSystemRepositorySerialized{
Name: r.Name,
Description: r.Description,
Urls: r.Urls,
Priority: r.Priority,
Type: r.Type,
Revision: r.Revision,
LastUpdate: r.LastUpdate,
RepositoryFiles: r.RepositoryFiles,
}
// Check whether we need to set the index or simply use the
// value returned by CleanPath
r.Index = r.Index.CleanPath()
meta := &LuetSystemRepositoryMetadata{
Index: []*compiler.PackageArtifact{},
}
for _, a := range r.Index {
art := a.(*compiler.PackageArtifact)
meta.Index = append(meta.Index, art)
}
return meta, serialized
}
func (r Repositories) Len() int { return len(r) }
func (r Repositories) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r Repositories) Less(i, j int) bool {
return r[i].GetPriority() < r[j].GetPriority()
}
func (r Repositories) World() []pkg.Package {
func (r Repositories) World() pkg.Packages {
cache := map[string]pkg.Package{}
world := []pkg.Package{}
world := pkg.Packages{}
// Get unique packages. Walk in reverse so the definitions from the highest-priority repo overwrite the lower-priority ones.
// This way, when we later walk the dependencies sorted by highest priority, we have a better chance of success.
@@ -503,7 +763,7 @@ type PackageMatch struct {
Package pkg.Package
}
func (re Repositories) PackageMatches(p []pkg.Package) []PackageMatch {
func (re Repositories) PackageMatches(p pkg.Packages) []PackageMatch {
// TODO: Better heuristic. Here we pick the first repo that contains the atom, sorted by priority, but
// we should try permutations and pick the best match; if there are multiple solutions the user should be able to choose.
sort.Sort(re)
@@ -524,10 +784,10 @@ PACKAGE:
}
func (re Repositories) ResolveSelectors(p []pkg.Package) []pkg.Package {
func (re Repositories) ResolveSelectors(p pkg.Packages) pkg.Packages {
// If a selector is given, get the best from each repo
sort.Sort(re) // respect prio
var matches []pkg.Package
var matches pkg.Packages
PACKAGE:
for _, pack := range p {
REPOSITORY:
@@ -554,14 +814,25 @@ PACKAGE:
}
func (re Repositories) Search(s string) []PackageMatch {
func (re Repositories) SearchPackages(p string, o LuetSearchOpts) []PackageMatch {
sort.Sort(re)
var term = regexp.MustCompile(s)
var matches []PackageMatch
var err error
for _, r := range re {
for _, pack := range r.GetTree().GetDatabase().World() {
if term.MatchString(pack.GetName()) {
var repoMatches pkg.Packages
switch o.Mode {
case SRegexPkg:
repoMatches, err = r.GetTree().GetDatabase().FindPackageMatch(p)
case SLabel:
repoMatches, err = r.GetTree().GetDatabase().FindPackageLabel(p)
case SRegexLabel:
repoMatches, err = r.GetTree().GetDatabase().FindPackageLabelMatch(p)
}
if err == nil && len(repoMatches) > 0 {
for _, pack := range repoMatches {
matches = append(matches, PackageMatch{Package: pack, Repo: r})
}
}
@@ -569,3 +840,15 @@ func (re Repositories) Search(s string) []PackageMatch {
return matches
}
func (re Repositories) SearchLabelMatch(s string) []PackageMatch {
return re.SearchPackages(s, LuetSearchOpts{Pattern: s, Mode: SRegexLabel})
}
func (re Repositories) SearchLabel(s string) []PackageMatch {
return re.SearchPackages(s, LuetSearchOpts{Pattern: s, Mode: SLabel})
}
func (re Repositories) Search(s string) []PackageMatch {
return re.SearchPackages(s, LuetSearchOpts{Pattern: s, Mode: SRegexPkg})
}
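A short, hedged sketch of how the three search modes above are meant to be driven; the repository list and the patterns are illustrative assumptions.

var repos Repositories // populated and synced elsewhere

// Regex over package names (what the old Search did).
byName := repos.SearchPackages("lib.*", LuetSearchOpts{Pattern: "lib.*", Mode: SRegexPkg})

// Exact label key lookup.
byLabel := repos.SearchLabel("project1")

// Regex over "key=value" label pairs.
byLabelRe := repos.SearchLabelMatch("project[0-9]=.*")

for _, m := range append(append(byName, byLabel...), byLabelRe...) {
	fmt.Println(m.Repo.GetName(), m.Package.HumanReadableString())
}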


@@ -18,6 +18,7 @@ package installer_test
import (
// . "github.com/mudler/luet/pkg/installer"
"io/ioutil"
"os"
@@ -34,7 +35,7 @@ import (
var _ = Describe("Repository", func() {
Context("Generation", func() {
It("Generate repository metadat", func() {
It("Generate repository metadata", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
@@ -86,12 +87,113 @@ var _ = Describe("Repository", func() {
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
})
It("Generate repository metadata of files ONLY referenced in a tree", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/finalizers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec2, err := compiler2.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
Expect(spec2.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(spec.BuildSteps()).To(Equal([]string{"echo artifact5 > /test5", "echo artifact6 > /test6", "./generate.sh"}))
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
compiler2.SetConcurrency(1)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
artifact2, err := compiler2.Compile(false, spec2)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact2.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact2.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
// We now check that the artifact not referenced in the tree
// (spec2) is not indexed in the repository
repository, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
repos, err := repository.Sync(true)
Expect(err).ToNot(HaveOccurred())
_, err = repos.GetTree().GetDatabase().FindPackage(spec.GetPackage())
Expect(err).ToNot(HaveOccurred())
_, err = repos.GetTree().GetDatabase().FindPackage(spec2.GetPackage())
Expect(err).To(HaveOccurred()) // should throw error
})
})
Context("Matching packages", func() {


@@ -9,6 +9,6 @@ type System struct {
Target string
}
func (s *System) World() ([]pkg.Package, error) {
func (s *System) World() (pkg.Packages, error) {
return s.Database.World(), nil
}


@@ -53,12 +53,20 @@ func ZapLogger() error {
}
func Spinner(i int) {
var confLevel int
if LuetCfg.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = level2Number(LuetCfg.GetLogging().Level)
}
if 2 > confLevel {
return
}
if i > 43 {
i = 43
}
if !LuetCfg.GetGeneral().Debug && !s.Active() {
if !s.Active() {
// s.UpdateCharSet(spinner.CharSets[i])
s.Start() // Start the spinner
}
@@ -79,9 +87,16 @@ func SpinnerText(suffix, prefix string) {
}
func SpinnerStop() {
if !LuetCfg.GetGeneral().Debug {
s.Stop()
var confLevel int
if LuetCfg.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = level2Number(LuetCfg.GetLogging().Level)
}
if 2 > confLevel {
return
}
s.Stop()
}
func level2Number(level string) int {


@@ -33,7 +33,7 @@ type PackageSet interface {
GetPackage(ID string) (Package, error)
Clean() error
FindPackage(Package) (Package, error)
FindPackages(p Package) ([]Package, error)
FindPackages(p Package) (Packages, error)
UpdatePackage(p Package) error
GetAllPackages(packages chan Package) error
RemovePackage(Package) error
@@ -41,10 +41,13 @@ type PackageSet interface {
GetPackageFiles(Package) ([]string, error)
SetPackageFiles(*PackageFile) error
RemovePackageFiles(Package) error
FindPackageVersions(p Package) ([]Package, error)
World() []Package
FindPackageVersions(p Package) (Packages, error)
World() Packages
FindPackageCandidate(p Package) (Package, error)
FindPackageLabel(labelKey string) (Packages, error)
FindPackageLabelMatch(pattern string) (Packages, error)
FindPackageMatch(pattern string) (Packages, error)
}
type PackageFile struct {


@@ -17,7 +17,9 @@ package pkg
import (
"encoding/base64"
"fmt"
"os"
"regexp"
"strconv"
"sync"
"time"
@@ -227,19 +229,19 @@ func (db *BoltDatabase) getProvide(p Package) (Package, error) {
db.Unlock()
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
for ve, _ := range versions {
match, err := p.VersionMatchSelector(ve)
match, err := p.VersionMatchSelector(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match version")
}
if match {
pa, ok := db.ProvidesDatabase[p.GetPackageName()][ve]
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
return pa, nil //pick the first (we shouldn't have providers that are conflicting)
// TODO: A find dbcall here would recurse, but would give chance to have providers of providers
@@ -308,12 +310,12 @@ func (db *BoltDatabase) RemovePackage(p Package) error {
var found DefaultPackage
err = bolt.Select(q.Eq("Name", p.GetName()), q.Eq("Category", p.GetCategory()), q.Eq("Version", p.GetVersion())).Limit(1).Delete(&found)
if err != nil {
return errors.Wrap(err, "No package found to delete")
return errors.New(fmt.Sprintf("Package not found: %s", p.HumanReadableString()))
}
return nil
}
func (db *BoltDatabase) World() []Package {
func (db *BoltDatabase) World() Packages {
var all []Package
// FIXME: This should all be locked in the db - for now forbid the solver to be run in threads.
@@ -323,7 +325,7 @@ func (db *BoltDatabase) World() []Package {
all = append(all, pack)
}
}
return all
return Packages(all)
}
func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
@@ -337,7 +339,7 @@ func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
if err != nil || len(packages) == 0 {
required = p
} else {
required = Best(packages)
required = packages.Best(nil)
}
return required, nil
@@ -350,7 +352,7 @@ func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
// FindPackages returns the list of packages belonging to cat/name (any version in the requested range)
// FIXME: Optimize, see inmemorydb
func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
func (db *BoltDatabase) FindPackages(p Package) (Packages, error) {
// Provides: Treat as the replaced package here
if provided, err := db.getProvide(p); err == nil {
p = provided
@@ -361,7 +363,7 @@ func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
continue
}
match, err := p.SelectorMatchVersion(w.GetVersion())
match, err := p.SelectorMatchVersion(w.GetVersion(), nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match selector")
}
@@ -369,11 +371,11 @@ func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
// FindPackageVersions returns the list of packages belonging to cat/name
func (db *BoltDatabase) FindPackageVersions(p Package) ([]Package, error) {
func (db *BoltDatabase) FindPackageVersions(p Package) (Packages, error) {
var versionsInWorld []Package
for _, w := range db.World() {
if w.GetName() != p.GetName() || w.GetCategory() != p.GetCategory() {
@@ -382,5 +384,63 @@ func (db *BoltDatabase) FindPackageVersions(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
func (db *BoltDatabase) FindPackageLabel(labelKey string) (Packages, error) {
var ans []Package
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if pack.HasLabel(labelKey) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}
func (db *BoltDatabase) FindPackageLabelMatch(pattern string) (Packages, error) {
var ans []Package
re, err := regexp.Compile(pattern)
if err != nil {
return nil, errors.New("Invalid regex " + pattern + "!")
}
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if pack.MatchLabel(re) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}
func (db *BoltDatabase) FindPackageMatch(pattern string) (Packages, error) {
var ans []Package
re, err := regexp.Compile(pattern)
if err != nil {
return nil, errors.New("Invalid regex " + pattern + "!")
}
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if re.MatchString(pack.HumanReadableString()) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}


@@ -18,6 +18,8 @@ package pkg
import (
"encoding/base64"
"encoding/json"
"fmt"
"regexp"
"sync"
"github.com/pkg/errors"
@@ -58,7 +60,7 @@ func (db *InMemoryDatabase) Get(s string) (string, error) {
defer db.Unlock()
pa, ok := db.Database[s]
if !ok {
return "", errors.New("No key found with that id")
return "", errors.New(fmt.Sprintf("No key found for: %s", s))
}
return pa, nil
}
@@ -177,7 +179,7 @@ func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
for ve, _ := range versions {
match, err := p.VersionMatchSelector(ve)
match, err := p.VersionMatchSelector(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match version")
}
@@ -222,7 +224,7 @@ func (db *InMemoryDatabase) FindPackage(p Package) (Package, error) {
}
// FindPackageVersions returns the list of packages belonging to cat/name
func (db *InMemoryDatabase) FindPackageVersions(p Package) ([]Package, error) {
func (db *InMemoryDatabase) FindPackageVersions(p Package) (Packages, error) {
versions, ok := db.CacheNoVersion[p.GetPackageName()]
if !ok {
return nil, errors.New("No versions found for package")
@@ -235,11 +237,11 @@ func (db *InMemoryDatabase) FindPackageVersions(p Package) ([]Package, error) {
}
versionsInWorld = append(versionsInWorld, w)
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
// FindPackages returns the list of packages belonging to cat/name (any version in the requested range)
func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
func (db *InMemoryDatabase) FindPackages(p Package) (Packages, error) {
// Provides: Treat as the replaced package here
if provided, err := db.getProvide(p); err == nil {
@@ -247,11 +249,11 @@ func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
}
versions, ok := db.CacheNoVersion[p.GetPackageName()]
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
var versionsInWorld []Package
for ve, _ := range versions {
match, err := p.SelectorMatchVersion(ve)
match, err := p.SelectorMatchVersion(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match selector")
}
@@ -264,7 +266,7 @@ func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
func (db *InMemoryDatabase) UpdatePackage(p Package) error {
@@ -276,7 +278,7 @@ func (db *InMemoryDatabase) UpdatePackage(p Package) error {
return db.Set(p.GetFingerPrint(), enc)
return errors.New("Package not found")
return errors.New(fmt.Sprintf("Package not found: %s", p.HumanReadableString()))
}
func (db *InMemoryDatabase) GetPackages() []string {
@@ -301,7 +303,7 @@ func (db *InMemoryDatabase) GetPackageFiles(p Package) ([]string, error) {
pa, ok := db.FileDatabase[p.GetFingerPrint()]
if !ok {
return pa, errors.New("No key found with that id")
return pa, errors.New(fmt.Sprintf("No key found for: %s", p.HumanReadableString()))
}
return pa, nil
@@ -326,7 +328,7 @@ func (db *InMemoryDatabase) RemovePackage(p Package) error {
delete(db.Database, p.GetFingerPrint())
return nil
}
func (db *InMemoryDatabase) World() []Package {
func (db *InMemoryDatabase) World() Packages {
var all []Package
// FIXME: This should all be locked in the db - for now forbid the solver to be run in threads.
for _, k := range db.GetPackages() {
@@ -335,7 +337,7 @@ func (db *InMemoryDatabase) World() []Package {
all = append(all, pack)
}
}
return all
return Packages(all)
}
func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
@@ -348,7 +350,7 @@ func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
if err != nil || len(packages) == 0 {
required = p
} else {
required = Best(packages)
required = packages.Best(nil)
}
return required, nil
@@ -358,3 +360,62 @@ func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
return required, err
}
func (db *InMemoryDatabase) FindPackageLabel(labelKey string) (Packages, error) {
var ans []Package
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if pack.HasLabel(labelKey) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}
func (db *InMemoryDatabase) FindPackageLabelMatch(pattern string) (Packages, error) {
var ans []Package
re, err := regexp.Compile(pattern)
if err != nil {
return nil, errors.New("Invalid regex " + pattern + "!")
}
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if pack.MatchLabel(re) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}
func (db *InMemoryDatabase) FindPackageMatch(pattern string) (Packages, error) {
var ans []Package
re, err := regexp.Compile(pattern)
if err != nil {
return nil, errors.New("Invalid regex " + pattern + "!")
}
for _, k := range db.GetPackages() {
pack, err := db.GetPackage(k)
if err != nil {
return ans, err
}
if re.MatchString(pack.HumanReadableString()) {
ans = append(ans, pack)
}
}
return Packages(ans), nil
}
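To make the lookup semantics concrete, a small usage sketch against an in-memory database, mirroring the package test suite further below; the package, labels and patterns are illustrative assumptions.

db := NewInMemoryDatabase(false)

p := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
p.AddLabel("project1", "test1")
if _, err := db.CreatePackage(p); err != nil {
	return err
}

byKey, _ := db.FindPackageLabel("project1")              // exact label key
byPair, _ := db.FindPackageLabelMatch("project[0-9]=.*") // regex over "key=value" pairs
byName, _ := db.FindPackageMatch("A-1.0")                // regex over category/name-version strings
fmt.Println(len(byKey), len(byPair), len(byName))        // 1 1 1, assuming only p is stored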


@@ -17,19 +17,21 @@ package pkg
import (
"bytes"
"crypto/md5"
"encoding/json"
"fmt"
"io"
"path/filepath"
"sort"
"regexp"
"strconv"
"strings"
// . "github.com/mudler/luet/pkg/logger"
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/crillab/gophersat/bf"
version "github.com/hashicorp/go-version"
"github.com/jinzhu/copier"
"github.com/ghodss/yaml"
"github.com/jinzhu/copier"
version "github.com/mudler/luet/pkg/versioner"
"github.com/pkg/errors"
)
// Package is a package interface (TBD)
@@ -44,14 +46,16 @@ type Package interface {
GetPackageName() string
Requires([]*DefaultPackage) Package
Conflicts([]*DefaultPackage) Package
Revdeps(PackageDatabase) []Package
Revdeps(PackageDatabase) Packages
ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages
LabelDeps(PackageDatabase, string) Packages
GetProvides() []*DefaultPackage
SetProvides([]*DefaultPackage) Package
GetRequires() []*DefaultPackage
GetConflicts() []*DefaultPackage
Expand(PackageDatabase) ([]Package, error)
Expand(PackageDatabase) (Packages, error)
SetCategory(string)
GetName() string
@@ -60,7 +64,7 @@ type Package interface {
GetVersion() string
RequiresContains(PackageDatabase, Package) (bool, error)
Matches(m Package) bool
Bigger(m Package) bool
BumpBuildVersion() error
AddUse(use string)
RemoveUse(use string)
@@ -82,21 +86,30 @@ type Package interface {
SetLicense(string)
GetLicense() string
AddLabel(string, string)
GetLabels() map[string]string
HasLabel(string) bool
MatchLabel(*regexp.Regexp) bool
IsSelector() bool
VersionMatchSelector(string) (bool, error)
SelectorMatchVersion(string) (bool, error)
VersionMatchSelector(string, version.Versioner) (bool, error)
SelectorMatchVersion(string, version.Versioner) (bool, error)
String() string
HumanReadableString() string
HashFingerprint() string
}
type Tree interface {
GetPackageSet() PackageDatabase
Prelude() string // A tree might have a prelude to be able to consume a tree
SetPackageSet(s PackageDatabase)
World() ([]Package, error)
World() (Packages, error)
FindPackage(Package) (Package, error)
}
type Packages []Package
// >> Unmarshallers
// DefaultPackageFromYaml decodes a package from yaml bytes
func DefaultPackageFromYaml(yml []byte) (DefaultPackage, error) {
@@ -130,25 +143,27 @@ func (t *DefaultPackage) JSON() ([]byte, error) {
// DefaultPackage represent a standard package definition
type DefaultPackage struct {
ID int `storm:"id,increment" json:"id"` // primary key with auto increment
Name string `json:"name"` // Affects YAML field names too.
Version string `json:"version"` // Affects YAML field names too.
Category string `json:"category"` // Affects YAML field names too.
UseFlags []string `json:"use_flags"` // Affects YAML field names too.
State State
PackageRequires []*DefaultPackage `json:"requires"` // Affects YAML field names too.
PackageConflicts []*DefaultPackage `json:"conflicts"` // Affects YAML field names too.
IsSet bool `json:"set"` // Affects YAML field names too.
Provides []*DefaultPackage `json:"provides"` // Affects YAML field names too.
ID int `storm:"id,increment" json:"id"` // primary key with auto increment
Name string `json:"name"` // Affects YAML field names too.
Version string `json:"version"` // Affects YAML field names too.
Category string `json:"category"` // Affects YAML field names too.
UseFlags []string `json:"use_flags,omitempty"` // Affects YAML field names too.
State State `json:"state,omitempty"`
PackageRequires []*DefaultPackage `json:"requires"` // Affects YAML field names too.
PackageConflicts []*DefaultPackage `json:"conflicts"` // Affects YAML field names too.
IsSet bool `json:"set,omitempty"` // Affects YAML field names too.
Provides []*DefaultPackage `json:"provides,omitempty"` // Affects YAML field names too.
// TODO: Annotations?
// Path is set only internally when tree is loaded from disk
Path string `json:"path,omitempty"`
Description string `json:"description"`
Uri []string `json:"uri"`
License string `json:"license"`
Description string `json:"description,omitempty"`
Uri []string `json:"uri,omitempty"`
License string `json:"license,omitempty"`
Labels map[string]string `json:"labels,omitempty"`
}
// State represent the package state
@@ -156,7 +171,13 @@ type State string
// NewPackage returns a new package
func NewPackage(name, version string, requires []*DefaultPackage, conflicts []*DefaultPackage) *DefaultPackage {
return &DefaultPackage{Name: name, Version: version, PackageRequires: requires, PackageConflicts: conflicts}
return &DefaultPackage{
Name: name,
Version: version,
PackageRequires: requires,
PackageConflicts: conflicts,
Labels: make(map[string]string, 0),
}
}
func (p *DefaultPackage) String() string {
@@ -173,6 +194,16 @@ func (p *DefaultPackage) GetFingerPrint() string {
return fmt.Sprintf("%s-%s-%s", p.Name, p.Category, p.Version)
}
func (p *DefaultPackage) HashFingerprint() string {
h := md5.New()
io.WriteString(h, p.GetFingerPrint())
return fmt.Sprintf("%x", h.Sum(nil))
}
func (p *DefaultPackage) HumanReadableString() string {
return fmt.Sprintf("%s/%s-%s", p.Category, p.Name, p.Version)
}
func FromString(s string) Package {
var unescaped DefaultPackage
@@ -204,6 +235,28 @@ func (p *DefaultPackage) IsSelector() bool {
return strings.ContainsAny(p.GetVersion(), "<>=")
}
func (p *DefaultPackage) HasLabel(label string) bool {
ans := false
for k, _ := range p.Labels {
if k == label {
ans = true
break
}
}
return ans
}
func (p *DefaultPackage) MatchLabel(r *regexp.Regexp) bool {
ans := false
for k, v := range p.Labels {
if r.MatchString(k + "=" + v) {
ans = true
break
}
}
return ans
}
// AddUse adds a use to a package
func (p *DefaultPackage) AddUse(use string) {
for _, v := range p.UseFlags {
@@ -288,6 +341,12 @@ func (p *DefaultPackage) SetCategory(s string) {
func (p *DefaultPackage) GetUses() []string {
return p.UseFlags
}
func (p *DefaultPackage) AddLabel(k, v string) {
p.Labels[k] = v
}
func (p *DefaultPackage) GetLabels() map[string]string {
return p.Labels
}
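A brief illustration of the two label predicates: HasLabel checks the key only, while MatchLabel runs the regex against every "key=value" pair (see the test suite below). The label values are illustrative.

a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.AddLabel("project1", "test1")

_ = a.HasLabel("project1") // true: plain key lookup
_ = a.HasLabel("missing")  // false

re := regexp.MustCompile("project[0-9]=test.*")
_ = a.MatchLabel(re) // true: matched against "project1=test1"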
func (p *DefaultPackage) GetProvides() []*DefaultPackage {
return p.Provides
}
@@ -320,23 +379,16 @@ func (p *DefaultPackage) Matches(m Package) bool {
}
return false
}
func (p *DefaultPackage) Bigger(m Package) bool {
low := Lower([]Package{p, m})
if low.Matches(m) {
return true
}
return false
}
func (p *DefaultPackage) Expand(definitiondb PackageDatabase) ([]Package, error) {
var versionsInWorld []Package
func (p *DefaultPackage) Expand(definitiondb PackageDatabase) (Packages, error) {
var versionsInWorld Packages
all, err := definitiondb.FindPackages(p)
if err != nil {
return nil, err
}
for _, w := range all {
match, err := p.SelectorMatchVersion(w.GetVersion())
match, err := p.SelectorMatchVersion(w.GetVersion(), nil)
if err != nil {
return nil, err
}
@@ -348,8 +400,8 @@ func (p *DefaultPackage) Expand(definitiondb PackageDatabase) ([]Package, error)
return versionsInWorld, nil
}
func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) []Package {
var versionsInWorld []Package
func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) Packages {
var versionsInWorld Packages
for _, w := range definitiondb.World() {
if w.Matches(p) {
continue
@@ -365,11 +417,74 @@ func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) []Package {
return versionsInWorld
}
// ExpandedRevdeps returns the reverse dependencies of the package,
// also matching version selectors (>, <, >=, <=)
func (p *DefaultPackage) ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages {
var versionsInWorld Packages
if _, ok := visited[p.HumanReadableString()]; ok {
return versionsInWorld
}
visited[p.HumanReadableString()] = true
for _, w := range definitiondb.World() {
if w.Matches(p) {
continue
}
match := false
for _, re := range w.GetRequires() {
if re.Matches(p) {
match = true
}
if !match {
packages, _ := re.Expand(definitiondb)
for _, pa := range packages {
if pa.Matches(p) {
match = true
}
}
}
// if ok, _ := w.RequiresContains(definitiondb, p); ok {
}
if match {
versionsInWorld = append(versionsInWorld, w)
versionsInWorld = append(versionsInWorld, w.ExpandedRevdeps(definitiondb, visited).Unique()...)
}
// }
}
//visited[p.HumanReadableString()] = true
return versionsInWorld.Unique()
}
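Usage follows the pattern in the revdeps tests below: the caller seeds the visited map, and the same map is threaded through the recursion so cycles introduced by mutual requires terminate. The package and definitions database are assumed to be set up already.

visited := make(map[string]interface{})
revdeps := a.ExpandedRevdeps(definitions, visited)
for _, r := range revdeps {
	fmt.Println(r.HumanReadableString())
}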
func (p *DefaultPackage) LabelDeps(definitiondb PackageDatabase, labelKey string) Packages {
var pkgsWithLabelInWorld Packages
// TODO: check whether an index could speed up the
// lookup instead of iterating the whole list.
for _, w := range definitiondb.World() {
if w.HasLabel(labelKey) {
pkgsWithLabelInWorld = append(pkgsWithLabelInWorld, w)
}
}
return pkgsWithLabelInWorld
}
func DecodePackage(ID string, db PackageDatabase) (Package, error) {
return db.GetPackage(ID)
}
func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Package) (bool, error) {
func (pack *DefaultPackage) scanRequires(definitiondb PackageDatabase, s Package, visited map[string]interface{}) (bool, error) {
if _, ok := visited[pack.HumanReadableString()]; ok {
return false, nil
}
visited[pack.HumanReadableString()] = true
p, err := definitiondb.FindPackage(pack)
if err != nil {
p = pack //relax things
@@ -387,14 +502,27 @@ func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Pac
return true, nil
}
}
if contains, err := re.RequiresContains(definitiondb, s); err == nil && contains {
if contains, err := re.scanRequires(definitiondb, s, visited); err == nil && contains {
return true, nil
}
}
return false, nil
}
func Lower(set []Package) Package {
// RequiresContains recursively scans the package dependencies in the database to find a match with the given package.
// It is used by the solver during uninstall.
func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Package) (bool, error) {
return pack.scanRequires(definitiondb, s, make(map[string]interface{}))
}
// Best returns the best (highest) version of the package from a list.
// It accepts a versioner interface to change the ordering policy. If nil is supplied,
// it defaults to version.WrappedVersioner, which supports both semver and Debian versioning.
func (set Packages) Best(v version.Versioner) Package {
if v == nil {
v = &version.WrappedVersioner{}
}
var versionsMap map[string]Package = make(map[string]Package)
if len(set) == 0 {
panic("Best needs a list with elements")
@@ -405,43 +533,28 @@ func Lower(set []Package) Package {
versionsRaw = append(versionsRaw, p.GetVersion())
versionsMap[p.GetVersion()] = p
}
sorted := v.Sort(versionsRaw)
versions := make([]*version.Version, len(versionsRaw))
for i, raw := range versionsRaw {
v, _ := version.NewVersion(raw)
versions[i] = v
}
// After this, the versions are properly sorted
sort.Sort(version.Collection(versions))
return versionsMap[versions[0].Original()]
return versionsMap[sorted[len(sorted)-1]]
}
func Best(set []Package) Package {
var versionsMap map[string]Package = make(map[string]Package)
if len(set) == 0 {
panic("Best needs a list with elements")
}
versionsRaw := []string{}
func (set Packages) Unique() Packages {
var result Packages
uniq := make(map[string]Package)
for _, p := range set {
versionsRaw = append(versionsRaw, p.GetVersion())
versionsMap[p.GetVersion()] = p
uniq[p.GetFingerPrint()] = p
}
versions := make([]*version.Version, len(versionsRaw))
for i, raw := range versionsRaw {
v, _ := version.NewVersion(raw)
versions[i] = v
for _, p := range uniq {
result = append(result, p)
}
// After this, the versions are properly sorted
sort.Sort(version.Collection(versions))
return versionsMap[versions[len(versions)-1].Original()]
return result
}
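A small sketch of the two helpers on Packages; the database lookup is an assumption, and Best panics on an empty list, hence the guard.

candidates, err := db.FindPackageVersions(&DefaultPackage{Name: "A", Category: "test"})
if err == nil && len(candidates) > 0 {
	newest := candidates.Best(nil) // nil falls back to version.WrappedVersioner
	fmt.Println(newest.GetVersion())
}

merged := append(candidates, candidates...).Unique() // duplicates collapse by fingerprint
fmt.Println(len(merged))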
func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db PackageDatabase) ([]bf.Formula, error) {
func (pack *DefaultPackage) buildFormula(definitiondb PackageDatabase, db PackageDatabase, visited map[string]interface{}) ([]bf.Formula, error) {
if _, ok := visited[pack.HumanReadableString()]; ok {
return nil, nil
}
visited[pack.HumanReadableString()] = true
p, err := definitiondb.FindPackage(pack)
if err != nil {
p = pack // Relax failures and trust the def
@@ -454,6 +567,22 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
A := bf.Var(encodedA)
var formulas []bf.Formula
// Do conflict with other packages versions (if A is selected, then conflict with other versions of A)
packages, _ := definitiondb.FindPackageVersions(p)
if len(packages) > 0 {
for _, cp := range packages {
encodedB, err := cp.Encode(db)
if err != nil {
return nil, err
}
B := bf.Var(encodedB)
if !p.Matches(cp) {
formulas = append(formulas, bf.Or(bf.Not(A), bf.Or(bf.Not(A), bf.Not(B))))
}
}
}
for _, requiredDef := range p.GetRequires() {
required, err := definitiondb.FindPackage(requiredDef)
if err != nil || requiredDef.IsSelector() {
@@ -465,62 +594,60 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
if err != nil || len(packages) == 0 {
required = requiredDef
} else {
if len(packages) == 1 {
required = packages[0]
} else {
var ALO, priorityConstraints, priorityALO []bf.Formula
// Try to prioritize the best match.
// Force the solver to consider our candidate first (if it exists).
// Then build ALO and AMO constraints for the requires.
c, candidateErr := definitiondb.FindPackageCandidate(requiredDef)
var C bf.Formula
if candidateErr == nil {
// We have a desired candidate; try to look for a solution that includes it first
for _, o := range packages {
encodedB, err := o.Encode(db)
if err != nil {
return nil, err
}
B := bf.Var(encodedB)
if !o.Matches(c) {
priorityConstraints = append(priorityConstraints, bf.Not(B))
priorityALO = append(priorityALO, B)
}
}
encodedC, err := c.Encode(db)
if err != nil {
return nil, err
}
C = bf.Var(encodedC)
// Either the candidate is true, or all the others may be false.
// This forces the CDCL SAT implementation to look first for a solution with C=true
formulas = append(formulas, bf.Or(bf.Or(C, bf.Or(priorityConstraints...)), bf.Or(bf.Not(C), bf.Or(priorityALO...))))
}
var ALO, priorityConstraints, priorityALO []bf.Formula
// AMO - At most one
// Try to prioritize the best match.
// Force the solver to consider our candidate first (if it exists).
// Then build ALO and AMO constraints for the requires.
c, candidateErr := definitiondb.FindPackageCandidate(requiredDef)
var C bf.Formula
if candidateErr == nil {
// We have a desired candidate; try to look for a solution that includes it first
for _, o := range packages {
encodedB, err := o.Encode(db)
if err != nil {
return nil, err
}
B := bf.Var(encodedB)
ALO = append(ALO, B)
for _, i := range packages {
encodedI, err := i.Encode(db)
if err != nil {
return nil, err
}
I := bf.Var(encodedI)
if !o.Matches(i) {
formulas = append(formulas, bf.Or(bf.Not(I), bf.Not(B)))
}
if !o.Matches(c) {
priorityConstraints = append(priorityConstraints, bf.Not(B))
priorityALO = append(priorityALO, B)
}
}
formulas = append(formulas, bf.Or(ALO...)) // ALO - At least one
continue
encodedC, err := c.Encode(db)
if err != nil {
return nil, err
}
C = bf.Var(encodedC)
// Either the candidate is true, or all the others may be false.
// This forces the CDCL SAT implementation to look first for a solution with C=true
formulas = append(formulas, bf.Or(bf.Not(A), bf.Or(bf.Or(C, bf.Or(priorityConstraints...)), bf.Or(bf.Not(C), bf.Or(priorityALO...)))))
}
// AMO - At most one
for _, o := range packages {
encodedB, err := o.Encode(db)
if err != nil {
return nil, err
}
B := bf.Var(encodedB)
ALO = append(ALO, B)
for _, i := range packages {
encodedI, err := i.Encode(db)
if err != nil {
return nil, err
}
I := bf.Var(encodedI)
if !o.Matches(i) {
formulas = append(formulas, bf.Or(bf.Not(A), bf.Or(bf.Not(I), bf.Not(B))))
}
}
}
formulas = append(formulas, bf.Or(bf.Not(A), bf.Or(ALO...))) // ALO - At least one
continue
}
}
encodedB, err := required.Encode(db)
@@ -529,8 +656,8 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
}
B := bf.Var(encodedB)
formulas = append(formulas, bf.Or(bf.Not(A), B))
f, err := required.BuildFormula(definitiondb, db)
r := required.(*DefaultPackage) // We know since the implementation is DefaultPackage, that can be only DefaultPackage
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
@@ -559,8 +686,8 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
B := bf.Var(encodedB)
formulas = append(formulas, bf.Or(bf.Not(A),
bf.Not(B)))
f, err := p.BuildFormula(definitiondb, db)
r := p.(*DefaultPackage) // We know since the implementation is DefaultPackage, that can be only DefaultPackage
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
@@ -579,16 +706,22 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
formulas = append(formulas, bf.Or(bf.Not(A),
bf.Not(B)))
f, err := required.BuildFormula(definitiondb, db)
r := required.(*DefaultPackage) // We know since the implementation is DefaultPackage, that can be only DefaultPackage
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
formulas = append(formulas, f...)
}
return formulas, nil
}
func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db PackageDatabase) ([]bf.Formula, error) {
return pack.buildFormula(definitiondb, db, make(map[string]interface{}))
}
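To see the constraint shape in isolation, here is a minimal, self-contained sketch of the pairwise at-least-one / at-most-one encoding used above for the candidate versions of a requirement, assuming gophersat's bf.Solve helper; the variable names are illustrative, not taken from the change.

package main

import (
	"fmt"

	"github.com/crillab/gophersat/bf"
)

func main() {
	a := bf.Var("A") // the package declaring the requirement
	candidates := []bf.Formula{bf.Var("B-1.0"), bf.Var("B-1.1")}

	formulas := []bf.Formula{a} // force A into the solution
	// ALO: if A is selected, at least one candidate version must be selected.
	formulas = append(formulas, bf.Or(bf.Not(a), bf.Or(candidates...)))
	// AMO: if A is selected, no two candidate versions can be selected together.
	for i := range candidates {
		for j := range candidates {
			if i != j {
				formulas = append(formulas, bf.Or(bf.Not(a), bf.Or(bf.Not(candidates[i]), bf.Not(candidates[j]))))
			}
		}
	}

	model := bf.Solve(bf.And(formulas...))
	fmt.Println(model) // e.g. map[A:true B-1.0:true B-1.1:false]; nil would mean UNSAT
}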
func (p *DefaultPackage) Explain() {
fmt.Println("====================")
@@ -608,3 +741,108 @@ func (p *DefaultPackage) Explain() {
fmt.Println("====================")
}
func (p *DefaultPackage) BumpBuildVersion() error {
cat := p.Category
if cat == "" {
// Use a fake category so the package string can be parsed
cat = "app"
}
gp, err := gentoo.ParsePackageStr(
fmt.Sprintf("%s/%s-%s", cat,
p.Name, p.GetVersion()))
if err != nil {
return errors.Wrap(err, "Error on parser version")
}
buildPrefix := ""
buildId := 0
if gp.VersionBuild != "" {
// Check if version build is a number
buildId, err = strconv.Atoi(gp.VersionBuild)
if err == nil {
goto end
}
// POST: it is not just a number
// TODO: check if there is a better way to handle all use cases.
r1 := regexp.MustCompile(`^r[0-9]*$`)
if r1 == nil {
return errors.New("Error on create regex for -r[0-9]")
}
if r1.MatchString(gp.VersionBuild) {
buildId, err = strconv.Atoi(strings.ReplaceAll(gp.VersionBuild, "r", ""))
if err == nil {
buildPrefix = "r"
goto end
}
}
p1 := regexp.MustCompile(`^p[0-9]*$`)
if p1 == nil {
return errors.New("Error on create regex for -p[0-9]")
}
if p1.MatchString(gp.VersionBuild) {
buildId, err = strconv.Atoi(strings.ReplaceAll(gp.VersionBuild, "p", ""))
if err == nil {
buildPrefix = "p"
goto end
}
}
rc1 := regexp.MustCompile(`^rc[0-9]*$`)
if rc1 == nil {
return errors.New("Error on create regex for -rc[0-9]")
}
if rc1.MatchString(gp.VersionBuild) {
buildId, err = strconv.Atoi(strings.ReplaceAll(gp.VersionBuild, "rc", ""))
if err == nil {
buildPrefix = "rc"
goto end
}
}
// Check if version build contains a dot
dotIdx := strings.LastIndex(gp.VersionBuild, ".")
if dotIdx > 0 {
buildPrefix = gp.VersionBuild[0 : dotIdx+1]
bVersion := gp.VersionBuild[dotIdx+1:]
buildId, err = strconv.Atoi(bVersion)
if err == nil {
goto end
}
}
buildPrefix = gp.VersionBuild + "."
buildId = 0
}
end:
buildId++
p.Version = fmt.Sprintf("%s%s+%s%d",
gp.Version, gp.VersionSuffix, buildPrefix, buildId)
return nil
}
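Expected behaviour, taken from the bump test cases further below; only the surrounding error handling is an assumption here.

p := NewPackage("A", "1.0+r1", []*DefaultPackage{}, []*DefaultPackage{})
if err := p.BumpBuildVersion(); err != nil {
	return err
}
fmt.Println(p.GetVersion()) // "1.0+r2"; a version without a build suffix becomes "1.0+1"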
func (p *DefaultPackage) SelectorMatchVersion(ver string, v version.Versioner) (bool, error) {
if !p.IsSelector() {
return false, errors.New("Package is not a selector")
}
if v == nil {
v = &version.WrappedVersioner{}
}
return v.ValidateSelector(ver, p.GetVersion()), nil
}
func (p *DefaultPackage) VersionMatchSelector(selector string, v version.Versioner) (bool, error) {
if v == nil {
v = &version.WrappedVersioner{}
}
return v.ValidateSelector(p.GetVersion(), selector), nil
}
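A short sketch of both directions of the selector check: passing nil picks the default WrappedVersioner, and calling SelectorMatchVersion on a non-selector package returns an error, as the code above shows. The concrete versions are illustrative.

sel := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
if ok, _ := sel.SelectorMatchVersion("1.2.0", nil); ok { // nil -> version.WrappedVersioner
	fmt.Println("1.2.0 satisfies >=1.0")
}

plain := NewPackage("B", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
if _, err := plain.SelectorMatchVersion("1.2.0", nil); err != nil {
	fmt.Println(err) // "Package is not a selector"
}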


@@ -16,6 +16,8 @@
package pkg_test
import (
"regexp"
. "github.com/mudler/luet/pkg/package"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -25,11 +27,18 @@ var _ = Describe("Package", func() {
Context("Encoding/Decoding", func() {
a := &DefaultPackage{Name: "test", Version: "1", Category: "t"}
a1 := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
It("Encodes and decodes correctly", func() {
Expect(a.String()).ToNot(Equal(""))
p := FromString(a.String())
Expect(p).To(Equal(a))
})
It("Generates packages fingerprint's hashes", func() {
Expect(a.HashFingerprint()).ToNot(Equal(a1.HashFingerprint()))
Expect(a.HashFingerprint()).To(Equal("c64caa391b79adb598ad98e261aa79a0"))
})
})
Context("Simple package", func() {
@@ -49,11 +58,39 @@ var _ = Describe("Package", func() {
Expect(lst).To(ContainElement(a1))
Expect(lst).ToNot(ContainElement(a01))
Expect(len(lst)).To(Equal(2))
p := Best(lst)
p := lst.Best(nil)
Expect(p).To(Equal(a11))
})
})
Context("Find label on packages", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.AddLabel("project1", "test1")
a.AddLabel("label2", "value1")
b := NewPackage("B", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b.AddLabel("project2", "test2")
b.AddLabel("label2", "value1")
It("Expands correctly", func() {
var err error
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b} {
_, err = definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
re := regexp.MustCompile("project[0-9][=].*")
Expect(err).ToNot(HaveOccurred())
Expect(re).ToNot(BeNil())
Expect(a.HasLabel("label2")).To(Equal(true))
Expect(a.HasLabel("label3")).To(Equal(false))
Expect(a.HasLabel("project1")).To(Equal(true))
Expect(b.HasLabel("project2")).To(Equal(true))
Expect(b.HasLabel("label2")).To(Equal(true))
Expect(b.MatchLabel(re)).To(Equal(true))
Expect(a.MatchLabel(re)).To(Equal(true))
})
})
Context("Check description", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.SetDescription("Description A")
@@ -122,6 +159,93 @@ var _ = Describe("Package", func() {
})
})
Context("revdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
It("doesn't resolve selectors", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
lst := a.Revdeps(definitions)
Expect(len(lst)).To(Equal(0))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}, &DefaultPackage{Name: "A", Version: ">=0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("RequiresContains", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a1 := NewPackage("A", "1.0", []*DefaultPackage{a}, []*DefaultPackage{})
@@ -204,7 +328,7 @@ var _ = Describe("Package", func() {
f, err := a1.BuildFormula(definitions, db)
Expect(err).ToNot(HaveOccurred())
Expect(len(f)).To(Equal(2))
Expect(len(f)).To(Equal(8))
// Expect(f[0].String()).To(Equal("or(not(c31f5842), a4910f77)"))
// Expect(f[1].String()).To(Equal("or(not(c31f5842), not(a97670be))"))
})
@@ -245,4 +369,69 @@ var _ = Describe("Package", func() {
Expect(len(a1.GetUses())).To(Equal(0))
})
})
Context("Check Bump build Version", func() {
It("Bump without build version", func() {
a1 := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
err := a1.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(a1.GetVersion()).To(Equal("1.0+1"))
})
It("Bump 2", func() {
p := NewPackage("A", "1.0+1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+2"))
})
It("Bump 3", func() {
p := NewPackage("A", "1.0+100", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+101"))
})
It("Bump 4", func() {
p := NewPackage("A", "1.0+r1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+r2"))
})
It("Bump 5", func() {
p := NewPackage("A", "1.0+p1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+p2"))
})
It("Bump 6", func() {
p := NewPackage("A", "1.0+pre20200315", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+pre20200315.1"))
})
It("Bump 7", func() {
p := NewPackage("A", "1.0+pre20200315.1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+pre20200315.2"))
})
It("Bump 8", func() {
p := NewPackage("A", "1.0+d-r1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+d-r1.1"))
})
It("Bump 9", func() {
p := NewPackage("A", "1.0+p20200315.1", []*DefaultPackage{}, []*DefaultPackage{})
err := p.BumpBuildVersion()
Expect(err).ToNot(HaveOccurred())
Expect(p.GetVersion()).To(Equal("1.0+p20200315.2"))
})
})
})


@@ -35,7 +35,7 @@ func LoadRepositories(c *LuetConfig) error {
files, err := ioutil.ReadDir(rdir)
if err != nil {
Warning("Skip dir", rdir, ":", err.Error())
Debug("Skip dir", rdir, ":", err.Error())
continue
}


@@ -23,6 +23,7 @@ import (
pkg "github.com/mudler/luet/pkg/package"
toposort "github.com/philopon/go-toposort"
"github.com/pkg/errors"
"github.com/stevenle/topsort"
)
@@ -145,7 +146,7 @@ func (assertions PackagesAssertions) Search(f string) *PackageAssert {
return nil
}
func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fingerprint string) PackagesAssertions {
func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fingerprint string) (PackagesAssertions, error) {
orderedAssertions := PackagesAssertions{}
unorderedAssertions := PackagesAssertions{}
@@ -191,19 +192,19 @@ func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fin
}
result, err := graph.TopSort(fingerprint)
if err != nil {
panic(err)
return nil, errors.Wrap(err, "fail on sorting "+fingerprint)
}
for _, res := range result {
a, ok := tmpMap[res]
if !ok {
panic("fail looking for " + res)
return nil, errors.New("fail looking for " + res)
// continue
}
orderedAssertions = append(orderedAssertions, a)
// orderedAssertions = append(PackagesAssertions{a}, orderedAssertions...) // push upfront
}
//helpers.ReverseAny(orderedAssertions)
return orderedAssertions
return orderedAssertions, nil
}
func (assertions PackagesAssertions) Explain() string {


@@ -72,7 +72,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, A.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, A.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
// Expect(len(solution)).To(Equal(6))
Expect(solution[0].Package.GetName()).To(Equal("G"))
Expect(solution[1].Package.GetName()).To(Equal("H"))
@@ -188,7 +189,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, A.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, A.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
// Expect(len(solution)).To(Equal(6))
Expect(solution[0].Package.GetName()).To(Equal("G"))
Expect(solution[1].Package.GetName()).To(Equal("H"))
@@ -206,7 +208,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, B.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, B.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
hash2 := solution.AssertionHash()
// Expect(len(solution)).To(Equal(6))
@@ -243,7 +246,11 @@ var _ = Describe("Decoder", func() {
solution2, err := s.Install([]pkg.Package{Z})
Expect(err).ToNot(HaveOccurred())
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Drop(Y).AssertionHash() == solution2.Order(dbDefinitions, Z.GetFingerPrint()).Drop(Z).AssertionHash()).To(BeTrue())
orderY, err := solution.Order(dbDefinitions, Y.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
orderZ, err := solution2.Order(dbDefinitions, Z.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(orderY.Drop(Y).AssertionHash() == orderZ.Drop(Z).AssertionHash()).To(BeTrue())
})
It("Hashes them, Cuts them and could be used for comparison", func() {
@@ -267,9 +274,13 @@ var _ = Describe("Decoder", func() {
solution2, err := s.Install([]pkg.Package{Z})
Expect(err).ToNot(HaveOccurred())
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Cut(Y).Drop(Y)).To(Equal(solution2.Order(dbDefinitions, Z.GetFingerPrint()).Cut(Z).Drop(Z)))
orderY, err := solution.Order(dbDefinitions, Y.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
orderZ, err := solution2.Order(dbDefinitions, Z.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(orderY.Cut(Y).Drop(Y)).To(Equal(orderZ.Cut(Z).Drop(Z)))
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Cut(Y).Drop(Y).AssertionHash()).To(Equal(solution2.Order(dbDefinitions, Z.GetFingerPrint()).Cut(Z).Drop(Z).AssertionHash()))
Expect(orderY.Cut(Y).Drop(Y).AssertionHash()).To(Equal(orderZ.Cut(Z).Drop(Z).AssertionHash()))
})
})


@@ -77,11 +77,11 @@ type QLearningResolver struct {
Solver PackageSolver
Formula bf.Formula
Targets []pkg.Package
Current []pkg.Package
Targets pkg.Packages
Current pkg.Packages
observedDelta int
observedDeltaChoice []pkg.Package
observedDeltaChoice pkg.Packages
Agent *qlearning.SimpleAgent
}
@@ -177,7 +177,7 @@ func (resolver *QLearningResolver) Try(c Choice) error {
packtoAdd := pkg.FromString(pack)
resolver.Attempted[pack+strconv.Itoa(int(c.Action))] = true // increase the count
s, _ := resolver.Solver.(*Solver)
var filtered []pkg.Package
var filtered pkg.Packages
switch c.Action {
case ActionAdded:


@@ -27,12 +27,12 @@ import (
// PackageSolver is an interface to a generic package solving algorithm
type PackageSolver interface {
SetDefinitionDatabase(pkg.PackageDatabase)
Install(p []pkg.Package) (PackagesAssertions, error)
Uninstall(candidate pkg.Package) ([]pkg.Package, error)
Install(p pkg.Packages) (PackagesAssertions, error)
Uninstall(candidate pkg.Package, checkconflicts bool) (pkg.Packages, error)
ConflictsWithInstalled(p pkg.Package) (bool, error)
ConflictsWith(p pkg.Package, ls []pkg.Package) (bool, error)
World() []pkg.Package
Upgrade() ([]pkg.Package, PackagesAssertions, error)
ConflictsWith(p pkg.Package, ls pkg.Packages) (bool, error)
World() pkg.Packages
Upgrade(checkconflicts bool) (pkg.Packages, PackagesAssertions, error)
SetResolver(PackageResolver)
@@ -43,7 +43,7 @@ type PackageSolver interface {
type Solver struct {
DefinitionDatabase pkg.PackageDatabase
SolverDatabase pkg.PackageDatabase
Wanted []pkg.Package
Wanted pkg.Packages
InstalledDatabase pkg.PackageDatabase
Resolver PackageResolver
@@ -72,11 +72,11 @@ func (s *Solver) SetResolver(r PackageResolver) {
s.Resolver = r
}
func (s *Solver) World() []pkg.Package {
func (s *Solver) World() pkg.Packages {
return s.DefinitionDatabase.World()
}
func (s *Solver) Installed() []pkg.Package {
func (s *Solver) Installed() pkg.Packages {
return s.InstalledDatabase.World()
}
@@ -130,8 +130,8 @@ func (s *Solver) BuildWorld(includeInstalled bool) (bf.Formula, error) {
return bf.And(formulas...), nil
}
func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Package, error) {
var ls []pkg.Package
func (s *Solver) getList(db pkg.PackageDatabase, lsp pkg.Packages) (pkg.Packages, error) {
var ls pkg.Packages
for _, pp := range lsp {
cp, err := db.FindPackage(pp)
@@ -141,7 +141,7 @@ func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Packa
if err != nil || len(packages) == 0 {
cp = pp
} else {
cp = pkg.Best(packages)
cp = packages.Best(nil)
}
}
ls = append(ls, cp)
@@ -149,7 +149,7 @@ func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Packa
return ls, nil
}
func (s *Solver) ConflictsWith(pack pkg.Package, lsp []pkg.Package) (bool, error) {
func (s *Solver) ConflictsWith(pack pkg.Package, lsp pkg.Packages) (bool, error) {
p, err := s.DefinitionDatabase.FindPackage(pack)
if err != nil {
p = pack //Relax search, otherwise we cannot compute solutions for packages not in definitions
@@ -210,14 +210,14 @@ func (s *Solver) ConflictsWithInstalled(p pkg.Package) (bool, error) {
return s.ConflictsWith(p, s.Installed())
}
func (s *Solver) Upgrade() ([]pkg.Package, PackagesAssertions, error) {
func (s *Solver) Upgrade(checkconflicts bool) (pkg.Packages, PackagesAssertions, error) {
// First get candidates that needs to be upgraded..
toUninstall := []pkg.Package{}
toInstall := []pkg.Package{}
toUninstall := pkg.Packages{}
toInstall := pkg.Packages{}
availableCache := map[string][]pkg.Package{}
availableCache := map[string]pkg.Packages{}
for _, p := range s.DefinitionDatabase.World() {
// Each one, should be expanded
availableCache[p.GetName()+p.GetCategory()] = append(availableCache[p.GetName()+p.GetCategory()], p)
@@ -229,19 +229,18 @@ func (s *Solver) Upgrade() ([]pkg.Package, PackagesAssertions, error) {
installedcopy.CreatePackage(p)
packages, ok := availableCache[p.GetName()+p.GetCategory()]
if ok && len(packages) != 0 {
best := pkg.Best(packages)
best := packages.Best(nil)
if best.GetVersion() != p.GetVersion() {
toUninstall = append(toUninstall, p)
toInstall = append(toInstall, best)
}
}
}
s2 := NewSolver(installedcopy, s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
s2.SetResolver(s.Resolver)
// Then try to uninstall the versions in the system, and store that tree
for _, p := range toUninstall {
r, err := s.Uninstall(p)
r, err := s.Uninstall(p, checkconflicts)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall selected candidate "+p.GetFingerPrint())
}
@@ -262,8 +261,8 @@ func (s *Solver) Upgrade() ([]pkg.Package, PackagesAssertions, error) {
// Uninstall takes a candidate package and return a list of packages that would be removed
// in order to purge the candidate. Returns error if unsat.
func (s *Solver) Uninstall(c pkg.Package) ([]pkg.Package, error) {
var res []pkg.Package
func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) (pkg.Packages, error) {
var res pkg.Packages
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
@@ -273,13 +272,13 @@ func (s *Solver) Uninstall(c pkg.Package) ([]pkg.Package, error) {
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = pkg.Best(packages)
candidate = packages.Best(nil)
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
}
// Build a fake "Installed" - Candidate and its requires tree
var InstalledMinusCandidate []pkg.Package
var InstalledMinusCandidate pkg.Packages
// TODO: Can be optimized
for _, i := range s.Installed() {
@@ -294,17 +293,19 @@ func (s *Solver) Uninstall(c pkg.Package) ([]pkg.Package, error) {
}
}
s2 := NewSolver(pkg.NewInMemoryDatabase(false), s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
s2.SetResolver(s.Resolver)
// Get the requirements to install the candidate
saved := s.InstalledDatabase
s.InstalledDatabase = pkg.NewInMemoryDatabase(false)
asserts, err := s.Install([]pkg.Package{candidate})
asserts, err := s2.Install(pkg.Packages{candidate})
if err != nil {
return nil, err
}
s.InstalledDatabase = saved
for _, a := range asserts {
if a.Value {
if !checkconflicts {
res = append(res, a.Package.IsFlagged(false))
continue
}
c, err := s.ConflictsWithInstalled(a.Package)
if err != nil {
@@ -402,7 +403,7 @@ func (s *Solver) Solve() (PackagesAssertions, error) {
// Install given a list of packages, returns package assertions to indicate the packages that must be installed in the system in order
// to satisfy all the constraints
func (s *Solver) Install(c []pkg.Package) (PackagesAssertions, error) {
func (s *Solver) Install(c pkg.Packages) (PackagesAssertions, error) {
coll, err := s.getList(s.DefinitionDatabase, c)
if err != nil {

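Taken together, these hunks move the solver API from bare []pkg.Package slices to the pkg.Packages type and thread a checkconflicts flag through Uninstall and Upgrade. A hedged sketch of a caller against the updated interface (the solver import path is an assumption; the constructor arguments follow the order used in the tests below):

// Sketch: driving the updated PackageSolver interface.
package example

import (
    "fmt"

    pkg "github.com/mudler/luet/pkg/package"
    "github.com/mudler/luet/pkg/solver" // assumed import path
)

func upgradeAndRemove(installed, definitions pkg.PackageDatabase, candidate pkg.Package) error {
    s := solver.NewSolver(installed, definitions, pkg.NewInMemoryDatabase(false))

    // Upgrade now takes the checkconflicts flag and returns the packages to
    // uninstall plus the assertions describing what should be installed.
    toRemove, toInstall, err := s.Upgrade(true)
    if err != nil {
        return err
    }
    fmt.Println(len(toRemove), "to remove,", len(toInstall), "assertions to install")

    // Uninstall threads the same flag; passing false skips the conflict check
    // against the installed set and flags the whole requirement tree for removal.
    purged, err := s.Uninstall(candidate, false)
    if err != nil {
        return err
    }
    fmt.Println(len(purged), "packages flagged for removal")
    return nil
}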

@@ -363,12 +363,19 @@ var _ = Describe("Solver", func() {
It("Selects best version", func() {
E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
E.SetCategory("test")
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C.SetCategory("test")
D2 := pkg.NewPackage("D", "1.9", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D2.SetCategory("test")
D := pkg.NewPackage("D", "1.8", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D.SetCategory("test")
D1 := pkg.NewPackage("D", "1.4", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0"}}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0"}}, []*pkg.DefaultPackage{})
D1.SetCategory("test")
B := pkg.NewPackage("B", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
B.SetCategory("test")
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
for _, p := range []pkg.Package{A, B, C, D, D1, D2, E} {
_, err := dbDefinitions.CreatePackage(p)
@@ -397,16 +404,23 @@ var _ = Describe("Solver", func() {
It("Support provides", func() {
E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
E.SetCategory("test")
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C.SetCategory("test")
D2 := pkg.NewPackage("D", "1.9", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D2.SetCategory("test")
D := pkg.NewPackage("D", "1.8", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D.SetCategory("test")
D1 := pkg.NewPackage("D", "1.4", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0"}}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0"}}, []*pkg.DefaultPackage{})
D1.SetCategory("test")
B := pkg.NewPackage("B", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
B.SetCategory("test")
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "D", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
D2.SetProvides([]*pkg.DefaultPackage{{Name: "E"}})
A2 := pkg.NewPackage("A", "1.3", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "E", Version: ""}}, []*pkg.DefaultPackage{})
D2.SetProvides([]*pkg.DefaultPackage{{Name: "E", Category: "test"}})
A2 := pkg.NewPackage("A", "1.3", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "E", Version: "", Category: "test"}}, []*pkg.DefaultPackage{})
A2.SetCategory("test")
for _, p := range []pkg.Package{A, B, C, D, D1, D2, A2, E} {
_, err := dbDefinitions.CreatePackage(p)
@@ -429,7 +443,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(5))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -470,7 +484,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -511,7 +525,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
})
@@ -533,7 +547,7 @@ var _ = Describe("Solver", func() {
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -559,7 +573,7 @@ var _ = Describe("Solver", func() {
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"})
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"}, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -674,7 +688,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -698,7 +712,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -721,7 +735,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -746,7 +760,7 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -772,7 +786,7 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
@@ -868,7 +882,7 @@ var _ = Describe("Solver", func() {
Expect(lst).To(ContainElement(a03))
Expect(lst).ToNot(ContainElement(old))
Expect(len(lst)).To(Equal(5))
p := pkg.Best(lst)
p := lst.Best(nil)
Expect(p).To(Equal(a03))
})
})
@@ -893,7 +907,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.Upgrade()
uninstall, solution, err := s.Upgrade(true)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(1))


@@ -26,5 +26,5 @@ type Builder interface {
GetDatabase() pkg.PackageDatabase
WithDatabase(d pkg.PackageDatabase)
GetSourcePath() string
GetSourcePath() []string
}

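GetSourcePath now returns a slice because a Builder can load more than one tree, with each Load call appending its path (see the Recipe changes further down). A small illustrative sketch, assuming the tree package import path:

// Sketch: loading multiple trees into one Builder and inspecting the source paths.
package example

import (
    "fmt"

    pkg "github.com/mudler/luet/pkg/package"
    tree "github.com/mudler/luet/pkg/tree" // assumed import path
)

func loadTrees(paths ...string) (tree.Builder, error) {
    recipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
    for _, p := range paths {
        if err := recipe.Load(p); err != nil {
            return nil, err
        }
    }
    // GetSourcePath() now reports every loaded tree, not only the last one.
    fmt.Println("loaded from:", recipe.GetSourcePath())
    return recipe, nil
}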

@@ -50,7 +50,7 @@ type GentooBuilder struct {
}
type EbuildParser interface {
ScanEbuild(string) ([]pkg.Package, error)
ScanEbuild(string) (pkg.Packages, error)
}
func (gb *GentooBuilder) scanEbuild(path string, db pkg.PackageDatabase) error {


@@ -27,8 +27,8 @@ import (
type FakeParser struct {
}
func (f *FakeParser) ScanEbuild(path string) ([]pkg.Package, error) {
return []pkg.Package{&pkg.DefaultPackage{Name: path}}, nil
func (f *FakeParser) ScanEbuild(path string) (pkg.Packages, error) {
return pkg.Packages{&pkg.DefaultPackage{Name: path}}, nil
}
var _ = Describe("GentooBuilder", func() {


@@ -323,7 +323,7 @@ func SourceFile(ctx context.Context, path string, pkg *_gentoo.GentooPackage) (m
}
// ScanEbuild returns a list of packages (always one with SimpleEbuildParser) decoded from an ebuild.
func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
func (ep *SimpleEbuildParser) ScanEbuild(path string) (pkg.Packages, error) {
Debug("Starting parsing of ebuild", path)
pkgstr := filepath.Base(path)
@@ -332,7 +332,7 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
gp, err := _gentoo.ParsePackageStr(pkgstr)
if err != nil {
return []pkg.Package{}, errors.New("Error on parsing package string")
return pkg.Packages{}, errors.New("Error on parsing package string")
}
pack := &pkg.DefaultPackage{
@@ -350,7 +350,7 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
vars, err := SourceFile(timeout, path, gp)
if err != nil {
Error("Error on source file ", pack.Name, ": ", err)
return []pkg.Package{}, err
return pkg.Packages{}, err
}
// TODO: Handle this a bit better
@@ -405,8 +405,8 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
gRDEPEND, err := ParseRDEPEND(rdepend.String())
if err != nil {
Warning("Error on parsing RDEPEND for package ", pack.Category+"/"+pack.Name, err)
return []pkg.Package{pack}, nil
// return []pkg.Package{}, err
return pkg.Packages{pack}, nil
// return pkg.Packages{}, err
}
pack.PackageConflicts = []*pkg.DefaultPackage{}
@@ -436,5 +436,5 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
Debug("Finished processing ebuild", path, "deps ", len(pack.PackageRequires))
//TODO: Deps and conflicts
return []pkg.Package{pack}, nil
return pkg.Packages{pack}, nil
}


@@ -38,6 +38,20 @@ func NewCompilerRecipe(d pkg.PackageDatabase) Builder {
return &CompilerRecipe{Recipe: Recipe{Database: d}}
}
func ReadDefinitionFile(path string) (pkg.DefaultPackage, error) {
empty := pkg.DefaultPackage{}
dat, err := ioutil.ReadFile(path)
if err != nil {
return empty, errors.Wrap(err, "Error reading file "+path)
}
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return empty, errors.Wrap(err, "Error reading yaml "+path)
}
return pack, nil
}
// Recipe is the "general" reciper for Trees
type CompilerRecipe struct {
Recipe
@@ -45,7 +59,7 @@ type CompilerRecipe struct {
func (r *CompilerRecipe) Load(path string) error {
r.SourcePath = path
r.SourcePath = append(r.SourcePath, path)
//tmpfile, err := ioutil.TempFile("", "luet")
//if err != nil {
// return err
@@ -56,17 +70,17 @@ func (r *CompilerRecipe) Load(path string) error {
// the function that handles each file or dir
var ff = func(currentpath string, info os.FileInfo, err error) error {
if err != nil {
return errors.Wrap(err, "Error on walk path "+currentpath)
}
if info.Name() != DefinitionFile {
return nil // Skip with no errors
}
dat, err := ioutil.ReadFile(currentpath)
pack, err := ReadDefinitionFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
return err
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
@@ -74,13 +88,17 @@ func (r *CompilerRecipe) Load(path string) error {
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
dat, err = ioutil.ReadFile(compileDefPath)
dat, err := ioutil.ReadFile(compileDefPath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
return errors.Wrap(err,
"Error reading file "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
packbuild, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
return errors.Wrap(err,
"Error reading yaml "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
pack.Requires(packbuild.GetRequires())
pack.Conflicts(packbuild.GetConflicts())
@@ -103,4 +121,4 @@ func (r *CompilerRecipe) Load(path string) error {
func (r *CompilerRecipe) GetDatabase() pkg.PackageDatabase { return r.Database }
func (r *CompilerRecipe) WithDatabase(d pkg.PackageDatabase) { r.Database = d }
func (r *CompilerRecipe) GetSourcePath() string { return r.SourcePath }
func (r *CompilerRecipe) GetSourcePath() []string { return r.SourcePath }

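ReadDefinitionFile factors the definition.yaml parsing out of Load so it can be reused. A hedged usage sketch (the tree import path is an assumption; the fixture path is one of the files added by this same diff):

// Sketch: reading a single definition.yaml into a DefaultPackage.
package example

import (
    "fmt"

    tree "github.com/mudler/luet/pkg/tree" // assumed import path
)

func showDefinition() error {
    pack, err := tree.ReadDefinitionFile("tests/fixtures/docker/definition.yaml")
    if err != nil {
        return err
    }
    fmt.Println(pack.GetCategory(), pack.GetName(), pack.GetVersion()) // seed alpine 1.0
    return nil
}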

@@ -40,7 +40,7 @@ func NewInstallerRecipe(db pkg.PackageDatabase) Builder {
// InstallerRecipe is the "general" reciper for Trees
type InstallerRecipe struct {
SourcePath string
SourcePath []string
Database pkg.PackageDatabase
}
@@ -74,7 +74,7 @@ func (r *InstallerRecipe) Load(path string) error {
// if err != nil {
// return err
// }
r.SourcePath = path
r.SourcePath = append(r.SourcePath, path)
//r.Tree().SetPackageSet(pkg.NewBoltDatabase(tmpfile.Name()))
// TODO: Handle cleaning after? Cleanup implemented in GetPackageSet().Clean()
@@ -114,4 +114,4 @@ func (r *InstallerRecipe) Load(path string) error {
func (r *InstallerRecipe) GetDatabase() pkg.PackageDatabase { return r.Database }
func (r *InstallerRecipe) WithDatabase(d pkg.PackageDatabase) { r.Database = d }
func (r *InstallerRecipe) GetSourcePath() string { return r.SourcePath }
func (r *InstallerRecipe) GetSourcePath() []string { return r.SourcePath }


@@ -37,24 +37,32 @@ func NewGeneralRecipe(db pkg.PackageDatabase) Builder { return &Recipe{Database:
// Recipe is the "general" reciper for Trees
type Recipe struct {
SourcePath string
SourcePath []string
Database pkg.PackageDatabase
}
func (r *Recipe) Save(path string) error {
func WriteDefinitionFile(p pkg.Package, definitionFilePath string) error {
data, err := p.Yaml()
if err != nil {
return err
}
err = ioutil.WriteFile(definitionFilePath, data, 0644)
if err != nil {
return err
}
return nil
}
func (r *Recipe) Save(path string) error {
for _, p := range r.Database.World() {
dir := filepath.Join(path, p.GetCategory(), p.GetName(), p.GetVersion())
os.MkdirAll(dir, os.ModePerm)
data, err := p.Yaml()
if err != nil {
return err
}
err = ioutil.WriteFile(filepath.Join(dir, DefinitionFile), data, 0644)
if err != nil {
return err
}
err := WriteDefinitionFile(p, filepath.Join(dir, DefinitionFile))
if err != nil {
return err
}
}
return nil
}
@@ -65,7 +73,7 @@ func (r *Recipe) Load(path string) error {
// if err != nil {
// return err
// }
r.SourcePath = path
r.SourcePath = append(r.SourcePath, path)
if r.Database == nil {
r.Database = pkg.NewInMemoryDatabase(false)
@@ -109,4 +117,4 @@ func (r *Recipe) Load(path string) error {
func (r *Recipe) GetDatabase() pkg.PackageDatabase { return r.Database }
func (r *Recipe) WithDatabase(d pkg.PackageDatabase) { r.Database = d }
func (r *Recipe) GetSourcePath() string { return r.SourcePath }
func (r *Recipe) GetSourcePath() []string { return r.SourcePath }

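Save now delegates the YAML serialization to the new WriteDefinitionFile helper, which can also be used on its own. A minimal sketch, again assuming the tree import path:

// Sketch: writing one package's definition.yaml through the new helper.
package example

import (
    "os"
    "path/filepath"

    pkg "github.com/mudler/luet/pkg/package"
    tree "github.com/mudler/luet/pkg/tree" // assumed import path
)

func writeDefinition(p pkg.Package, root string) error {
    dir := filepath.Join(root, p.GetCategory(), p.GetName(), p.GetVersion())
    if err := os.MkdirAll(dir, os.ModePerm); err != nil {
        return err
    }
    // WriteDefinitionFile serializes the package to YAML and writes it with 0644 permissions.
    return tree.WriteDefinitionFile(p, filepath.Join(dir, tree.DefinitionFile))
}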

@@ -65,7 +65,8 @@ var _ = Describe("Tree", func() {
solution, err := s.Install([]pkg.Package{pack})
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(generalRecipe.GetDatabase(), pack.GetFingerPrint())
solution, err = solution.Order(generalRecipe.GetDatabase(), pack.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(solution[0].Package.GetName()).To(Equal("a"))
Expect(solution[0].Value).To(BeFalse())
@@ -96,4 +97,78 @@ var _ = Describe("Tree", func() {
})
})
Context("Multiple trees", func() {
It("Merges", func() {
for index := 0; index < 300; index++ { // Just to make sure we don't have false positives
db := pkg.NewInMemoryDatabase(false)
generalRecipe := NewCompilerRecipe(db)
tmpdir, err := ioutil.TempDir("", "package")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
err = generalRecipe.Load("../../tests/fixtures/buildableseed")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().World())).To(Equal(4))
err = generalRecipe.Load("../../tests/fixtures/layers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().World())).To(Equal(6))
extra, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "extra", Category: "layer", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(extra).ToNot(BeNil())
D, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "d", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(D.GetRequires()[0].GetName()).To(Equal("c"))
CfromD, err := generalRecipe.GetDatabase().FindPackage(D.GetRequires()[0])
Expect(err).ToNot(HaveOccurred())
Expect(len(CfromD.GetRequires()) != 0).To(BeTrue())
Expect(CfromD.GetRequires()[0].GetName()).To(Equal("b"))
s := solver.NewSolver(pkg.NewInMemoryDatabase(false), generalRecipe.GetDatabase(), db)
Dd, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "d", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
solution, err := s.Install([]pkg.Package{Dd})
Expect(err).ToNot(HaveOccurred())
solution, err = solution.Order(generalRecipe.GetDatabase(), Dd.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
pack, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
base, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "base", Category: "layer", Version: "0.2"})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(solver.PackageAssert{Package: pack.(*pkg.DefaultPackage), Value: false}))
Expect(solution).To(ContainElement(solver.PackageAssert{Package: D.(*pkg.DefaultPackage), Value: true}))
Expect(solution).To(ContainElement(solver.PackageAssert{Package: extra.(*pkg.DefaultPackage), Value: false}))
Expect(solution).To(ContainElement(solver.PackageAssert{Package: base.(*pkg.DefaultPackage), Value: false}))
Expect(len(solution)).To(Equal(6))
}
})
})
Context("Simple tree with labels", func() {
It("Read tree with labels", func() {
db := pkg.NewInMemoryDatabase(false)
generalRecipe := NewCompilerRecipe(db)
err := generalRecipe.Load("../../tests/fixtures/labels")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().World())).To(Equal(1))
pack, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "pkgA", Category: "test", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
Expect(pack.HasLabel("label1")).To(Equal(true))
Expect(pack.HasLabel("label3")).To(Equal(false))
})
})
})


@@ -0,0 +1,27 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version
// Versioner is responsible for sanitizing versions,
// validating them and ordering by precedence
type Versioner interface {
Sanitize(string) string
Validate(string) error
Sort([]string) []string
ValidateSelector(version string, selector string) bool
}

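The Versioner interface is implemented further down by WrappedVersioner; the sketch below is only a toy, semver-only implementation meant to illustrate what each method is responsible for, not the project's actual implementation (go-version is the same library the real code uses):

// Sketch: a toy Versioner that understands plain semver only.
package example

import (
    "sort"
    "strings"

    semver "github.com/hashicorp/go-version"
    version "github.com/mudler/luet/pkg/versioner"
)

type semverOnly struct{}

var _ version.Versioner = semverOnly{} // compile-time check of the contract

func (semverOnly) Sanitize(s string) string { return strings.ReplaceAll(s, "_", "-") }

func (semverOnly) Validate(v string) error {
    _, err := semver.NewVersion(v)
    return err
}

func (semverOnly) Sort(vs []string) []string {
    parsed := make([]*semver.Version, 0, len(vs))
    for _, v := range vs {
        if sv, err := semver.NewVersion(v); err == nil {
            parsed = append(parsed, sv)
        }
    }
    sort.Sort(semver.Collection(parsed))
    out := make([]string, 0, len(parsed))
    for _, sv := range parsed {
        out = append(out, sv.Original())
    }
    return out
}

func (semverOnly) ValidateSelector(v string, selector string) bool {
    constraints, err := semver.NewConstraint(selector)
    if err != nil {
        return false
    }
    sv, err := semver.NewVersion(v)
    if err != nil {
        return false
    }
    return constraints.Check(sv)
}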

@@ -14,15 +14,14 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg
package version
import (
"errors"
"fmt"
"regexp"
"strings"
version "github.com/hashicorp/go-version"
semver "github.com/hashicorp/go-version"
)
// Package Selector Condition
@@ -161,6 +160,35 @@ func ParseVersion(v string) (PkgVersionSelector, error) {
ans.Condition = PkgCondNot
}
// Check if build number is present
buildIdx := strings.LastIndex(v, "+")
buildVersion := ""
if buildIdx > 0 {
// <pre-release> ::= <dot-separated pre-release identifiers>
//
// <dot-separated pre-release identifiers> ::=
// <pre-release identifier> | <pre-release identifier> "."
// <dot-separated pre-release identifiers>
//
// <build> ::= <dot-separated build identifiers>
//
// <dot-separated build identifiers> ::= <build identifier>
// | <build identifier> "." <dot-separated build identifiers>
//
// <pre-release identifier> ::= <alphanumeric identifier>
// | <numeric identifier>
//
// <build identifier> ::= <alphanumeric identifier>
// | <digits>
//
// <alphanumeric identifier> ::= <non-digit>
// | <non-digit> <identifier characters>
// | <identifier characters> <non-digit>
// | <identifier characters> <non-digit> <identifier characters>
buildVersion = v[buildIdx:]
v = v[0:buildIdx]
}
regexPkg := regexp.MustCompile(
fmt.Sprintf("(%s|%s|%s|%s|%s|%s)((%s|%s|%s|%s|%s|%s|%s)+)*$",
// Version regex
@@ -216,24 +244,31 @@ func ParseVersion(v string) (PkgVersionSelector, error) {
ans.Condition = PkgCondEqual
}
ans.Version += buildVersion
// NOTE: Now suffix complex like _alpha_rc1 are not supported.
return ans, nil
}
func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
var v1 *version.Version = nil
var v2 *version.Version = nil
var v1 *semver.Version = nil
var v2 *semver.Version = nil
var ans bool
var err error
var sanitizedSelectorVersion, sanitizedIVersion string
if selector.Version != "" {
v1, err = version.NewVersion(selector.Version)
// TODO: This is temporary!. I promise it.
sanitizedSelectorVersion = strings.ReplaceAll(selector.Version, "_", "-")
v1, err = semver.NewVersion(sanitizedSelectorVersion)
if err != nil {
return false, err
}
}
if i.Version != "" {
v2, err = version.NewVersion(i.Version)
sanitizedIVersion = strings.ReplaceAll(i.Version, "_", "-")
v2, err = semver.NewVersion(sanitizedIVersion)
if err != nil {
return false, err
}
@@ -259,7 +294,7 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
// TODO: case of 7.3* where 7.30 is accepted.
if v1 != nil && v2 != nil {
segments := v1.Segments()
n := strings.Count(selector.Version, ".")
n := strings.Count(sanitizedIVersion, ".")
switch n {
case 0:
segments[0]++
@@ -271,8 +306,8 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
segments[len(segments)-1]++
}
nextVersion := strings.Trim(strings.Replace(fmt.Sprint(segments), " ", ".", -1), "[]")
constraints, err := version.NewConstraint(
fmt.Sprintf(">= %s, < %s", selector.Version, nextVersion),
constraints, err := semver.NewConstraint(
fmt.Sprintf(">= %s, < %s", sanitizedSelectorVersion, nextVersion),
)
if err != nil {
return false, err
@@ -299,35 +334,3 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
return ans, nil
}
func (p *DefaultPackage) SelectorMatchVersion(v string) (bool, error) {
if !p.IsSelector() {
return false, errors.New("Package is not a selector")
}
vS, err := ParseVersion(p.GetVersion())
if err != nil {
return false, err
}
vSI, err := ParseVersion(v)
if err != nil {
return false, err
}
return PackageAdmit(vS, vSI)
}
func (p *DefaultPackage) VersionMatchSelector(selector string) (bool, error) {
vS, err := ParseVersion(selector)
if err != nil {
return false, err
}
vSI, err := ParseVersion(p.GetVersion())
if err != nil {
return false, err
}
return PackageAdmit(vS, vSI)
}

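ParseVersion now splits off the +build metadata before running the selector regex and re-appends it to the parsed version, while PackageAdmit sanitizes _ suffixes to - before handing both sides to go-version. A brief sketch of admitting a concrete version against a selector, mirroring the Selector6 case in the tests below:

// Sketch: checking whether a selector admits a version, build metadata included.
package main

import (
    "fmt"

    version "github.com/mudler/luet/pkg/versioner"
)

func admits(selector, candidate string) (bool, error) {
    sel, err := version.ParseVersion(selector)
    if err != nil {
        return false, err
    }
    ver, err := version.ParseVersion(candidate)
    if err != nil {
        return false, err
    }
    return version.PackageAdmit(sel, ver)
}

func main() {
    ok, err := admits(">=0.1.0+0.4", "0.1.0+0.5")
    fmt.Println(ok, err) // expected, as in the Selector6 spec below: true <nil>
}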

@@ -14,12 +14,12 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg_test
package version_test
import (
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
. "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/versioner"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
@@ -136,6 +136,72 @@ var _ = Describe("Versions", func() {
})
})
Context("Versions Parser11 - semver", func() {
v, err := ParseVersion("0.1.0+0")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("0.1.0+0"))
Expect(v.VersionSuffix).Should(Equal(""))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Versions Parser12 - semver", func() {
v, err := ParseVersion(">=0.1.0_alpha+AB")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondGreaterEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("0.1.0+AB"))
Expect(v.VersionSuffix).Should(Equal("_alpha"))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Versions Parser13 - semver", func() {
v, err := ParseVersion(">=0.1.0_alpha+0.1.22")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondGreaterEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("0.1.0+0.1.22"))
Expect(v.VersionSuffix).Should(Equal("_alpha"))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Versions Parser14 - semver", func() {
v, err := ParseVersion(">=0.1.0_alpha+0.1.22")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondGreaterEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("0.1.0+0.1.22"))
Expect(v.VersionSuffix).Should(Equal("_alpha"))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Versions Parser15 - semver", func() {
v, err := ParseVersion("<=0.3.222.4.5+AB")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondLessEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("0.3.222.4.5+AB"))
Expect(v.VersionSuffix).Should(Equal(""))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Versions Parser16 - semver", func() {
v, err := ParseVersion("<=1.0.29+pre2_p20191024")
It("ParseVersion10", func() {
var c PkgSelectorCondition = PkgCondLessEqual
Expect(err).Should(BeNil())
Expect(v.Version).Should(Equal("1.0.29+pre2_p20191024"))
Expect(v.VersionSuffix).Should(Equal(""))
Expect(v.Condition).Should(Equal(c))
})
})
Context("Selector1", func() {
v1, err := ParseVersion(">=0.0.20190406.4.9.172-r1")
v2, err2 := ParseVersion("1.0.111")
@@ -184,6 +250,54 @@ var _ = Describe("Versions", func() {
})
})
Context("Selector5", func() {
v1, err := ParseVersion(">0.1.0+0.4")
v2, err2 := ParseVersion("0.1.0+0.3")
match, err3 := PackageAdmit(v1, v2)
It("Selector5", func() {
Expect(err).Should(BeNil())
Expect(err2).Should(BeNil())
Expect(err3).Should(BeNil())
Expect(match).Should(Equal(false))
})
})
Context("Selector6", func() {
v1, err := ParseVersion(">=0.1.0+0.4")
v2, err2 := ParseVersion("0.1.0+0.5")
match, err3 := PackageAdmit(v1, v2)
It("Selector6", func() {
Expect(err).Should(BeNil())
Expect(err2).Should(BeNil())
Expect(err3).Should(BeNil())
Expect(match).Should(Equal(true))
})
})
PContext("Selector7", func() {
v1, err := ParseVersion(">0.1.0+0.4")
v2, err2 := ParseVersion("0.1.0+0.5")
match, err3 := PackageAdmit(v1, v2)
It("Selector7", func() {
Expect(err).Should(BeNil())
Expect(err2).Should(BeNil())
Expect(err3).Should(BeNil())
Expect(match).Should(Equal(true))
})
})
Context("Selector8", func() {
v1, err := ParseVersion(">=0")
v2, err2 := ParseVersion("1.0.29+pre2_p20191024")
match, err3 := PackageAdmit(v1, v2)
It("Selector8", func() {
Expect(err).Should(BeNil())
Expect(err2).Should(BeNil())
Expect(err3).Should(BeNil())
Expect(match).Should(Equal(true))
})
})
Context("Condition Converter 1", func() {
gp, err := gentoo.ParsePackageStr("=layer/build-1.0")
var cond gentoo.PackageCond = gentoo.PkgCondEqual

pkg/versioner/versioner.go (new file, 148 lines)

@@ -0,0 +1,148 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version
import (
"errors"
"regexp"
"sort"
"strconv"
"strings"
semver "github.com/hashicorp/go-version"
debversion "github.com/knqyf263/go-deb-version"
)
// WrappedVersioner uses different means to return a unique result that is understandable by Luet
// It tries different approaches to sort, validate, and sanitize to a common versioning format
// that is understandable by the whole code
type WrappedVersioner struct{}
func DefaultVersioner() Versioner {
return &WrappedVersioner{}
}
func (w *WrappedVersioner) Validate(version string) error {
if !debversion.Valid(version) {
return errors.New("Invalid version")
}
return nil
}
func (w *WrappedVersioner) ValidateSelector(version string, selector string) bool {
vS, err := ParseVersion(selector)
if err != nil {
return false
}
vSI, err := ParseVersion(version)
if err != nil {
return false
}
ok, err := PackageAdmit(vS, vSI)
if err != nil {
return false
}
return ok
}
func (w *WrappedVersioner) Sanitize(s string) string {
return strings.ReplaceAll(s, "_", "-")
}
func (w *WrappedVersioner) IsSemver(v string) bool {
// Taken https://github.com/hashicorp/go-version/blob/2b13044f5cdd3833370d41ce57d8bf3cec5e62b8/version.go#L44
// semver doesn't have a Validate method, so we should filter before
// going to use it blindly (it panics)
semverRegexp := regexp.MustCompile("^" + semver.SemverRegexpRaw + "$")
// See https://github.com/hashicorp/go-version/blob/2b13044f5cdd3833370d41ce57d8bf3cec5e62b8/version.go#L61
matches := semverRegexp.FindStringSubmatch(v)
if matches == nil {
return false
}
segmentsStr := strings.Split(matches[1], ".")
segments := make([]int64, len(segmentsStr))
for i, str := range segmentsStr {
val, err := strconv.ParseInt(str, 10, 64)
if err != nil {
return false
}
segments[i] = int64(val)
}
return (len(segments) != 0)
}
func (w *WrappedVersioner) Sort(toSort []string) []string {
if len(toSort) == 0 {
return toSort
}
var versionsMap map[string]string = make(map[string]string)
versionsRaw := []string{}
result := []string{}
for _, v := range toSort {
sanitizedVersion := w.Sanitize(v)
versionsMap[sanitizedVersion] = v
versionsRaw = append(versionsRaw, sanitizedVersion)
}
versions := make([]*semver.Version, len(versionsRaw))
// Check if all of them are semver, otherwise we cannot do a good comparison
allSemverCompliant := true
for _, raw := range versionsRaw {
if !w.IsSemver(raw) {
allSemverCompliant = false
}
}
if allSemverCompliant {
for i, raw := range versionsRaw {
if w.IsSemver(raw) { // Make sure we include only semver, or go-version will panic
v, _ := semver.NewVersion(raw)
versions[i] = v
}
}
// Try first semver sorting
sort.Sort(semver.Collection(versions))
if len(versions) > 0 {
for _, v := range versions {
result = append(result, versionsMap[v.Original()])
}
return result
}
}
// Try with debian sorting
vs := make([]debversion.Version, len(versionsRaw))
for i, r := range versionsRaw {
v, _ := debversion.NewVersion(r)
vs[i] = v
}
sort.Slice(vs, func(i, j int) bool {
return vs[i].LessThan(vs[j])
})
for _, v := range vs {
result = append(result, versionsMap[v.String()])
}
return result
}

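WrappedVersioner tries semver ordering first and falls back to Debian version ordering whenever any input is not semver-compliant; ValidateSelector simply wraps ParseVersion and PackageAdmit. A short usage sketch (the import mirrors the test files that follow):

// Sketch: exercising the DefaultVersioner wrapper.
package main

import (
    "fmt"

    version "github.com/mudler/luet/pkg/versioner"
)

func main() {
    v := version.DefaultVersioner()

    // Mixed semver/non-semver input falls back to Debian-style ordering.
    fmt.Println(v.Sort([]string{"1.0_1", "0.1", "1.10", "1.9"}))

    // ValidateSelector(version, selector) wraps ParseVersion + PackageAdmit.
    fmt.Println(v.ValidateSelector("1.0.29+pre2_p20191024", ">=0")) // true, per Selector8 above
}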

@@ -0,0 +1,32 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestVersioner(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Versioner Suite")
}


@@ -0,0 +1,82 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version_test
import (
. "github.com/mudler/luet/pkg/versioner"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Versioner", func() {
Context("Invalid version", func() {
versioner := DefaultVersioner()
It("Sanitize", func() {
sanitized := versioner.Sanitize("foo_bar")
Expect(sanitized).Should(Equal("foo-bar"))
})
})
Context("valid version", func() {
versioner := DefaultVersioner()
It("Validate", func() {
err := versioner.Validate("1.0")
Expect(err).ShouldNot(HaveOccurred())
})
})
Context("invalid version", func() {
versioner := DefaultVersioner()
It("Validate", func() {
err := versioner.Validate("1.0_##")
Expect(err).Should(HaveOccurred())
})
})
Context("Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0", "0.1"})
Expect(sorted).Should(Equal([]string{"0.1", "1.0"}))
})
})
Context("Sorting with invalid characters", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0_1", "0.1"})
Expect(sorted).Should(Equal([]string{"0.1", "1.0_1"}))
})
})
Context("Complex Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0", "0.1", "0.22", "1.1", "1.9", "1.10", "11.1"})
Expect(sorted).Should(Equal([]string{"0.1", "0.22", "1.0", "1.1", "1.9", "1.10", "11.1"}))
})
})
// from: https://github.com/knqyf263/go-deb-version/blob/master/version_test.go#L8
Context("Debian Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"2:7.4.052-1ubuntu3.1", "2:7.4.052-1ubuntu1", "2:7.4.052-1ubuntu2", "2:7.4.052-1ubuntu3"})
Expect(sorted).Should(Equal([]string{"2:7.4.052-1ubuntu1", "2:7.4.052-1ubuntu2", "2:7.4.052-1ubuntu3", "2:7.4.052-1ubuntu3.1"}))
})
})
})

scripts/ginkgo.coverage.sh (new executable file, 60 lines)

@@ -0,0 +1,60 @@
#!/usr/bin/env bash
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -euo pipefail
covermode=${COVERMODE:-atomic}
coverdir=$(mktemp -d /tmp/coverage.XXXXXXXXXX)
profile="${coverdir}/cover.out"
coveragetxt="coverage.txt"
generate_cover_data() {
ginkgo -flakeAttempts=3 -race -failFast -cover -r .
echo "" > ${coveragetxt}
find . -type f -name "*.coverprofile" | while read -r file; do cat "$file" >> ${coveragetxt} && mv "$file" "${coverdir}"; done
echo "mode: $covermode" >"$profile"
grep -h -v "^mode:" "$coverdir"/*.coverprofile >>"$profile"
}
push_to_coveralls() {
hash goveralls 2>/dev/null || go get github.com/mattn/goveralls
goveralls -coverprofile="${profile}" -service=circle-ci -repotoken "$COVERALLS_REPO_TOKEN" || echo "push to coveralls failed"
}
push_to_codecov() {
bash <(curl -s https://codecov.io/bash) || echo "push to codecov failed"
}
generate_cover_data
go tool cover -func "${profile}"
case "${1-}" in
--html)
go tool cover -html "${profile}"
;;
--coveralls)
if [ -z "$COVERALLS_REPO_TOKEN" ]; then
# shellcheck disable=SC2016
echo '$COVERALLS_REPO_TOKEN not set. Skipping pushing coverage report to coveralls.io'
exit
fi
push_to_coveralls
;;
--codecov)
push_to_codecov
;;
esac


@@ -0,0 +1,2 @@
image: "alpine"
unpack: true


@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "1.0"

tests/fixtures/docker/build.yaml (vendored new file, 46 lines)

@@ -0,0 +1,46 @@
image: "alpine"
unpack: true
includes:
- /bin$|/bin/.*
- /dev$|/dev/.*
- /etc$
- /etc/alpine-release.*
- /etc/apk.*
- /etc/conf.d.*
- /etc/crontabs.*
- /etc/fstab.*
- /etc/group.*
- /etc/init.d.*
- /etc/inittab.*
- /etc/issue.*
- /etc/logrotate.d.*
- /etc/modprobe.d.*
- /etc/modules.*
- /etc/modules-load.d.*
- /etc/motd.*
- /etc/mtab.*
- /etc/network.*
- /etc/opt.*
- /etc/os-release.*
- /etc/passwd.*
- /etc/periodic.*
- /etc/profile.*
- /etc/protocols.*
- /etc/securetty.*
- /etc/shadow.*
- /etc/shells.*
- /etc/ssl.*
- /etc/sysctl.*
- /etc/udhcpd.*
- /home$|/home/.*
- /lib$|/lib.*
- /media$|/media/.*
- /mnt$|/mnt/.*
- /opt$|/opt/.*
- /root$|/root/.*
- /run$|/run/.*
- /sbin$|/sbin/.*
- /usr$|/usr/.*
- /var$|/var/.*
- /srv$|/srv/.*
- /tmp$|/tmp/.*

tests/fixtures/docker/definition.yaml (vendored new file, 3 lines)

@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "1.0"

tests/fixtures/docker/finalize.yaml (vendored new file, 2 lines)

@@ -0,0 +1,2 @@
install:
- touch /tmp/foo


@@ -0,0 +1,4 @@
image: "alpine"
unpack: true
steps:
- apk update && apk add bash


@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "2.0"


@@ -0,0 +1,5 @@
shell:
- bash
- -c
install:
- echo "$BASH" > /tmp/foo


@@ -0,0 +1,2 @@
image: "alpine"
unpack: true


@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "1.0"


@@ -0,0 +1,2 @@
install:
- echo "$0" > /tmp/foo


@@ -0,0 +1,3 @@
image: "alpine"
steps:
- echo "test" > /file1


@@ -0,0 +1,6 @@
category: "test"
name: "pkgA"
version: "0.1"
labels:
label1: "value1"
label2: "value2"


@@ -1,2 +1,2 @@
image: "golang:alpine"
image: "golang"
unpack: true


@@ -3,8 +3,8 @@ requires:
name: "base"
version: "0.1"
prelude:
- apk update
- apk add git
- apt-get update
- apt-get install git
- rm -rf /go/pkg/mod
steps:
- echo test > /extra-layer


@@ -3,7 +3,7 @@ requires:
name: "extra"
version: "0.1"
prelude:
- apk add git
- apt-get install git
- mkdir -p /go/src/github.com/Sabayon/
- git clone https://github.com/Sabayon/pkgs-checker /go/src/github.com/Sabayon/pkgs-checker
steps:


@@ -0,0 +1,10 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo c > /c
- echo c > /cd
requires:
- category: "test"
name: "a"
version: ">=1.0"


@@ -0,0 +1,9 @@
category: "test"
name: "c"
version: "1.0"
# Boom?
requires:
- category: "test"
name: "a"
version: ">=0.1"


@@ -0,0 +1,11 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4
requires:
- category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1,8 @@
category: "test"
name: "a"
version: "1.1"
requires:
- category: "test"
name: "b"
version: ">=0.1"


@@ -0,0 +1,11 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /testaa
- echo artifact4 > /testaa2
requires:
- category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.0"


@@ -0,0 +1,11 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /testlatest
- echo artifact4 > /testlatest2
requires:
- category: "test"
name: "b"
version: "1.0"


@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.2"


@@ -0,0 +1,9 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
- chmod +x generate.sh
steps:
- echo artifact5 > /newc
- echo artifact6 > /newnewc
- ./generate.sh


@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.1"


@@ -0,0 +1 @@
echo generated > /sonewc


@@ -0,0 +1,9 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
- chmod +x generate.sh
steps:
- echo artifact5 > /test5
- echo artifact6 > /test6
- ./generate.sh

Some files were not shown because too many files have changed in this diff.