Compare commits

...

242 Commits
0.7.2 ... 0.8.8

Author SHA1 Message Date
Ettore Di Giacinto
abae9c320a Tag 0.8.8 2020-10-19 18:47:36 +02:00
Ettore Di Giacinto
94937cc88a update vendor/ 2020-10-19 17:58:50 +02:00
Ettore Di Giacinto
0aa0411c6e Bump copy dep and handle shallow symlinks 2020-10-19 17:58:43 +02:00
Daniele Rondina
c0cc9ec703 convert: Now use slot for category name 2020-10-18 19:58:15 +02:00
Daniele Rondina
07dff7f197 installer: log packages ignored on create-repo 2020-10-12 08:42:19 +02:00
Ettore Di Giacinto
4028c62367 Tag 0.8.7 2020-10-11 15:17:16 +02:00
Daniele Rondina
51f32c0614 Update vendor/ 2020-10-10 19:59:40 +02:00
Ettore Di Giacinto
3261b2af98 Drop also package file list from db entry 2020-10-10 18:53:43 +02:00
Ettore Di Giacinto
b88a81c7ed Add database command
- Allows overriding the system db and creating/removing entries as desired.
  The input format is the same metadata as the one generated by the
  artifacts. It contains the Package and the file list that we need.
- Add integration test

Closes #47
2020-10-10 18:26:40 +02:00
Daniele Rondina
d67cf2fa33 Revert "Do image export only if we have to generate the package"
This reverts commit 0857e53b03.
2020-10-07 10:14:06 +02:00
Ettore Di Giacinto
0857e53b03 Do image export only if we have to generate the package 2020-10-06 19:01:25 +02:00
Ettore Di Giacinto
1c1bdca343 Add only-target-package option to luet build 2020-10-06 17:57:57 +02:00
Ettore Di Giacinto
2cb0f3ab5d Tag 0.8.6 2020-10-04 19:58:13 +02:00
Ettore Di Giacinto
74246780d4 Support templated packages 2020-10-04 19:33:15 +02:00
Daniele Rondina
097ea37c97 compiler/artefact: remove debug println 2020-10-02 22:37:38 +02:00
Daniele Rondina
8e23bf139a contrib: Add example of config protect confile 2020-10-02 22:28:08 +02:00
Daniele Rondina
3ba70ae9bd contrib/config/luet.yaml: Add config_protect_skip option 2020-10-02 22:27:38 +02:00
Daniele Rondina
c64660b8d1 Permit to ignore config protect rules
* Added command line option --skip-config-protect

* Added config option config_protect_skip
2020-10-02 22:25:21 +02:00
Daniele Rondina
c8c53644f3 Avoid segfault if tree path doesn't exist 2020-09-17 08:05:05 +02:00
Daniele Rondina
9b381e5d19 Update github.com/Sabayon/pkgs-checker vendor 2020-09-12 16:07:04 +02:00
Ettore Di Giacinto
6f623ae016 Tag 0.8.5 2020-09-10 17:40:33 +02:00
Daniele Rondina
bd80f9acd2 cmd/tree/bump: Add --pkg-version/-p to set specific version 2020-08-30 08:54:38 +02:00
Daniele Rondina
045d25bb28 pkg/package: Add method SetVersion to DefaultPackage 2020-08-30 08:53:37 +02:00
Daniele Rondina
908b6d2bd4 cmd/tree/validate: Fix race and drop errs chan 2020-08-23 12:27:51 +02:00
Ettore Di Giacinto
a3ada624a7 Merge pull request #132 from mudler/validate-buildtime-deps
cmd/tree/validate: Integrate validation of buildtime deps
2020-08-22 12:10:34 +02:00
Daniele Rondina
09c7609a7f cmd/tree/validate: Add error summary 2020-08-20 11:36:56 +02:00
Daniele Rondina
a1acab0e52 cmd/tree/validate: Integrate validation of buildtime deps
By default both runtime and buildtime deps are checked.
With the --only-buildtime option it is possible to analyze only
buildtime deps, or with --only-runtime only the runtime deps.

Signed-off-by: Daniele Rondina <geaaru@sabayonlinux.org>
2020-08-20 11:09:39 +02:00
Daniele Rondina
93187182e5 pkg/compiler: Fix typo on error message 2020-08-19 19:24:46 +02:00
Daniele Rondina
5e7cd183be contrib/config/luet.yaml: Update config example 2020-08-08 17:55:26 +02:00
Ettore Di Giacinto
9c0f0e3457 ci: release with GH Actions 2020-08-08 11:54:11 +02:00
Ettore Di Giacinto
1120b1ee59 ci: fix typo 2020-08-07 23:36:33 +02:00
Ettore Di Giacinto
4010033e0c ci: fixup workflows 2020-08-07 19:30:30 +02:00
Ettore Di Giacinto
a076613f66 Fixup import path 2020-08-07 19:30:08 +02:00
Ettore Di Giacinto
c184b4b3bc ci: Pass env by in GH actions 2020-08-06 18:52:42 +02:00
Ettore Di Giacinto
40d1f1785b ci: Get deps before running unit tests 2020-08-06 18:22:13 +02:00
Ettore Di Giacinto
11944f4b8c Disable tty on docker integration test 2020-08-06 18:11:50 +02:00
Ettore Di Giacinto
6f41f8bd8d Add GH action workflows 2020-08-06 18:03:35 +02:00
Ettore Di Giacinto
95b125cb91 Pull images before executing diff tests 2020-08-06 18:03:00 +02:00
Ettore Di Giacinto
f676b50735 Tag 0.8.4 2020-08-05 19:37:32 +02:00
Ettore Di Giacinto
0c0401847e ci: pass PATH also on deploy steps 2020-08-05 19:35:41 +02:00
Ettore Di Giacinto
02a506a5c5 ci: pass PATH by 2020-08-05 19:24:17 +02:00
Ettore Di Giacinto
6f0b657e69 ci: keep envs 2020-08-05 19:21:20 +02:00
Ettore Di Giacinto
51378bdfb6 Run tests as root to verify caps 2020-08-05 19:19:33 +02:00
Ettore Di Giacinto
66513955c7 Compute image diffs internally
It is faster this way, as we already have extracted all the folders
needed for the comparison. We no longer repeat I/O operations
twice by calling container-diff.

Do not depend on container-diff anymore
2020-08-05 19:09:45 +02:00
Ettore Di Giacinto
694d8656d9 Add xattrs tests 2020-08-05 18:58:50 +02:00
Ettore Di Giacinto
c339e0fed2 Add symlink test 2020-08-05 18:57:27 +02:00
Ettore Di Giacinto
e30bb056d5 Drop IsFlagged from tests 2020-08-02 12:22:43 +02:00
Ettore Di Giacinto
052a551c0c Add "hidden" field to packages
Also drop the residual IsSet, which isn't actually used

Related to #26
2020-08-02 11:31:23 +02:00
Ettore Di Giacinto
ffa6fc3829 Tag 0.8.3 2020-07-17 22:42:49 +02:00
Ettore Di Giacinto
07a1058ac1 Add cli option to skip packages if only metadata is present (without checking the image) 2020-07-17 22:42:03 +02:00
Ettore Di Giacinto
3af9109b99 Tag 0.8.2 2020-07-12 15:57:46 +02:00
Ettore Di Giacinto
6e3650d3af Add upgrade_old_repo_revision fixture 2020-07-12 15:38:02 +02:00
Ettore Di Giacinto
5dcf77c987 Add package buildtimestamp and luet upgrade --sync
Annotate the package build time when compiling, and use that from the
client to force upgrade of packages whose artifact changed but whose
version didn't.

The client can trigger this behavior with `luet upgrade --sync`
2020-07-12 15:29:38 +02:00
Daniele Rondina
ee0e70ed3d tree/pkglist: Now --deps orders for build/installation order 2020-07-05 10:44:47 +02:00
Daniele Rondina
364b5648b4 repository loader now support .yaml extension 2020-07-04 20:07:32 +02:00
Daniele Rondina
e28a4753f8 tree/validate: Use --deps instead of --rdeps (we support also build deps) 2020-06-27 19:42:03 +02:00
Daniele Rondina
d1d7f5aa74 tree/pkglist: Add --rdeps option for runtime deps 2020-06-27 19:27:45 +02:00
Daniele Rondina
e2260b6956 Add --no-spinner option 2020-06-27 16:45:49 +02:00
Ettore Di Giacinto
764a09ce0c Tag 0.8.1 2020-06-27 13:02:00 +02:00
Ettore Di Giacinto
910f1ad3fe Merge branch 'master' into develop 2020-06-27 13:01:14 +02:00
Ettore Di Giacinto
16e9d7b20c Use packageImage as builder image fingerprint
This allows having a unique identifier for the builder image id across
different depgraph combinations. The package fingerprint is not enough,
as an atom could have a different deptree depending on the requires
constraints.

TODO: Don't use the full image name, but only the hash as a salt
(currently the salt contains ALSO a reference of the image-repository,
as such it doesn't allow to port a tree in a different docker registry)
2020-06-23 18:59:18 +02:00
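The salting idea from the commit above can be sketched as hashing the package fingerprint together with the resolved builder image name. Names here are illustrative; luet's actual hashing differs in detail, and the TODO notes the salt should eventually be the image hash rather than its full name:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// builderImageID derives an identifier for the builder image from the
// package fingerprint salted with the builder image reference, so that
// different depgraph combinations yield different ids.
// Illustrative sketch only, not luet's exact scheme.
func builderImageID(packageFingerprint, builderImage string) string {
	sum := sha256.Sum256([]byte(packageFingerprint + "-" + builderImage))
	return fmt.Sprintf("builder-%x", sum[:8])
}
```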
Ettore Di Giacinto
6088664887 Drop gox-build make target 2020-06-13 18:40:55 +02:00
Ettore Di Giacinto
ee3b59348e Tag 0.8.0 2020-06-12 19:41:37 +02:00
Ettore Di Giacinto
bb41a0c074 Add upx -1 option 2020-06-12 18:47:18 +02:00
Ettore Di Giacinto
6b8f412138 Update vendor 2020-06-12 17:58:13 +02:00
Ettore Di Giacinto
6d68ed073d Show and detect extensions with cobra-extensions 2020-06-12 17:54:57 +02:00
Ettore Di Giacinto
7b51e83902 Merge pull request #119 from mudler/config-protect
Integrate feature config-protect like Gentoo/Funtoo do
2020-06-07 11:33:55 +02:00
Daniele Rondina
3a365c709b Integrate config-protect from package spec 2020-06-06 13:12:54 +02:00
Daniele Rondina
aaa73dc2ac Add integration test for config-protect 2020-06-06 12:35:16 +02:00
Daniele Rondina
0917c2703e helpers/archive: Refactor and add support for config protect when sameOwner is false 2020-06-06 12:34:44 +02:00
Ettore Di Giacinto
264e1e9652 Print inspect output as string 2020-06-06 11:32:47 +02:00
Ettore Di Giacinto
03cc5fcb76 Use cobra for build --full 2020-06-06 11:20:48 +02:00
Ettore Di Giacinto
8aafc7600c Print selected packages 2020-06-06 11:18:54 +02:00
Daniele Rondina
c87db16d31 tarModifierWrapperFunc: Now use strings.HasPrefix for match path 2020-06-06 10:38:11 +02:00
Daniele Rondina
cd903351b3 cmd/config: Add print of config protect data 2020-06-06 10:36:42 +02:00
Daniele Rondina
837eeb04ec config/config_protect: Fix load of configuration files 2020-06-06 10:35:38 +02:00
Ettore Di Giacinto
90a25406a0 Add --full option to build
Don't compile dependencies when computing all the compilation specs from
a tree if they are among the target deps

Fixes #41
2020-06-06 08:58:18 +02:00
Daniele Rondina
a19a1488bb Update Luet-lab/moby vendor/ 2020-06-06 08:30:28 +02:00
Daniele Rondina
d946e39a15 pkg/helpers/archive_test: Fix test 2020-06-05 23:53:33 +02:00
Daniele Rondina
a414b4ad4c contrib/config/luet.yaml: Add config_protect_confdir option 2020-06-05 23:40:36 +02:00
Daniele Rondina
415b1dab9a Add test of modifier 2020-06-05 23:12:35 +02:00
Daniele Rondina
16f717f04b tarModifierWrapperFunc: Fix config protect filename 2020-06-05 23:12:19 +02:00
Ettore Di Giacinto
9e0e1199df Check if image exists before skipping compilation 2020-06-03 21:00:30 +02:00
Daniele Rondina
8f0c528c08 Create helpers.UntarProtect for handle protected files
Currently it uses archive.ReplaceFileTarWrapper, which requires a
[]byte of the replaced files. This is not a good idea for big files;
in the near future it would be better to reimplement
ReplaceFileTarWrapper with a callback that returns an io.Reader
instead of a []byte.

If a protected file is already present on the target rootfs,
a file is created with the same prefix used in Gentoo:

._cfgXXXX_<filename>
2020-06-02 11:08:37 +02:00
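The Gentoo-style rename mentioned above can be sketched as follows; the function name and the counter argument are illustrative (in practice the first free slot would be chosen by scanning the target directory):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// protectedName returns the Gentoo-style name used when a protected
// configuration file already exists on the target rootfs,
// e.g. /etc/hosts -> /etc/._cfg0001_hosts.
// Hypothetical helper illustrating the naming scheme.
func protectedName(path string, n int) string {
	dir, file := filepath.Split(path)
	return filepath.Join(dir, fmt.Sprintf("._cfg%04d_%s", n, file))
}
```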
Daniele Rondina
a2231749ab Begin implementation for catch config protect files 2020-06-02 09:04:40 +02:00
Ettore Di Giacinto
990a5405cf Merge pull request #116 from mudler/review-config-path
Review configuration file parsing logic
2020-05-31 11:05:55 +02:00
Daniele Rondina
1d8a6174bb Drop duplicate code for general.same_owner 2020-05-30 16:51:10 +02:00
Daniele Rondina
341293c403 tests/integration/09_docker.sh: align new logic 2020-05-30 16:47:11 +02:00
Daniele Rondina
9e7c7e69f8 Review configuration file parsing logic
Luet now reads its configuration file with these priorities:
- command line option (if available)
- $PWD/.luet.yaml
- $HOME/.luet.yaml
- /etc/luet/luet.yaml
2020-05-30 16:46:29 +02:00
Ettore Di Giacinto
86808ad49b Fixup dockerfile 2020-05-26 21:19:27 +02:00
Ettore Di Giacinto
d59cc42e22 Add target to create smaller binary 2020-05-26 21:07:18 +02:00
Ettore Di Giacinto
cc21e6fa5e Respect user-defined repository naming 2020-05-24 12:16:02 +02:00
geaaru
8c4f5b2911 Merge pull request #113 from mudler/annotations
Add Annotations to package spec
2020-05-23 10:52:32 +02:00
Daniele Rondina
44d68a9583 Add annotations option to package spec 2020-05-23 09:27:38 +02:00
Daniele Rondina
dba6c361c2 Move helpers/cli to cmd/helpers 2020-05-23 08:51:33 +02:00
Ettore Di Giacinto
4197d7af61 Add upgrade by using only the SAT core
- Adds upgrade --universe and upgrade --universe --clean. It will
  attempt to bring the system as close as possible to the content
  available in the repositories. It differs from a standard upgrade,
  which checks directly that what is pulled in doesn't conflict with
  the system. In this new mode, we just query the SAT solver to decide
  that on our behalf.
- Add uninstall --full-clean. It uses only the SAT solver to uninstall
  the package, and it will drop as many packages as required (including
  revdeps of packages too).
2020-05-22 21:20:58 +02:00
Ettore Di Giacinto
bfde9afc7f Add Nodeps and Full options to upgrade 2020-05-22 21:20:57 +02:00
Ettore Di Giacinto
3237423dde Enhance upgrade output 2020-05-22 21:20:57 +02:00
Ettore Di Giacinto
ab179db96a Don't drop packages that would be re-installed during upgrade
Check for packages that are marked for deletion. If the ones that are
marked for install depend on them, don't remove them at all
2020-05-22 21:20:52 +02:00
geaaru
916b2a8927 Merge pull request #112 from mudler/package-sanitized
Package sanitized
2020-05-20 11:42:47 +02:00
Daniele Rondina
a16bdddeb2 Add spectooling test suite 2020-05-20 10:26:30 +02:00
Daniele Rondina
e38a4b3d9b Use DefaultPackageSanitized struct for write specs 2020-05-20 09:59:48 +02:00
Ettore Di Giacinto
c52fe9a6b3 Add development version 2020-05-19 23:05:03 +02:00
Ettore Di Giacinto
956e55a1d4 Tag 0.7.9 2020-05-19 23:04:28 +02:00
Ettore Di Giacinto
9971fe9f45 Unique hashes for packages without deps 2020-05-18 19:57:01 +02:00
Daniele Rondina
11759f98e0 package/NewPackage: init Labels on AddLabel 2020-05-18 19:51:47 +02:00
Daniele Rondina
cb2ac15de8 package: Fix typo on labels key 2020-05-17 19:17:14 +02:00
Ettore Di Giacinto
c0b432befa Add development version 2020-05-16 22:46:29 +02:00
Ettore Di Giacinto
8e029a8ee4 Tag 0.7.8 2020-05-16 22:46:04 +02:00
Ettore Di Giacinto
51711dafba Add package_dir to pack a spec dir as the main artifact 2020-05-16 21:34:27 +02:00
Ettore Di Giacinto
2803430515 Add solver test scenario 2020-05-16 11:11:17 +02:00
Daniele Rondina
44213894bc Add commands aliases 2020-05-10 20:24:08 +02:00
Daniele Rondina
a7d1381cb5 cmd/create-repo: Add support for multiple trees 2020-05-10 20:18:10 +02:00
Daniele Rondina
2cb79c0071 cmd/build: Add support for multiple trees 2020-05-10 20:02:12 +02:00
Ettore Di Giacinto
7d17d3babf Merge pull request #109 from mudler/log-cmdline-opts
Log cmdline opts
2020-05-09 11:16:37 +02:00
Daniele Rondina
13df161fc6 logging: permit to disable color and emoji
Now it's possible to disable color and emoji on standard
output with:

$> luet <cmd> --color=false --emoji=false

Hereinafter, the list of changes:

* Added logging option logging.color (default true)
* Added logging option logging.enable_emoji (default true)
* Added persistent flag --color
* Added persistent flag --emoji
2020-05-09 10:08:21 +02:00
Daniele Rondina
fe5ab9246f Added option for enable log to file.
Now it's possible logging to file is handled by the enable_logfile
option and by the path.

From cli is now possible:
* enable log to file with the option --enable-logfile
* modify the logfile path with the option --logfile/-l <path>
2020-05-09 10:05:34 +02:00
Daniele Rondina
993bcf9adf tree/validate: Add support for in memory cache with solver check 2020-05-08 20:05:21 +02:00
Ettore Di Giacinto
20cb96e0cc Simplify ordering check
we don't need a map[string]map[string]interface{}, as we don't need to
keep the data around
2020-05-04 17:34:29 +02:00
Daniele Rondina
b68634b58a solver: skip same packages in the order and avoid loop 2020-05-04 12:21:49 +02:00
Ettore Di Giacinto
1b529ea8c5 Add development version 2020-05-03 13:46:53 +02:00
Ettore Di Giacinto
2c2e6065d9 Tag 0.7.7 2020-05-03 13:46:35 +02:00
Ettore Di Giacinto
46014fb9c1 Enhance install output 2020-05-03 13:22:06 +02:00
Ettore Di Giacinto
ada9d886fa Enhance uninstall output 2020-05-03 13:22:00 +02:00
Ettore Di Giacinto
584b980644 Adapt integration test which requires full uninstall 2020-05-03 13:12:14 +02:00
Ettore Di Giacinto
a1d8ef1422 Allow to partially uninstall a package graph, make uninstall --full optional 2020-05-03 13:04:34 +02:00
Ettore Di Giacinto
7b6e4a2176 Add to the Solver the capability to check conflicts with revdeps 2020-05-03 10:34:18 +02:00
Ettore Di Giacinto
3befbfa915 Optimize uninstall computation 2020-05-02 15:43:57 +02:00
Ettore Di Giacinto
8dd756ec96 Add development version 2020-05-02 14:51:08 +02:00
Ettore Di Giacinto
1b6ffac7bd Tag 0.7.6 2020-05-02 14:50:29 +02:00
Ettore Di Giacinto
faef3d093a Add test fixtures 2020-05-02 14:03:34 +02:00
Ettore Di Giacinto
d4c25d74f5 Remove focus from test 2020-05-02 13:27:15 +02:00
Ettore Di Giacinto
2ed9781c88 Don't try to match against repo already installed packages 2020-05-02 13:24:46 +02:00
Ettore Di Giacinto
9d6d6bc0c8 Add more upgrade and reclaim tests scenarios 2020-05-02 12:18:19 +02:00
Ettore Di Giacinto
f8b2837741 Enhance reclaim output 2020-05-02 12:17:54 +02:00
geaaru
878e6d7b9c Merge pull request #103 from mudler/tmpdir-cleanup
Tmpdir cleanup
2020-05-02 09:20:01 +02:00
Daniele Rondina
c5b41946dc Add integration test for tmpdir cleanup 2020-05-02 08:43:25 +02:00
Daniele Rondina
a89c0af2f8 config/config_test: Fix typo 2020-05-01 15:34:17 +02:00
Daniele Rondina
60d9017952 CleanupTmpDir() is not Fatal 2020-05-01 15:27:19 +02:00
Daniele Rondina
1f99fde1c5 Add config test suite 2020-05-01 15:26:53 +02:00
Ettore Di Giacinto
4b63c9eaf9 Add test to be sure we don't index packages not in the tree 2020-05-01 11:48:05 +02:00
Ettore Di Giacinto
286d0fba2c Fix typo 2020-05-01 11:07:14 +02:00
Ettore Di Giacinto
624518bf77 Don't add package to the repository which aren't referenced by the tree 2020-05-01 10:52:40 +02:00
Daniele Rondina
51a4037b1b config: Initialize luet tmp basedir if doesn't exist 2020-05-01 08:18:18 +02:00
Ettore Di Giacinto
b13828c883 Add development version 2020-04-30 22:50:37 +02:00
Ettore Di Giacinto
886cbd0036 Tag 0.7.5 2020-04-30 22:50:12 +02:00
Ettore Di Giacinto
b91288153a Make tree validation cmd concurrent 2020-04-30 21:48:46 +02:00
Ettore Di Giacinto
9cb290b484 Improve database errors 2020-04-30 20:44:34 +02:00
Daniele Rondina
11944873ea Integrate tmpdir_base params and tmpdirs cleanup 2020-04-30 20:29:28 +02:00
Ettore Di Giacinto
322ac99f17 Annotate package runtime definition when reclaiming
This was an issue as we were copying the buildspec instead
2020-04-30 18:56:50 +02:00
Ettore Di Giacinto
6acc5fc97e Make BuildFormula support dependency cycles 2020-04-30 18:56:35 +02:00
Ettore Di Giacinto
6a4557a3b3 Make RequireContains support dependency cycles 2020-04-30 18:56:06 +02:00
Ettore Di Giacinto
fb3c568051 Add development version 2020-04-24 19:46:35 +02:00
Ettore Di Giacinto
14bc26ee22 Tag 0.7.4 2020-04-24 19:46:14 +02:00
Ettore Di Giacinto
18ccdbd6a9 Add --revdep to luet pkglist
Include also the path in the results
2020-04-24 19:05:47 +02:00
Ettore Di Giacinto
7d960b733d Add --revdep to luet search
Fixes #52
2020-04-24 19:05:24 +02:00
Ettore Di Giacinto
919b2c3cfc Don't print misleading messages to the user 2020-04-24 19:04:21 +02:00
Ettore Di Giacinto
a457c53824 Fixups to ExpandedRevdeps 2020-04-24 00:15:18 +02:00
Ettore Di Giacinto
8305d01e76 Walk requires in ExpandedRevdeps
In this way we annotate the visited nodes and avoid cycles that can be
generated by revdeps
2020-04-23 23:44:42 +02:00
Ettore Di Giacinto
ec32677dc1 Merge branch 'master' into develop 2020-04-22 22:17:49 +02:00
Ettore Di Giacinto
60619c36a3 Merge pull request #95 from mudler/develop (#96) 2020-04-22 22:16:58 +02:00
Ettore Di Giacinto
141f1bf773 Merge pull request #95 from mudler/develop
Merge develop
2020-04-22 22:15:34 +02:00
Ettore Di Giacinto
9321a310a5 Support mount with target folder in box exec
Also mount bind before pivotroot, this fixes bind mounting in general
2020-04-21 17:30:27 +02:00
Ettore Di Giacinto
4cee27da7f Fixup box CLI params and consume new box options 2020-04-20 23:03:03 +02:00
Ettore Di Giacinto
98876c8f20 Add support for env and hostmounts in box 2020-04-20 23:01:49 +02:00
Ettore Di Giacinto
4c2d38be59 Add package ExpandedRevdeps 2020-04-19 11:55:13 +02:00
Ettore Di Giacinto
20aa7c89f8 Use different result structs in pkglist 2020-04-19 11:37:23 +02:00
Ettore Di Giacinto
937609a9f4 Add machine readable output to pkglist 2020-04-19 11:24:41 +02:00
Ettore Di Giacinto
a3f0d848c9 Add builtime search to tree pkglist 2020-04-19 10:53:05 +02:00
Ettore Di Giacinto
8e91c255a3 Make FindPackageMatch match packages HumanReadableString 2020-04-19 10:47:55 +02:00
Ettore Di Giacinto
62381ed46d Structure search machine readable output 2020-04-19 10:31:07 +02:00
Ettore Di Giacinto
1171987ed3 Adapt integration test to search output change 2020-04-19 10:24:39 +02:00
Ettore Di Giacinto
ac871cb0a3 Add --output option to search
In this way it can be parsed by scripts more easily.
It also disables the spinner based on loglevel

Fixes #92
2020-04-18 23:22:00 +02:00
Ettore Di Giacinto
82cd0e17b6 Enhance search output 2020-04-18 22:51:27 +02:00
Ettore Di Giacinto
c972795a30 Tweak upgrade integration test
In this way we also test that the user configuration takes precedence
over the one advertised remotely
2020-04-18 17:35:58 +02:00
Ettore Di Giacinto
4d466d0330 Sync the repository data with user configured data
Fixes a regression: in this way we respect the user configuration
regarding a repository to be consumed (priority and repository type),
regardless of the one advertised remotely.

Also add informative message about prio and type.
2020-04-18 17:33:46 +02:00
Ettore Di Giacinto
84407e5ae7 Add SetPriority to repository 2020-04-18 17:33:34 +02:00
Ettore Di Giacinto
845797a155 Merge pull request #93 from mudler/develop
Merge develop
2020-04-18 14:45:20 +02:00
Ettore Di Giacinto
83e19359a9 Update bbolt to 1.3.4
Includes fixes for Go 1.14

See: https://github.com/etcd-io/bbolt/issues/187
2020-04-18 13:58:14 +02:00
Ettore Di Giacinto
d77f875a8a Bump go version in travis 2020-04-18 12:41:23 +02:00
Ettore Di Giacinto
3963fb64e5 Update vendor 2020-04-18 11:49:28 +02:00
Ettore Di Giacinto
1d5ce53443 Add luet box command
It adds only the 'exec' subcommand to spawn processes in custom boxes

Relates to #45
2020-04-18 11:43:14 +02:00
Ettore Di Giacinto
a14f0abb5c Update vendor 2020-04-18 11:42:34 +02:00
Ettore Di Giacinto
64bac0823c Consume our moby fork
It handles #91
2020-04-18 11:41:34 +02:00
Ettore Di Giacinto
ee0fe1a86a Allow to finalizer to specify entrypoint command
In this way, finalizers in strict environments can override the default
shell used to run commands.

The shell keyword is a list, as it needs to contain the full command
plus args.
2020-04-14 17:46:39 +02:00
Ettore Di Giacinto
333b9cc023 Add reclaim to CLI
Relates to #86
2020-04-13 19:33:30 +02:00
Ettore Di Giacinto
538ba8f5df Add luet reclaim
Reclaim allows migrating between different system layouts. This
is still an experimental feature and might be revisited in the future.

This change:

- Adds Reclaim(system) to Installer
- Adds unit tests

Relates to #86
2020-04-13 19:32:51 +02:00
Ettore Di Giacinto
eb56956c65 Add Touch file helper 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
32cf132a0c Drop downloadOnly bool from Installer cli options 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
4e2d42e397 Drop downloadOnly bool from installer.Install 2020-04-13 19:32:50 +02:00
Ettore Di Giacinto
c2eed1d999 Add quay.io badge 2020-04-13 18:11:02 +02:00
Daniele Rondina
1b8176777a cmd/search: Add filter to result 2020-04-13 15:50:45 +02:00
Ettore Di Giacinto
528f481a7b Annotate package files in metadata
Fixes #87
2020-04-13 10:52:41 +02:00
Ettore Di Giacinto
2f3eabc3ca Annotate git hash and time in displayed version 2020-04-12 14:37:49 +02:00
Ettore Di Giacinto
579a4f20fc Add docker fixture for integration tests 2020-04-11 20:12:43 +02:00
Ettore Di Giacinto
dd45a46ed0 Add docker from scratch integration test 2020-04-11 20:07:10 +02:00
Ettore Di Giacinto
f987ee9fa0 Handle errors from os/user
We use it to generate defaults; if an error is found, we preserve
permissions
2020-04-11 20:05:56 +02:00
Ettore Di Giacinto
0fe2a8c950 Add development version 2020-04-11 18:30:11 +02:00
Ettore Di Giacinto
9704992c43 Tag 0.7.3 2020-04-11 18:29:46 +02:00
Ettore Di Giacinto
c2beab64cb Merge pull request #84 from mudler/develop
Merge develop
2020-04-11 18:24:02 +02:00
Ettore Di Giacinto
d8919f7250 Check if dependencies are selectors in validate 2020-04-10 20:00:37 +02:00
Daniele Rondina
42c380029c cmd/tree/validate: Fix order analysis 2020-04-10 17:41:27 +02:00
Daniele Rondina
69a82a1ca5 simpledocker: Use debug option for print container-diff results 2020-04-10 09:32:12 +02:00
Daniele Rondina
02c33896d5 simpledocker: Move warning before return 2020-04-10 09:31:08 +02:00
Daniele Rondina
c2e9176ab2 solver.Order now return error 2020-04-09 17:07:57 +02:00
Daniele Rondina
726a749b4b cmd/tree/validate: Integrate order of the solution 2020-04-09 15:58:05 +02:00
Ettore Di Giacinto
0d2668e452 Print warning if container-diffs return errors to stderr 2020-04-08 18:31:40 +02:00
Ettore Di Giacinto
77ba4193aa Add ValidateSelector to versioner interface and consume it
We can refactor further by dropping the package methods, as now we
can consume a versioner in all places that require it
2020-04-05 15:52:16 +02:00
Ettore Di Giacinto
5a5e7f1dfa Move version logic inside versioner
WIP; it still needs to be moved under the interface implementation
2020-04-05 11:16:14 +02:00
Ettore Di Giacinto
cc0999b4c4 Merge pull request #83 from auto-maintainers/branch-be6698fa706d6ccfc3734b805b212f75175fd557
Update dependencies
2020-04-05 09:00:51 +02:00
MarvinHatesOceans
be6698fa70 Update dependencies 2020-04-04 23:56:14 +00:00
Ettore Di Giacinto
6540dcc99d Merge pull request #82 from mudler/packages_type
Add Packages type and Versioner interface
2020-04-04 17:28:49 +02:00
Ettore Di Giacinto
cf8091a7fd Re-enable debian versioning test, add more cases 2020-04-04 16:49:41 +02:00
Ettore Di Giacinto
3caaa01eb8 Add semver detection to versioner
In this way we compare only versions that are compliant with semver;
otherwise we fall back to debian sorting.
2020-04-04 16:48:48 +02:00
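The detect-then-fallback strategy above can be sketched as: if every candidate parses as semver, compare numerically; otherwise signal the caller to use the debian-style comparison (not reproduced here). Function names are illustrative, not luet's actual API:

```go
package main

import (
	"regexp"
	"strconv"
	"strings"
)

// semverRe loosely matches MAJOR.MINOR.PATCH versions.
var semverRe = regexp.MustCompile(`^\d+\.\d+\.\d+$`)

// best returns the highest version and true when all candidates are
// semver-compliant; otherwise it returns false so the caller can fall
// back to debian sorting. Illustrative sketch only.
func best(versions []string) (string, bool) {
	if len(versions) == 0 {
		return "", false
	}
	for _, v := range versions {
		if !semverRe.MatchString(v) {
			return "", false // fall back to debian sorting
		}
	}
	top := versions[0]
	for _, v := range versions[1:] {
		if semverLess(top, v) {
			top = v
		}
	}
	return top, true
}

// semverLess compares two MAJOR.MINOR.PATCH strings numerically.
func semverLess(a, b string) bool {
	pa, pb := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < 3; i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			return na < nb
		}
	}
	return false
}
```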
Ettore Di Giacinto
4eae0a851c Add versioner unit tests 2020-04-04 16:33:08 +02:00
Ettore Di Giacinto
626419f955 Fix original version mapping in versioner 2020-04-04 16:23:43 +02:00
Ettore Di Giacinto
5bf60deffc Update vendor 2020-04-04 15:35:45 +02:00
Ettore Di Giacinto
84625be9ac Adapt package.Best to take a Versioner interface 2020-04-04 15:33:14 +02:00
Ettore Di Giacinto
aaa8d8b7d6 Update gomod 2020-04-04 15:32:13 +02:00
Ettore Di Giacinto
04f9a88a5f Add versioner interface to wrap versioning operations
This allows scoping responsibilities and enables switching to different
ordering algorithms
2020-04-04 15:29:20 +02:00
Ettore Di Giacinto
9d58b0a0cc Adapt tests to refactor 2020-04-04 14:41:58 +02:00
Ettore Di Giacinto
5e31d940f0 Introduce Packages for []Package 2020-04-04 14:29:08 +02:00
Daniele Rondina
42d1ec5585 cmd/tree/bump: Add --to-stdout option 2020-04-04 13:42:25 +02:00
Daniele Rondina
07e78dd89b Update vendor github.com/Sabayon/pkgs-checker@076438c31739 2020-04-04 11:40:36 +02:00
geaaru
625a6ce773 Merge pull request #80 from mudler/versioning_integration
Versioning integration
2020-04-03 19:24:43 +02:00
Daniele Rondina
d6faa3335f tests/integration/08_versioning.sh: Add version with _ 2020-04-03 18:33:01 +02:00
Daniele Rondina
129ca8da55 Add version sanitized to Best() 2020-04-03 18:32:26 +02:00
Daniele Rondina
9e95d7d2a9 tests/integration/08_versioning.sh: Drop wrong assert 2020-04-03 15:24:56 +02:00
Daniele Rondina
e94674fca3 tests/integration/08_versioning.sh: Simplify test 2020-04-03 15:06:27 +02:00
Daniele Rondina
730b9a087a tests/integration/08_versioning.sh: Fix test 2020-04-02 19:39:31 +02:00
Ettore Di Giacinto
9cf6330146 Build empty packages in versioning fixture 2020-03-29 15:11:20 +02:00
Ettore Di Giacinto
daa0b815c1 Fixup assertEquals on versioning integration test 2020-03-29 14:52:14 +02:00
Ettore Di Giacinto
1779382811 Extend tree validate checks 2020-03-29 14:52:13 +02:00
Ettore Di Giacinto
c705728a2a Build specific package, it triggers version search 2020-03-29 14:52:12 +02:00
Ettore Di Giacinto
15e772d96d Fixup versioning of fixture 2020-03-29 14:52:10 +02:00
Ettore Di Giacinto
0d6dc49a95 Add versioning integration test 2020-03-29 14:52:00 +02:00
Daniele Rondina
8df76c8941 Update vendor github.com/Sabayon/pkgs-checker@fd529912510d 2020-03-29 14:04:55 +02:00
Ettore Di Giacinto
b1ce8d2eba Merge pull request #77 from auto-maintainers/branch-830fdb5bf299f959bcd02fe577c19471daa229cd
Update dependencies
2020-03-29 12:15:14 +02:00
Daniele Rondina
ee66839bd3 pkg/package/version: Better search for last + 2020-03-29 09:10:52 +02:00
Daniele Rondina
9b61ed9e1d Fix panic if luet is executed without args 2020-03-29 08:48:51 +02:00
MarvinHatesOceans
d59fff0740 Update dependencies 2020-03-28 23:19:43 +00:00
Ettore Di Giacinto
aeb10338f6 Add development version 2020-03-28 17:32:06 +01:00
1934 changed files with 509373 additions and 24853 deletions

.github/workflows/release.yml (new file)

@@ -0,0 +1,23 @@
on: push
name: Build and release on push
jobs:
  release:
    name: Test and Release
    runs-on: ubuntu-latest
    steps:
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.14.x
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Tests
        run: sudo -E env "PATH=$PATH" make deps multiarch-build test-integration test-coverage
      - name: Build
        run: sudo -E env "PATH=$PATH" make multiarch-build && sudo chmod -R 777 release/
      - name: Release
        uses: fnkr/github-action-ghr@v1
        if: startsWith(github.ref, 'refs/tags/')
        env:
          GHR_PATH: release/
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/test.yml (new file)

@@ -0,0 +1,21 @@
on: pull_request
name: Build and Test
jobs:
  test:
    strategy:
      matrix:
        go-version: [1.14.x]
        platform: [ubuntu-latest]
    runs-on: ${{ matrix.platform }}
    steps:
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go-version }}
      - name: Checkout code
        uses: actions/checkout@v2
      - name: setup-docker
        uses: docker-practice/actions-setup-docker@0.0.1
      - name: Tests
        run: sudo -E env "PATH=$PATH" make deps multiarch-build test-integration test-coverage

.travis.yml

@@ -2,18 +2,18 @@ language: go
 services:
 - docker
 go:
-  - "1.12"
+  - "1.14"
 env:
   - "GO15VENDOREXPERIMENT=1"
 before_install:
-  - make deps
   - curl -LO https://storage.googleapis.com/container-diff/latest/container-diff-linux-amd64 && chmod +x container-diff-linux-amd64 && mkdir -p $HOME/bin && export PATH=$PATH:$HOME/bin && mv container-diff-linux-amd64 $HOME/bin/container-diff
+  - sudo -E env "PATH=$PATH" apt-get install -y libcap2-bin
+  - sudo -E env "PATH=$PATH" make deps
 script:
-  - make multiarch-build test-integration test-coverage
-after_success:
-  - |
-    if [ -n "$TRAVIS_TAG" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
-      git config --global user.name "Deployer" && git config --global user.email foo@bar.com
-      go get github.com/tcnksm/ghr
-      ghr -u mudler -r luet --replace $TRAVIS_TAG release/
-    fi
+  - sudo -E env "PATH=$PATH" make multiarch-build test-integration test-coverage
+#after_success:
+#  - |
+#    if [ -n "$TRAVIS_TAG" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
+#      sudo -E env "PATH=$PATH" git config --global user.name "Deployer" && git config --global user.email foo@bar.com
+#      sudo -E env "PATH=$PATH" go get github.com/tcnksm/ghr
+#      sudo -E env "PATH=$PATH" ghr -u mudler -r luet --replace $TRAVIS_TAG release/
+#    fi

Dockerfile

@@ -1,6 +1,7 @@
 FROM golang as builder
+RUN apt-get update && apt-get install -y upx
 ADD . /luet
-RUN cd /luet && make build
+RUN cd /luet && make build-small
 FROM scratch
 ENV LUET_NOLOCK=true

Makefile

@@ -1,10 +1,14 @@
+# go tool nm ./luet | grep Commit
+override LDFLAGS += -X "github.com/mudler/luet/cmd.BuildTime=$(shell date -u '+%Y-%m-%d %I:%M:%S %Z')"
+override LDFLAGS += -X "github.com/mudler/luet/cmd.BuildCommit=$(shell git rev-parse HEAD)"
 NAME ?= luet
 PACKAGE_NAME ?= $(NAME)
 PACKAGE_CONFLICT ?= $(PACKAGE_NAME)-beta
 REVISION := $(shell git rev-parse --short HEAD || echo dev)
 VERSION := $(shell git describe --tags || echo $(REVISION))
 VERSION := $(shell echo $(VERSION) | sed -e 's/^v//g')
 ITTERATION := $(shell date +%s)
 BUILD_PLATFORMS ?= -osarch="linux/amd64" -osarch="linux/386" -osarch="linux/arm"
 ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
@@ -59,16 +63,17 @@ deps:
 .PHONY: build
 build:
-	CGO_ENABLED=0 go build
+	CGO_ENABLED=0 go build -ldflags '$(LDFLAGS)'
+
+.PHONY: build-small
+build-small:
+	@$(MAKE) LDFLAGS+="-s -w" build
+	upx --brute -1 $(NAME)
 .PHONY: image
 image:
 	docker build --rm -t luet/base .
-.PHONY: gox-build
-gox-build:
-	CGO_ENABLED=0 gox $(BUILD_PLATFORMS) -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
 .PHONY: lint
 lint:
 	golint ./... | grep -v "be unexported"
@@ -85,4 +90,4 @@ test-docker:
 .PHONY: multiarch-build
 multiarch-build:
-	gox $(BUILD_PLATFORMS) -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"
+	CGO_ENABLED=0 gox $(BUILD_PLATFORMS) -ldflags '$(LDFLAGS)' -output="release/$(NAME)-$(VERSION)-{{.OS}}-{{.Arch}}"

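The Makefile changes above thread `-ldflags '$(LDFLAGS)'` through every build target so that `cmd.BuildTime` and `cmd.BuildCommit` are stamped at link time via `-X`. A minimal sketch of the mechanism (package path and version string are illustrative, not luet's exact layout):

```go
package main

import "fmt"

// Meant to be overridden at build time, e.g.:
//   go build -ldflags '-X "main.BuildCommit=abae9c3" -X "main.BuildTime=2020-10-19"'
// When built without ldflags, both stay empty.
var (
	BuildTime   string
	BuildCommit string
)

// version mimics how the CLI composes its --version output below.
func version() string {
	return fmt.Sprintf("0.8.8-g%s %s", BuildCommit, BuildTime)
}

func main() {
	fmt.Println(version())
}
```

Because `-X` only sets string variables, the fallback (empty strings) is safe when the binary is built with a plain `go build`.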

@@ -1,4 +1,5 @@
# luet - Container-based Package manager
[![Docker Repository on Quay](https://quay.io/repository/luet/base/status "Docker Repository on Quay")](https://quay.io/repository/luet/base)
[![Go Report Card](https://goreportcard.com/badge/github.com/mudler/luet)](https://goreportcard.com/report/github.com/mudler/luet)
[![Build Status](https://travis-ci.org/mudler/luet.svg?branch=master)](https://travis-ci.org/mudler/luet)
[![GoDoc](https://godoc.org/github.com/mudler/luet?status.svg)](https://godoc.org/github.com/mudler/luet)

cmd/box.go Normal file

@@ -0,0 +1,36 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
. "github.com/mudler/luet/cmd/box"
"github.com/spf13/cobra"
)
var boxGroupCmd = &cobra.Command{
Use: "box [command] [OPTIONS]",
Short: "Manage luet boxes",
}
func init() {
RootCmd.AddCommand(boxGroupCmd)
boxGroupCmd.AddCommand(
NewBoxExecCommand(),
)
}

cmd/box/exec.go Normal file

@@ -0,0 +1,81 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_box
import (
"os"
b64 "encoding/base64"
"github.com/mudler/luet/pkg/box"
. "github.com/mudler/luet/pkg/logger"
"github.com/spf13/cobra"
)
func NewBoxExecCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "exec [OPTIONS]",
Short: "Execute a binary in a box",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
},
Run: func(cmd *cobra.Command, args []string) {
stdin, _ := cmd.Flags().GetBool("stdin")
stdout, _ := cmd.Flags().GetBool("stdout")
stderr, _ := cmd.Flags().GetBool("stderr")
rootfs, _ := cmd.Flags().GetString("rootfs")
base, _ := cmd.Flags().GetBool("decode")
entrypoint, _ := cmd.Flags().GetString("entrypoint")
envs, _ := cmd.Flags().GetStringArray("env")
mounts, _ := cmd.Flags().GetStringArray("mount")
if base {
var ss []string
for _, a := range args {
sDec, _ := b64.StdEncoding.DecodeString(a)
ss = append(ss, string(sDec))
}
// If the command to run is complex, base64 encoding avoids bad input
args = ss
}
Info("Executing", args, "in", rootfs)
b := box.NewBox(entrypoint, args, mounts, envs, rootfs, stdin, stdout, stderr)
err := b.Run()
if err != nil {
Fatal(err)
}
},
}
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
ans.Flags().String("rootfs", path, "Rootfs path")
ans.Flags().Bool("stdin", false, "Attach to stdin")
ans.Flags().Bool("stdout", true, "Attach to stdout")
ans.Flags().Bool("stderr", true, "Attach to stderr")
ans.Flags().Bool("decode", false, "Base64 decode")
ans.Flags().StringArrayP("env", "e", []string{}, "Environment settings")
ans.Flags().StringArrayP("mount", "m", []string{}, "List of paths to bind-mount from the host")
ans.Flags().String("entrypoint", "/bin/sh", "Entrypoint command (/bin/sh)")
return ans
}

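The `--decode` flag above base64-decodes each argument before handing it to the box, which lets callers pass complex shell commands without quoting headaches. A self-contained sketch of that loop (the command above silently ignores decode errors; this version surfaces them):

```go
package main

import (
	b64 "encoding/base64"
	"fmt"
)

// decodeArgs mirrors the --decode path in cmd/box/exec.go:
// every argument is assumed to be base64-encoded.
func decodeArgs(args []string) []string {
	var out []string
	for _, a := range args {
		dec, err := b64.StdEncoding.DecodeString(a)
		if err != nil {
			panic(err) // the original discards this error
		}
		out = append(out, string(dec))
	}
	return out
}

func main() {
	// "bHMgLWxh" is base64 for "ls -la"
	fmt.Println(decodeArgs([]string{"bHMgLWxh"})[0])
}
```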

@@ -18,10 +18,10 @@ import (
"io/ioutil"
"os"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"
@@ -62,7 +62,7 @@ var buildCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
clean := viper.GetBool("clean")
src := viper.GetString("tree")
treePaths := viper.GetStringSlice("tree")
dst := viper.GetString("destination")
concurrency := LuetCfg.GetGeneral().Concurrency
backendType := viper.GetString("backend")
@@ -78,6 +78,9 @@ var buildCmd = &cobra.Command{
nodeps := viper.GetBool("nodeps")
onlydeps := viper.GetBool("onlydeps")
keepExportedImages := viper.GetBool("keep-exported-images")
onlyTarget, _ := cmd.Flags().GetBool("only-target-package")
full, _ := cmd.Flags().GetBool("full")
skip, _ := cmd.Flags().GetBool("skip-if-metadata-exists")
compilerSpecs := compiler.NewLuetCompilationspecs()
var compilerBackend compiler.CompilerBackend
@@ -105,14 +108,21 @@ var buildCmd = &cobra.Command{
generalRecipe := tree.NewCompilerRecipe(db)
Info("Loading", src)
Info("Building in", dst)
err := generalRecipe.Load(src)
if err != nil {
Fatal("Error: " + err.Error())
if len(treePaths) <= 0 {
Fatal("No tree path supplied!")
}
for _, src := range treePaths {
Info("Loading tree", src)
err := generalRecipe.Load(src)
if err != nil {
Fatal("Error: " + err.Error())
}
}
Info("Building in", dst)
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
@@ -135,11 +145,23 @@ var buildCmd = &cobra.Command{
opts.OnlyDeps = onlydeps
opts.NoDeps = nodeps
opts.KeepImageExport = keepExportedImages
opts.SkipIfMetadataExists = skip
opts.PackageTargetOnly = onlyTarget
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(), opts)
luetCompiler.SetConcurrency(concurrency)
luetCompiler.SetCompressionType(compiler.CompressionImplementation(compressionType))
if !all {
if full {
specs, err := luetCompiler.FromDatabase(generalRecipe.GetDatabase(), true, dst)
if err != nil {
Fatal(err.Error())
}
for _, spec := range specs {
Info(":package: Selecting ", spec.GetPackage().GetName(), spec.GetPackage().GetVersion())
compilerSpecs.Add(spec)
}
} else if !all {
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
@@ -196,12 +218,14 @@ func init() {
Fatal(err)
}
buildCmd.Flags().Bool("clean", true, "Build all packages without considering the packages present in the build directory")
buildCmd.Flags().String("tree", path, "Source luet tree")
buildCmd.Flags().StringSliceP("tree", "t", []string{}, "Path of the tree to use.")
buildCmd.Flags().String("backend", "docker", "backend used (docker,img)")
buildCmd.Flags().Bool("privileged", false, "Privileged (Keep permissions)")
buildCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")
buildCmd.Flags().Bool("revdeps", false, "Build with revdeps")
buildCmd.Flags().Bool("all", false, "Build all packages in the tree")
buildCmd.Flags().Bool("all", false, "Build all specfiles in the tree")
buildCmd.Flags().Bool("full", false, "Build all packages (optimized)")
buildCmd.Flags().String("destination", path, "Destination folder")
buildCmd.Flags().String("compression", "none", "Compression alg: none, gzip")
buildCmd.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
@@ -211,7 +235,8 @@ func init() {
buildCmd.Flags().Bool("nodeps", false, "Build only the target packages, skipping deps (it works only if you already built the deps locally, or by using --pull) ")
buildCmd.Flags().Bool("onlydeps", false, "Build only package dependencies")
buildCmd.Flags().Bool("keep-exported-images", false, "Keep exported images used during building")
buildCmd.Flags().Bool("skip-if-metadata-exists", false, "Skip package if metadata exists")
buildCmd.Flags().Bool("only-target-package", false, "Build packages of only the required target. Otherwise builds all the necessary ones not present in the destination")
buildCmd.Flags().String("solver-type", "", "Solver strategy")
buildCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
buildCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")


@@ -19,14 +19,16 @@ import (
"fmt"
config "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
"github.com/spf13/cobra"
)
var configCmd = &cobra.Command{
Use: "config",
Short: "Print config",
Long: `Show luet configuration`,
Use: "config",
Short: "Print config",
Long: `Show luet configuration`,
Aliases: []string{"c"},
Run: func(cmd *cobra.Command, args []string) {
fmt.Println(config.LuetCfg.GetLogging())
fmt.Println(config.LuetCfg.GetGeneral())
@@ -51,6 +53,23 @@ var configCmd = &cobra.Command{
}
}
if len(config.LuetCfg.ConfigProtectConfDir) > 0 {
// Load config protect configs
installer.LoadConfigProtectConfs(config.LuetCfg)
fmt.Println("config_protect_confdir:")
for _, dir := range config.LuetCfg.ConfigProtectConfDir {
fmt.Println(" - ", dir)
}
if len(config.LuetCfg.GetConfigProtectConfFiles()) > 0 {
fmt.Println("protect_conf_files:")
for _, file := range config.LuetCfg.GetConfigProtectConfFiles() {
fmt.Println(" - ", file.String())
}
}
}
},
}

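The `config` command above now loads and prints the config-protect files found under `config_protect_confdir`. A purely illustrative fragment of such a file (the field names here are assumptions based on the contrib example mentioned in the changelog, not a verified schema):

```yaml
# Hypothetical config-protect file, e.g. /etc/luet/config.protect.d/01_etc.yml
name: "protect-etc"
dirs:
- /etc/
```

Paths listed here would be protected from being overwritten on package install unless `--skip-config-protect` is passed.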

@@ -28,7 +28,7 @@ import (
)
var convertCmd = &cobra.Command{
Use: "convert",
Use: "convert [portage-tree] [luet-tree]",
Short: "convert other package manager tree into luet",
Long: `Parses external PM and produces a luet parsable tree`,
PreRun: func(cmd *cobra.Command, args []string) {


@@ -50,7 +50,7 @@ var createrepoCmd = &cobra.Command{
var err error
var repo installer.Repository
tree := viper.GetString("tree")
treePaths := viper.GetStringSlice("tree")
dst := viper.GetString("output")
packages := viper.GetString("packages")
name := viper.GetString("name")
@@ -74,8 +74,8 @@ var createrepoCmd = &cobra.Command{
Fatal("Error: " + err.Error())
}
if tree == "" {
tree = lrepo.TreePath
if len(treePaths) <= 0 {
treePaths = []string{lrepo.TreePath}
}
if t == "" {
@@ -87,12 +87,12 @@ var createrepoCmd = &cobra.Command{
lrepo.Urls,
lrepo.Priority,
packages,
tree,
treePaths,
pkg.NewInMemoryDatabase(false))
} else {
repo, err = installer.GenerateRepository(name, descr, t, urls, 1, packages,
tree, pkg.NewInMemoryDatabase(false))
treePaths, pkg.NewInMemoryDatabase(false))
}
if err != nil {
@@ -131,7 +131,7 @@ func init() {
Fatal(err)
}
createrepoCmd.Flags().String("packages", path, "Packages folder (output from build)")
createrepoCmd.Flags().String("tree", path, "Source luet tree")
createrepoCmd.Flags().StringSliceP("tree", "t", []string{}, "Path of the source trees to use.")
createrepoCmd.Flags().String("output", path, "Destination folder")
createrepoCmd.Flags().String("name", "luet", "Repository name")
createrepoCmd.Flags().String("descr", "luet", "Repository description")

cmd/database.go Normal file

@@ -0,0 +1,37 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
. "github.com/mudler/luet/cmd/database"
"github.com/spf13/cobra"
)
var databaseGroupCmd = &cobra.Command{
Use: "database [command] [OPTIONS]",
Short: "Manage system database (dangerous commands ahead!)",
}
func init() {
RootCmd.AddCommand(databaseGroupCmd)
databaseGroupCmd.AddCommand(
NewDatabaseCreateCommand(),
NewDatabaseRemoveCommand(),
)
}

cmd/database/create.go Normal file

@@ -0,0 +1,81 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_database
import (
"io/ioutil"
"path/filepath"
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/config"
"github.com/spf13/cobra"
)
func NewDatabaseCreateCommand() *cobra.Command {
var ans = &cobra.Command{
Use:   "create <artifact_metadata1.yaml> <artifact_metadata2.yaml>",
Short: "Insert a package in the system DB",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
for _, a := range args {
dat, err := ioutil.ReadFile(a)
if err != nil {
Fatal("Failed reading ", a, ": ", err.Error())
}
art, err := compiler.NewPackageArtifactFromYaml(dat)
if err != nil {
Fatal("Failed reading yaml ", a, ": ", err.Error())
}
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
files, err := art.FileList()
if err != nil {
Fatal("Failed getting file list for ", a, ": ", err.Error())
}
if _, err := systemDB.CreatePackage(art.GetCompileSpec().GetPackage()); err != nil {
Fatal("Failed to create ", a, ": ", err.Error())
}
if err := systemDB.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: art.GetCompileSpec().GetPackage().GetFingerPrint(), Files: files}); err != nil {
Fatal("Failed setting package files for ", a, ": ", err.Error())
}
Info(art.GetCompileSpec().GetPackage().HumanReadableString(), " created")
}
},
}
return ans
}

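`database create` consumes the artifact metadata emitted at build time; per the commit message it carries the Package definition and the file list needed for the system DB entry. A purely illustrative fragment (field names are assumptions — the real schema is whatever `NewPackageArtifactFromYaml` parses):

```yaml
# Hypothetical artifact metadata fed to `luet database create`
compilespec:
  package:
    category: app
    name: foo
    version: "1.0"
files:
- bin/foo
- share/doc/foo/README
```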
cmd/database/remove.go Normal file

@@ -0,0 +1,69 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_database
import (
"path/filepath"
. "github.com/mudler/luet/pkg/logger"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
func NewDatabaseRemoveCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "remove [package1] [package2] ...",
Short: "Remove a package from the system DB (forcefully - you normally don't want to do that)",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
if err := systemDB.RemovePackage(pack); err != nil {
Fatal("Failed removing ", a, ": ", err.Error())
}
if err := systemDB.RemovePackageFiles(pack); err != nil {
Fatal("Failed removing files for ", a, ": ", err.Error())
}
}
},
}
return ans
}


@@ -15,14 +15,15 @@
package cmd
import (
"fmt"
"os"
b64 "encoding/base64"
"github.com/mudler/luet/pkg/box"
. "github.com/mudler/luet/pkg/logger"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var execCmd = &cobra.Command{
@@ -30,24 +31,20 @@ var execCmd = &cobra.Command{
Short: "Execute a command in the rootfs context",
Long: `Uses unshare technique and pivot root to execute a command inside a folder containing a valid rootfs`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("stdin", cmd.Flags().Lookup("stdin"))
viper.BindPFlag("stdout", cmd.Flags().Lookup("stdout"))
viper.BindPFlag("stderr", cmd.Flags().Lookup("stderr"))
viper.BindPFlag("rootfs", cmd.Flags().Lookup("rootfs"))
viper.BindPFlag("decode", cmd.Flags().Lookup("decode"))
viper.BindPFlag("entrypoint", cmd.Flags().Lookup("entrypoint"))
},
// If you change this, look at pkg/box/exec that runs this command and adapt
Run: func(cmd *cobra.Command, args []string) {
stdin := viper.GetBool("stdin")
stdout := viper.GetBool("stdout")
stderr := viper.GetBool("stderr")
rootfs := viper.GetString("rootfs")
base := viper.GetBool("decode")
stdin, _ := cmd.Flags().GetBool("stdin")
stdout, _ := cmd.Flags().GetBool("stdout")
stderr, _ := cmd.Flags().GetBool("stderr")
rootfs, _ := cmd.Flags().GetString("rootfs")
base, _ := cmd.Flags().GetBool("decode")
entrypoint, _ := cmd.Flags().GetString("entrypoint")
envs, _ := cmd.Flags().GetStringArray("env")
mounts, _ := cmd.Flags().GetStringArray("mount")
entrypoint := viper.GetString("entrypoint")
if base {
var ss []string
for _, a := range args {
@@ -60,10 +57,10 @@ var execCmd = &cobra.Command{
}
Info("Executing", args, "in", rootfs)
b := box.NewBox(entrypoint, args, rootfs, stdin, stdout, stderr)
b := box.NewBox(entrypoint, args, mounts, envs, rootfs, stdin, stdout, stderr)
err := b.Exec()
if err != nil {
Fatal(err)
Fatal(errors.Wrap(err, fmt.Sprintf("entrypoint: %s rootfs: %s", entrypoint, rootfs)))
}
},
}
@@ -80,6 +77,9 @@ func init() {
execCmd.Flags().Bool("stderr", false, "Attach to stderr")
execCmd.Flags().Bool("decode", false, "Base64 decode")
execCmd.Flags().StringArrayP("env", "e", []string{}, "Environment settings")
execCmd.Flags().StringArrayP("mount", "m", []string{}, "List of paths to bind-mount from the host")
execCmd.Flags().String("entrypoint", "/bin/sh", "Entrypoint command (/bin/sh)")
RootCmd.AddCommand(execCmd)


@@ -14,7 +14,7 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
package cmd_helpers
import (
"errors"
@@ -23,6 +23,7 @@ import (
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
pkg "github.com/mudler/luet/pkg/package"
version "github.com/mudler/luet/pkg/versioner"
)
func CreateRegexArray(rgx []string) ([]*regexp.Regexp, error) {
@@ -53,14 +54,14 @@ func ParsePackageStr(p string) (*pkg.DefaultPackage, error) {
pkgVersion := ""
if gp.VersionBuild != "" {
pkgVersion = fmt.Sprintf("%s%s%s+%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
version.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
gp.VersionBuild,
)
} else {
pkgVersion = fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
version.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
)


@@ -20,8 +20,8 @@ import (
installer "github.com/mudler/luet/pkg/installer"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -29,8 +29,9 @@ import (
)
var installCmd = &cobra.Command{
Use: "install <pkg1> <pkg2> ...",
Short: "Install a package",
Use: "install <pkg1> <pkg2> ...",
Short: "Install a package",
Aliases: []string{"i"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
@@ -41,12 +42,10 @@ var installCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
LuetCfg.Viper.BindPFlag("download-only", cmd.Flags().Lookup("download-only"))
},
Long: `Install packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
var toInstall []pkg.Package
var toInstall pkg.Packages
var systemDB pkg.PackageDatabase
for _, a := range args {
@@ -74,7 +73,6 @@ var installCmd = &cobra.Command{
force := LuetCfg.Viper.GetBool("force")
nodeps := LuetCfg.Viper.GetBool("nodeps")
onlydeps := LuetCfg.Viper.GetBool("onlydeps")
downloadOnly := LuetCfg.Viper.GetBool("download-only")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
@@ -82,6 +80,9 @@ var installCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
// Load config protect configs
installer.LoadConfigProtectConfs(LuetCfg)
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
@@ -99,7 +100,7 @@ var installCmd = &cobra.Command{
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err := inst.Install(toInstall, system, downloadOnly)
err := inst.Install(toInstall, system)
if err != nil {
Fatal("Error: " + err.Error())
}
@@ -120,7 +121,6 @@ func init() {
installCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
installCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
installCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
installCmd.Flags().Bool("download-only", false, "Download dependencies only")
RootCmd.AddCommand(installCmd)
}

cmd/reclaim.go Normal file

@@ -0,0 +1,88 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
var reclaimCmd = &cobra.Command{
Use: "reclaim",
Short: "Reclaim packages to Luet database from available repositories",
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Add packages to the systemdb if files belonging to packages
in available repositories exist in the target root.`,
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}
force := LuetCfg.Viper.GetBool("force")
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
Force: force,
PreserveSystemEssentialData: true,
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err := inst.Reclaim(system)
if err != nil {
Fatal("Error: " + err.Error())
}
},
}
func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
reclaimCmd.Flags().String("system-dbpath", path, "System db path")
reclaimCmd.Flags().String("system-target", path, "System rootpath")
reclaimCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
RootCmd.AddCommand(reclaimCmd)
}


@@ -35,6 +35,7 @@ $> luet repo update
# Update only repo1 and repo2
$> luet repo update repo1 repo2
`,
Aliases: []string{"up"},
PreRun: func(cmd *cobra.Command, args []string) {
},
Run: func(cmd *cobra.Command, args []string) {


@@ -24,11 +24,11 @@ import (
"strings"
"github.com/marcsauter/single"
extensions "github.com/mudler/cobra-extensions"
config "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
repo "github.com/mudler/luet/pkg/repository"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
@@ -38,41 +38,66 @@ var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}
const (
LuetCLIVersion = "0.7.2"
LuetCLIVersion = "0.8.8"
LuetEnvPrefix = "LUET"
)
// Build time and commit information.
//
// ⚠️ WARNING: should only be set by "-ldflags".
var (
BuildTime string
BuildCommit string
)
// RootCmd represents the base command when called without any subcommands
var RootCmd = &cobra.Command{
Use: "luet",
Short: "Package manager for the XXth century!",
Long: `Package manager which uses containers to build packages`,
Version: LuetCLIVersion,
Version: fmt.Sprintf("%s-g%s %s", LuetCLIVersion, BuildCommit, BuildTime),
PersistentPreRun: func(cmd *cobra.Command, args []string) {
err := LoadConfig(config.LuetCfg)
if err != nil {
Fatal("failed to load configuration:", err.Error())
}
// Initialize tmpdir prefix. TODO: Move this with LoadConfig
// directly on sub command to ensure the creation only when it's
// needed.
err = config.LuetCfg.GetSystem().InitTmpDir()
if err != nil {
Fatal("failed on init tmp basedir:", err.Error())
}
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
// Cleanup all tmp directories used by luet
err := config.LuetCfg.GetSystem().CleanupTmpDir()
if err != nil {
Warning("failed on cleanup tmpdir:", err.Error())
}
},
SilenceErrors: true,
}
func LoadConfig(c *config.LuetConfig) error {
// If a config file is found, read it in.
if err := c.Viper.ReadInConfig(); err != nil {
Debug(err)
}
c.Viper.ReadInConfig()
err := c.Viper.Unmarshal(&config.LuetCfg)
if err != nil {
return err
}
noSpinner := c.Viper.GetBool("no_spinner")
InitAurora()
if !noSpinner {
NewSpinner()
}
Debug("Using config file:", c.Viper.ConfigFileUsed())
NewSpinner()
if c.GetLogging().Path != "" {
if c.GetLogging().EnableLogFile && c.GetLogging().Path != "" {
// Init zap logger
err = ZapLogger()
if err != nil {
@@ -94,31 +119,24 @@ func LoadConfig(c *config.LuetConfig) error {
func Execute() {
if os.Getenv("LUET_NOLOCK") != "true" {
for _, lockedCmd := range LockedCommands {
if os.Args[1] == lockedCmd {
s := single.New("luet")
if err := s.CheckLock(); err != nil && err == single.ErrAlreadyRunning {
Fatal("another instance of the app is already running, exiting")
} else if err != nil {
// Another error occurred, might be worth handling it as well
Fatal("failed to acquire exclusive app lock:", err.Error())
if len(os.Args) > 1 {
for _, lockedCmd := range LockedCommands {
if os.Args[1] == lockedCmd {
s := single.New("luet")
if err := s.CheckLock(); err != nil && err == single.ErrAlreadyRunning {
Fatal("another instance of the app is already running, exiting")
} else if err != nil {
// Another error occurred, might be worth handling it as well
Fatal("failed to acquire exclusive app lock:", err.Error())
}
defer s.TryUnlock()
break
}
defer s.TryUnlock()
break
}
}
}
if err := RootCmd.Execute(); err != nil {
if len(os.Args) > 0 {
for _, c := range RootCmd.Commands() {
if c.Name() == os.Args[1] {
os.Exit(-1) // Something failed
}
}
// Try to load a bin from path.
helpers.Exec("luet-"+os.Args[1], os.Args[1:], os.Environ())
}
fmt.Println(err)
os.Exit(-1)
}
@@ -130,42 +148,81 @@ func init() {
pflags.StringVar(&cfgFile, "config", "", "config file (default is $HOME/.luet.yaml)")
pflags.BoolP("debug", "d", false, "verbose output")
pflags.Bool("fatal", false, "Enables Warnings to exit")
pflags.Bool("enable-logfile", false, "Enable log to file")
pflags.Bool("no-spinner", false, "Disable spinner.")
pflags.Bool("color", config.LuetCfg.GetLogging().Color, "Enable/Disable color.")
pflags.Bool("emoji", config.LuetCfg.GetLogging().EnableEmoji, "Enable/Disable emoji.")
pflags.Bool("skip-config-protect", config.LuetCfg.ConfigProtectSkip,
"Disable config protect analysis.")
pflags.StringP("logfile", "l", config.LuetCfg.GetLogging().Path,
"Logfile path. Empty value disable log to file.")
u, err := user.Current()
// os/user doesn't work in from-scratch environments.
// Check if we can retrieve user information.
_, err := user.Current()
if err != nil {
Fatal("failed to retrieve user identity:", err.Error())
Warning("failed to retrieve user identity:", err.Error())
}
sameOwner := false
if u.Uid == "0" {
sameOwner = true
}
pflags.Bool("same-owner", sameOwner, "Maintain same owner on uncompress.")
pflags.Bool("same-owner", config.LuetCfg.GetGeneral().SameOwner, "Maintain same owner on uncompress.")
pflags.Int("concurrency", runtime.NumCPU(), "Concurrency")
config.LuetCfg.Viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("debug"))
config.LuetCfg.Viper.BindPFlag("logging.color", pflags.Lookup("color"))
config.LuetCfg.Viper.BindPFlag("logging.enable_emoji", pflags.Lookup("emoji"))
config.LuetCfg.Viper.BindPFlag("logging.enable_logfile", pflags.Lookup("enable-logfile"))
config.LuetCfg.Viper.BindPFlag("logging.path", pflags.Lookup("logfile"))
config.LuetCfg.Viper.BindPFlag("general.concurrency", pflags.Lookup("concurrency"))
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("debug"))
config.LuetCfg.Viper.BindPFlag("general.fatal_warnings", pflags.Lookup("fatal"))
config.LuetCfg.Viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
// Currently I maintain this only from the CLI.
config.LuetCfg.Viper.BindPFlag("no_spinner", pflags.Lookup("no-spinner"))
config.LuetCfg.Viper.BindPFlag("config_protect_skip", pflags.Lookup("skip-config-protect"))
// Extensions must be binaries with the "luet-" prefix to be shown in the help.
// We also accept extensions in the relative path where luet is started, "extensions/"
exts := extensions.Discover("luet", "extensions")
for _, ex := range exts {
cobraCmd := ex.CobraCommand()
RootCmd.AddCommand(cobraCmd)
}
}
// initConfig reads in config file and ENV variables if set.
func initConfig() {
// Luet support these priorities on read configuration file:
// - command line option (if available)
// - $PWD/.luet.yaml
// - $HOME/.luet.yaml
// - /etc/luet/luet.yaml
//
// Note: currently a single viper instance supports only one config name.
dir, err := filepath.Abs(filepath.Dir(os.Args[0]))
if err != nil {
Error(err)
os.Exit(1)
}
viper.SetEnvPrefix(LuetEnvPrefix)
viper.SetConfigType("yaml")
viper.SetConfigName(".luet") // name of config file (without extension)
if cfgFile != "" { // enable ability to specify config file via flag
viper.SetConfigFile(cfgFile)
} else {
viper.AddConfigPath(dir)
viper.AddConfigPath(".")
viper.AddConfigPath("$HOME")
viper.AddConfigPath("/etc/luet")
// Retrieve the current working directory
pwdDir, err := os.Getwd()
if err != nil {
Error(err)
os.Exit(1)
}
homeDir := helpers.GetHomeDir()
if helpers.Exists(filepath.Join(pwdDir, ".luet.yaml")) || (homeDir != "" && helpers.Exists(filepath.Join(homeDir, ".luet.yaml"))) {
viper.AddConfigPath(".")
if homeDir != "" {
viper.AddConfigPath(homeDir)
}
viper.SetConfigName(".luet")
} else {
viper.SetConfigName("luet")
viper.AddConfigPath("/etc/luet")
}
}
viper.AutomaticEnv() // read in environment variables that match
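The lookup order implemented by `initConfig` above — explicit `--config` flag first, then `$PWD/.luet.yaml`, then `$HOME/.luet.yaml`, then `/etc/luet/luet.yaml` — can be sketched with plain stdlib calls. This is a minimal illustration, not the viper-based implementation; `resolveConfig` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolveConfig mirrors the priority used by initConfig: an explicit
// config flag wins unconditionally, otherwise the first existing
// candidate path is used. Returns "" when nothing is found.
func resolveConfig(cfgFile, pwd, home string) string {
	if cfgFile != "" {
		return cfgFile
	}
	candidates := []string{
		filepath.Join(pwd, ".luet.yaml"),
		filepath.Join(home, ".luet.yaml"),
		"/etc/luet/luet.yaml",
	}
	for _, c := range candidates {
		if _, err := os.Stat(c); err == nil {
			return c
		}
	}
	return ""
}

func main() {
	// An explicit flag always wins, even if the file does not exist yet.
	fmt.Println(resolveConfig("/tmp/custom.yaml", "/work", "/root"))
}
```

Note that viper additionally layers environment variables (via `AutomaticEnv` with the `LUET` prefix) on top of whichever file is selected.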


@@ -15,21 +15,35 @@
package cmd
import (
"fmt"
"os"
"path/filepath"
"github.com/ghodss/yaml"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
type PackageResult struct {
Name string `json:"name"`
Category string `json:"category"`
Version string `json:"version"`
Repository string `json:"repository"`
Hidden bool `json:"hidden"`
}
type Results struct {
Packages []PackageResult `json:"packages"`
}
var searchCmd = &cobra.Command{
Use: "search <term>",
Short: "Search packages",
Long: `Search for installed and available packages`,
Use: "search <term>",
Short: "Search packages",
Long: `Search for installed and available packages`,
Aliases: []string{"s"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
@@ -41,10 +55,13 @@ var searchCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
var results Results
if len(args) != 1 {
Fatal("Wrong number of arguments (expected 1)")
}
hidden, _ := cmd.Flags().GetBool("hidden")
installed := LuetCfg.Viper.GetBool("installed")
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
@@ -52,6 +69,12 @@ var searchCmd = &cobra.Command{
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
searchWithLabel, _ := cmd.Flags().GetBool("by-label")
searchWithLabelMatch, _ := cmd.Flags().GetBool("by-label-regex")
revdeps, _ := cmd.Flags().GetBool("revdeps")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
LuetCfg.GetLogging().SetLogLevel("error")
}
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -83,7 +106,7 @@ var searchCmd = &cobra.Command{
Fatal("Error: " + err.Error())
}
Info("--- Search results: ---")
Info("--- Search results (" + args[0] + "): ---")
matches := []installer.PackageMatch{}
if searchWithLabel {
@@ -94,8 +117,34 @@ var searchCmd = &cobra.Command{
matches = synced.Search(args[0])
}
for _, m := range matches {
Info(":package:", m.Package.GetCategory(), m.Package.GetName(),
m.Package.GetVersion(), "repository:", m.Repo.GetName())
if !revdeps {
if !m.Package.IsHidden() || m.Package.IsHidden() && hidden {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", m.Package.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: m.Package.GetName(),
Version: m.Package.GetVersion(),
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
})
}
} else {
visited := make(map[string]interface{})
for _, revdep := range m.Package.ExpandedRevdeps(m.Repo.GetTree().GetDatabase(), visited) {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", revdep.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: revdep.IsHidden(),
})
}
}
}
}
} else {
@@ -108,7 +157,7 @@ var searchCmd = &cobra.Command{
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
var err error
iMatches := []pkg.Package{}
iMatches := pkg.Packages{}
if searchWithLabel {
iMatches, err = system.Database.FindPackageLabel(args[0])
} else if searchWithLabelMatch {
@@ -122,9 +171,53 @@ var searchCmd = &cobra.Command{
}
for _, pack := range iMatches {
Info(":package:", pack.GetCategory(), pack.GetName(), pack.GetVersion())
}
if !revdeps {
if !pack.IsHidden() || pack.IsHidden() && hidden {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: pack.GetName(),
Version: pack.GetVersion(),
Category: pack.GetCategory(),
Repository: "system",
Hidden: pack.IsHidden(),
})
}
} else {
visited := make(map[string]interface{})
for _, revdep := range pack.ExpandedRevdeps(system.Database, visited) {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: "system",
Hidden: revdep.IsHidden(),
})
}
}
}
}
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
switch out {
case "yaml":
fmt.Println(string(y))
case "json":
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
}
},
@@ -139,10 +232,14 @@ func init() {
searchCmd.Flags().String("system-target", path, "System rootpath")
searchCmd.Flags().Bool("installed", false, "Search between system packages")
searchCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
searchCmd.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
searchCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
searchCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
searchCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
searchCmd.Flags().Bool("by-label", false, "Search packages through label")
searchCmd.Flags().Bool("by-label-regex", false, "Search packages through label regex")
searchCmd.Flags().Bool("revdeps", false, "Search package reverse dependencies")
searchCmd.Flags().Bool("hidden", false, "Include hidden packages")
RootCmd.AddCommand(searchCmd)
}


@@ -18,11 +18,11 @@ package cmd_tree
import (
"fmt"
//"os"
//"sort"
. "github.com/mudler/luet/pkg/logger"
spectooling "github.com/mudler/luet/pkg/spectooling"
tree "github.com/mudler/luet/pkg/tree"
version "github.com/mudler/luet/pkg/versioner"
"github.com/spf13/cobra"
)
@@ -41,27 +41,48 @@ func NewTreeBumpCommand() *cobra.Command {
},
Run: func(cmd *cobra.Command, args []string) {
spec, _ := cmd.Flags().GetString("definition-file")
toStdout, _ := cmd.Flags().GetBool("to-stdout")
pkgVersion, _ := cmd.Flags().GetString("pkg-version")
pack, err := tree.ReadDefinitionFile(spec)
if err != nil {
Fatal(err.Error())
}
// Retrieve version build section with Gentoo parser
err = pack.BumpBuildVersion()
if err != nil {
Fatal("Error on increment build version: " + err.Error())
if pkgVersion != "" {
validator := &version.WrappedVersioner{}
err := validator.Validate(pkgVersion)
if err != nil {
Fatal("Invalid version string: " + err.Error())
}
pack.SetVersion(pkgVersion)
} else {
// Retrieve version build section with Gentoo parser
err = pack.BumpBuildVersion()
if err != nil {
Fatal("Error on increment build version: " + err.Error())
}
}
if toStdout {
data, err := spectooling.NewDefaultPackageSanitized(&pack).Yaml()
if err != nil {
Fatal("Error on yaml conversion: " + err.Error())
}
fmt.Println(string(data))
} else {
err = tree.WriteDefinitionFile(&pack, spec)
if err != nil {
Fatal("Error on write definition file: " + err.Error())
err = tree.WriteDefinitionFile(&pack, spec)
if err != nil {
Fatal("Error on write definition file: " + err.Error())
}
fmt.Printf("Bumped package %s/%s-%s.\n", pack.Category, pack.Name, pack.Version)
}
fmt.Printf("Bumped package %s/%s-%s.\n", pack.Category, pack.Name, pack.Version)
},
}
ans.Flags().StringP("pkg-version", "p", "", "Set a specific package version")
ans.Flags().StringP("definition-file", "f", "", "Path of the definition to bump.")
ans.Flags().BoolP("to-stdout", "o", false, "Bump package to output.")
return ans
}


@@ -21,14 +21,28 @@ import (
"sort"
//. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
"github.com/ghodss/yaml"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
)
type TreePackageResult struct {
Name string `json:"name"`
Category string `json:"category"`
Version string `json:"version"`
Path string `json:"path"`
}
type TreeResults struct {
Packages []TreePackageResult `json:"packages"`
}
func pkgDetail(pkg pkg.Package) string {
ans := fmt.Sprintf(`
@@ Package: %s/%s-%s
@@ -55,22 +69,57 @@ func NewTreePkglistCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "pkglist [OPTIONS]",
Short: "List of the packages found in tree.",
Args: cobra.OnlyValidArgs,
Args: cobra.NoArgs,
PreRun: func(cmd *cobra.Command, args []string) {
t, _ := cmd.Flags().GetString("tree")
if t == "" {
t, _ := cmd.Flags().GetStringArray("tree")
if len(t) == 0 {
Fatal("Mandatory tree param missing.")
}
revdeps, _ := cmd.Flags().GetBool("revdeps")
deps, _ := cmd.Flags().GetBool("deps")
if revdeps && deps {
Fatal("Both revdeps and deps options used. Choose only one.")
}
},
Run: func(cmd *cobra.Command, args []string) {
var results TreeResults
var depSolver solver.PackageSolver
treePath, _ := cmd.Flags().GetString("tree")
treePath, _ := cmd.Flags().GetStringArray("tree")
verbose, _ := cmd.Flags().GetBool("verbose")
buildtime, _ := cmd.Flags().GetBool("buildtime")
full, _ := cmd.Flags().GetBool("full")
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
err := reciper.Load(treePath)
if err != nil {
Fatal("Error on load tree ", err)
revdeps, _ := cmd.Flags().GetBool("revdeps")
deps, _ := cmd.Flags().GetBool("deps")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
LuetCfg.GetLogging().SetLogLevel("error")
}
var reciper tree.Builder
if buildtime {
reciper = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
} else {
reciper = tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
}
for _, t := range treePath {
err := reciper.Load(t)
if err != nil {
Fatal("Error on load tree ", err)
}
}
if deps {
emptyInstallationDb := pkg.NewInMemoryDatabase(false)
depSolver = solver.NewSolver(pkg.NewInMemoryDatabase(false),
reciper.GetDatabase(),
emptyInstallationDb)
}
regExcludes, err := helpers.CreateRegexArray(excludes)
@@ -89,7 +138,7 @@ func NewTreePkglistCommand() *cobra.Command {
if full {
pkgstr = pkgDetail(p)
} else if verbose {
pkgstr = fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(), p.GetVersion())
pkgstr = p.HumanReadableString()
} else {
pkgstr = fmt.Sprintf("%s/%s", p.GetCategory(), p.GetName())
}
@@ -117,20 +166,115 @@ func NewTreePkglistCommand() *cobra.Command {
}
if addPkg {
plist = append(plist, pkgstr)
if revdeps {
visited := make(map[string]interface{})
for _, revdep := range p.ExpandedRevdeps(reciper.GetDatabase(), visited) {
if full {
pkgstr = pkgDetail(revdep)
} else if verbose {
pkgstr = revdep.HumanReadableString()
} else {
pkgstr = fmt.Sprintf("%s/%s", revdep.GetCategory(), revdep.GetName())
}
plist = append(plist, pkgstr)
results.Packages = append(results.Packages, TreePackageResult{
Name: revdep.GetName(),
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Path: revdep.GetPath(),
})
}
} else if deps {
Spinner(32)
solution, err := depSolver.Install(pkg.Packages{p})
if err != nil {
Fatal(err.Error())
}
ass := solution.SearchByName(p.GetPackageName())
solution, err = solution.Order(reciper.GetDatabase(), ass.Package.GetFingerPrint())
if err != nil {
Fatal(err.Error())
}
SpinnerStop()
for _, pa := range solution {
if pa.Value {
// Exclude itself
if pa.Package.GetName() == p.GetName() && pa.Package.GetCategory() == p.GetCategory() {
continue
}
if full {
pkgstr = pkgDetail(pa.Package)
} else if verbose {
pkgstr = pa.Package.HumanReadableString()
} else {
pkgstr = fmt.Sprintf("%s/%s", pa.Package.GetCategory(), pa.Package.GetName())
}
plist = append(plist, pkgstr)
results.Packages = append(results.Packages, TreePackageResult{
Name: pa.Package.GetName(),
Version: pa.Package.GetVersion(),
Category: pa.Package.GetCategory(),
Path: pa.Package.GetPath(),
})
}
}
} else {
plist = append(plist, pkgstr)
results.Packages = append(results.Packages, TreePackageResult{
Name: p.GetName(),
Version: p.GetVersion(),
Category: p.GetCategory(),
Path: p.GetPath(),
})
}
}
}
sort.Strings(plist)
for _, p := range plist {
fmt.Println(p)
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
switch out {
case "yaml":
fmt.Println(string(y))
case "json":
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
default:
if !deps {
sort.Strings(plist)
}
for _, p := range plist {
fmt.Println(p)
}
}
},
}
ans.Flags().BoolP("buildtime", "b", false, "Build time match")
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().Bool("revdeps", false, "Search package reverse dependencies")
ans.Flags().Bool("deps", false, "Search package dependencies")
ans.Flags().BoolP("verbose", "v", false, "Add package version")
ans.Flags().BoolP("full", "f", false, "Show package detail")
ans.Flags().StringP("tree", "t", "", "Path of the tree to use.")
ans.Flags().StringArrayP("tree", "t", []string{}, "Path of the tree to use.")
ans.Flags().StringSliceVarP(&matches, "matches", "m", []string{},
"Include only matched packages from list. (Use string as regex).")
ans.Flags().StringSliceVarP(&excludes, "exclude", "e", []string{},


@@ -17,12 +17,16 @@
package cmd_tree
import (
"errors"
"fmt"
"os"
"regexp"
"sort"
"strconv"
"sync"
//. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
@@ -31,174 +35,432 @@ import (
"github.com/spf13/cobra"
)
type ValidateOpts struct {
WithSolver bool
OnlyRuntime bool
OnlyBuildtime bool
RegExcludes []*regexp.Regexp
RegMatches []*regexp.Regexp
Excludes []string
Matches []string
// Runtime validate stuff
RuntimeCacheDeps *pkg.InMemoryDatabase
RuntimeReciper *tree.InstallerRecipe
// Buildtime validate stuff
BuildtimeCacheDeps *pkg.InMemoryDatabase
BuildtimeReciper *tree.CompilerRecipe
Mutex sync.Mutex
BrokenPkgs int
BrokenDeps int
Errors []error
}
func (o *ValidateOpts) IncrBrokenPkgs() {
o.Mutex.Lock()
defer o.Mutex.Unlock()
o.BrokenPkgs++
}
func (o *ValidateOpts) IncrBrokenDeps() {
o.Mutex.Lock()
defer o.Mutex.Unlock()
o.BrokenDeps++
}
func (o *ValidateOpts) AddError(err error) {
o.Mutex.Lock()
defer o.Mutex.Unlock()
o.Errors = append(o.Errors, err)
}
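The mutex-guarded counters on `ValidateOpts` exist because validate workers run concurrently; a bare `int` incremented from several goroutines is a data race. A minimal reproduction of the pattern:

```go
package main

import (
	"fmt"
	"sync"
)

// counter mirrors the IncrBrokenPkgs/IncrBrokenDeps pattern: a plain
// int guarded by a mutex so concurrent workers can update it safely.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) Incr() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Incr()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 100
}
```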
func validatePackage(p pkg.Package, checkType string, opts *ValidateOpts, reciper tree.Builder, cacheDeps *pkg.InMemoryDatabase) error {
var errstr string
var ans error
var depSolver solver.PackageSolver
if opts.WithSolver {
emptyInstallationDb := pkg.NewInMemoryDatabase(false)
depSolver = solver.NewSolver(pkg.NewInMemoryDatabase(false),
reciper.GetDatabase(),
emptyInstallationDb)
}
found, err := reciper.GetDatabase().FindPackages(
&pkg.DefaultPackage{
Name: p.GetName(),
Category: p.GetCategory(),
Version: ">=0",
},
)
if err != nil || len(found) < 1 {
if err != nil {
errstr = err.Error()
} else {
errstr = "No packages"
}
Error(fmt.Sprintf("[%9s] %s/%s-%s: Broken. No versions could be found by database %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
errstr,
))
opts.IncrBrokenDeps()
return errors.New(
fmt.Sprintf("[%9s] %s/%s-%s: Broken. No versions could be found by database %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
errstr,
))
}
// Ensure that we use the right package from the right reciper for deps
pReciper, err := reciper.GetDatabase().FindPackage(
&pkg.DefaultPackage{
Name: p.GetName(),
Category: p.GetCategory(),
Version: p.GetVersion(),
},
)
if err != nil {
errstr = fmt.Sprintf("[%9s] %s/%s-%s: Error on retrieve package - %s.",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
err.Error(),
)
Error(errstr)
return errors.New(errstr)
}
p = pReciper
pkgstr := fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(),
p.GetVersion())
validpkg := true
if len(opts.Matches) > 0 {
matched := false
for _, rgx := range opts.RegMatches {
if rgx.MatchString(pkgstr) {
matched = true
break
}
}
if !matched {
return nil
}
}
if len(opts.Excludes) > 0 {
excluded := false
for _, rgx := range opts.RegExcludes {
if rgx.MatchString(pkgstr) {
excluded = true
break
}
}
if excluded {
return nil
}
}
Info(fmt.Sprintf("[%9s] Checking package ", checkType)+
fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(), p.GetVersion()),
"with", len(p.GetRequires()), "dependencies and", len(p.GetConflicts()), "conflicts.")
all := p.GetRequires()
all = append(all, p.GetConflicts()...)
for idx, r := range all {
var deps pkg.Packages
var err error
if r.IsSelector() {
deps, err = reciper.GetDatabase().FindPackages(
&pkg.DefaultPackage{
Name: r.GetName(),
Category: r.GetCategory(),
Version: r.GetVersion(),
},
)
} else {
deps = append(deps, r)
}
if err != nil || len(deps) < 1 {
if err != nil {
errstr = err.Error()
} else {
errstr = "No packages"
}
Error(fmt.Sprintf("[%9s] %s/%s-%s: Broken Dep %s/%s-%s - %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr,
))
opts.IncrBrokenDeps()
ans = errors.New(
fmt.Sprintf("[%9s] %s/%s-%s: Broken Dep %s/%s-%s - %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr))
validpkg = false
} else {
Debug(fmt.Sprintf("[%9s] Find packages for dep", checkType),
fmt.Sprintf("%s/%s-%s", r.GetCategory(), r.GetName(), r.GetVersion()))
if opts.WithSolver {
Info(fmt.Sprintf("[%9s] :soap: [%2d/%2d] %s/%s-%s: %s/%s-%s",
checkType,
idx+1, len(all),
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
))
// Check if the solver has already been run for the dep
_, err := cacheDeps.Get(r.HashFingerprint(""))
if err == nil {
Debug(fmt.Sprintf("[%9s] :direct_hit: Cache Hit for dep", checkType),
fmt.Sprintf("%s/%s-%s", r.GetCategory(), r.GetName(), r.GetVersion()))
continue
}
Spinner(32)
solution, err := depSolver.Install(pkg.Packages{r})
ass := solution.SearchByName(r.GetPackageName())
if err == nil {
_, err = solution.Order(reciper.GetDatabase(), ass.Package.GetFingerPrint())
}
SpinnerStop()
if err != nil {
Error(fmt.Sprintf("[%9s] %s/%s-%s: solver broken for dep %s/%s-%s - %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error(),
))
ans = errors.New(
fmt.Sprintf("[%9s] %s/%s-%s: solver broken for Dep %s/%s-%s - %s",
checkType,
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error()))
opts.IncrBrokenDeps()
validpkg = false
}
// Register the key
cacheDeps.Set(r.HashFingerprint(""), "1")
}
}
}
if !validpkg {
opts.IncrBrokenPkgs()
}
return ans
}
func validateWorker(i int,
wg *sync.WaitGroup,
c <-chan pkg.Package,
opts *ValidateOpts) {
defer wg.Done()
for p := range c {
if opts.OnlyBuildtime {
// Check buildtime compiler/deps
err := validatePackage(p, "buildtime", opts, opts.BuildtimeReciper, opts.BuildtimeCacheDeps)
if err != nil {
opts.AddError(err)
continue
}
} else if opts.OnlyRuntime {
// Check runtime installer/deps
err := validatePackage(p, "runtime", opts, opts.RuntimeReciper, opts.RuntimeCacheDeps)
if err != nil {
opts.AddError(err)
continue
}
} else {
// Check runtime installer/deps
err := validatePackage(p, "runtime", opts, opts.RuntimeReciper, opts.RuntimeCacheDeps)
if err != nil {
opts.AddError(err)
continue
}
// Check buildtime compiler/deps
err = validatePackage(p, "buildtime", opts, opts.BuildtimeReciper, opts.BuildtimeCacheDeps)
if err != nil {
opts.AddError(err)
}
}
}
}
func initOpts(opts *ValidateOpts, onlyRuntime, onlyBuildtime, withSolver bool, treePaths []string) {
var err error
opts.OnlyBuildtime = onlyBuildtime
opts.OnlyRuntime = onlyRuntime
opts.WithSolver = withSolver
opts.RuntimeReciper = nil
opts.BuildtimeReciper = nil
opts.BrokenPkgs = 0
opts.BrokenDeps = 0
if onlyBuildtime {
opts.BuildtimeReciper = (tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))).(*tree.CompilerRecipe)
} else if onlyRuntime {
opts.RuntimeReciper = (tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))).(*tree.InstallerRecipe)
} else {
opts.BuildtimeReciper = (tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))).(*tree.CompilerRecipe)
opts.RuntimeReciper = (tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))).(*tree.InstallerRecipe)
}
opts.RuntimeCacheDeps = pkg.NewInMemoryDatabase(false).(*pkg.InMemoryDatabase)
opts.BuildtimeCacheDeps = pkg.NewInMemoryDatabase(false).(*pkg.InMemoryDatabase)
for _, treePath := range treePaths {
Info(fmt.Sprintf("Loading :deciduous_tree: %s...", treePath))
if opts.BuildtimeReciper != nil {
err = opts.BuildtimeReciper.Load(treePath)
if err != nil {
Fatal("Error on load tree ", err)
}
}
if opts.RuntimeReciper != nil {
err = opts.RuntimeReciper.Load(treePath)
if err != nil {
Fatal("Error on load tree ", err)
}
}
}
opts.RegExcludes, err = helpers.CreateRegexArray(opts.Excludes)
if err != nil {
Fatal(err.Error())
}
opts.RegMatches, err = helpers.CreateRegexArray(opts.Matches)
if err != nil {
Fatal(err.Error())
}
}
func NewTreeValidateCommand() *cobra.Command {
var excludes []string
var matches []string
var treePaths []string
var opts ValidateOpts
var ans = &cobra.Command{
Use: "validate [OPTIONS]",
Short: "Validate a tree or a list of packages",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
onlyRuntime, _ := cmd.Flags().GetBool("only-runtime")
onlyBuildtime, _ := cmd.Flags().GetBool("only-buildtime")
if len(treePaths) < 1 {
Fatal("Mandatory tree param missing.")
}
if onlyRuntime && onlyBuildtime {
Fatal("The --only-runtime and --only-buildtime options cannot be used together.")
}
},
Run: func(cmd *cobra.Command, args []string) {
var depSolver solver.PackageSolver
var errstr string
var reciper tree.Builder
errors := make([]string, 0)
concurrency := LuetCfg.GetGeneral().Concurrency
brokenPkgs := 0
brokenDeps := 0
withSolver, _ := cmd.Flags().GetBool("with-solver")
onlyRuntime, _ := cmd.Flags().GetBool("only-runtime")
onlyBuildtime, _ := cmd.Flags().GetBool("only-buildtime")
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
for _, treePath := range treePaths {
err := reciper.Load(treePath)
if err != nil {
Fatal("Error on load tree ", err)
}
opts.Excludes = excludes
opts.Matches = matches
initOpts(&opts, onlyRuntime, onlyBuildtime, withSolver, treePaths)
// We need at least one valid reciper to get the list of packages.
if onlyBuildtime {
reciper = opts.BuildtimeReciper
} else {
reciper = opts.RuntimeReciper
}
emptyInstallationDb := pkg.NewInMemoryDatabase(false)
if withSolver {
depSolver = solver.NewSolver(pkg.NewInMemoryDatabase(false),
reciper.GetDatabase(),
emptyInstallationDb)
}
all := make(chan pkg.Package)
regExcludes, err := helpers.CreateRegexArray(excludes)
if err != nil {
Fatal(err.Error())
}
regMatches, err := helpers.CreateRegexArray(matches)
if err != nil {
Fatal(err.Error())
}
var wg = new(sync.WaitGroup)
for i := 0; i < concurrency; i++ {
wg.Add(1)
go validateWorker(i, wg, all, &opts)
}
for _, p := range reciper.GetDatabase().World() {
pkgstr := fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(),
p.GetVersion())
validpkg := true
if len(matches) > 0 {
matched := false
for _, rgx := range regMatches {
if rgx.MatchString(pkgstr) {
matched = true
break
}
}
if !matched {
continue
}
}
if len(excludes) > 0 {
excluded := false
for _, rgx := range regExcludes {
if rgx.MatchString(pkgstr) {
excluded = true
break
}
}
if excluded {
continue
}
}
Info("Checking package "+fmt.Sprintf("%s/%s-%s", p.GetCategory(), p.GetName(), p.GetVersion()), "with", len(p.GetRequires()), "dependencies.")
for _, r := range p.GetRequires() {
deps, err := reciper.GetDatabase().FindPackages(
&pkg.DefaultPackage{
Name: r.GetName(),
Category: r.GetCategory(),
Version: r.GetVersion(),
},
)
if err != nil || len(deps) < 1 {
if err != nil {
errstr = err.Error()
} else {
errstr = "No packages"
}
Error(fmt.Sprintf("%s/%s-%s: Broken Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr,
))
errors = append(errors,
fmt.Sprintf("%s/%s-%s: Broken Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
errstr))
brokenDeps++
validpkg = false
} else {
Debug("Find packages for dep",
fmt.Sprintf("%s/%s-%s", r.GetCategory(), r.GetName(), r.GetVersion()))
if withSolver {
Spinner(32)
_, err := depSolver.Install([]pkg.Package{r})
SpinnerStop()
if err != nil {
Error(fmt.Sprintf("%s/%s-%s: solver broken for dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error(),
))
errors = append(errors,
fmt.Sprintf("%s/%s-%s: solver broken for Dep %s/%s-%s - %s",
p.GetCategory(), p.GetName(), p.GetVersion(),
r.GetCategory(), r.GetName(), r.GetVersion(),
err.Error()))
brokenDeps++
validpkg = false
}
}
}
}
if !validpkg {
brokenPkgs++
}
all <- p
}
close(all)
sort.Strings(errors)
for _, e := range errors {
// Wait separately and once done close the channel
go func() {
wg.Wait()
}()
stringerrs := []string{}
for _, e := range opts.Errors {
stringerrs = append(stringerrs, e.Error())
}
sort.Strings(stringerrs)
for _, e := range stringerrs {
fmt.Println(e)
}
fmt.Println("Broken packages:", brokenPkgs, "(", brokenDeps, "deps ).")
if brokenPkgs > 0 {
os.Exit(1)
// fmt.Println("Broken packages:", brokenPkgs, "(", brokenDeps, "deps ).")
if len(stringerrs) != 0 {
Error(fmt.Sprintf("Found %d broken packages and %d broken deps.",
opts.BrokenPkgs, opts.BrokenDeps))
Fatal("Errors: " + strconv.Itoa(len(stringerrs)))
} else {
Info("All good! :white_check_mark:")
os.Exit(0)
}
},
}
ans.Flags().Bool("only-runtime", false, "Check only runtime dependencies.")
ans.Flags().Bool("only-buildtime", false, "Check only buildtime dependencies.")
ans.Flags().BoolP("with-solver", "s", false,
"Enable check of requires also with solver.")
ans.Flags().StringSliceVarP(&treePaths, "tree", "t", []string{},


@@ -18,8 +18,8 @@ import (
"os"
"path/filepath"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -28,9 +28,10 @@ import (
)
var uninstallCmd = &cobra.Command{
Use: "uninstall <pkg> <pkg2> ...",
Short: "Uninstall a package or a list of packages",
Long: `Uninstall packages`,
Use: "uninstall <pkg> <pkg2> ...",
Short: "Uninstall a package or a list of packages",
Long: `Uninstall packages`,
Aliases: []string{"rm", "un"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
@@ -56,7 +57,10 @@ var uninstallCmd = &cobra.Command{
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps := LuetCfg.Viper.GetBool("nodeps")
nodeps, _ := cmd.Flags().GetBool("nodeps")
full, _ := cmd.Flags().GetBool("full")
checkconflicts, _ := cmd.Flags().GetBool("conflictscheck")
fullClean, _ := cmd.Flags().GetBool("full-clean")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -66,10 +70,13 @@ var uninstallCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
FullUninstall: full,
FullCleanUninstall: fullClean,
CheckConflicts: checkconflicts,
})
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
@@ -98,8 +105,11 @@ func init() {
uninstallCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
uninstallCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
uninstallCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
uninstallCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
uninstallCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful! overrides checkconflicts and full!)")
uninstallCmd.Flags().Bool("force", false, "Force uninstall")
uninstallCmd.Flags().Bool("full", false, "Attempts to remove as many packages as possible that aren't required (slow)")
uninstallCmd.Flags().Bool("conflictscheck", true, "Check if the package marked for deletion is required by other packages")
uninstallCmd.Flags().Bool("full-clean", false, "(experimental) Uninstall packages and all the other deps/revdeps of it.")
RootCmd.AddCommand(uninstallCmd)
}


@@ -27,8 +27,9 @@ import (
)
var upgradeCmd = &cobra.Command{
Use: "upgrade",
Short: "Upgrades the system",
Use: "upgrade",
Short: "Upgrades the system",
Aliases: []string{"u"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", installCmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", installCmd.Flags().Lookup("system-target"))
@@ -57,6 +58,11 @@ var upgradeCmd = &cobra.Command{
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps, _ := cmd.Flags().GetBool("nodeps")
full, _ := cmd.Flags().GetBool("full")
universe, _ := cmd.Flags().GetBool("universe")
clean, _ := cmd.Flags().GetBool("clean")
sync, _ := cmd.Flags().GetBool("sync")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -66,9 +72,14 @@ var upgradeCmd = &cobra.Command{
Debug("Solver", LuetCfg.GetSolverOptions().String())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
Force: force,
FullUninstall: full,
NoDeps: nodeps,
SolverUpgrade: universe,
RemoveUnavailableOnUpgrade: clean,
UpgradeNewRevisions: sync,
})
inst.Repositories(repos)
_, err := inst.SyncRepositories(false)
@@ -102,6 +113,11 @@ func init() {
upgradeCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
upgradeCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
upgradeCmd.Flags().Bool("force", false, "Force upgrade by ignoring errors")
upgradeCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful! overrides checkconflicts and full!)")
upgradeCmd.Flags().Bool("full", true, "Attempts to remove as many packages as possible which aren't required (slow)")
upgradeCmd.Flags().Bool("universe", false, "Use ONLY the SAT solver to compute upgrades (experimental)")
upgradeCmd.Flags().Bool("clean", false, "Try to drop removed packages (experimental, only when --universe is enabled)")
upgradeCmd.Flags().Bool("sync", false, "Upgrade packages with new revisions (experimental)")
RootCmd.AddCommand(upgradeCmd)
}


@@ -0,0 +1,3 @@
name: "etc_conf"
dirs:
- "/etc/"


@@ -4,8 +4,11 @@
# Logging configuration section:
# ---------------------------------------------
# logging:
# # Enable logging to file (if path is not empty)
# enable_logfile: false
#
# Leave empty to skip logging to file.
# path: "/var/log/luet.log"
#
# Set logging level: error|warning|info|debug
# level: "info"
@@ -13,6 +16,12 @@
# Enable JSON log format instead of console mode.
# json_format: false.
#
# Disable/Enable color
# color: true
#
# Enable/Disable emoji
# enable_emoji: true
#
# ---------------------------------------------
# General configuration section:
# ---------------------------------------------
@@ -56,6 +65,10 @@
# The path is appended to the rootfs option path.
# database_path: "/var/cache/luet"
#
# Define the tmpdir base directory where luet stores temporary files.
# Default: $TMPDIR/tmpluet
# tmpdir_base: "/tmp/tmpluet"
#
# ---------------------------------------------
# Repositories configurations directories.
# ---------------------------------------------
@@ -66,7 +79,20 @@
# - /etc/luet/repos.conf.d
#
#
# ------------------------------------------------
# Config protect configuration files directories.
# -----------------------------------------------
# Define the list of directories from which to
# load configuration files containing the list
# of config protect paths.
# config_protect_confdir:
# - /etc/luet/config.protect.d
#
# Permit ignoring the rules defined in the
# config protect confdir and in package
# annotations.
# config_protect_skip: false
#
# System repositories
# ---------------------------------------------
# In alternative to define repositories files

go.mod (39 lines changed)

@@ -4,47 +4,46 @@ go 1.12
require (
github.com/DataDog/zstd v1.4.4 // indirect
github.com/MottainaiCI/simplestreams-builder v0.0.0-20190710131531-efb382161f56 // indirect
github.com/Sabayon/pkgs-checker v0.6.3-0.20200912135508-97c41780e9b6
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/briandowns/spinner v1.7.0
github.com/cavaliercoder/grab v2.0.0+incompatible
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3
github.com/docker/docker v17.12.0-ce-rc1.0.20200417035958-130b0bc6032c+incompatible
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/ghodss/yaml v1.0.0
github.com/go-yaml/yaml v2.1.0+incompatible // indirect
github.com/hashicorp/go-version v1.2.0
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
github.com/klauspost/pgzip v1.2.1
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d
github.com/kr/pretty v0.2.0 // indirect
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/mattn/go-isatty v0.0.10 // indirect
github.com/mattn/go-sqlite3 v2.0.3+incompatible // indirect
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 // indirect
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/onsi/ginkgo v1.12.1
github.com/onsi/gomega v1.10.0
github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
github.com/pkg/errors v0.9.1
github.com/spf13/cobra v1.0.0
github.com/spf13/viper v1.6.3
github.com/stevenle/topsort v0.0.0-20130922064739-8130c1d7596b
go.etcd.io/bbolt v1.3.4
go.uber.org/atomic v1.5.1 // indirect
go.uber.org/multierr v1.4.0 // indirect
go.uber.org/zap v1.13.0
gopkg.in/yaml.v2 v2.2.8
gotest.tools/v3 v3.0.2 // indirect
helm.sh/helm/v3 v3.3.4
mvdan.cc/sh/v3 v3.0.0-beta1
)
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200605210607-749178b8f80d+incompatible

go.sum (741 lines changed): diff suppressed because it is too large


@@ -20,6 +20,7 @@ import (
"fmt"
"os"
"os/exec"
"strings"
"syscall"
"github.com/pkg/errors"
@@ -33,23 +34,25 @@ type Box interface {
}
type DefaultBox struct {
Name string
Root string
Env []string
Cmd string
Args []string
HostMounts []string
Stdin, Stdout, Stderr bool
}
func NewBox(cmd string, args, hostmounts, env []string, rootfs string, stdin, stdout, stderr bool) Box {
return &DefaultBox{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
Cmd: cmd,
Args: args,
Root: rootfs,
HostMounts: hostmounts,
Env: env,
}
}
@@ -61,10 +64,25 @@ func (b *DefaultBox) Exec() error {
if err := mountDev(b.Root); err != nil {
return errors.Wrap(err, "Failed mounting dev on rootfs")
}
for _, hostMount := range b.HostMounts {
target := hostMount
if strings.Contains(hostMount, ":") {
dest := strings.Split(hostMount, ":")
if len(dest) != 2 {
return errors.New("Invalid arguments for mount, it can be: fullpath, or source:target")
}
hostMount = dest[0]
target = dest[1]
}
if err := mountBind(hostMount, b.Root, target); err != nil {
return errors.Wrap(err, fmt.Sprintf("Failed mounting %s on rootfs", hostMount))
}
}
if err := PivotRoot(b.Root); err != nil {
return errors.Wrap(err, "Failed switching pivot on rootfs")
}
cmd := exec.Command(b.Cmd, b.Args...)
if b.Stdin {
@@ -82,7 +100,7 @@ func (b *DefaultBox) Exec() error {
cmd.Env = b.Env
if err := cmd.Run(); err != nil {
return errors.Wrap(err, fmt.Sprintf("Error running the %s command in box.Exec", b.Cmd))
}
return nil
}
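`Run` re-executes luet inside the box and base64-encodes every argument (the `--decode` flag shown below signals the receiving side), so arbitrary bytes survive the re-exec. A standalone sketch of that round trip, with hypothetical helper names:

```go
package main

import (
	b64 "encoding/base64"
	"fmt"
)

// encodeArgs base64-encodes each argument so arbitrary bytes
// (spaces, quotes, newlines) survive the re-exec into the box.
func encodeArgs(args []string) []string {
	out := make([]string, 0, len(args))
	for _, a := range args {
		out = append(out, b64.StdEncoding.EncodeToString([]byte(a)))
	}
	return out
}

// decodeArgs reverses encodeArgs on the receiving side.
func decodeArgs(encoded []string) ([]string, error) {
	out := make([]string, 0, len(encoded))
	for _, e := range encoded {
		raw, err := b64.StdEncoding.DecodeString(e)
		if err != nil {
			return nil, err
		}
		out = append(out, string(raw))
	}
	return out, nil
}

func main() {
	enc := encodeArgs([]string{"-c", "echo 'hi there'"})
	dec, _ := decodeArgs(enc)
	fmt.Println(dec[1]) // echo 'hi there'
}
```

The encoding sidesteps any quoting or whitespace mangling between the outer and inner process.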
@@ -111,6 +129,16 @@ func (b *DefaultBox) Run() error {
// Encode the command in base64 to avoid bad input from the args given
execCmd = append(execCmd, "--decode")
for _, m := range b.HostMounts {
execCmd = append(execCmd, "--mount")
execCmd = append(execCmd, m)
}
for _, e := range b.Env {
execCmd = append(execCmd, "--env")
execCmd = append(execCmd, e)
}
for _, a := range b.Args {
execCmd = append(execCmd, b64.StdEncoding.EncodeToString([]byte(a)))
}
@@ -151,7 +179,7 @@ func (b *DefaultBox) Run() error {
}
if err := cmd.Run(); err != nil {
return errors.Wrap(err, "Failed running Box command in box.Run")
}
return nil
}
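The `--mount` values accepted above are either a bare path (mounted at the same path inside the box) or `source:target`. A minimal sketch of that parsing rule (helper name hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseMount splits a host-mount argument: a bare path mounts to the
// same path inside the box, "source:target" mounts source at target.
func parseMount(spec string) (source, target string, err error) {
	if !strings.Contains(spec, ":") {
		return spec, spec, nil
	}
	parts := strings.Split(spec, ":")
	if len(parts) != 2 {
		return "", "", errors.New("invalid mount: expected fullpath or source:target")
	}
	return parts[0], parts[1], nil
}

func main() {
	s, t, _ := parseMount("/srv/cache:/var/cache")
	fmt.Println(s, "->", t) // /srv/cache -> /var/cache
	s, t, _ = parseMount("/dev")
	fmt.Println(s, "->", t) // /dev -> /dev
}
```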


@@ -75,10 +75,9 @@ func mountProc(newroot string) error {
return nil
}
func mountDev(newroot string) error {
func mountBind(hostfolder, newroot, dst string) error {
source := hostfolder
target := filepath.Join(newroot, dst)
fstype := "bind"
data := ""
@@ -89,3 +88,7 @@ func mountDev(newroot string) error {
return nil
}
func mountDev(newroot string) error {
return mountBind("/dev", newroot, "/dev")
}


@@ -18,6 +18,8 @@ package compiler
import (
"archive/tar"
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
@@ -25,6 +27,7 @@ import (
"path/filepath"
"regexp"
system "github.com/docker/docker/pkg/system"
gzip "github.com/klauspost/pgzip"
//"strconv"
@@ -34,6 +37,7 @@ import (
. "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
"github.com/pkg/errors"
yaml "gopkg.in/yaml.v2"
@@ -60,6 +64,7 @@ func (i ArtifactIndex) CleanPath() ArtifactIndex {
Dependencies: art.Dependencies,
CompressionType: art.CompressionType,
Checksums: art.Checksums,
Files: art.Files,
})
}
return newIndex
@@ -77,6 +82,7 @@ type PackageArtifact struct {
Checksums Checksums `json:"checksums"`
SourceAssertion solver.PackagesAssertions `json:"-"`
CompressionType CompressionImplementation `json:"compressiontype"`
Files []string `json:"files"`
}
func NewPackageArtifact(path string) Artifact {
@@ -116,9 +122,19 @@ func (a *PackageArtifact) SetCompressionType(t CompressionImplementation) {
func (a *PackageArtifact) GetChecksums() Checksums {
return a.Checksums
}
func (a *PackageArtifact) SetChecksums(c Checksums) {
a.Checksums = c
}
func (a *PackageArtifact) SetFiles(f []string) {
a.Files = f
}
func (a *PackageArtifact) GetFiles() []string {
return a.Files
}
func (a *PackageArtifact) Hash() error {
return a.Checksums.Generate(a)
}
@@ -265,8 +281,97 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
return errors.New("Compression type must be supplied")
}
func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists, rename the incoming file with a numbered postfix.
var destPath string
// Read data. TODO: We need change archive callback to permit to return a Reader
buffer := bytes.Buffer{}
if content != nil {
if _, err := buffer.ReadFrom(content); err != nil {
return nil, nil, err
}
}
// If a file is not present in the archive but is defined in the mods,
// the callback is still invoked. Guard against a nil header.
if header != nil {
switch header.Typeflag {
case tar.TypeReg:
destPath = filepath.Join(dst, path)
default:
// Nothing to do. Return the original reader.
return header, buffer.Bytes(), nil
}
// Check if exists
if helpers.Exists(destPath) {
for i := 1; i < 1000; i++ {
name := filepath.Join(filepath.Dir(path),
fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path)))
if helpers.Exists(filepath.Join(dst, name)) {
continue
}
Info(fmt.Sprintf("Found protected file %s. Creating %s.", destPath,
filepath.Join(dst, name)))
return &tar.Header{
Mode: header.Mode,
Typeflag: header.Typeflag,
PAXRecords: header.PAXRecords,
Name: name,
}, buffer.Bytes(), nil
}
}
}
return header, buffer.Bytes(), nil
}
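The renaming loop above emits Gentoo-style `._cfgNNNN_<name>` siblings until it finds a free slot. The scheme can be isolated like this (helper and existence predicate are hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// protectedName returns the first ._cfgNNNN_-prefixed sibling of path
// that the exists predicate reports as free, mirroring the config
// protect renaming scheme. Falls back to the original path after 999
// candidates, like the loop bound in the diff.
func protectedName(path string, exists func(string) bool) string {
	for i := 1; i < 1000; i++ {
		name := filepath.Join(filepath.Dir(path),
			fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path)))
		if exists(name) {
			continue
		}
		return name
	}
	return path
}

func main() {
	taken := map[string]bool{"etc/._cfg0001_hosts": true}
	fmt.Println(protectedName("etc/hosts", func(s string) bool { return taken[s] }))
	// etc/._cfg0002_hosts
}
```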
func (a *PackageArtifact) GetProtectFiles() []string {
ans := []string{}
if !LuetCfg.ConfigProtectSkip &&
LuetCfg.GetConfigProtectConfFiles() != nil &&
len(LuetCfg.GetConfigProtectConfFiles()) > 0 {
for _, file := range a.Files {
for _, conf := range LuetCfg.GetConfigProtectConfFiles() {
for _, dir := range conf.Directories {
// Note: file paths come without a leading /.
if strings.HasPrefix("/"+file, filepath.Clean(dir)) {
// The docker archive modifier works with paths without a leading /.
ans = append(ans, file)
goto nextFile
}
}
}
if a.CompileSpec.GetPackage().HasAnnotation(string(pkg.ConfigProtectAnnnotation)) {
dir, ok := a.CompileSpec.GetPackage().GetAnnotations()[string(pkg.ConfigProtectAnnnotation)]
if ok {
if strings.HasPrefix("/"+file, filepath.Clean(dir)) {
ans = append(ans, file)
goto nextFile
}
}
}
nextFile:
}
}
return ans
}
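`GetProtectFiles` matches archive paths (stored without a leading slash) against configured directories by prefixing `/` and comparing with the cleaned directory. A sketch of that check in isolation (helper name hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isProtected reports whether an archive-relative path falls under
// any configured protect directory, using the same "/"+file prefix
// trick as the diff above.
func isProtected(file string, dirs []string) bool {
	for _, dir := range dirs {
		// Clean normalizes "/etc/" to "/etc" before comparison.
		if strings.HasPrefix("/"+file, filepath.Clean(dir)) {
			return true
		}
	}
	return false
}

func main() {
	dirs := []string{"/etc/"}
	fmt.Println(isProtected("etc/hosts", dirs))    // true
	fmt.Println(isProtected("usr/bin/luet", dirs)) // false
}
```

Note that plain prefix matching also catches sibling paths such as `/etcetera`; a stricter variant would compare path components.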
// Unpack Untar and decompress (TODO) to the given path
func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
protectedFiles := a.GetProtectFiles()
tarModifier := helpers.NewTarModifierWrapper(dst, tarModifierWrapperFunc)
switch a.CompressionType {
case GZip:
// Create the uncompressed archive
@@ -295,19 +400,21 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
return errors.Wrap(err, "Cannot copy to "+a.GetPath()+".uncompressed")
}
err = helpers.UntarProtect(a.GetPath()+".uncompressed", dst,
LuetCfg.GetGeneral().SameOwner, protectedFiles, tarModifier)
if err != nil {
return err
}
return nil
// Defaults to tar only (covers when "none" is supplied)
default:
return helpers.UntarProtect(a.GetPath(), dst, LuetCfg.GetGeneral().SameOwner,
protectedFiles, tarModifier)
}
return errors.New("Compression type must be supplied")
}
// FileList generates the list of files of a package from the local archive
func (a *PackageArtifact) FileList() ([]string, error) {
var tr *tar.Reader
switch a.CompressionType {
@@ -373,6 +480,27 @@ type CopyJob struct {
Artifact string
}
func copyXattr(srcPath, dstPath, attr string) error {
data, err := system.Lgetxattr(srcPath, attr)
if err != nil {
return err
}
if data != nil {
if err := system.Lsetxattr(dstPath, attr, data, 0); err != nil {
return err
}
}
return nil
}
func doCopyXattrs(srcPath, dstPath string) error {
if err := copyXattr(srcPath, dstPath, "security.capability"); err != nil {
return err
}
return copyXattr(srcPath, dstPath, "trusted.overlay.opaque")
}
func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
defer wg.Done()
@@ -386,10 +514,13 @@ func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
// continue
// }
_, err := os.Lstat(job.Dst)
if err != nil {
Debug("Copying ", job.Src)
if err := helpers.CopyFile(job.Src, job.Dst); err != nil {
Warning("Error copying", job, err)
}
doCopyXattrs(job.Src, job.Dst)
}
}
}
@@ -397,14 +528,14 @@ func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
// ExtractArtifactFromDelta extracts deltas from ArtifactLayer from an image in tar format
func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, t CompressionImplementation) (Artifact, error) {
archive, err := LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for archive")
}
defer os.RemoveAll(archive) // clean up
if strings.HasSuffix(src, ".tar") {
rootfs, err := LuetCfg.GetSystem().TempDir("rootfs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
@@ -446,19 +577,33 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
}
}
}
for _, a := range l.Diffs.Changes {
Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
Debug("File ", a.Name, " deleted")
}
}
} else {
// Otherwise just grab all
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
for _, a := range l.Diffs.Additions {
Debug("File ", a.Name, " added")
toCopy <- CopyJob{Src: filepath.Join(src, a.Name), Dst: filepath.Join(archive, a.Name), Artifact: a.Name}
}
for _, a := range l.Diffs.Changes {
Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
Debug("File ", a.Name, " deleted")
}
}
}
close(toCopy)
wg.Wait()
a := NewPackageArtifact(dst)
a.SetCompressionType(t)
err = a.Compress(archive, concurrency)


@@ -0,0 +1,199 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package backend
import (
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/pkg/errors"
)
// GenerateChanges generates changes between two images using a backend by leveraging export/extractrootfs methods
// example of json return: [
// {
// "Image1": "luet/base",
// "Image2": "alpine",
// "DiffType": "File",
// "Diff": {
// "Adds": null,
// "Dels": [
// {
// "Name": "/luetbuild",
// "Size": 5830706
// },
// {
// "Name": "/luetbuild/Dockerfile",
// "Size": 50
// },
// {
// "Name": "/luetbuild/output1",
// "Size": 5830656
// }
// ],
// "Mods": null
// }
// }
// ]
func GenerateChanges(b compiler.CompilerBackend, srcImage, dstImage string) ([]compiler.ArtifactLayer, error) {
res := compiler.ArtifactLayer{FromImage: srcImage, ToImage: dstImage}
tmpdiffs, err := config.LuetCfg.GetSystem().TempDir("extraction")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tmpdiffs) // clean up
srcRootFS, err := ioutil.TempDir(tmpdiffs, "src")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(srcRootFS) // clean up
dstRootFS, err := ioutil.TempDir(tmpdiffs, "dst")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(dstRootFS) // clean up
// Handle both files (.tar) and images. If a parameter begins with /, don't export the image
if !strings.HasPrefix(srcImage, "/") {
srcImageTar, err := ioutil.TempFile(tmpdiffs, "srctar")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.Remove(srcImageTar.Name()) // clean up
srcImageExport := compiler.CompilerBackendOptions{
ImageName: srcImage,
Destination: srcImageTar.Name(),
}
err = b.ExportImage(srcImageExport)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while exporting src image "+srcImage)
}
srcImage = srcImageTar.Name()
}
srcImageExtract := compiler.CompilerBackendOptions{
SourcePath: srcImage,
Destination: srcRootFS,
}
err = b.ExtractRootfs(srcImageExtract, false) // No need to keep permissions as we just collect file diffs
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+srcImage)
}
// Handle both files (.tar) and images. If a parameter begins with /, don't export the image
if !strings.HasPrefix(dstImage, "/") {
dstImageTar, err := ioutil.TempFile(tmpdiffs, "dsttar")
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.Remove(dstImageTar.Name()) // clean up
dstImageExport := compiler.CompilerBackendOptions{
ImageName: dstImage,
Destination: dstImageTar.Name(),
}
err = b.ExportImage(dstImageExport)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while exporting dst image "+dstImage)
}
dstImage = dstImageTar.Name()
}
dstImageExtract := compiler.CompilerBackendOptions{
SourcePath: dstImage,
Destination: dstRootFS,
}
err = b.ExtractRootfs(dstImageExtract, false)
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+dstImage)
}
// Get Additions/Changes. dst -> src
err = filepath.Walk(dstRootFS, func(path string, info os.FileInfo, err error) error {
if info.IsDir() {
return nil
}
realpath := strings.Replace(path, dstRootFS, "", -1)
fileInfo, err := os.Lstat(filepath.Join(srcRootFS, realpath))
if err == nil {
var sizeA, sizeB int64
sizeA = fileInfo.Size()
if s, err := os.Lstat(filepath.Join(dstRootFS, realpath)); err == nil {
sizeB = s.Size()
}
if sizeA != sizeB {
// fmt.Println("File changed", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Changes = append(res.Diffs.Changes, compiler.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
} else {
// fmt.Println("File already exists", path, filepath.Join(srcRootFS, realpath))
}
} else {
var sizeB int64
if s, err := os.Lstat(filepath.Join(dstRootFS, realpath)); err == nil {
sizeB = s.Size()
}
res.Diffs.Additions = append(res.Diffs.Additions, compiler.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
// fmt.Println("File created", path, filepath.Join(srcRootFS, realpath))
}
return nil
})
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image destination")
}
// Get deletions. src -> dst
err = filepath.Walk(srcRootFS, func(path string, info os.FileInfo, err error) error {
if info.IsDir() {
return nil
}
realpath := strings.Replace(path, srcRootFS, "", -1)
if _, err = os.Lstat(filepath.Join(dstRootFS, realpath)); err != nil {
// fmt.Println("File deleted", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Deletions = append(res.Diffs.Deletions, compiler.ArtifactNode{
Name: filepath.Join("/", realpath),
})
}
return nil
})
if err != nil {
return []compiler.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image source")
}
return []compiler.ArtifactLayer{res}, nil
}


@@ -0,0 +1,73 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package backend_test
import (
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Docker image diffs", func() {
var b compiler.CompilerBackend
BeforeEach(func() {
b = NewSimpleDockerBackend()
})
Context("Generate diffs from docker images", func() {
It("Detect no changes", func() {
err := b.DownloadImage(compiler.CompilerBackendOptions{
ImageName: "alpine:latest",
})
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(b, "alpine:latest", "alpine:latest")
Expect(err).ToNot(HaveOccurred())
Expect(len(layers)).To(Equal(1))
Expect(len(layers[0].Diffs.Additions)).To(Equal(0))
Expect(len(layers[0].Diffs.Changes)).To(Equal(0))
Expect(len(layers[0].Diffs.Deletions)).To(Equal(0))
})
It("Detects additions and changed files", func() {
err := b.DownloadImage(compiler.CompilerBackendOptions{
ImageName: "quay.io/mocaccino/micro",
})
Expect(err).ToNot(HaveOccurred())
err = b.DownloadImage(compiler.CompilerBackendOptions{
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(b, "quay.io/mocaccino/micro", "quay.io/mocaccino/extra")
Expect(err).ToNot(HaveOccurred())
Expect(len(layers)).To(Equal(1))
Expect(len(layers[0].Diffs.Changes) > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Changes[0].Name) > 0).To(BeTrue())
Expect(layers[0].Diffs.Changes[0].Size > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Additions) > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Additions[0].Name) > 0).To(BeTrue())
Expect(layers[0].Diffs.Additions[0].Size > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Deletions)).To(Equal(0))
})
})
})


@@ -89,6 +89,19 @@ func (*SimpleDocker) DownloadImage(opts compiler.CompilerBackendOptions) error {
return nil
}
func (*SimpleDocker) ImageExists(imagename string) bool {
buildarg := []string{"inspect", "--type=image", imagename}
Debug(":whale: Checking existence of docker image: " + imagename)
cmd := exec.Command("docker", buildarg...)
out, err := cmd.CombinedOutput()
if err != nil {
Warning("Image not present")
Debug(string(out))
return false
}
return true
}
func (*SimpleDocker) RemoveImage(opts compiler.CompilerBackendOptions) error {
name := opts.ImageName
buildarg := []string{"rmi", name}
@@ -208,58 +221,11 @@ func (*SimpleDocker) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPer
return nil
}
// Changes retrieves changes between image layers
func (d *SimpleDocker) Changes(fromImage, toImage string) ([]compiler.ArtifactLayer, error) {
diffs, err := GenerateChanges(d, fromImage, toImage)
if config.LuetCfg.GetGeneral().Debug {
summary := compiler.ComputeArtifactLayerSummary(diffs)
for _, l := range summary.Layers {
Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",
@@ -270,5 +236,5 @@ func (*SimpleDocker) Changes(fromImage, toImage string) ([]compiler.ArtifactLaye
}
}
return diffs, err
}


@@ -97,6 +97,13 @@ func (*SimpleImg) CopyImage(src, dst string) error {
return nil
}
func (*SimpleImg) ImageExists(imagename string) bool {
// NOOP: not implemented
// TODO: Since img doesn't have an inspect command,
// we need to parse the ls output manually
return false
}
func (s *SimpleImg) ImageDefinitionToTar(opts compiler.CompilerBackendOptions) error {
if err := s.BuildImage(opts); err != nil {
return errors.Wrap(err, "Failed building image")
@@ -142,8 +149,8 @@ func (*SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms
func (i *SimpleImg) Changes(fromImage, toImage string) ([]compiler.ArtifactLayer, error) {
return GenerateChanges(i, fromImage, toImage)
}
func (*SimpleImg) Push(opts compiler.CompilerBackendOptions) error {


@@ -20,9 +20,13 @@ import (
"io/ioutil"
"os"
"path/filepath"
"github.com/ghodss/yaml"
"regexp"
"strings"
"sync"
"time"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
@@ -33,6 +37,7 @@ import (
)
const BuildFile = "build.yaml"
const DefinitionFile = "definition.yaml"
type LuetCompiler struct {
*tree.CompilerRecipe
@@ -228,14 +233,38 @@ func (cs *LuetCompiler) stripIncludesFromRootfs(includes []string, rootfs string
return nil
}
func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage string, concurrency int, keepPermissions, keepImg bool, p CompilationSpec, generateArtifact bool) (Artifact, error) {
pkgTag := ":package: " + p.GetPackage().GetName()
// Use packageImage as a salt for the fingerprint being used,
// so the hash is unique also in cases where
// some package deps have completely different
// depgraphs
// TODO: As the salt contains the packageImage (in registry/organization/imagename:tag format),
// the image hashes break with registry mirrors.
// We should use the image tag, or pass the package assertion hash, which is unique
// and identifies the deptree of the package.
fp := p.GetPackage().HashFingerprint(packageImage)
if buildertaggedImage == "" {
buildertaggedImage = cs.ImageRepository + "-" + fp + "-builder"
Debug(pkgTag, "Creating intermediary image", buildertaggedImage, "from", image)
}
// TODO: Cleanup, not actually hit
if packageImage == "" {
packageImage = cs.ImageRepository + "-" + fp
}
if !cs.Clean {
if art, err := LoadArtifactFromYaml(p); err == nil {
exists := cs.Backend.ImageExists(buildertaggedImage) && cs.Backend.ImageExists(packageImage)
if art, err := LoadArtifactFromYaml(p); err == nil && (cs.Options.SkipIfMetadataExists || exists) {
Debug("Artifact reloaded. Skipping build")
return art, err
}
}
p.SetSeedImage(image) // In this case, we ignore the build deps as we suppose that the image has them - otherwise we recompose the tree with a solver,
// and we build all the images first.
@@ -265,14 +294,6 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
}
}
Info(pkgTag, "Generating :whale: definition for builder image from", image)
// First we create the builder image
@@ -319,12 +340,6 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
Destination: p.Rel(p.GetPackage().GetFingerPrint() + ".image.tar"),
}
// if !keepPackageImg {
// err = cs.Backend.ImageDefinitionToTar(runnerOpts)
// if err != nil {
// return nil, errors.Wrap(err, "Could not export image to tar")
// }
// } else {
buildPackageImage := true
if cs.Options.PullFirst {
// Best-effort pull
@@ -353,17 +368,14 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
return nil, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
}
}
// }
var diffs []ArtifactLayer
var artifact Artifact
unpack := p.ImageUnpack()
if !p.ImageUnpack() {
// we have to get diffs only if spec is not unpacked
diffs, err = cs.Backend.Changes(p.Rel(p.GetPackage().GetFingerPrint()+"-builder.image.tar"), p.Rel(p.GetPackage().GetFingerPrint()+".image.tar"))
if err != nil {
return nil, errors.Wrap(err, "Could not generate changes from layers")
}
// If package_dir was specified in the spec, we want to treat the content of the directory
// as the root of our archive. ImageUnpack is implied to be true. override it
if p.GetPackageDir() != "" {
unpack = true
}
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
@@ -395,7 +407,15 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
}
}
if p.ImageUnpack() {
if !generateArtifact {
return &PackageArtifact{}, nil
}
if unpack {
if p.GetPackageDir() != "" {
Info(":tophat: Packing from output dir", p.GetPackageDir())
rootfs = filepath.Join(rootfs, p.GetPackageDir())
}
if len(p.GetIncludes()) > 0 {
// strip from includes
@@ -403,6 +423,7 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
}
artifact = NewPackageArtifact(p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar"))
artifact.SetCompressionType(cs.CompressionType)
err = artifact.Compress(rootfs, concurrency)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
@@ -411,7 +432,10 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
artifact.SetCompileSpec(p)
} else {
Info(pkgTag, "Generating delta")
diffs, err := cs.Backend.Changes(p.Rel(p.GetPackage().GetFingerPrint()+"-builder.image.tar"), p.Rel(p.GetPackage().GetFingerPrint()+".image.tar"))
if err != nil {
return nil, errors.Wrap(err, "Could not generate changes from layers")
}
artifact, err = ExtractArtifactFromDelta(rootfs, p.Rel(p.GetPackage().GetFingerPrint()+".package.tar"), diffs, concurrency, keepPermissions, p.GetIncludes(), cs.CompressionType)
if err != nil {
return nil, errors.Wrap(err, "Could not generate deltas")
@@ -420,33 +444,91 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
artifact.SetCompileSpec(p)
}
filelist, err := artifact.FileList()
if err != nil {
return artifact, errors.Wrap(err, "Failed getting package list")
}
artifact.SetFiles(filelist)
artifact.GetCompileSpec().GetPackage().SetBuildTimestamp(time.Now().String())
err = artifact.WriteYaml(p.GetOutputPath())
if err != nil {
return artifact, err
return artifact, errors.Wrap(err, "Failed while writing metadata file")
}
Info(pkgTag, " :white_check_mark: Done")
return artifact, nil
}
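The salted-fingerprint scheme described at the top of `compileWithImage` can be sketched in isolation. `hashFingerprint` below is a hypothetical stand-in for `HashFingerprint` using SHA-256; the real implementation may hash differently, but the point is the same: the same fingerprint with different salts yields different image tags.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashFingerprint mimics salting a package fingerprint with the
// target package image, so two packages with the same fingerprint
// but different dependency graphs get distinct image tags.
func hashFingerprint(fingerprint, salt string) string {
	h := sha256.Sum256([]byte(fingerprint + salt))
	return hex.EncodeToString(h[:])[:12] // short, tag-friendly hash
}

func main() {
	a := hashFingerprint("foo-cat-1.0", "repo/org/foo:hash1")
	b := hashFingerprint("foo-cat-1.0", "repo/org/foo:hash2")
	fmt.Println(a != b) // different salts yield different tags
}
```

This also illustrates the TODO above: because the salt embeds the full image reference, pulling through a mirror changes the salt and therefore the hash.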
func (cs *LuetCompiler) FromDatabase(db pkg.PackageDatabase, minimum bool, dst string) ([]CompilationSpec, error) {
compilerSpecs := NewLuetCompilationspecs()
w := db.World()
for _, p := range w {
spec, err := cs.FromPackage(p)
if err != nil {
return nil, err
}
if dst != "" {
spec.SetOutputPath(dst)
}
compilerSpecs.Add(spec)
}
switch minimum {
case true:
return cs.ComputeMinimumCompilableSet(compilerSpecs.Unique().All()...)
default:
return compilerSpecs.Unique().All(), nil
}
}
// ComputeMinimumCompilableSet strips specs that are eventually compiled by the leaves anyway
func (cs *LuetCompiler) ComputeMinimumCompilableSet(p ...CompilationSpec) ([]CompilationSpec, error) {
// Generate a set with all the deps of the provided specs
// we will use that set to remove the deps from the list of provided compilation specs
allDependencies := solver.PackagesAssertions{} // Get all packages that will be in deps
result := []CompilationSpec{}
for _, spec := range p {
ass, err := cs.ComputeDepTree(spec)
if err != nil {
return result, errors.Wrap(err, "computing specs deptree")
}
allDependencies = append(allDependencies, ass.Drop(spec.GetPackage())...)
}
for _, spec := range p {
if found := allDependencies.Search(spec.GetPackage().GetFingerPrint()); found == nil {
result = append(result, spec)
}
}
return result, nil
}
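The logic of `ComputeMinimumCompilableSet` reduces to a set difference: collect every dependency of every target, then keep only the targets that are nobody else's dependency. A stand-alone sketch with simplified string packages and a plain dependency map (the real code walks solver assertions instead):

```go
package main

import "fmt"

// minimumSet drops every target that already appears in the
// dependency closure of another target: compiling the remaining
// leaves compiles the dropped ones anyway.
func minimumSet(targets []string, deps map[string][]string) []string {
	covered := map[string]bool{}
	for _, t := range targets {
		for _, d := range deps[t] {
			if d != t {
				covered[d] = true
			}
		}
	}
	out := []string{}
	for _, t := range targets {
		if !covered[t] {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	// b depends on a, so building b already builds a.
	deps := map[string][]string{"b": {"a"}, "a": {}}
	fmt.Println(minimumSet([]string{"a", "b"}, deps)) // [b]
}
```

This mirrors the test below that loads two packages and expects a single spec (`b-test-1.0`) back from `FromDatabase` with `minimum` set to true.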
func (cs *LuetCompiler) ComputeDepTree(p CompilationSpec) (solver.PackagesAssertions, error) {
s := solver.NewResolver(pkg.NewInMemoryDatabase(false), cs.Database, pkg.NewInMemoryDatabase(false), cs.Options.SolverOptions.Resolver())
solution, err := s.Install([]pkg.Package{p.GetPackage()})
solution, err := s.Install(pkg.Packages{p.GetPackage()})
if err != nil {
return nil, errors.Wrap(err, "While computing a solution for "+p.GetPackage().HumanReadableString())
}
dependencies := solution.Order(cs.Database, p.GetPackage().GetFingerPrint())
dependencies, err := solution.Order(cs.Database, p.GetPackage().GetFingerPrint())
if err != nil {
return nil, errors.Wrap(err, "While ordering a solution for "+p.GetPackage().HumanReadableString())
}
assertions := solver.PackagesAssertions{}
for _, assertion := range dependencies { //highly dependent on the order
if assertion.Value {
nthsolution := dependencies.Cut(assertion.Package)
assertion.Hash = solver.PackageHash{
BuildHash: nthsolution.Drop(assertion.Package).AssertionHash(),
BuildHash: nthsolution.HashFrom(assertion.Package),
PackageHash: nthsolution.AssertionHash(),
}
assertions = append(assertions, assertion)
@@ -471,7 +553,8 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
if len(p.GetPackage().GetRequires()) == 0 && p.GetImage() == "" {
Error("Package with no deps and no seed image supplied, bailing out")
return nil, errors.New("Package " + p.GetPackage().GetFingerPrint() + "with no deps and no seed image supplied, bailing out")
return nil, errors.New("Package " + p.GetPackage().GetFingerPrint() +
" with no deps and no seed image supplied, bailing out")
}
targetAssertion := p.GetSourceAssertion().Search(p.GetPackage().GetFingerPrint())
@@ -480,7 +563,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
// - If image is set we just generate a plain dockerfile
// Treat last case (easier) first. The image is provided and we just compute a plain dockerfile with the images listed as above
if p.GetImage() != "" {
return cs.compileWithImage(p.GetImage(), "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p)
return cs.compileWithImage(p.GetImage(), "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p, true)
}
// - If image is not set, we read a base_image. Then we will build one image from it to kick-off our build based
@@ -494,6 +577,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
depsN := 0
currentN := 0
packageDeps := !cs.Options.PackageTargetOnly
if !cs.Options.NoDeps {
Info(":deciduous_tree: Build dependencies for " + p.GetPackage().HumanReadableString())
for _, assertion := range dependencies { //highly dependent on the order
@@ -519,7 +603,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
lastHash = currentPackageImageHash
if compileSpec.GetImage() != "" {
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from image")
artifact, err := cs.compileWithImage(compileSpec.GetImage(), buildImageHash, currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
artifact, err := cs.compileWithImage(compileSpec.GetImage(), buildImageHash, currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec, packageDeps)
if err != nil {
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
}
@@ -529,7 +613,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
}
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from tree")
artifact, err := cs.compileWithImage(buildImageHash, "", currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec)
artifact, err := cs.compileWithImage(buildImageHash, "", currentPackageImageHash, concurrency, keepPermissions, cs.KeepImg, compileSpec, packageDeps)
if err != nil {
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
// deperrs = append(deperrs, err)
@@ -545,7 +629,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
if !cs.Options.OnlyDeps {
Info(":package:", p.GetPackage().HumanReadableString(), ":cyclone: Building package target from:", lastHash)
artifact, err := cs.compileWithImage(lastHash, "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p)
artifact, err := cs.compileWithImage(lastHash, "", targetPackageHash, concurrency, keepPermissions, cs.KeepImg, p, true)
if err != nil {
return artifact, err
}
@@ -558,6 +642,8 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
}
}
type templatedata map[string]interface{}
func (cs *LuetCompiler) FromPackage(p pkg.Package) (CompilationSpec, error) {
pack, err := cs.Database.FindPackageCandidate(p)
@@ -569,12 +655,28 @@ func (cs *LuetCompiler) FromPackage(p pkg.Package) (CompilationSpec, error) {
if !helpers.Exists(buildFile) {
return nil, errors.New("No build file present for " + p.GetFingerPrint())
}
dat, err := ioutil.ReadFile(buildFile)
defFile := pack.Rel(DefinitionFile)
if !helpers.Exists(defFile) {
return nil, errors.New("No definition file present for " + p.GetFingerPrint())
}
def, err := ioutil.ReadFile(defFile)
if err != nil {
return nil, err
}
return NewLuetCompilationSpec(dat, pack)
build, err := ioutil.ReadFile(buildFile)
if err != nil {
return nil, err
}
var values templatedata
if err = yaml.Unmarshal(def, &values); err != nil {
return nil, err
}
out, err := helpers.RenderHelm(string(build), values)
if err != nil {
return nil, err
}
return NewLuetCompilationSpec([]byte(out), pack)
}
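The templated-packages support added here reads both `build.yaml` and `definition.yaml`, unmarshals the definition into a values map, and renders the build file as a template. The real code delegates to `helpers.RenderHelm` (Helm's template engine); the sketch below approximates the flow with the standard library's `text/template`, which is an assumption, not the actual rendering path.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderBuildFile renders a build.yaml template against values
// taken from definition.yaml, approximating helpers.RenderHelm
// with text/template. Helm templates reference values as
// .Values.<key>, so the values map is nested under "Values".
func renderBuildFile(build string, values map[string]interface{}) (string, error) {
	t, err := template.New("build").Parse(build)
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := t.Execute(&out, map[string]interface{}{"Values": values}); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	build := "image: \"b:{{.Values.foo}}\""
	out, err := renderBuildFile(build, map[string]interface{}{"foo": "bar"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // image: "b:bar"
}
```

The `b:bar` result matches what the "Templated packages" test below asserts via `spec.GetImage()`.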
func (cs *LuetCompiler) GetBackend() CompilerBackend {


@@ -108,6 +108,27 @@ var _ = Describe("Compiler", func() {
})
})
Context("Templated packages", func() {
It("Renders", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
tmpdir, err := ioutil.TempDir("", "package")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
err = generalRecipe.Load("../../tests/fixtures/templates")
Expect(err).ToNot(HaveOccurred())
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
pkg, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec, err := compiler.FromPackage(pkg)
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetImage()).To(Equal("b:bar"))
})
})
Context("Reconstruct image tree", func() {
It("Compiles it", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
@@ -615,6 +636,61 @@ var _ = Describe("Compiler", func() {
})
})
Context("Packages whose contents are a package folder", func() {
It("Compiles it in parallel", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/package_dir")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{
Name: "dironly",
Category: "test",
Version: "1.0",
})
Expect(err).ToNot(HaveOccurred())
spec2, err := compiler.FromPackage(&pkg.DefaultPackage{
Name: "dironly_filter",
Category: "test",
Version: "1.0",
})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
tmpdir2, err := ioutil.TempDir("", "tree2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir2) // clean up
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir2)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec, spec2))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(2))
Expect(len(artifacts[0].GetDependencies())).To(Equal(0))
Expect(helpers.Untar(spec.Rel("dironly-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test1"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test2"))).To(BeTrue())
Expect(helpers.Untar(spec2.Rel("dironly_filter-test-1.0.package.tar"), tmpdir2, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec2.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
Expect(helpers.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
})
})
Context("Compression", func() {
It("Builds packages in gzip", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
@@ -650,4 +726,62 @@ var _ = Describe("Compiler", func() {
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
})
})
Context("Compilation of whole tree", func() {
It("doesn't include dependencies that would be compiled anyway", func() {
// As some specs depend on each other, don't pull a spec in if it would
// be compiled eventually anyway
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/includeimage")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
specs, err := compiler.FromDatabase(generalRecipe.GetDatabase(), true, "")
Expect(err).ToNot(HaveOccurred())
Expect(len(specs)).To(Equal(1))
Expect(specs[0].GetPackage().GetFingerPrint()).To(Equal("b-test-1.0"))
})
})
Context("File list", func() {
It("is generated after the compilation process and annotated in the metadata", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err := generalRecipe.Load("../../tests/fixtures/packagelayers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
compiler.SetCompressionType(GZip)
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
artifacts, errs := compiler.CompileParallel(false, NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].GetDependencies())).To(Equal(1))
Expect(artifacts[0].GetFiles()).To(ContainElement("bin/busybox"))
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
art, err := LoadArtifactFromYaml(spec)
Expect(err).ToNot(HaveOccurred())
files := art.GetFiles()
Expect(files).To(ContainElement("bin/busybox"))
})
})
})


@@ -28,9 +28,10 @@ type Compiler interface {
CompileParallel(keepPermissions bool, ps CompilationSpecs) ([]Artifact, []error)
CompileWithReverseDeps(keepPermissions bool, ps CompilationSpecs) ([]Artifact, []error)
ComputeDepTree(p CompilationSpec) (solver.PackagesAssertions, error)
ComputeMinimumCompilableSet(p ...CompilationSpec) ([]CompilationSpec, error)
SetConcurrency(i int)
FromPackage(pkg.Package) (CompilationSpec, error)
FromDatabase(db pkg.PackageDatabase, minimum bool, dst string) ([]CompilationSpec, error)
SetBackend(CompilerBackend)
GetBackend() CompilerBackend
SetCompressionType(t CompressionImplementation)
@@ -51,9 +52,12 @@ type CompilerOptions struct {
Clean bool
KeepImageExport bool
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
OnlyDeps bool
NoDeps bool
SolverOptions config.LuetSolverOptions
SkipIfMetadataExists bool
PackageTargetOnly bool
}
func NewDefaultCompilerOptions() *CompilerOptions {
@@ -82,6 +86,8 @@ type CompilerBackend interface {
DownloadImage(opts CompilerBackendOptions) error
Push(opts CompilerBackendOptions) error
ImageExists(string) bool
}
type Artifact interface {
@@ -102,6 +108,9 @@ type Artifact interface {
Hash() error
Verify() error
SetFiles(f []string)
GetFiles() []string
GetChecksums() Checksums
SetChecksums(c Checksums)
}
@@ -165,6 +174,9 @@ type CompilationSpec interface {
GetRetrieve() []string
CopyRetrieves(dest string) error
SetPackageDir(string)
GetPackageDir() string
}
type CompilationSpecs interface {


@@ -95,6 +95,7 @@ type LuetCompilationSpec struct {
Seed string `json:"seed"`
Package *pkg.DefaultPackage `json:"package"`
SourceAssertion solver.PackagesAssertions `json:"-"`
PackageDir string `json:"package_dir" yaml:"package_dir"`
Retrieve []string `json:"retrieve"`
@@ -123,6 +124,14 @@ func (cs *LuetCompilationSpec) GetPackage() pkg.Package {
return cs.Package
}
func (cs *LuetCompilationSpec) GetPackageDir() string {
return cs.PackageDir
}
func (cs *LuetCompilationSpec) SetPackageDir(s string) {
cs.PackageDir = s
}
func (cs *LuetCompilationSpec) BuildSteps() []string {
return cs.Steps
}


@@ -27,7 +27,9 @@ import (
"strings"
"time"
"github.com/mudler/luet/pkg/helpers"
solver "github.com/mudler/luet/pkg/solver"
v "github.com/spf13/viper"
)
@@ -35,9 +37,20 @@ var LuetCfg = NewLuetConfig(v.GetViper())
var AvailableResolvers = strings.Join([]string{solver.QLearningResolverType}, " ")
type LuetLoggingConfig struct {
Path string `mapstructure:"path"`
JsonFormat bool `mapstructure:"json_format"`
Level string `mapstructure:"level"`
// Path of the logfile
Path string `mapstructure:"path"`
// Enable/Disable logging to file
EnableLogFile bool `mapstructure:"enable_logfile"`
// Enable JSON format logging in file
JsonFormat bool `mapstructure:"json_format"`
// Log level
Level string `mapstructure:"level"`
// Enable emoji
EnableEmoji bool `mapstructure:"enable_emoji"`
// Enable/Disable color in logging
Color bool `mapstructure:"color"`
}
type LuetGeneralConfig struct {
@@ -80,6 +93,7 @@ type LuetSystemConfig struct {
DatabasePath string `yaml:"database_path" mapstructure:"database_path"`
Rootfs string `yaml:"rootfs" mapstructure:"rootfs"`
PkgsCachePath string `yaml:"pkgs_cache_path" mapstructure:"pkgs_cache_path"`
TmpDirBase string `yaml:"tmpdir_base" mapstructure:"tmpdir_base"`
}
func (sc LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
@@ -186,9 +200,13 @@ type LuetConfig struct {
System LuetSystemConfig `mapstructure:"system"`
Solver LuetSolverOptions `mapstructure:"solver"`
RepositoriesConfDir []string `mapstructure:"repos_confdir"`
CacheRepositories []LuetRepository `mapstructure:"repetitors"`
SystemRepositories []LuetRepository `mapstructure:"repositories"`
RepositoriesConfDir []string `mapstructure:"repos_confdir"`
ConfigProtectConfDir []string `mapstructure:"config_protect_confdir"`
ConfigProtectSkip bool `mapstructure:"config_protect_skip"`
CacheRepositories []LuetRepository `mapstructure:"repetitors"`
SystemRepositories []LuetRepository `mapstructure:"repositories"`
ConfigProtectConfFiles []ConfigProtectConfFile
}
func NewLuetConfig(viper *v.Viper) *LuetConfig {
@@ -197,13 +215,16 @@ func NewLuetConfig(viper *v.Viper) *LuetConfig {
}
GenDefault(viper)
return &LuetConfig{Viper: viper}
return &LuetConfig{Viper: viper, ConfigProtectConfFiles: nil}
}
func GenDefault(viper *v.Viper) {
viper.SetDefault("logging.level", "info")
viper.SetDefault("logging.path", "")
viper.SetDefault("logging.enable_logfile", false)
viper.SetDefault("logging.path", "/var/log/luet.log")
viper.SetDefault("logging.json_format", false)
viper.SetDefault("logging.enable_emoji", true)
viper.SetDefault("logging.color", true)
viper.SetDefault("general.concurrency", runtime.NumCPU())
viper.SetDefault("general.debug", false)
@@ -212,8 +233,9 @@ func GenDefault(viper *v.Viper) {
viper.SetDefault("general.spinner_charset", 22)
viper.SetDefault("general.fatal_warnings", false)
u, _ := user.Current()
if u.Uid == "0" {
u, err := user.Current()
// os/user doesn't work in "FROM scratch" container environments
if err != nil || (u != nil && u.Uid == "0") {
viper.SetDefault("general.same_owner", true)
} else {
viper.SetDefault("general.same_owner", false)
@@ -222,9 +244,12 @@ func GenDefault(viper *v.Viper) {
viper.SetDefault("system.database_engine", "boltdb")
viper.SetDefault("system.database_path", "/var/cache/luet")
viper.SetDefault("system.rootfs", "/")
viper.SetDefault("system.tmpdir_base", filepath.Join(os.TempDir(), "tmpluet"))
viper.SetDefault("system.pkgs_cache_path", "packages")
viper.SetDefault("repos_confdir", []string{"/etc/luet/repos.conf.d"})
viper.SetDefault("config_protect_confdir", []string{"/etc/luet/config.protect.d"})
viper.SetDefault("config_protect_skip", false)
viper.SetDefault("cache_repositories", []string{})
viper.SetDefault("system_repositories", []string{})
@@ -254,6 +279,18 @@ func (c *LuetConfig) GetSolverOptions() *LuetSolverOptions {
return &c.Solver
}
func (c *LuetConfig) GetConfigProtectConfFiles() []ConfigProtectConfFile {
return c.ConfigProtectConfFiles
}
func (c *LuetConfig) AddConfigProtectConfFile(file *ConfigProtectConfFile) {
if c.ConfigProtectConfFiles == nil {
c.ConfigProtectConfFiles = []ConfigProtectConfFile{*file}
} else {
c.ConfigProtectConfFiles = append(c.ConfigProtectConfFiles, *file)
}
}
func (c *LuetConfig) GetSystemRepository(name string) (*LuetRepository, error) {
var ans *LuetRepository = nil
@@ -306,12 +343,20 @@ func (c *LuetGeneralConfig) GetSpinnerMs() time.Duration {
return duration
}
func (c *LuetLoggingConfig) SetLogLevel(s string) {
c.Level = s
}
func (c *LuetLoggingConfig) String() string {
ans := fmt.Sprintf(`
logging:
enable_logfile: %t
path: %s
json_format: %t
level: %s`, c.Path, c.JsonFormat, c.Level)
color: %t
enable_emoji: %t
level: %s`, c.EnableLogFile, c.Path, c.JsonFormat,
c.Color, c.EnableEmoji, c.Level)
return ans
}
@@ -322,8 +367,37 @@ system:
database_engine: %s
database_path: %s
pkgs_cache_path: %s
tmpdir_base: %s
rootfs: %s`,
c.DatabaseEngine, c.DatabasePath, c.PkgsCachePath, c.Rootfs)
c.DatabaseEngine, c.DatabasePath, c.PkgsCachePath,
c.TmpDirBase, c.Rootfs)
return ans
}
func (c *LuetSystemConfig) InitTmpDir() error {
if !helpers.Exists(c.TmpDirBase) {
return os.MkdirAll(c.TmpDirBase, os.ModePerm)
}
return nil
}
func (c *LuetSystemConfig) CleanupTmpDir() error {
return os.RemoveAll(c.TmpDirBase)
}
func (c *LuetSystemConfig) TempDir(pattern string) (string, error) {
err := c.InitTmpDir()
if err != nil {
return "", err
}
return ioutil.TempDir(c.TmpDirBase, pattern)
}
func (c *LuetSystemConfig) TempFile(pattern string) (*os.File, error) {
err := c.InitTmpDir()
if err != nil {
return nil, err
}
return ioutil.TempFile(c.TmpDirBase, pattern)
}


@@ -0,0 +1,41 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package config
import (
"fmt"
)
type ConfigProtectConfFile struct {
Filename string
Name string `mapstructure:"name" yaml:"name" json:"name"`
Directories []string `mapstructure:"dirs" yaml:"dirs" json:"dirs"`
}
func NewConfigProtectConfFile(filename string) *ConfigProtectConfFile {
return &ConfigProtectConfFile{
Filename: filename,
Name: "",
Directories: []string{},
}
}
func (c *ConfigProtectConfFile) String() string {
return fmt.Sprintf("[%s] filename: %s, dirs: %s", c.Name, c.Filename,
c.Directories)
}


@@ -0,0 +1,33 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package config_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestSolver(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Config Suite")
}

pkg/config/config_test.go (new file, 66 lines)

@@ -0,0 +1,66 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package config_test
import (
"os"
"path/filepath"
"strings"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Config", func() {
Context("Simple temporary directory creation", func() {
It("Create Temporary directory", func() {
// PRE: tmpdir_base contains default value.
tmpDir, err := config.LuetCfg.GetSystem().TempDir("test1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpDir, filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpDir)).To(BeTrue())
defer os.RemoveAll(tmpDir)
})
It("Create Temporary file", func() {
// PRE: tmpdir_base contains default value.
tmpFile, err := config.LuetCfg.GetSystem().TempFile("testfile1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpFile.Name(), filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(helpers.Exists(tmpFile.Name())).To(BeTrue())
defer os.Remove(tmpFile.Name())
})
It("Config1", func() {
cfg := config.LuetCfg
cfg.GetLogging().Color = false
Expect(cfg.GetLogging().Color).To(BeFalse())
})
})
})


@@ -17,6 +17,7 @@ package helpers
import (
"archive/tar"
"bytes"
"io"
"os"
"path/filepath"
@@ -49,6 +50,146 @@ func Tar(src, dest string) error {
return err
}
type TarModifierWrapperFunc func(path, dst string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error)
type TarModifierWrapper struct {
DestinationPath string
Modifier TarModifierWrapperFunc
}
func NewTarModifierWrapper(dst string, modifier TarModifierWrapperFunc) *TarModifierWrapper {
return &TarModifierWrapper{
DestinationPath: dst,
Modifier: modifier,
}
}
func (m *TarModifierWrapper) GetModifier() archive.TarModifierFunc {
return func(path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
return m.Modifier(m.DestinationPath, path, header, content)
}
}
func UntarProtect(src, dst string, sameOwner bool, protectedFiles []string, modifier *TarModifierWrapper) error {
var ans error
if len(protectedFiles) <= 0 {
return Untar(src, dst, sameOwner)
}
// POST: we have files to protect. Create a ReplaceFileTarWrapper.
in, err := os.Open(src)
if err != nil {
return err
}
defer in.Close()
// Create modifier map
mods := make(map[string]archive.TarModifierFunc)
for _, file := range protectedFiles {
mods[file] = modifier.GetModifier()
}
if sameOwner {
// PRE: we have root privileges.
replacerArchive := archive.ReplaceFileTarWrapper(in, mods)
opts := &archive.TarOptions{
// NOTE: NoLchown is used for the chown of symlinks.
// It probably needs to always be set to true.
NoLchown: true,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
ans = archive.Untar(replacerArchive, dst, opts)
} else {
ans = unTarIgnoreOwner(dst, in, mods)
}
return ans
}
func unTarIgnoreOwner(dest string, in io.ReadCloser, mods map[string]archive.TarModifierFunc) error {
tr := tar.NewReader(in)
for {
header, err := tr.Next()
var data []byte
var headerReplaced = false
switch {
case err == io.EOF:
goto tarEof
case err != nil:
return err
case header == nil:
continue
}
// the target location where the dir/file should be created
target := filepath.Join(dest, header.Name)
if mods != nil {
modifier, ok := mods[header.Name]
if ok {
header, data, err = modifier(header.Name, header, tr)
if err != nil {
return err
}
// Override target path
target = filepath.Join(dest, header.Name)
headerReplaced = true
}
}
// Check the file type
switch header.Typeflag {
// if its a dir and it doesn't exist create it
case tar.TypeDir:
if _, err := os.Stat(target); err != nil {
if err := os.MkdirAll(target, 0755); err != nil {
return err
}
}
// handle creation of file
case tar.TypeReg:
f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
if err != nil {
return err
}
// copy over contents
if headerReplaced {
_, err = io.Copy(f, bytes.NewReader(data))
} else {
_, err = io.Copy(f, tr)
}
if err != nil {
return err
}
// manually close here after each file operation; defering would cause each
// file close to wait until all operations have completed.
f.Close()
case tar.TypeSymlink:
source := header.Linkname
err := os.Symlink(source, target)
if err != nil {
return err
}
}
}
tarEof:
return nil
}
// Untar is just a wrapper around the docker archive functions
func Untar(src, dest string, sameOwner bool) error {
var ans error
@@ -67,65 +208,12 @@ func Untar(src, dest string, sameOwner bool) error {
// NOTE: NoLchown is used for the chown of symlinks.
// It probably needs to always be set to true.
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
ans = archive.Untar(in, dest, opts)
} else {
var fileReader io.ReadCloser = in
tr := tar.NewReader(fileReader)
for {
header, err := tr.Next()
switch {
case err == io.EOF:
goto tarEof
case err != nil:
return err
case header == nil:
continue
}
// the target location where the dir/file should be created
target := filepath.Join(dest, header.Name)
// Check the file type
switch header.Typeflag {
// if its a dir and it doesn't exist create it
case tar.TypeDir:
if _, err := os.Stat(target); err != nil {
if err := os.MkdirAll(target, 0755); err != nil {
return err
}
}
// handle creation of file
case tar.TypeReg:
f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
if err != nil {
return err
}
// copy over contents
if _, err := io.Copy(f, tr); err != nil {
return err
}
// manually close here after each file operation; defering would cause each
// file close to wait until all operations have completed.
f.Close()
case tar.TypeSymlink:
source := header.Linkname
err := os.Symlink(source, target)
if err != nil {
return err
}
}
}
tarEof:
ans = unTarIgnoreOwner(dest, in, nil)
}
return ans

pkg/helpers/archive_test.go Normal file

@@ -0,0 +1,134 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers_test
import (
"archive/tar"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"github.com/docker/docker/pkg/archive"
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
// Code from moby/moby pkg/archive/archive_test
func prepareUntarSourceDirectory(numberOfFiles int, targetPath string, makeLinks bool) (int, error) {
fileData := []byte("fooo")
for n := 0; n < numberOfFiles; n++ {
fileName := fmt.Sprintf("file-%d", n)
if err := ioutil.WriteFile(filepath.Join(targetPath, fileName), fileData, 0700); err != nil {
return 0, err
}
if makeLinks {
if err := os.Link(filepath.Join(targetPath, fileName), filepath.Join(targetPath, fileName+"-link")); err != nil {
return 0, err
}
}
}
totalSize := numberOfFiles * len(fileData)
return totalSize, nil
}
func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists, rename the target file with a ._cfgNNNN_ prefix.
var basePath string
// Read data. TODO: We need to change the archive callback to permit returning a Reader
buffer := bytes.Buffer{}
if content != nil {
if _, err := buffer.ReadFrom(content); err != nil {
return nil, nil, err
}
}
if header != nil {
switch header.Typeflag {
case tar.TypeReg:
basePath = filepath.Base(path)
default:
// Nothing to do; return the original header and content
return header, buffer.Bytes(), nil
}
if basePath == "file-0" {
name := filepath.Join(filepath.Dir(path), fmt.Sprintf("._cfg%04d_%s", 1, basePath))
return &tar.Header{
Mode: header.Mode,
Typeflag: header.Typeflag,
PAXRecords: header.PAXRecords,
Name: name,
}, buffer.Bytes(), nil
} else if basePath == "file-1" {
return header, []byte("newcontent"), nil
}
// else file not present
}
return header, buffer.Bytes(), nil
}
var _ = Describe("Helpers Archive", func() {
Context("Untar Protect", func() {
It("Detect existing and not-existing files", func() {
archiveSourceDir, err := ioutil.TempDir("", "archive-source")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(archiveSourceDir)
_, err = prepareUntarSourceDirectory(10, archiveSourceDir, false)
Expect(err).ToNot(HaveOccurred())
targetDir, err := ioutil.TempDir("", "archive-target")
Expect(err).ToNot(HaveOccurred())
// defer os.RemoveAll(targetDir)
sourceArchive, err := archive.TarWithOptions(archiveSourceDir, &archive.TarOptions{})
Expect(err).ToNot(HaveOccurred())
defer sourceArchive.Close()
tarModifier := NewTarModifierWrapper(targetDir, tarModifierWrapperFunc)
mods := make(map[string]archive.TarModifierFunc)
mods["file-0"] = tarModifier.GetModifier()
mods["file-1"] = tarModifier.GetModifier()
mods["file-9999"] = tarModifier.GetModifier()
replacerArchive := archive.ReplaceFileTarWrapper(sourceArchive, mods)
//replacerArchive := archive.ReplaceFileTarWrapper(sourceArchive, mods)
opts := &archive.TarOptions{
// NOTE: NoLchown boolean is used for chmod of the symlink
// Probably this needs to always be set to true.
NoLchown: true,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
err = archive.Untar(replacerArchive, targetDir, opts)
Expect(err).ToNot(HaveOccurred())
Expect(Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
})
})
})


@@ -19,6 +19,7 @@ import (
"io/ioutil"
"os"
"path/filepath"
"time"
copy "github.com/otiai10/copy"
)
@@ -39,6 +40,25 @@ func ListDir(dir string) ([]string, error) {
return content, err
}
// Touch creates an empty file
func Touch(f string) error {
_, err := os.Stat(f)
if os.IsNotExist(err) {
file, err := os.Create(f)
if err != nil {
return err
}
defer file.Close()
} else {
currentTime := time.Now().Local()
err = os.Chtimes(f, currentTime, currentTime)
if err != nil {
return err
}
}
return nil
}
// Exists reports whether the named file or directory exists.
func Exists(name string) bool {
if _, err := os.Stat(name); err != nil {
@@ -73,7 +93,7 @@ func ensureDir(fileName string) {
// of the source file. The file mode will be copied from the source and
// the copied data is synced/flushed to stable storage.
func CopyFile(src, dst string) (err error) {
return copy.Copy(src, dst)
return copy.Copy(src, dst, copy.Options{OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
}
func IsDirectory(path string) (bool, error) {
@@ -90,5 +110,5 @@ func IsDirectory(path string) (bool, error) {
func CopyDir(src string, dst string) (err error) {
src = filepath.Clean(src)
dst = filepath.Clean(dst)
return copy.Copy(src, dst)
return copy.Copy(src, dst, copy.Options{OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
}

pkg/helpers/helm.go Normal file

@@ -0,0 +1,34 @@
package helpers
import (
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chartutil"
"helm.sh/helm/v3/pkg/engine"
"github.com/pkg/errors"
)
// RenderHelm renders the template string with helm
func RenderHelm(template string, values map[string]interface{}) (string,error) {
c := &chart.Chart{
Metadata: &chart.Metadata{
Name: "",
Version: "",
},
Templates: []*chart.File{
{Name: "templates", Data: []byte(template)},
},
Values: map[string]interface{}{"Values":values},
}
v, err := chartutil.CoalesceValues(c, map[string]interface{}{})
if err != nil {
return "",errors.Wrap(err,"while rendering template")
}
out, err := engine.Render(c, v)
if err != nil {
return "",errors.Wrap(err,"while rendering template")
}
return out["templates"],nil
}

pkg/helpers/helm_test.go Normal file

@@ -0,0 +1,32 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers_test
import (
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Helpers", func() {
Context("RenderHelm", func() {
It("Renders templates", func() {
out, err := RenderHelm("{{.Values.Test}}",map[string]interface{}{"Test":"foo"})
Expect(err).ToNot(HaveOccurred())
Expect(out).To(Equal("foo"))
})
})
})

pkg/helpers/match.go Normal file

@@ -0,0 +1,49 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers
import (
"regexp"
)
func MapMatchRegex(m *map[string]string, r *regexp.Regexp) bool {
ans := false
if m != nil {
for k, v := range *m {
if r.MatchString(k + "=" + v) {
ans = true
break
}
}
}
return ans
}
func MapHasKey(m *map[string]string, label string) bool {
ans := false
if m != nil {
for k := range *m {
if k == label {
ans = true
break
}
}
}
return ans
}


@@ -16,7 +16,9 @@
package helpers
import (
"os"
"os/exec"
"os/user"
"syscall"
"github.com/pkg/errors"
@@ -30,3 +32,17 @@ func Exec(cmd string, args []string, env []string) error {
}
return syscall.Exec(path, args, env)
}
func GetHomeDir() (ans string) {
// os/user doesn't work in from-scratch environments
u, err := user.Current()
if err == nil {
ans = u.HomeDir
} else {
ans = ""
}
if os.Getenv("HOME") != "" {
ans = os.Getenv("HOME")
}
return ans
}


@@ -17,7 +17,6 @@ package client
import (
"fmt"
"io/ioutil"
"math"
"net/url"
"os"
@@ -79,7 +78,7 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
Info("Use artifact", artifactName, "from cache.")
} else {
temp, err = ioutil.TempDir(os.TempDir(), "tree")
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return nil, err
}
@@ -139,7 +138,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
ok := false
temp, err = ioutil.TempDir(os.TempDir(), "tree")
temp, err = config.LuetCfg.GetSystem().TempDir("tree")
if err != nil {
return "", err
}
@@ -148,7 +147,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
for _, uri := range c.RepoData.Urls {
file, err = ioutil.TempFile(os.TempDir(), "HttpClient")
file, err = config.LuetCfg.GetSystem().TempFile("HttpClient")
if err != nil {
continue
}


@@ -16,7 +16,6 @@
package client
import (
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -76,7 +75,7 @@ func (c *LocalClient) DownloadFile(name string) (string, error) {
ok := false
for _, uri := range c.RepoData.Urls {
Info("Downloading file", name, "from", uri)
file, err = ioutil.TempFile(os.TempDir(), "localclient")
file, err = config.LuetCfg.GetSystem().TempFile("localclient")
if err != nil {
continue
}


@@ -0,0 +1,86 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package installer
import (
"io/ioutil"
"path"
"regexp"
"github.com/ghodss/yaml"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
)
func LoadConfigProtectConfs(c *LuetConfig) error {
var regexConfs = regexp.MustCompile(`.yml$`)
for _, cdir := range c.ConfigProtectConfDir {
Debug("Parsing Config Protect Directory", cdir, "...")
files, err := ioutil.ReadDir(cdir)
if err != nil {
Debug("Skip dir", cdir, ":", err.Error())
continue
}
for _, file := range files {
if file.IsDir() {
continue
}
if !regexConfs.MatchString(file.Name()) {
Debug("File", file.Name(), "skipped.")
continue
}
content, err := ioutil.ReadFile(path.Join(cdir, file.Name()))
if err != nil {
Warning("On read file", file.Name(), ":", err.Error())
Warning("File", file.Name(), "skipped.")
continue
}
r, err := LoadConfigProtectConFile(file.Name(), content)
if err != nil {
Warning("On parse file", file.Name(), ":", err.Error())
Warning("File", file.Name(), "skipped.")
continue
}
if r.Name == "" || len(r.Directories) == 0 {
Warning("Invalid config protect file", file.Name())
Warning("File", file.Name(), "skipped.")
continue
}
c.AddConfigProtectConfFile(r)
}
}
return nil
}
func LoadConfigProtectConFile(filename string, data []byte) (*ConfigProtectConfFile, error) {
ans := NewConfigProtectConfFile(filename)
err := yaml.Unmarshal(data, &ans)
if err != nil {
return nil, err
}
return ans, nil
}
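The loader walks each configured directory, accepts any `.yml` file, and rejects entries missing a name or directory list. A plausible minimal file is sketched below; the `name`/`dirs` keys and the path are assumptions inferred from the `ConfigProtectConfFile` struct (the shipped example lives under contrib/ in this changeset):

```yaml
# hypothetical /etc/luet/config.protect.d/example.yml
name: "protect-etc"
dirs:
  - /etc/ssh
  - /etc/systemd
```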


@@ -26,23 +26,37 @@ import (
)
type LuetFinalizer struct {
Shell []string `json:"shell"`
Install []string `json:"install"`
Uninstall []string `json:"uninstall"` // TODO: Where to store?
}
func (f *LuetFinalizer) RunInstall(s *System) error {
for _, c := range f.Install {
if s.Target == "/" {
var cmd string
var args []string
if len(f.Shell) == 0 {
// Default to sh otherwise
cmd = "sh"
args = []string{"-c"}
} else {
cmd = f.Shell[0]
if len(f.Shell) > 1 {
args = f.Shell[1:]
}
}
Info("finalizer on / :", "sh", "-c", c)
cmd := exec.Command("sh", "-c", c)
for _, c := range f.Install {
toRun := append(args, c)
Info("Executing finalizer on ", s.Target, cmd, toRun)
if s.Target == "/" {
cmd := exec.Command(cmd, toRun...)
stdoutStderr, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed running command: "+string(stdoutStderr))
}
Info(string(stdoutStderr))
} else {
b := box.NewBox("sh", []string{"-c", c}, s.Target, false, true, true)
b := box.NewBox(cmd, toRun, []string{}, []string{}, s.Target, false, true, true)
err := b.Run()
if err != nil {
return errors.Wrap(err, "Failed running command ")


@@ -16,6 +16,7 @@
package installer
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
@@ -35,12 +36,15 @@ import (
)
type LuetInstallerOptions struct {
SolverOptions config.LuetSolverOptions
Concurrency int
NoDeps bool
OnlyDeps bool
Force bool
PreserveSystemEssentialData bool
SolverOptions config.LuetSolverOptions
Concurrency int
NoDeps bool
OnlyDeps bool
Force bool
PreserveSystemEssentialData bool
FullUninstall, FullCleanUninstall bool
CheckConflicts bool
SolverUpgrade, RemoveUnavailableOnUpgrade, UpgradeNewRevisions bool
}
type LuetInstaller struct {
@@ -64,24 +68,80 @@ func (l *LuetInstaller) Upgrade(s *System) error {
if err != nil {
return err
}
Info(":thinking: Computing upgrade, please hang tight")
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
// compute a "big" world
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
uninstall, solution, err := solv.Upgrade(false)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
var uninstall pkg.Packages
var solution solver.PackagesAssertions
if l.Options.SolverUpgrade {
uninstall, solution, err = solv.UpgradeUniverse(l.Options.RemoveUnavailableOnUpgrade)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
}
} else {
uninstall, solution, err = solv.Upgrade(!l.Options.FullUninstall, l.Options.NoDeps)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
}
}
toInstall := []pkg.Package{}
if len(uninstall) > 0 {
Info("Packages marked for uninstall:")
}
for _, p := range uninstall {
Info(fmt.Sprintf("- %s", p.HumanReadableString()))
}
if len(solution) > 0 {
Info("Packages marked for upgrade:")
}
toInstall := pkg.Packages{}
for _, assertion := range solution {
// Be sure to filter from solutions packages already installed in the system
if _, err := s.Database.FindPackage(assertion.Package); err != nil && assertion.Value {
Info(fmt.Sprintf("- %s", assertion.Package.HumanReadableString()))
toInstall = append(toInstall, assertion.Package)
}
}
if l.Options.UpgradeNewRevisions {
Info("Checking packages with new revisions available")
for _, p := range s.Database.World() {
matches := syncedRepos.PackageMatches(pkg.Packages{p})
if len(matches) == 0 {
// Package missing. The user should run luet upgrade --universe
Info("Installed packages seem to be missing from remote repositories.")
Info("It is suggested to run 'luet upgrade --universe'")
continue
}
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
}
if artefact.GetCompileSpec().GetPackage().Matches(p) && artefact.GetCompileSpec().GetPackage().GetBuildTimestamp() != p.GetBuildTimestamp() {
toInstall = append(toInstall, matches[0].Package).Unique()
uninstall = append(uninstall, p).Unique()
Info(
fmt.Sprintf("- %s ( %s vs %s ) repo: %s (date: %s)",
p.HumanReadableString(),
artefact.GetCompileSpec().GetPackage().GetBuildTimestamp(),
p.GetBuildTimestamp(),
matches[0].Repo.GetName(),
matches[0].Repo.GetLastUpdate(),
))
}
}
}
}
return l.swap(syncedRepos, uninstall, toInstall, s)
}
@@ -107,7 +167,7 @@ func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
return syncedRepos, nil
}
func (l *LuetInstaller) Swap(toRemove []pkg.Package, toInstall []pkg.Package, s *System) error {
func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
@@ -115,7 +175,7 @@ func (l *LuetInstaller) Swap(toRemove []pkg.Package, toInstall []pkg.Package, s
return l.swap(syncedRepos, toRemove, toInstall, s)
}
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove []pkg.Package, toInstall []pkg.Package, s *System) error {
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
@@ -132,14 +192,12 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove []pkg.Package, t
// if the old A results installed in the system. This is due to the fact that
// now the solver enforces the constraints and explicitly denies two packages
// of the same version installed.
forced := false
if l.Options.Force {
forced = true
}
forced := l.Options.Force
l.Options.Force = true
for _, u := range toRemove {
Info(":package: Marked for deletion", u.HumanReadableString())
Info(":package:", u.HumanReadableString(), "Marked for deletion")
err := l.Uninstall(u, s)
if err != nil && !l.Options.Force {
@@ -153,7 +211,7 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove []pkg.Package, t
return l.install(syncedRepos, toInstall, s)
}
func (l *LuetInstaller) Install(cp []pkg.Package, s *System, downloadOnly bool) error {
func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
@@ -161,7 +219,7 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System, downloadOnly bool)
return l.install(syncedRepos, cp, s)
}
func (l *LuetInstaller) download(syncedRepos Repositories, cp []pkg.Package) error {
func (l *LuetInstaller) download(syncedRepos Repositories, cp pkg.Packages) error {
toDownload := map[string]ArtifactMatch{}
// FIXME: This can be optimized. We don't need to re-match this to the repository
@@ -169,7 +227,7 @@ func (l *LuetInstaller) download(syncedRepos Repositories, cp []pkg.Package) err
// Gathers things to download
for _, currentPack := range cp {
matches := syncedRepos.PackageMatches([]pkg.Package{currentPack})
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
@@ -206,8 +264,59 @@ func (l *LuetInstaller) download(syncedRepos Repositories, cp []pkg.Package) err
return nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, cp []pkg.Package, s *System) error {
var p []pkg.Package
// Reclaim adds packages to the system database
// if files from artifacts in the repositories are found
// in the system target
func (l *LuetInstaller) Reclaim(s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
}
var toMerge []ArtifactMatch = []ArtifactMatch{}
for _, repo := range syncedRepos {
for _, artefact := range repo.GetIndex() {
Debug("Checking if",
artefact.GetCompileSpec().GetPackage().HumanReadableString(),
"from", repo.GetName(), "is installed")
FILES:
for _, f := range artefact.GetFiles() {
if helpers.Exists(filepath.Join(s.Target, f)) {
p, err := repo.GetTree().GetDatabase().FindPackage(artefact.GetCompileSpec().GetPackage())
if err != nil {
return err
}
Info("Found package:", p.HumanReadableString())
toMerge = append(toMerge, ArtifactMatch{Artifact: artefact, Package: p})
break FILES
}
}
}
}
for _, match := range toMerge {
pack := match.Package
vers, _ := s.Database.FindPackageVersions(pack)
if len(vers) >= 1 {
Warning("Filtering out package " + pack.HumanReadableString() + ", already reclaimed")
continue
}
_, err := s.Database.CreatePackage(pack)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed creating package")
}
s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: pack.GetFingerPrint(), Files: match.Artifact.GetFiles()})
Info("Reclaimed package:", pack.HumanReadableString())
}
Info("Done!")
return nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *System) error {
var p pkg.Packages
// Check if the package is installed first
for _, pi := range cp {
@@ -237,7 +346,7 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp []pkg.Package, s *S
syncedRepos.SyncDatabase(allRepos)
p = syncedRepos.ResolveSelectors(p)
toInstall := map[string]ArtifactMatch{}
var packagesToInstall []pkg.Package
var packagesToInstall pkg.Packages
var err error
var solution solver.PackagesAssertions
@@ -258,10 +367,15 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp []pkg.Package, s *S
packagesToInstall = append(packagesToInstall, currentPack)
}
}
Info(":deciduous_tree: Finding packages to install")
// Gathers things to install
for _, currentPack := range packagesToInstall {
matches := syncedRepos.PackageMatches([]pkg.Package{currentPack})
// Check if package is already installed.
if _, err := s.Database.FindPackage(currentPack); err == nil {
// skip matching if it is installed already
continue
}
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
@@ -272,9 +386,11 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp []pkg.Package, s *S
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
currentPack.SetBuildTimestamp(artefact.GetCompileSpec().GetPackage().GetBuildTimestamp())
// Filter out already installed
if _, err := s.Database.FindPackage(currentPack); err != nil {
toInstall[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
Info("\t:package:", currentPack.HumanReadableString(), "from repository", matches[0].Repo.GetName())
}
break A
}
@@ -326,7 +442,10 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp []pkg.Package, s *S
// TODO: Lower those errors as warning
for _, w := range p {
// Finalizers needs to run in order and in sequence.
ordered := solution.Order(allRepos, w.GetFingerPrint())
ordered, err := solution.Order(allRepos, w.GetFingerPrint())
if err != nil {
return errors.Wrap(err, "While order a solution for "+w.HumanReadableString())
}
ORDER:
for _, ass := range ordered {
if ass.Value {
@@ -510,20 +629,48 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
}
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
Spinner(32)
defer SpinnerStop()
Info("Uninstalling :package:", p.HumanReadableString(), "hang tight")
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) TODO - mark the uninstallation in db
// Get installed definition
checkConflicts := true
if l.Options.Force == true {
checkConflicts := l.Options.CheckConflicts
full := l.Options.FullUninstall
if l.Options.Force == true { // IF forced, we want to remove the package and all its requires
checkConflicts = false
full = false
}
// Create a temporary DB with the installed packages
// so the solver is much faster finding the deptree
installedtmp := pkg.NewInMemoryDatabase(false)
for _, i := range s.Database.World() {
_, err := installedtmp.CreatePackage(i)
if err != nil {
return errors.Wrap(err, "Failed create temporary in-memory db")
}
}
if !l.Options.NoDeps {
solv := solver.NewResolver(s.Database, s.Database, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err := solv.Uninstall(p, checkConflicts)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
Info("Finding :package:", p.HumanReadableString(), "dependency graph :deciduous_tree:")
solv := solver.NewResolver(installedtmp, installedtmp, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
var solution pkg.Packages
var err error
if l.Options.FullCleanUninstall {
solution, err = solv.UninstallUniverse(pkg.Packages{p})
if err != nil {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
} else {
solution, err = solv.Uninstall(p, checkConflicts, full)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
}
for _, p := range solution {
Info("Uninstalling", p.HumanReadableString())
err := l.uninstall(p, s)
@@ -537,7 +684,7 @@ func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Uninstall failed")
}
Info(":package: ", p.HumanReadableString(), "uninstalled")
Info(":package:", p.HumanReadableString(), "uninstalled")
}
return nil


@@ -82,7 +82,7 @@ var _ = Describe("Installer", func() {
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
@@ -115,7 +115,7 @@ urls:
Expect(repo.GetType()).To(Equal("disk"))
systemDB := pkg.NewInMemoryDatabase(false)
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system, false)
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
@@ -198,7 +198,7 @@ urls:
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
treeFile := NewDefaultTreeRepositoryFile()
treeFile.SetCompressionType(compiler.None)
repo.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
@@ -234,7 +234,7 @@ urls:
Expect(repo.GetType()).To(Equal("disk"))
systemDB := pkg.NewInMemoryDatabase(false)
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system, false)
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
@@ -316,7 +316,7 @@ urls:
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
@@ -354,7 +354,7 @@ urls:
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system, false)
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
@@ -385,6 +385,149 @@ urls:
})
It("Installs new packages from a system with others installed", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(spec.BuildSteps()).To(Equal([]string{"echo artifact5 > /test5", "echo artifact6 > /test6", "./generate.sh"}))
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
c.SetConcurrency(2)
artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
tmpdir2, err := ioutil.TempDir("", "tree2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/alpine")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
spec.SetOutputPath(tmpdir2)
c.SetConcurrency(2)
artifact, err = c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, []string{"../../tests/fixtures/alpine"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
err = repo.Write(tmpdir2, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err = ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst = NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err = NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir2+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir2))
Expect(repo.GetType()).To(Equal("disk"))
system.Target = fakeroot
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
})
})
Context("Simple upgrades", func() {
@@ -426,7 +569,7 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
@@ -464,7 +607,7 @@ urls:
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system, false)
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
@@ -501,6 +644,140 @@ urls:
})
It("Handles package drops", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
generalRecipeNewRepo := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
err = generalRecipeNewRepo.Load("../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
Expect(len(generalRecipeNewRepo.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
c2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipeNewRepo.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec2, err := c2.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
tmpdirnewrepo, err := ioutil.TempDir("", "tree2")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdirnewrepo) // clean up
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdirnewrepo)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
_, errs = c2.CompileParallel(false, compiler.NewLuetCompilationspecs(spec2))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade_old_repo"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
repoupgrade, err := GenerateRepository("test", "description", "disk", []string{tmpdirnewrepo}, 1, tmpdirnewrepo, []string{"../../tests/fixtures/upgrade_new_repo"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
err = repoupgrade.Write(tmpdirnewrepo, false)
Expect(err).ToNot(HaveOccurred())
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
repoupgrade2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdirnewrepo+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.GetPackages())).To(Equal(1))
p, err := system.Database.GetPackage(system.Database.GetPackages()[0])
Expect(err).ToNot(HaveOccurred())
Expect(p.GetName()).To(Equal("b"))
files, err := systemDB.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(files).To(Equal([]string{"artifact42", "test5", "test6"}))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repoupgrade2})
err = inst.Upgrade(system)
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
// New package should be there
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
})
})
Context("Compressed packages", func() {
@@ -541,7 +818,7 @@ urls:
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
@@ -581,7 +858,7 @@ urls:
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system, false)
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
@@ -620,4 +897,267 @@ urls:
})
Context("Existing files", func() {
It("Reclaims them", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec2, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(2)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec2, spec3))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(2))
})
It("Upgrades reclaimed packages", func() {
//repo:=NewLuetSystemRepository()
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
c := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
spec3, err := c.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
spec3.SetOutputPath(tmpdir)
c.SetConcurrency(1)
c.SetCompressionType(compiler.GZip)
_, errs := c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec, spec3))
Expect(errs).To(BeEmpty())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/upgrade_old_repo"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
fakeroot, err := ioutil.TempDir("", "fakeroot")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(fakeroot) // clean up
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))
bolt, err := ioutil.TempDir("", "db")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(bolt) // clean up
systemDB := pkg.NewBoltDatabase(filepath.Join(bolt, "db.db"))
system := &System{Database: systemDB, Target: fakeroot}
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(len(system.Database.World())).To(Equal(2))
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/upgrade_new_repo")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(3))
c = compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err = c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
tmpdir2, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir2) // clean up
spec.SetOutputPath(tmpdir2)
_, errs = c.CompileParallel(false, compiler.NewLuetCompilationspecs(spec))
Expect(errs).To(BeEmpty())
repo, err = GenerateRepository("test", "description", "disk", []string{tmpdir2}, 1, tmpdir2, []string{"../../tests/fixtures/upgrade_new_repo"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
err = repo.Write(tmpdir2, false)
Expect(err).ToNot(HaveOccurred())
inst = NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
repo2, err = NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir2+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
inst.Repositories(Repositories{repo2})
err = inst.Upgrade(system)
Expect(err).ToNot(HaveOccurred())
// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
// New package should be there
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.1"})
Expect(err).ToNot(HaveOccurred())
})
})
})

View File

@@ -23,12 +23,14 @@ import (
)
type Installer interface {
Install([]pkg.Package, *System, bool) error
Install(pkg.Packages, *System) error
Uninstall(pkg.Package, *System) error
Upgrade(s *System) error
Reclaim(s *System) error
Repositories([]Repository)
SyncRepositories(bool) (Repositories, error)
Swap([]pkg.Package, []pkg.Package, *System) error
Swap(pkg.Packages, pkg.Packages, *System) error
}
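The hunk above narrows the `Installer` interface: `Install` loses its trailing `bool` argument, and both `Install` and `Swap` now take the named slice type `pkg.Packages` instead of `[]pkg.Package`. A minimal sketch of what the named slice type buys at call sites (toy stand-in types, not the real luet API):

```go
package main

import "fmt"

// Toy stand-ins for pkg.Package / pkg.Packages (assumptions, not luet's types).
type Package struct{ Name, Category, Version string }
type Packages []*Package

// A named slice type lets helper methods hang directly off the collection.
func (ps Packages) Names() []string {
	out := make([]string, 0, len(ps))
	for _, p := range ps {
		out = append(out, p.Category+"/"+p.Name)
	}
	return out
}

// install mimics the new signature: packages plus system state, no bool flag.
func install(ps Packages) error {
	for _, p := range ps {
		fmt.Println("installing", p.Category+"/"+p.Name, p.Version)
	}
	return nil
}

func main() {
	ps := Packages{{Name: "b", Category: "test", Version: "1.0"}}
	if err := install(ps); err != nil {
		panic(err)
	}
	fmt.Println(ps.Names())
}
```

This matches the test changes above, where `inst.Install(..., system, false)` becomes `inst.Install(..., system)`.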
type Client interface {
@@ -65,8 +67,9 @@ type Repository interface {
SetLastUpdate(string)
Client() Client
SetPriority(int)
GetRepositoryFile(string) (LuetRepositoryFile, error)
SetRepositoryFile(string, LuetRepositoryFile)
SetName(p string)
Serialize() (*LuetSystemRepositoryMetadata, LuetSystemRepositorySerialized)
}

View File

@@ -35,7 +35,6 @@ import (
tree "github.com/mudler/luet/pkg/tree"
"github.com/ghodss/yaml"
. "github.com/logrusorgru/aurora"
"github.com/pkg/errors"
)
@@ -136,7 +135,7 @@ func (m *LuetSystemRepositoryMetadata) ReadFile(file string, removeFile bool) er
return nil
}
func (m *LuetSystemRepositoryMetadata) ToArtificatIndex() (ans compiler.ArtifactIndex) {
func (m *LuetSystemRepositoryMetadata) ToArtifactIndex() (ans compiler.ArtifactIndex) {
for _, a := range m.Index {
ans = append(ans, a)
}
@@ -160,6 +159,7 @@ func NewDefaultMetaRepositoryFile() LuetRepositoryFile {
func (f *LuetRepositoryFile) SetFileName(n string) {
f.FileName = n
}
func (f *LuetRepositoryFile) GetFileName() string {
return f.FileName
}
@@ -176,14 +176,18 @@ func (f *LuetRepositoryFile) GetChecksums() compiler.Checksums {
return f.Checksums
}
func GenerateRepository(name, descr, t string, urls []string, priority int, src, treeDir string, db pkg.PackageDatabase) (Repository, error) {
func GenerateRepository(name, descr, t string, urls []string, priority int, src string, treesDir []string, db pkg.PackageDatabase) (Repository, error) {
art, err := buildPackageIndex(src)
if err != nil {
return nil, err
}
tr := tree.NewInstallerRecipe(db)
err = tr.Load(treeDir)
for _, treeDir := range treesDir {
err := tr.Load(treeDir)
if err != nil {
return nil, err
}
}
art, err := buildPackageIndex(src, tr.GetDatabase())
if err != nil {
return nil, err
}
@@ -239,7 +243,7 @@ func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repos
return r, err
}
func buildPackageIndex(path string) ([]compiler.Artifact, error) {
func buildPackageIndex(path string, db pkg.PackageDatabase) ([]compiler.Artifact, error) {
var art []compiler.Artifact
var ff = func(currentpath string, info os.FileInfo, err error) error {
@@ -257,6 +261,15 @@ func buildPackageIndex(path string) ([]compiler.Artifact, error) {
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// We want to include packages that are ONLY referenced in the tree.
// the ones which aren't should be deleted. (TODO: by another cli command?)
if _, notfound := db.FindPackage(artifact.GetCompileSpec().GetPackage()); notfound != nil {
Info(fmt.Sprintf("Package %s not found in tree. Ignoring it.",
artifact.GetCompileSpec().GetPackage().HumanReadableString()))
return nil
}
art = append(art, artifact)
return nil
@@ -270,6 +283,10 @@ func buildPackageIndex(path string) ([]compiler.Artifact, error) {
return art, nil
}
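With the database now threaded into `buildPackageIndex`, metadata files whose package is absent from the tree are skipped rather than indexed. A toy sketch of that filter, where `findPackage` stands in for `db.FindPackage` (an assumption: it returns an error when the package is unknown):

```go
package main

import "fmt"

// artifact is a toy stand-in for a compiler.Artifact's package reference.
type artifact struct{ pkg string }

// indexArtifacts keeps only artifacts whose package resolves in the tree,
// logging and dropping the rest (mirroring the new buildPackageIndex check).
func indexArtifacts(found []artifact, findPackage func(string) error) []artifact {
	var art []artifact
	for _, a := range found {
		if err := findPackage(a.pkg); err != nil {
			fmt.Printf("Package %s not found in tree. Ignoring it.\n", a.pkg)
			continue
		}
		art = append(art, a)
	}
	return art
}

func main() {
	tree := map[string]bool{"test/b": true}
	find := func(p string) error {
		if !tree[p] {
			return fmt.Errorf("not found")
		}
		return nil
	}
	kept := indexArtifacts([]artifact{{"test/b"}, {"seed/alpine"}}, find)
	fmt.Println(len(kept), kept[0].pkg)
}
```

This is what the new "Generate repository metadata of files ONLY referenced in a tree" spec below exercises: artifacts from a second, unrelated tree are compiled into the same directory but left out of the repository index.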
func (r *LuetSystemRepository) SetPriority(n int) {
r.LuetRepository.Priority = n
}
func (r *LuetSystemRepository) GetName() string {
return r.LuetRepository.Name
}
@@ -287,6 +304,11 @@ func (r *LuetSystemRepository) GetType() string {
func (r *LuetSystemRepository) SetType(p string) {
r.LuetRepository.Type = p
}
func (r *LuetSystemRepository) SetName(p string) {
r.LuetRepository.Name = p
}
func (r *LuetSystemRepository) AddUrl(p string) {
r.LuetRepository.Urls = append(r.LuetRepository.Urls, p)
}
@@ -405,7 +427,7 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
))
// Create tree and repository file
archive, err := ioutil.TempDir(os.TempDir(), "archive")
archive, err := config.LuetCfg.GetSystem().TempDir("archive")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for archive")
}
@@ -441,7 +463,7 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
meta, serialized := r.Serialize()
// Create metadata file and repository file
metaTmpDir, err := ioutil.TempDir(os.TempDir(), "metadata")
metaTmpDir, err := config.LuetCfg.GetSystem().TempDir("metadata")
defer os.RemoveAll(metaTmpDir) // clean up
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for metadata")
@@ -504,6 +526,7 @@ func (r *LuetSystemRepository) Client() Client {
func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
var repoUpdated bool = false
var treefs, metafs string
aurora := GetAurora()
Debug("Sync of the repository", r.Name, "in progress...")
c := r.Client()
@@ -548,15 +571,14 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
}
} else {
treefs, err = ioutil.TempDir(os.TempDir(), "treefs")
treefs, err = config.LuetCfg.GetSystem().TempDir("treefs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for treefs")
}
// If we always remove them, later on, no other structure can access
// Note: If we always remove them, later on, no other structure can access
// to the tree for e.g. to retrieve finalizers
//defer os.RemoveAll(treefs)
metafs, err = ioutil.TempDir(os.TempDir(), "metafs")
metafs, err = config.LuetCfg.GetSystem().TempDir("metafs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for metafs")
}
@@ -636,13 +658,14 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
tsec, _ := strconv.ParseInt(repo.GetLastUpdate(), 10, 64)
InfoC(
Bold(Red(":house: Repository "+r.GetName()+" revision: ")).String() +
Bold(Green(repo.GetRevision())).String() + " - " +
Bold(Green(time.Unix(tsec, 0).String())).String(),
aurora.Bold(
aurora.Red(":house: Repository "+repo.GetName()+" revision: ")).String() +
aurora.Bold(aurora.Green(repo.GetRevision())).String() + " - " +
aurora.Bold(aurora.Green(time.Unix(tsec, 0).String())).String(),
)
} else {
Info("Repository", r.GetName(), "is already up to date.")
Info("Repository", repo.GetName(), "is already up to date.")
}
meta, err := NewLuetSystemRepositoryMetadata(
@@ -651,7 +674,7 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
if err != nil {
return nil, errors.Wrap(err, "While processing "+REPOSITORY_METAFILE)
}
repo.SetIndex(meta.ToArtificatIndex())
repo.SetIndex(meta.ToArtifactIndex())
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
err = reciper.Load(treefs)
@@ -661,9 +684,21 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
repo.SetTree(reciper)
repo.SetTreePath(treefs)
// Copy the local available data to the one which was synced
// e.g. locally we can override the type (disk), or priority
// while remotely it could be advertized differently
repo.SetUrls(r.GetUrls())
repo.SetAuthentication(r.GetAuthentication())
repo.SetType(r.GetType())
repo.SetPriority(r.GetPriority())
repo.SetName(r.GetName())
InfoC(
aurora.Bold(
aurora.Yellow(":information_source: Repository "+repo.GetName()+" priority: ")).String() +
aurora.Bold(aurora.Green(repo.GetPriority())).String() + " - type " +
aurora.Bold(aurora.Green(repo.GetType())).String(),
)
return repo, nil
}
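The new lines at the end of `Sync` copy locally configured fields onto the freshly synced repository: the remote metadata keeps its revision and index, but URLs, authentication, type, priority, and name come from the local config, since remotely they could be advertised differently. A toy sketch of that override step (field set trimmed to a hypothetical `repoCfg`):

```go
package main

import "fmt"

// repoCfg is a toy subset of a repository's configurable fields.
type repoCfg struct {
	Name, Type string
	Urls       []string
	Priority   int
}

// applyLocalOverrides mirrors the end of Sync: the synced repository wins for
// revision/index (not modeled here), the local configuration wins for the rest.
func applyLocalOverrides(synced, local repoCfg) repoCfg {
	synced.Urls = local.Urls
	synced.Type = local.Type
	synced.Priority = local.Priority
	synced.Name = local.Name
	return synced
}

func main() {
	remote := repoCfg{Name: "test", Type: "http", Urls: []string{"https://example.org"}, Priority: 9999}
	local := repoCfg{Name: "test", Type: "disk", Urls: []string{"/srv/repo"}, Priority: 1}
	fmt.Println(applyLocalOverrides(remote, local).Type)
}
```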
@@ -701,9 +736,9 @@ func (r Repositories) Less(i, j int) bool {
return r[i].GetPriority() < r[j].GetPriority()
}
func (r Repositories) World() []pkg.Package {
func (r Repositories) World() pkg.Packages {
cache := map[string]pkg.Package{}
world := []pkg.Package{}
world := pkg.Packages{}
// Get Uniques. Walk in reverse so the definitions of most prio-repo overwrites lower ones
// In this way, when we will walk again later the deps sorting them by most higher prio we have better chance of success.
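The comment above describes the deduplication strategy for `Repositories.World()`: repos are sorted ascending by priority (lower number wins, per `Less`), then walked in reverse so higher-priority definitions overwrite lower-priority ones in the cache. A toy sketch of that walk (the `repo` type is a stand-in, not the real Repository interface):

```go
package main

import (
	"fmt"
	"sort"
)

// repo pairs a priority with the packages that repository defines.
type repo struct {
	priority int
	packages []string
}

// world dedups packages across repos: sort ascending by priority, then walk
// in reverse so the highest-priority repo's definition is written last.
func world(repos []repo) []string {
	sort.Slice(repos, func(i, j int) bool { return repos[i].priority < repos[j].priority })
	cache := map[string]int{} // package -> priority of the repo whose definition won
	for i := len(repos) - 1; i >= 0; i-- {
		for _, p := range repos[i].packages {
			cache[p] = repos[i].priority
		}
	}
	out := make([]string, 0, len(cache))
	for p := range cache {
		out = append(out, p)
	}
	sort.Strings(out) // deterministic order for display
	return out
}

func main() {
	repos := []repo{
		{priority: 10, packages: []string{"test/b"}},
		{priority: 1, packages: []string{"test/b", "seed/alpine"}},
	}
	// "test/b" appears in both repos; the priority-1 definition wins.
	fmt.Println(world(repos))
}
```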
@@ -741,7 +776,7 @@ type PackageMatch struct {
Package pkg.Package
}
func (re Repositories) PackageMatches(p []pkg.Package) []PackageMatch {
func (re Repositories) PackageMatches(p pkg.Packages) []PackageMatch {
// TODO: Better heuristic. here we pick the first repo that contains the atom, sorted by priority but
// we should do a permutations and get the best match, and in case there are more solutions the user should be able to pick
sort.Sort(re)
@@ -762,10 +797,10 @@ PACKAGE:
}
func (re Repositories) ResolveSelectors(p []pkg.Package) []pkg.Package {
func (re Repositories) ResolveSelectors(p pkg.Packages) pkg.Packages {
// If a selector is given, get the best from each repo
sort.Sort(re) // respect prio
var matches []pkg.Package
var matches pkg.Packages
PACKAGE:
for _, pack := range p {
REPOSITORY:
@@ -798,7 +833,7 @@ func (re Repositories) SearchPackages(p string, o LuetSearchOpts) []PackageMatch
var err error
for _, r := range re {
var repoMatches []pkg.Package
var repoMatches pkg.Packages
switch o.Mode {
case SRegexPkg:

View File

@@ -18,6 +18,7 @@ package installer_test
import (
// . "github.com/mudler/luet/pkg/installer"
"io/ioutil"
"os"
@@ -34,7 +35,7 @@ import (
var _ = Describe("Repository", func() {
Context("Generation", func() {
It("Generate repository metadat", func() {
It("Generate repository metadata", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
@@ -82,7 +83,7 @@ var _ = Describe("Repository", func() {
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
@@ -95,6 +96,105 @@ var _ = Describe("Repository", func() {
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
})
It("Generate repository metadata of files ONLY referenced in a tree", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe.Load("../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
generalRecipe2 := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
err = generalRecipe2.Load("../../tests/fixtures/finalizers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe2.GetDatabase().GetPackages())).To(Equal(1))
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler2 := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe2.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec2, err := compiler2.FromPackage(&pkg.DefaultPackage{Name: "alpine", Category: "seed", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
compiler := compiler.NewLuetCompiler(backend.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), compiler.NewDefaultCompilerOptions())
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
Expect(spec2.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err = ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(spec.BuildSteps()).To(Equal([]string{"echo artifact5 > /test5", "echo artifact6 > /test6", "./generate.sh"}))
Expect(spec.GetPreBuildSteps()).To(Equal([]string{"echo foo > /test", "echo bar > /test2", "chmod +x generate.sh"}))
spec.SetOutputPath(tmpdir)
spec2.SetOutputPath(tmpdir)
compiler.SetConcurrency(1)
compiler2.SetConcurrency(1)
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
artifact2, err := compiler2.Compile(false, spec2)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact2.GetPath())).To(BeTrue())
Expect(helpers.Untar(artifact2.GetPath(), tmpdir, false)).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
content1, err := helpers.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, []string{"../../tests/fixtures/buildable"}, pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
// Check now that the artifact not referenced in the tree
// (spec2) is not indexed in the repository
repository, err := NewLuetSystemRepositoryFromYaml([]byte(`
name: "test"
type: "disk"
urls:
- "`+tmpdir+`"
`), pkg.NewInMemoryDatabase(false))
Expect(err).ToNot(HaveOccurred())
repos, err := repository.Sync(true)
Expect(err).ToNot(HaveOccurred())
_, err = repos.GetTree().GetDatabase().FindPackage(spec.GetPackage())
Expect(err).ToNot(HaveOccurred())
_, err = repos.GetTree().GetDatabase().FindPackage(spec2.GetPackage())
Expect(err).To(HaveOccurred()) // should throw error
})
})
Context("Matching packages", func() {
It("Matches packages in different repositories by priority", func() {


@@ -9,6 +9,6 @@ type System struct {
Target string
}
func (s *System) World() ([]pkg.Package, error) {
func (s *System) World() (pkg.Packages, error) {
return s.Database.World(), nil
}


@@ -3,6 +3,7 @@ package logger
import (
"fmt"
"os"
"regexp"
. "github.com/mudler/luet/pkg/config"
@@ -15,6 +16,7 @@ import (
var s *spinner.Spinner = nil
var z *zap.Logger = nil
var aurora Aurora = nil
func NewSpinner() {
if s == nil {
@@ -24,6 +26,16 @@ func NewSpinner() {
}
}
func InitAurora() {
if aurora == nil {
aurora = NewAurora(LuetCfg.GetLogging().Color)
}
}
func GetAurora() Aurora {
return aurora
}
func ZapLogger() error {
var err error
if z == nil {
@@ -53,33 +65,52 @@ func ZapLogger() error {
}
func Spinner(i int) {
var confLevel int
if LuetCfg.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = level2Number(LuetCfg.GetLogging().Level)
}
if 2 > confLevel {
return
}
if i > 43 {
i = 43
}
if !LuetCfg.GetGeneral().Debug && !s.Active() {
if s != nil && !s.Active() {
// s.UpdateCharSet(spinner.CharSets[i])
s.Start() // Start the spinner
}
}
func SpinnerText(suffix, prefix string) {
s.Lock()
defer s.Unlock()
if LuetCfg.GetGeneral().Debug {
fmt.Println(fmt.Sprintf("%s %s",
Bold(Cyan(prefix)).String(),
Bold(Magenta(suffix)).BgBlack().String(),
))
} else {
s.Suffix = Bold(Magenta(suffix)).BgBlack().String()
s.Prefix = Bold(Cyan(prefix)).String()
if s != nil {
s.Lock()
defer s.Unlock()
if LuetCfg.GetGeneral().Debug {
fmt.Println(fmt.Sprintf("%s %s",
Bold(Cyan(prefix)).String(),
Bold(Magenta(suffix)).BgBlack().String(),
))
} else {
s.Suffix = Bold(Magenta(suffix)).BgBlack().String()
s.Prefix = Bold(Cyan(prefix)).String()
}
}
}
func SpinnerStop() {
if !LuetCfg.GetGeneral().Debug {
var confLevel int
if LuetCfg.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = level2Number(LuetCfg.GetLogging().Level)
}
if 2 > confLevel {
return
}
if s != nil {
s.Stop()
}
}
@@ -143,7 +174,7 @@ func msg(level string, withoutColor bool, msg ...interface{}) {
var levelMsg string
if withoutColor {
if withoutColor || !LuetCfg.GetLogging().Color {
levelMsg = message
} else {
switch level {
@@ -158,7 +189,12 @@ func msg(level string, withoutColor bool, msg ...interface{}) {
}
}
levelMsg = emoji.Sprint(levelMsg)
if LuetCfg.GetLogging().EnableEmoji {
levelMsg = emoji.Sprint(levelMsg)
} else {
re := regexp.MustCompile(`[:][\w]+[:]`)
levelMsg = re.ReplaceAllString(levelMsg, "")
}
if z != nil {
log2File(level, message)
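The hunk above disables emoji rendering by stripping `:name:` shortcodes with a regexp before the message is printed. A minimal, self-contained sketch of that substitution (the `stripEmojiCodes` helper name is hypothetical; luet inlines this logic in `msg()`):

```go
package main

import (
	"fmt"
	"regexp"
)

// stripEmojiCodes removes ":name:" emoji shortcodes, mirroring the
// regexp used in the logger when EnableEmoji is false.
func stripEmojiCodes(s string) string {
	re := regexp.MustCompile(`[:][\w]+[:]`)
	return re.ReplaceAllString(s, "")
}

func main() {
	// The shortcode is dropped; surrounding text is untouched.
	fmt.Println(stripEmojiCodes(":warning: build failed"))
}
```

Note that `[\w]` does not match spaces or punctuation, so only compact shortcodes like `:warning:` are removed; a stray colon in normal prose survives.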


@@ -0,0 +1,23 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg
type AnnotationKey string
const (
ConfigProtectAnnnotation AnnotationKey = "config_protect"
)


@@ -33,7 +33,7 @@ type PackageSet interface {
GetPackage(ID string) (Package, error)
Clean() error
FindPackage(Package) (Package, error)
FindPackages(p Package) ([]Package, error)
FindPackages(p Package) (Packages, error)
UpdatePackage(p Package) error
GetAllPackages(packages chan Package) error
RemovePackage(Package) error
@@ -41,13 +41,13 @@ type PackageSet interface {
GetPackageFiles(Package) ([]string, error)
SetPackageFiles(*PackageFile) error
RemovePackageFiles(Package) error
FindPackageVersions(p Package) ([]Package, error)
World() []Package
FindPackageVersions(p Package) (Packages, error)
World() Packages
FindPackageCandidate(p Package) (Package, error)
FindPackageLabel(labelKey string) ([]Package, error)
FindPackageLabelMatch(pattern string) ([]Package, error)
FindPackageMatch(pattern string) ([]Package, error)
FindPackageLabel(labelKey string) (Packages, error)
FindPackageLabelMatch(pattern string) (Packages, error)
FindPackageMatch(pattern string) (Packages, error)
}
type PackageFile struct {


@@ -17,6 +17,7 @@ package pkg
import (
"encoding/base64"
"fmt"
"os"
"regexp"
"strconv"
@@ -228,19 +229,19 @@ func (db *BoltDatabase) getProvide(p Package) (Package, error) {
db.Unlock()
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
for ve, _ := range versions {
match, err := p.VersionMatchSelector(ve)
match, err := p.VersionMatchSelector(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match version")
}
if match {
pa, ok := db.ProvidesDatabase[p.GetPackageName()][ve]
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
return pa, nil //pick the first (we shouldn't have providers that are conflicting)
// TODO: A find dbcall here would recurse, but would give chance to have providers of providers
@@ -309,12 +310,12 @@ func (db *BoltDatabase) RemovePackage(p Package) error {
var found DefaultPackage
err = bolt.Select(q.Eq("Name", p.GetName()), q.Eq("Category", p.GetCategory()), q.Eq("Version", p.GetVersion())).Limit(1).Delete(&found)
if err != nil {
return errors.Wrap(err, "No package found to delete")
return errors.New(fmt.Sprintf("Package not found: %s", p.HumanReadableString()))
}
return nil
}
func (db *BoltDatabase) World() []Package {
func (db *BoltDatabase) World() Packages {
var all []Package
// FIXME: This should all be locked in the db - for now forbid the solver to be run in threads.
@@ -324,7 +325,7 @@ func (db *BoltDatabase) World() []Package {
all = append(all, pack)
}
}
return all
return Packages(all)
}
func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
@@ -338,7 +339,7 @@ func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
if err != nil || len(packages) == 0 {
required = p
} else {
required = Best(packages)
required = packages.Best(nil)
}
return required, nil
@@ -351,7 +352,7 @@ func (db *BoltDatabase) FindPackageCandidate(p Package) (Package, error) {
// FindPackages returns the list of the packages belonging to cat/name (any versions in requested range)
// FIXME: Optimize, see inmemorydb
func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
func (db *BoltDatabase) FindPackages(p Package) (Packages, error) {
// Provides: Treat as the replaced package here
if provided, err := db.getProvide(p); err == nil {
p = provided
@@ -362,7 +363,7 @@ func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
continue
}
match, err := p.SelectorMatchVersion(w.GetVersion())
match, err := p.SelectorMatchVersion(w.GetVersion(), nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match selector")
}
@@ -370,11 +371,11 @@ func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
// FindPackageVersions returns the list of the packages belonging to cat/name
func (db *BoltDatabase) FindPackageVersions(p Package) ([]Package, error) {
func (db *BoltDatabase) FindPackageVersions(p Package) (Packages, error) {
var versionsInWorld []Package
for _, w := range db.World() {
if w.GetName() != p.GetName() || w.GetCategory() != p.GetCategory() {
@@ -383,10 +384,10 @@ func (db *BoltDatabase) FindPackageVersions(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
func (db *BoltDatabase) FindPackageLabel(labelKey string) ([]Package, error) {
func (db *BoltDatabase) FindPackageLabel(labelKey string) (Packages, error) {
var ans []Package
for _, k := range db.GetPackages() {
@@ -398,10 +399,10 @@ func (db *BoltDatabase) FindPackageLabel(labelKey string) ([]Package, error) {
ans = append(ans, pack)
}
}
return ans, nil
return Packages(ans), nil
}
func (db *BoltDatabase) FindPackageLabelMatch(pattern string) ([]Package, error) {
func (db *BoltDatabase) FindPackageLabelMatch(pattern string) (Packages, error) {
var ans []Package
re := regexp.MustCompile(pattern)
@@ -419,10 +420,10 @@ func (db *BoltDatabase) FindPackageLabelMatch(pattern string) ([]Package, error)
}
}
return ans, nil
return Packages(ans), nil
}
func (db *BoltDatabase) FindPackageMatch(pattern string) ([]Package, error) {
func (db *BoltDatabase) FindPackageMatch(pattern string) (Packages, error) {
var ans []Package
re := regexp.MustCompile(pattern)
@@ -436,10 +437,10 @@ func (db *BoltDatabase) FindPackageMatch(pattern string) ([]Package, error) {
return ans, err
}
if re.MatchString(pack.GetCategory() + pack.GetName()) {
if re.MatchString(pack.HumanReadableString()) {
ans = append(ans, pack)
}
}
return ans, nil
return Packages(ans), nil
}


@@ -18,6 +18,7 @@ package pkg
import (
"encoding/base64"
"encoding/json"
"fmt"
"regexp"
"sync"
@@ -59,7 +60,7 @@ func (db *InMemoryDatabase) Get(s string) (string, error) {
defer db.Unlock()
pa, ok := db.Database[s]
if !ok {
return "", errors.New("No key found with that id")
return "", errors.New(fmt.Sprintf("No key found for: %s", s))
}
return pa, nil
}
@@ -178,7 +179,7 @@ func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
for ve, _ := range versions {
match, err := p.VersionMatchSelector(ve)
match, err := p.VersionMatchSelector(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match version")
}
@@ -223,7 +224,7 @@ func (db *InMemoryDatabase) FindPackage(p Package) (Package, error) {
}
// FindPackageVersions returns the list of the packages belonging to cat/name
func (db *InMemoryDatabase) FindPackageVersions(p Package) ([]Package, error) {
func (db *InMemoryDatabase) FindPackageVersions(p Package) (Packages, error) {
versions, ok := db.CacheNoVersion[p.GetPackageName()]
if !ok {
return nil, errors.New("No versions found for package")
@@ -236,11 +237,11 @@ func (db *InMemoryDatabase) FindPackageVersions(p Package) ([]Package, error) {
}
versionsInWorld = append(versionsInWorld, w)
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
// FindPackages returns the list of the packages belonging to cat/name (any versions in requested range)
func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
func (db *InMemoryDatabase) FindPackages(p Package) (Packages, error) {
// Provides: Treat as the replaced package here
if provided, err := db.getProvide(p); err == nil {
@@ -248,11 +249,11 @@ func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
}
versions, ok := db.CacheNoVersion[p.GetPackageName()]
if !ok {
return nil, errors.New("No versions found for package")
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
var versionsInWorld []Package
for ve, _ := range versions {
match, err := p.SelectorMatchVersion(ve)
match, err := p.SelectorMatchVersion(ve, nil)
if err != nil {
return nil, errors.Wrap(err, "Error on match selector")
}
@@ -265,7 +266,7 @@ func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
versionsInWorld = append(versionsInWorld, w)
}
}
return versionsInWorld, nil
return Packages(versionsInWorld), nil
}
func (db *InMemoryDatabase) UpdatePackage(p Package) error {
@@ -277,7 +278,7 @@ func (db *InMemoryDatabase) UpdatePackage(p Package) error {
return db.Set(p.GetFingerPrint(), enc)
return errors.New("Package not found")
return errors.New(fmt.Sprintf("Package not found: %s", p.HumanReadableString()))
}
func (db *InMemoryDatabase) GetPackages() []string {
@@ -302,7 +303,7 @@ func (db *InMemoryDatabase) GetPackageFiles(p Package) ([]string, error) {
pa, ok := db.FileDatabase[p.GetFingerPrint()]
if !ok {
return pa, errors.New("No key found with that id")
return pa, errors.New(fmt.Sprintf("No key found for: %s", p.HumanReadableString()))
}
return pa, nil
@@ -327,7 +328,7 @@ func (db *InMemoryDatabase) RemovePackage(p Package) error {
delete(db.Database, p.GetFingerPrint())
return nil
}
func (db *InMemoryDatabase) World() []Package {
func (db *InMemoryDatabase) World() Packages {
var all []Package
// FIXME: This should all be locked in the db - for now forbid the solver to be run in threads.
for _, k := range db.GetPackages() {
@@ -336,7 +337,7 @@ func (db *InMemoryDatabase) World() []Package {
all = append(all, pack)
}
}
return all
return Packages(all)
}
func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
@@ -349,7 +350,7 @@ func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
if err != nil || len(packages) == 0 {
required = p
} else {
required = Best(packages)
required = packages.Best(nil)
}
return required, nil
@@ -360,7 +361,7 @@ func (db *InMemoryDatabase) FindPackageCandidate(p Package) (Package, error) {
}
func (db *InMemoryDatabase) FindPackageLabel(labelKey string) ([]Package, error) {
func (db *InMemoryDatabase) FindPackageLabel(labelKey string) (Packages, error) {
var ans []Package
for _, k := range db.GetPackages() {
@@ -373,10 +374,10 @@ func (db *InMemoryDatabase) FindPackageLabel(labelKey string) ([]Package, error)
}
}
return ans, nil
return Packages(ans), nil
}
func (db *InMemoryDatabase) FindPackageLabelMatch(pattern string) ([]Package, error) {
func (db *InMemoryDatabase) FindPackageLabelMatch(pattern string) (Packages, error) {
var ans []Package
re := regexp.MustCompile(pattern)
@@ -394,10 +395,10 @@ func (db *InMemoryDatabase) FindPackageLabelMatch(pattern string) ([]Package, er
}
}
return ans, nil
return Packages(ans), nil
}
func (db *InMemoryDatabase) FindPackageMatch(pattern string) ([]Package, error) {
func (db *InMemoryDatabase) FindPackageMatch(pattern string) (Packages, error) {
var ans []Package
re := regexp.MustCompile(pattern)
@@ -411,10 +412,10 @@ func (db *InMemoryDatabase) FindPackageMatch(pattern string) ([]Package, error)
return ans, err
}
if re.MatchString(pack.GetCategory() + pack.GetName()) {
if re.MatchString(pack.HumanReadableString()) {
ans = append(ans, pack)
}
}
return ans, nil
return Packages(ans), nil
}


@@ -23,14 +23,15 @@ import (
"io"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
"github.com/mudler/luet/pkg/helpers"
version "github.com/mudler/luet/pkg/versioner"
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/crillab/gophersat/bf"
"github.com/ghodss/yaml"
version "github.com/hashicorp/go-version"
"github.com/jinzhu/copier"
"github.com/pkg/errors"
)
@@ -41,27 +42,29 @@ type Package interface {
Encode(PackageDatabase) (string, error)
BuildFormula(PackageDatabase, PackageDatabase) ([]bf.Formula, error)
IsFlagged(bool) Package
Flagged() bool
GetFingerPrint() string
GetPackageName() string
Requires([]*DefaultPackage) Package
Conflicts([]*DefaultPackage) Package
Revdeps(PackageDatabase) []Package
LabelDeps(PackageDatabase, string) []Package
Revdeps(PackageDatabase) Packages
ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages
LabelDeps(PackageDatabase, string) Packages
GetProvides() []*DefaultPackage
SetProvides([]*DefaultPackage) Package
GetRequires() []*DefaultPackage
GetConflicts() []*DefaultPackage
Expand(PackageDatabase) ([]Package, error)
Expand(PackageDatabase) (Packages, error)
SetCategory(string)
GetName() string
SetName(string)
GetCategory() string
GetVersion() string
SetVersion(string)
RequiresContains(PackageDatabase, Package) (bool, error)
Matches(m Package) bool
BumpBuildVersion() error
@@ -91,23 +94,36 @@ type Package interface {
HasLabel(string) bool
MatchLabel(*regexp.Regexp) bool
AddAnnotation(string, string)
GetAnnotations() map[string]string
HasAnnotation(string) bool
MatchAnnotation(*regexp.Regexp) bool
IsHidden() bool
IsSelector() bool
VersionMatchSelector(string) (bool, error)
SelectorMatchVersion(string) (bool, error)
VersionMatchSelector(string, version.Versioner) (bool, error)
SelectorMatchVersion(string, version.Versioner) (bool, error)
String() string
HumanReadableString() string
HashFingerprint() string
HashFingerprint(string) string
SetBuildTimestamp(s string)
GetBuildTimestamp() string
Clone() Package
}
type Tree interface {
GetPackageSet() PackageDatabase
Prelude() string // A tree might have a prelude to be able to consume a tree
SetPackageSet(s PackageDatabase)
World() ([]Package, error)
World() (Packages, error)
FindPackage(Package) (Package, error)
}
type Packages []Package
// >> Unmarshallers
// DefaultPackageFromYaml decodes a package from yaml bytes
func DefaultPackageFromYaml(yml []byte) (DefaultPackage, error) {
@@ -149,19 +165,21 @@ type DefaultPackage struct {
State State `json:"state,omitempty"`
PackageRequires []*DefaultPackage `json:"requires"` // Affects YAML field names too.
PackageConflicts []*DefaultPackage `json:"conflicts"` // Affects YAML field names too.
IsSet bool `json:"set,omitempty"` // Affects YAML field names too.
Provides []*DefaultPackage `json:"provides,omitempty"` // Affects YAML field names too.
Hidden bool `json:"hidden,omitempty"` // Affects YAML field names too.
// TODO: Annotations?
// Annotations are used for core features/options
Annotations map[string]string `json:"annotations,omitempty"` // Affects YAML field names too
// Path is set only internally when tree is loaded from disk
Path string `json:"path,omitempty"`
Description string `json:"description,omitempty"`
Uri []string `json:"uri,omitempty"`
License string `json:"license,omitempty"`
Description string `json:"description,omitempty"`
Uri []string `json:"uri,omitempty"`
License string `json:"license,omitempty"`
BuildTimestamp string `json:"buildtimestamp,omitempty"`
Labels map[string]string `json:labels,omitempty`
Labels map[string]string `json:"labels,omitempty"` // Affects YAML field names too.
}
// State represent the package state
@@ -174,7 +192,7 @@ func NewPackage(name, version string, requires []*DefaultPackage, conflicts []*D
Version: version,
PackageRequires: requires,
PackageConflicts: conflicts,
Labels: make(map[string]string, 0),
Labels: nil,
}
}
@@ -192,9 +210,9 @@ func (p *DefaultPackage) GetFingerPrint() string {
return fmt.Sprintf("%s-%s-%s", p.Name, p.Category, p.Version)
}
func (p *DefaultPackage) HashFingerprint() string {
func (p *DefaultPackage) HashFingerprint(salt string) string {
h := md5.New()
io.WriteString(h, p.GetFingerPrint())
io.WriteString(h, fmt.Sprintf("%s-%s", p.GetFingerPrint(), salt))
return fmt.Sprintf("%x", h.Sum(nil))
}
@@ -216,6 +234,16 @@ func (p *DefaultPackage) GetPackageName() string {
return fmt.Sprintf("%s-%s", p.Name, p.Category)
}
// GetBuildTimestamp returns the package build timestamp
func (p *DefaultPackage) GetBuildTimestamp() string {
return p.BuildTimestamp
}
// SetBuildTimestamp sets the package build timestamp
func (p *DefaultPackage) SetBuildTimestamp(s string) {
p.BuildTimestamp = s
}
// GetPath returns the path where the definition file was found
func (p *DefaultPackage) GetPath() string {
return p.Path
@@ -233,26 +261,24 @@ func (p *DefaultPackage) IsSelector() bool {
return strings.ContainsAny(p.GetVersion(), "<>=")
}
func (p *DefaultPackage) IsHidden() bool {
return p.Hidden
}
func (p *DefaultPackage) HasLabel(label string) bool {
ans := false
for k, _ := range p.Labels {
if k == label {
ans = true
break
}
}
return ans
return helpers.MapHasKey(&p.Labels, label)
}
func (p *DefaultPackage) MatchLabel(r *regexp.Regexp) bool {
ans := false
for k, v := range p.Labels {
if r.MatchString(k + "=" + v) {
ans = true
break
}
}
return ans
return helpers.MapMatchRegex(&p.Labels, r)
}
func (p *DefaultPackage) HasAnnotation(label string) bool {
return helpers.MapHasKey(&p.Annotations, label)
}
func (p *DefaultPackage) MatchAnnotation(r *regexp.Regexp) bool {
return helpers.MapMatchRegex(&p.Annotations, r)
}
// AddUse adds a use to a package
@@ -285,7 +311,6 @@ func (p *DefaultPackage) Encode(db PackageDatabase) (string, error) {
func (p *DefaultPackage) Yaml() ([]byte, error) {
j, err := p.JSON()
if err != nil {
return []byte{}, err
}
y, err := yaml.JSONToYAML(j)
@@ -296,15 +321,6 @@ func (p *DefaultPackage) Yaml() ([]byte, error) {
return y, nil
}
func (p *DefaultPackage) IsFlagged(b bool) Package {
p.IsSet = b
return p
}
func (p *DefaultPackage) Flagged() bool {
return p.IsSet
}
func (p *DefaultPackage) GetName() string {
return p.Name
}
@@ -312,6 +328,9 @@ func (p *DefaultPackage) GetName() string {
func (p *DefaultPackage) GetVersion() string {
return p.Version
}
func (p *DefaultPackage) SetVersion(v string) {
p.Version = v
}
func (p *DefaultPackage) GetDescription() string {
return p.Description
}
@@ -336,15 +355,32 @@ func (p *DefaultPackage) GetCategory() string {
func (p *DefaultPackage) SetCategory(s string) {
p.Category = s
}
func (p *DefaultPackage) SetName(s string) {
p.Name = s
}
func (p *DefaultPackage) GetUses() []string {
return p.UseFlags
}
func (p *DefaultPackage) AddLabel(k, v string) {
if p.Labels == nil {
p.Labels = make(map[string]string, 0)
}
p.Labels[k] = v
}
func (p *DefaultPackage) AddAnnotation(k, v string) {
if p.Annotations == nil {
p.Annotations = make(map[string]string, 0)
}
p.Annotations[k] = v
}
func (p *DefaultPackage) GetLabels() map[string]string {
return p.Labels
}
func (p *DefaultPackage) GetAnnotations() map[string]string {
return p.Annotations
}
func (p *DefaultPackage) GetProvides() []*DefaultPackage {
return p.Provides
}
@@ -378,15 +414,15 @@ func (p *DefaultPackage) Matches(m Package) bool {
return false
}
func (p *DefaultPackage) Expand(definitiondb PackageDatabase) ([]Package, error) {
var versionsInWorld []Package
func (p *DefaultPackage) Expand(definitiondb PackageDatabase) (Packages, error) {
var versionsInWorld Packages
all, err := definitiondb.FindPackages(p)
if err != nil {
return nil, err
}
for _, w := range all {
match, err := p.SelectorMatchVersion(w.GetVersion())
match, err := p.SelectorMatchVersion(w.GetVersion(), nil)
if err != nil {
return nil, err
}
@@ -398,8 +434,8 @@ func (p *DefaultPackage) Expand(definitiondb PackageDatabase) ([]Package, error)
return versionsInWorld, nil
}
func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) []Package {
var versionsInWorld []Package
func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) Packages {
var versionsInWorld Packages
for _, w := range definitiondb.World() {
if w.Matches(p) {
continue
@@ -415,8 +451,54 @@ func (p *DefaultPackage) Revdeps(definitiondb PackageDatabase) []Package {
return versionsInWorld
}
func (p *DefaultPackage) LabelDeps(definitiondb PackageDatabase, labelKey string) []Package {
var pkgsWithLabelInWorld []Package
// ExpandedRevdeps returns the package reverse dependencies,
// also matching version selectors (>, <, >=, <=)
func (p *DefaultPackage) ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages {
var versionsInWorld Packages
if _, ok := visited[p.HumanReadableString()]; ok {
return versionsInWorld
}
visited[p.HumanReadableString()] = true
for _, w := range definitiondb.World() {
if w.Matches(p) {
continue
}
match := false
for _, re := range w.GetRequires() {
if re.Matches(p) {
match = true
}
if !match {
packages, _ := re.Expand(definitiondb)
for _, pa := range packages {
if pa.Matches(p) {
match = true
}
}
}
// if ok, _ := w.RequiresContains(definitiondb, p); ok {
}
if match {
versionsInWorld = append(versionsInWorld, w)
versionsInWorld = append(versionsInWorld, w.ExpandedRevdeps(definitiondb, visited).Unique()...)
}
// }
}
//visited[p.HumanReadableString()] = true
return versionsInWorld.Unique()
}
func (p *DefaultPackage) LabelDeps(definitiondb PackageDatabase, labelKey string) Packages {
var pkgsWithLabelInWorld Packages
// TODO: check whether to integrate an index to speed up
// lookups instead of iterating over the whole list.
for _, w := range definitiondb.World() {
@@ -432,7 +514,11 @@ func DecodePackage(ID string, db PackageDatabase) (Package, error) {
return db.GetPackage(ID)
}
func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Package) (bool, error) {
func (pack *DefaultPackage) scanRequires(definitiondb PackageDatabase, s Package, visited map[string]interface{}) (bool, error) {
if _, ok := visited[pack.HumanReadableString()]; ok {
return false, nil
}
visited[pack.HumanReadableString()] = true
p, err := definitiondb.FindPackage(pack)
if err != nil {
p = pack //relax things
@@ -450,7 +536,7 @@ func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Pac
return true, nil
}
}
if contains, err := re.RequiresContains(definitiondb, s); err == nil && contains {
if contains, err := re.scanRequires(definitiondb, s, visited); err == nil && contains {
return true, nil
}
}
@@ -458,7 +544,19 @@ func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Pac
return false, nil
}
func Best(set []Package) Package {
// RequiresContains recursively scans the package dependencies in the database to find a match with the given package.
// It is used by the solver during uninstall.
func (pack *DefaultPackage) RequiresContains(definitiondb PackageDatabase, s Package) (bool, error) {
return pack.scanRequires(definitiondb, s, make(map[string]interface{}))
}
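The refactor above splits `RequiresContains` into an unexported `scanRequires` that threads a `visited` map, with the exported wrapper seeding the map; `buildFormula` gets the same treatment further down. The point is termination on cyclic dependency graphs. A minimal sketch of the pattern, using a hypothetical `Node` type in place of `DefaultPackage`:

```go
package main

import "fmt"

// Node is a hypothetical stand-in for DefaultPackage: a name plus
// direct dependencies, which may form cycles.
type Node struct {
	Name     string
	Requires []*Node
}

// contains is the recursive worker: the visited map ensures each node
// is expanded at most once, so cycles cannot recurse forever.
func (n *Node) contains(target string, visited map[string]bool) bool {
	if visited[n.Name] {
		return false // already expanded; break the cycle
	}
	visited[n.Name] = true
	for _, r := range n.Requires {
		if r.Name == target || r.contains(target, visited) {
			return true
		}
	}
	return false
}

// RequiresContains mirrors the exported wrapper: callers never see the
// visited map, which is seeded fresh per top-level query.
func (n *Node) RequiresContains(target string) bool {
	return n.contains(target, make(map[string]bool))
}

func main() {
	a := &Node{Name: "a"}
	b := &Node{Name: "b", Requires: []*Node{a}}
	a.Requires = []*Node{b} // cycle: a -> b -> a
	fmt.Println(a.RequiresContains("b"), a.RequiresContains("missing"))
}
```

Before this change, a dependency cycle would have made the unbounded recursion in `RequiresContains` (and `BuildFormula`) overflow the stack.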
// Best returns the best (highest) version of the package from a list.
// It accepts a versioner interface to change the ordering policy. If nil is supplied,
// it defaults to version.WrappedVersioner, which supports both semver and Debian versioning
func (set Packages) Best(v version.Versioner) Package {
if v == nil {
v = &version.WrappedVersioner{}
}
var versionsMap map[string]Package = make(map[string]Package)
if len(set) == 0 {
panic("Best needs a list with elements")
@@ -469,20 +567,28 @@ func Best(set []Package) Package {
versionsRaw = append(versionsRaw, p.GetVersion())
versionsMap[p.GetVersion()] = p
}
sorted := v.Sort(versionsRaw)
versions := make([]*version.Version, len(versionsRaw))
for i, raw := range versionsRaw {
v, _ := version.NewVersion(raw)
versions[i] = v
}
// After this, the versions are properly sorted
sort.Sort(version.Collection(versions))
return versionsMap[versions[len(versions)-1].Original()]
return versionsMap[sorted[len(sorted)-1]]
}
func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db PackageDatabase) ([]bf.Formula, error) {
func (set Packages) Unique() Packages {
var result Packages
uniq := make(map[string]Package)
for _, p := range set {
uniq[p.GetFingerPrint()] = p
}
for _, p := range uniq {
result = append(result, p)
}
return result
}
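`Best` and `Unique` are now methods on the new `Packages` type: `Unique` dedupes by fingerprint via a map, and `Best` sorts the raw version strings through a pluggable versioner and takes the last element. A self-contained sketch of those semantics, using a hypothetical `Pkg` type and a plain string sort standing in for `version.WrappedVersioner`:

```go
package main

import (
	"fmt"
	"sort"
)

// Pkg is a hypothetical stand-in for DefaultPackage.
type Pkg struct{ Name, Version string }

func (p Pkg) Fingerprint() string { return p.Name + "-" + p.Version }

type Pkgs []Pkg

// Unique dedupes by fingerprint; note that map iteration means the
// result order is unspecified, which is why callers treat it as a set.
func (set Pkgs) Unique() Pkgs {
	uniq := make(map[string]Pkg)
	for _, p := range set {
		uniq[p.Fingerprint()] = p
	}
	out := make(Pkgs, 0, len(uniq))
	for _, p := range uniq {
		out = append(out, p)
	}
	return out
}

// Best sorts versions with a pluggable policy and picks the highest.
// nil falls back to a default sorter, mirroring the nil -> WrappedVersioner
// behavior above.
func (set Pkgs) Best(sorter func([]string)) Pkg {
	if len(set) == 0 {
		panic("Best needs a list with elements")
	}
	if sorter == nil {
		sorter = sort.Strings // stand-in for the real versioner's Sort
	}
	byVersion := map[string]Pkg{}
	var versions []string
	for _, p := range set {
		versions = append(versions, p.Version)
		byVersion[p.Version] = p
	}
	sorter(versions)
	return byVersion[versions[len(versions)-1]]
}

func main() {
	set := Pkgs{{"a", "1.0"}, {"a", "1.1"}, {"a", "1.0"}}
	fmt.Println(len(set.Unique()), set.Best(nil).Version)
}
```

Plain lexicographic sort is wrong for real versions ("1.10" sorts below "1.9"), which is exactly why the commit routes ordering through the versioner interface instead of `hashicorp/go-version` hardcoded in `Best`.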
func (pack *DefaultPackage) buildFormula(definitiondb PackageDatabase, db PackageDatabase, visited map[string]interface{}) ([]bf.Formula, error) {
if _, ok := visited[pack.HumanReadableString()]; ok {
return nil, nil
}
visited[pack.HumanReadableString()] = true
p, err := definitiondb.FindPackage(pack)
if err != nil {
p = pack // Relax failures and trust the def
@@ -584,8 +690,8 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
}
B := bf.Var(encodedB)
formulas = append(formulas, bf.Or(bf.Not(A), B))
f, err := required.BuildFormula(definitiondb, db)
r := required.(*DefaultPackage) // We know the only Package implementation is DefaultPackage, so the assertion cannot fail
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
@@ -614,8 +720,8 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
B := bf.Var(encodedB)
formulas = append(formulas, bf.Or(bf.Not(A),
bf.Not(B)))
f, err := p.BuildFormula(definitiondb, db)
r := p.(*DefaultPackage) // We know the only Package implementation is DefaultPackage, so the assertion cannot fail
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
@@ -634,7 +740,8 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
formulas = append(formulas, bf.Or(bf.Not(A),
bf.Not(B)))
f, err := required.BuildFormula(definitiondb, db)
r := required.(*DefaultPackage) // We know the only Package implementation is DefaultPackage, so the assertion cannot fail
f, err := r.buildFormula(definitiondb, db, visited)
if err != nil {
return nil, err
}
@@ -645,13 +752,16 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
return formulas, nil
}
func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db PackageDatabase) ([]bf.Formula, error) {
return pack.buildFormula(definitiondb, db, make(map[string]interface{}))
}
func (p *DefaultPackage) Explain() {
fmt.Println("====================")
fmt.Println("Name: ", p.GetName())
fmt.Println("Category: ", p.GetCategory())
fmt.Println("Version: ", p.GetVersion())
fmt.Println("Installed: ", p.IsSet)
for _, req := range p.GetRequires() {
fmt.Println("\t-> ", req)
@@ -750,3 +860,22 @@ end:
return nil
}
func (p *DefaultPackage) SelectorMatchVersion(ver string, v version.Versioner) (bool, error) {
if !p.IsSelector() {
return false, errors.New("Package is not a selector")
}
if v == nil {
v = &version.WrappedVersioner{}
}
return v.ValidateSelector(ver, p.GetVersion()), nil
}
func (p *DefaultPackage) VersionMatchSelector(selector string, v version.Versioner) (bool, error) {
if v == nil {
v = &version.WrappedVersioner{}
}
return v.ValidateSelector(p.GetVersion(), selector), nil
}
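`SelectorMatchVersion` and `VersionMatchSelector` both delegate to a `Versioner`; a toy sketch of what selector validation looks like (this handles only a bare `>=` operator over float-like versions, far less than luet's `WrappedVersioner` supports):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validateSelector reports whether ver satisfies a ">=N" selector.
// Anything without the ">=" prefix is treated as an exact match.
func validateSelector(ver, selector string) bool {
	if !strings.HasPrefix(selector, ">=") {
		return ver == selector
	}
	want, err1 := strconv.ParseFloat(strings.TrimPrefix(selector, ">="), 64)
	got, err2 := strconv.ParseFloat(ver, 64)
	if err1 != nil || err2 != nil {
		return false
	}
	return got >= want
}

func main() {
	fmt.Println(validateSelector("1.1", ">=1.0"))
}
```

Note the argument order matters, which is exactly why the two wrapper methods exist: one asks "does this version satisfy my selector", the other the reverse.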


@@ -36,8 +36,8 @@ var _ = Describe("Package", func() {
})
It("Generates package fingerprint hashes", func() {
Expect(a.HashFingerprint()).ToNot(Equal(a1.HashFingerprint()))
Expect(a.HashFingerprint()).To(Equal("c64caa391b79adb598ad98e261aa79a0"))
Expect(a.HashFingerprint("")).ToNot(Equal(a1.HashFingerprint("")))
Expect(a.HashFingerprint("")).To(Equal("76972ef6991ec6102f33b401105c1351"))
})
})
@@ -46,6 +46,7 @@ var _ = Describe("Package", func() {
a1 := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
a11 := NewPackage("A", "1.1", []*DefaultPackage{}, []*DefaultPackage{})
a01 := NewPackage("A", "0.1", []*DefaultPackage{}, []*DefaultPackage{})
re := regexp.MustCompile("project[0-9][=].*")
It("Expands correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a1, a11, a01} {
@@ -58,8 +59,10 @@ var _ = Describe("Package", func() {
Expect(lst).To(ContainElement(a1))
Expect(lst).ToNot(ContainElement(a01))
Expect(len(lst)).To(Equal(2))
p := Best(lst)
p := lst.Best(nil)
Expect(p).To(Equal(a11))
// Test annotation with null map
Expect(a.MatchAnnotation(re)).To(Equal(false))
})
})
@@ -91,6 +94,34 @@ var _ = Describe("Package", func() {
})
})
Context("Find annotations on packages", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.AddAnnotation("project1", "test1")
a.AddAnnotation("label2", "value1")
b := NewPackage("B", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b.AddAnnotation("project2", "test2")
b.AddAnnotation("label2", "value1")
It("Expands correctly", func() {
var err error
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b} {
_, err = definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
re := regexp.MustCompile("project[0-9][=].*")
Expect(err).ToNot(HaveOccurred())
Expect(re).ToNot(BeNil())
Expect(a.HasAnnotation("label2")).To(Equal(true))
Expect(a.HasAnnotation("label3")).To(Equal(false))
Expect(a.HasAnnotation("project1")).To(Equal(true))
Expect(b.HasAnnotation("project2")).To(Equal(true))
Expect(b.HasAnnotation("label2")).To(Equal(true))
Expect(b.MatchAnnotation(re)).To(Equal(true))
Expect(a.MatchAnnotation(re)).To(Equal(true))
})
})
Context("Check description", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a.SetDescription("Description A")
@@ -159,6 +190,93 @@ var _ = Describe("Package", func() {
})
})
Context("revdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
It("doesn't resolve selectors", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
lst := a.Revdeps(definitions)
Expect(len(lst)).To(Equal(0))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{c}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("Expandedrevdeps", func() {
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
b := NewPackage("B", "1.0", []*DefaultPackage{&DefaultPackage{Name: "A", Version: ">=1.0"}}, []*DefaultPackage{})
c := NewPackage("C", "1.1", []*DefaultPackage{&DefaultPackage{Name: "B", Version: ">=1.0"}, &DefaultPackage{Name: "A", Version: ">=0"}}, []*DefaultPackage{})
d := NewPackage("D", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
e := NewPackage("E", "0.1", []*DefaultPackage{&DefaultPackage{Name: "C", Version: ">=1.0"}}, []*DefaultPackage{})
It("Computes correctly", func() {
definitions := NewInMemoryDatabase(false)
for _, p := range []Package{a, b, c, d, e} {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
Expect(len(lst)).To(Equal(4))
})
})
Context("RequiresContains", func() {
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
a1 := NewPackage("A", "1.0", []*DefaultPackage{a}, []*DefaultPackage{})
@@ -196,7 +314,6 @@ var _ = Describe("Package", func() {
Expect(p.GetVersion()).To(Equal(a.GetVersion()))
Expect(p.GetName()).To(Equal(a.GetName()))
Expect(p.Flagged()).To(Equal(a.Flagged()))
Expect(p.GetFingerPrint()).To(Equal(a.GetFingerPrint()))
Expect(len(p.GetConflicts())).To(Equal(len(a.GetConflicts())))
Expect(len(p.GetRequires())).To(Equal(len(a.GetRequires())))
@@ -256,7 +373,6 @@ var _ = Describe("Package", func() {
a2 := a.Clone()
Expect(a2.GetVersion()).To(Equal(a.GetVersion()))
Expect(a2.GetName()).To(Equal(a.GetName()))
Expect(a2.Flagged()).To(Equal(a.Flagged()))
Expect(a2.GetFingerPrint()).To(Equal(a.GetFingerPrint()))
Expect(len(a2.GetConflicts())).To(Equal(len(a.GetConflicts())))
Expect(len(a2.GetRequires())).To(Equal(len(a.GetRequires())))


@@ -28,7 +28,7 @@ import (
)
func LoadRepositories(c *LuetConfig) error {
var regexRepo = regexp.MustCompile(`.yml$`)
var regexRepo = regexp.MustCompile(`.yml$|.yaml$`)
for _, rdir := range c.RepositoriesConfDir {
Debug("Parsing Repository Directory", rdir, "...")


@@ -35,12 +35,22 @@ var _ = Describe("Repository", func() {
It("Check Load Repository 1", func() {
Expect(err).Should(BeNil())
Expect(len(cfg.SystemRepositories)).Should(Equal(1))
Expect(len(cfg.SystemRepositories)).Should(Equal(2))
Expect(cfg.SystemRepositories[0].Name).Should(Equal("test1"))
Expect(cfg.SystemRepositories[0].Priority).Should(Equal(999))
Expect(cfg.SystemRepositories[0].Type).Should(Equal("disk"))
Expect(len(cfg.SystemRepositories[0].Urls)).Should(Equal(1))
Expect(cfg.SystemRepositories[0].Urls[0]).Should(Equal("tests/repos/test1"))
})
It("Check Load Repository 2", func() {
Expect(err).Should(BeNil())
Expect(len(cfg.SystemRepositories)).Should(Equal(2))
Expect(cfg.SystemRepositories[1].Name).Should(Equal("test2"))
Expect(cfg.SystemRepositories[1].Priority).Should(Equal(1000))
Expect(cfg.SystemRepositories[1].Type).Should(Equal("disk"))
Expect(len(cfg.SystemRepositories[1].Urls)).Should(Equal(1))
Expect(cfg.SystemRepositories[1].Urls[0]).Should(Equal("tests/repos/test2"))
})
})
})


@@ -23,6 +23,7 @@ import (
pkg "github.com/mudler/luet/pkg/package"
toposort "github.com/philopon/go-toposort"
"github.com/pkg/errors"
"github.com/stevenle/topsort"
)
@@ -145,7 +146,7 @@ func (assertions PackagesAssertions) Search(f string) *PackageAssert {
return nil
}
func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fingerprint string) PackagesAssertions {
func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fingerprint string) (PackagesAssertions, error) {
orderedAssertions := PackagesAssertions{}
unorderedAssertions := PackagesAssertions{}
@@ -169,10 +170,10 @@ func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fin
sort.Sort(unorderedAssertions)
// Build a topological graph
//graph := toposort.NewGraph(len(unorderedAssertions))
// graph.AddNodes(fingerprints...)
for _, a := range unorderedAssertions {
currentPkg := a.Package
added := map[string]interface{}{}
REQUIRES:
for _, requiredDef := range currentPkg.GetRequires() {
if def, err := definitiondb.FindPackage(requiredDef); err == nil { // Provides: Get a chance of being override here
requiredDef = def.(*pkg.DefaultPackage)
@@ -184,26 +185,29 @@ func (assertions PackagesAssertions) Order(definitiondb pkg.PackageDatabase, fin
if req != nil {
requiredDef = req.Package
}
if _, ok := added[requiredDef.GetFingerPrint()]; ok {
continue REQUIRES
}
// Expand here too, as we need to order them (or should the solver give back the dep correctly instead?)
graph.AddEdge(currentPkg.GetFingerPrint(), requiredDef.GetFingerPrint())
added[requiredDef.GetFingerPrint()] = nil
}
}
result, err := graph.TopSort(fingerprint)
if err != nil {
panic(err)
return nil, errors.Wrap(err, "fail on sorting "+fingerprint)
}
for _, res := range result {
a, ok := tmpMap[res]
if !ok {
panic("fail looking for " + res)
return nil, errors.New("fail looking for " + res)
// continue
}
orderedAssertions = append(orderedAssertions, a)
// orderedAssertions = append(PackagesAssertions{a}, orderedAssertions...) // push upfront
}
//helpers.ReverseAny(orderedAssertions)
return orderedAssertions
return orderedAssertions, nil
}
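`Order` now surfaces sorting failures as errors instead of panicking, the same contract its callers in the decoder tests adopt below. A minimal dependency-first topological sort with that contract (`topSort` here is a hypothetical sketch, not the `stevenle/topsort` API):

```go
package main

import "fmt"

// topSort emits dependencies before dependents starting from n,
// returning an error on cycles rather than panicking.
// state: 0 = unvisited, 1 = in progress, 2 = done.
func topSort(edges map[string][]string, n string, state map[string]int, out *[]string) error {
	switch state[n] {
	case 1:
		return fmt.Errorf("fail on sorting %s: cycle detected", n)
	case 2:
		return nil
	}
	state[n] = 1
	for _, dep := range edges[n] {
		if err := topSort(edges, dep, state, out); err != nil {
			return err
		}
	}
	state[n] = 2
	*out = append(*out, n)
	return nil
}

func main() {
	edges := map[string][]string{"A": {"B"}, "B": {"C"}}
	var order []string
	if err := topSort(edges, "A", map[string]int{}, &order); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(order) // dependencies first
}
```

Returning the error lets callers wrap it with context ("fail on sorting <fingerprint>"), instead of crashing the whole solve.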
func (assertions PackagesAssertions) Explain() string {
@@ -247,6 +251,34 @@ func (a PackagesAssertions) Less(i, j int) bool {
}
func (a PackagesAssertions) TrueLen() int {
count := 0
for _, ass := range a {
if ass.Value {
count++
}
}
return count
}
// HashFrom computes the assertion hash from a given package. It drops the package from
// the assertions and checks whether it was the only one; if so, it marks it specially,
// so that the generated hash stays unique for the selected package
func (assertions PackagesAssertions) HashFrom(p pkg.Package) string {
var assertionhash string
// When we don't have any solution to hash for, we need to generate a UUID by ourselves
latestsolution := assertions.Drop(p)
if latestsolution.TrueLen() == 0 {
assertionhash = assertions.Mark(p).AssertionHash()
} else {
assertionhash = latestsolution.AssertionHash()
}
return assertionhash
}
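`HashFrom` falls back to hashing the `Mark`-ed assertion set so that even a single-package solution gets a distinct hash. A sketch of the underlying idea, assuming an md5 hex digest like the 32-character fingerprint hashes in the tests above (the real `AssertionHash` encodes full assertions, not bare strings):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"strings"
)

// assertionHash hashes the concatenation of fingerprints; callers must
// order the slice first, exactly as AssertionHash's comment warns.
// Marking a package (the "@@" prefix from Mark) changes its fingerprint
// and therefore the resulting hash.
func assertionHash(fingerprints []string) string {
	sum := md5.Sum([]byte(strings.Join(fingerprints, "")))
	return fmt.Sprintf("%x", sum)
}

func main() {
	fmt.Println(assertionHash([]string{"A-1.0", "B-1.0"}))
}
```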
func (assertions PackagesAssertions) AssertionHash() string {
var fingerprint string
for _, assertion := range assertions { // Note: Always order them first!
@@ -283,3 +315,18 @@ func (assertions PackagesAssertions) Cut(p pkg.Package) PackagesAssertions {
}
return ass
}
// Mark returns a new assertion with the package marked
func (assertions PackagesAssertions) Mark(p pkg.Package) PackagesAssertions {
ass := PackagesAssertions{}
for _, a := range assertions {
if a.Package.Matches(p) {
marked := a.Package.Clone()
marked.SetName("@@" + marked.GetName())
a = PackageAssert{Package: marked.(*pkg.DefaultPackage), Value: a.Value, Hash: a.Hash}
}
ass = append(ass, a)
}
return ass
}


@@ -19,6 +19,7 @@ import (
"strconv"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -72,7 +73,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, A.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, A.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
// Expect(len(solution)).To(Equal(6))
Expect(solution[0].Package.GetName()).To(Equal("G"))
Expect(solution[1].Package.GetName()).To(Equal("H"))
@@ -188,7 +190,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, A.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, A.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
// Expect(len(solution)).To(Equal(6))
Expect(solution[0].Package.GetName()).To(Equal("G"))
Expect(solution[1].Package.GetName()).To(Equal("H"))
@@ -206,7 +209,8 @@ var _ = Describe("Decoder", func() {
Expect(len(solution)).To(Equal(6))
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(dbDefinitions, B.GetFingerPrint())
solution, err = solution.Order(dbDefinitions, B.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
hash2 := solution.AssertionHash()
// Expect(len(solution)).To(Equal(6))
@@ -243,7 +247,11 @@ var _ = Describe("Decoder", func() {
solution2, err := s.Install([]pkg.Package{Z})
Expect(err).ToNot(HaveOccurred())
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Drop(Y).AssertionHash() == solution2.Order(dbDefinitions, Z.GetFingerPrint()).Drop(Z).AssertionHash()).To(BeTrue())
orderY, err := solution.Order(dbDefinitions, Y.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
orderZ, err := solution2.Order(dbDefinitions, Z.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(orderY.Drop(Y).AssertionHash() == orderZ.Drop(Z).AssertionHash()).To(BeTrue())
})
It("Hashes them, Cuts them and could be used for comparison", func() {
@@ -267,10 +275,120 @@ var _ = Describe("Decoder", func() {
solution2, err := s.Install([]pkg.Package{Z})
Expect(err).ToNot(HaveOccurred())
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Cut(Y).Drop(Y)).To(Equal(solution2.Order(dbDefinitions, Z.GetFingerPrint()).Cut(Z).Drop(Z)))
orderY, err := solution.Order(dbDefinitions, Y.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
orderZ, err := solution2.Order(dbDefinitions, Z.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(orderY.Cut(Y).Drop(Y)).To(Equal(orderZ.Cut(Z).Drop(Z)))
Expect(solution.Order(dbDefinitions, Y.GetFingerPrint()).Cut(Y).Drop(Y).AssertionHash()).To(Equal(solution2.Order(dbDefinitions, Z.GetFingerPrint()).Cut(Z).Drop(Z).AssertionHash()))
Expect(orderY.Cut(Y).Drop(Y).AssertionHash()).To(Equal(orderZ.Cut(Z).Drop(Z).AssertionHash()))
})
It("HashFrom can be used equally", func() {
X := pkg.NewPackage("X", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
Y := pkg.NewPackage("Y", "", []*pkg.DefaultPackage{X}, []*pkg.DefaultPackage{})
Z := pkg.NewPackage("Z", "", []*pkg.DefaultPackage{X}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{X, Y, Z} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Install([]pkg.Package{Y})
Expect(err).ToNot(HaveOccurred())
solution2, err := s.Install([]pkg.Package{Z})
Expect(err).ToNot(HaveOccurred())
orderY, err := solution.Order(dbDefinitions, Y.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
orderZ, err := solution2.Order(dbDefinitions, Z.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(orderY.Cut(Y).Drop(Y)).To(Equal(orderZ.Cut(Z).Drop(Z)))
Expect(orderY.Cut(Y).HashFrom(Y)).To(Equal(orderZ.Cut(Z).HashFrom(Z)))
})
It("Unique hashes for single packages", func() {
X := pkg.NewPackage("X", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("X", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{X, F, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Install([]pkg.Package{X})
Expect(err).ToNot(HaveOccurred())
solution2, err := s.Install([]pkg.Package{F})
Expect(err).ToNot(HaveOccurred())
solution3, err := s.Install([]pkg.Package{D})
Expect(err).ToNot(HaveOccurred())
Expect(solution.AssertionHash()).ToNot(Equal(solution2.AssertionHash()))
Expect(solution3.AssertionHash()).To(Equal(solution.AssertionHash()))
})
It("Unique hashes for empty assertions", func() {
empty := solver.PackagesAssertions{}
empty2 := solver.PackagesAssertions{}
Expect(empty.AssertionHash()).To(Equal(empty2.AssertionHash()))
})
It("Unique hashes for single packages with HashFrom", func() {
X := pkg.NewPackage("X", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("X", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
Y := pkg.NewPackage("Y", "", []*pkg.DefaultPackage{X}, []*pkg.DefaultPackage{})
empty := solver.PackagesAssertions{}
for _, p := range []pkg.Package{X, F, D, Y} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Install([]pkg.Package{X})
Expect(err).ToNot(HaveOccurred())
solution2, err := s.Install([]pkg.Package{F})
Expect(err).ToNot(HaveOccurred())
solution3, err := s.Install([]pkg.Package{D})
Expect(err).ToNot(HaveOccurred())
solution4, err := s.Install([]pkg.Package{Y})
Expect(err).ToNot(HaveOccurred())
Expect(solution.HashFrom(X)).ToNot(Equal(solution2.HashFrom(F)))
Expect(solution3.HashFrom(D)).To(Equal(solution.HashFrom(X)))
Expect(empty.AssertionHash()).ToNot(Equal(solution3.HashFrom(D)))
Expect(empty.AssertionHash()).ToNot(Equal(solution2.HashFrom(F)))
Expect(solution4.Drop(Y).AssertionHash()).To(Equal(solution4.HashFrom(Y)))
})
})
})


@@ -77,11 +77,11 @@ type QLearningResolver struct {
Solver PackageSolver
Formula bf.Formula
Targets []pkg.Package
Current []pkg.Package
Targets pkg.Packages
Current pkg.Packages
observedDelta int
observedDeltaChoice []pkg.Package
observedDeltaChoice pkg.Packages
Agent *qlearning.SimpleAgent
}
@@ -177,7 +177,7 @@ func (resolver *QLearningResolver) Try(c Choice) error {
packtoAdd := pkg.FromString(pack)
resolver.Attempted[pack+strconv.Itoa(int(c.Action))] = true // increase the count
s, _ := resolver.Solver.(*Solver)
var filtered []pkg.Package
var filtered pkg.Packages
switch c.Action {
case ActionAdded:


@@ -18,6 +18,8 @@ package solver
import (
//. "github.com/mudler/luet/pkg/logger"
"fmt"
"github.com/pkg/errors"
"github.com/crillab/gophersat/bf"
@@ -27,12 +29,17 @@ import (
// PackageSolver is an interface to a generic package solving algorithm
type PackageSolver interface {
SetDefinitionDatabase(pkg.PackageDatabase)
Install(p []pkg.Package) (PackagesAssertions, error)
Uninstall(candidate pkg.Package, checkconflicts bool) ([]pkg.Package, error)
Install(p pkg.Packages) (PackagesAssertions, error)
Uninstall(candidate pkg.Package, checkconflicts, full bool) (pkg.Packages, error)
ConflictsWithInstalled(p pkg.Package) (bool, error)
ConflictsWith(p pkg.Package, ls []pkg.Package) (bool, error)
World() []pkg.Package
Upgrade(checkconflicts bool) ([]pkg.Package, PackagesAssertions, error)
ConflictsWith(p pkg.Package, ls pkg.Packages) (bool, error)
Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error)
World() pkg.Packages
Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAssertions, error)
UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssertions, error)
UninstallUniverse(toremove pkg.Packages) (pkg.Packages, error)
SetResolver(PackageResolver)
@@ -43,7 +50,7 @@ type PackageSolver interface {
type Solver struct {
DefinitionDatabase pkg.PackageDatabase
SolverDatabase pkg.PackageDatabase
Wanted []pkg.Package
Wanted pkg.Packages
InstalledDatabase pkg.PackageDatabase
Resolver PackageResolver
@@ -72,11 +79,11 @@ func (s *Solver) SetResolver(r PackageResolver) {
s.Resolver = r
}
func (s *Solver) World() []pkg.Package {
func (s *Solver) World() pkg.Packages {
return s.DefinitionDatabase.World()
}
func (s *Solver) Installed() []pkg.Package {
func (s *Solver) Installed() pkg.Packages {
return s.InstalledDatabase.World()
}
@@ -91,6 +98,16 @@ func (s *Solver) noRulesWorld() bool {
return true
}
func (s *Solver) noRulesInstalled() bool {
for _, p := range s.Installed() {
if len(p.GetConflicts()) != 0 || len(p.GetRequires()) != 0 {
return false
}
}
return true
}
func (s *Solver) BuildInstalled() (bf.Formula, error) {
var formulas []bf.Formula
for _, p := range s.Installed() {
@@ -130,8 +147,8 @@ func (s *Solver) BuildWorld(includeInstalled bool) (bf.Formula, error) {
return bf.And(formulas...), nil
}
func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Package, error) {
var ls []pkg.Package
func (s *Solver) getList(db pkg.PackageDatabase, lsp pkg.Packages) (pkg.Packages, error) {
var ls pkg.Packages
for _, pp := range lsp {
cp, err := db.FindPackage(pp)
@@ -141,7 +158,7 @@ func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Packa
if err != nil || len(packages) == 0 {
cp = pp
} else {
cp = pkg.Best(packages)
cp = packages.Best(nil)
}
}
ls = append(ls, cp)
@@ -149,7 +166,44 @@ func (s *Solver) getList(db pkg.PackageDatabase, lsp []pkg.Package) ([]pkg.Packa
return ls, nil
}
func (s *Solver) ConflictsWith(pack pkg.Package, lsp []pkg.Package) (bool, error) {
// Conflicts acts like ConflictsWith, but uses the package's reverse dependencies to
// determine whether it conflicts with the given set
func (s *Solver) Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error) {
p, err := s.DefinitionDatabase.FindPackage(pack)
if err != nil {
p = pack
}
ls, err := s.getList(s.DefinitionDatabase, lsp)
if err != nil {
return false, errors.Wrap(err, "Package not found in definition db")
}
if s.noRulesWorld() {
return false, nil
}
temporarySet := pkg.NewInMemoryDatabase(false)
for _, p := range ls {
temporarySet.CreatePackage(p)
}
visited := make(map[string]interface{})
revdeps := p.ExpandedRevdeps(temporarySet, visited)
var revdepsErr error
for _, r := range revdeps {
if revdepsErr == nil {
revdepsErr = errors.New("")
}
revdepsErr = errors.New(fmt.Sprintf("%s\n%s", revdepsErr.Error(), r.HumanReadableString()))
}
return len(revdeps) != 0, revdepsErr
}
// ConflictsWith returns true if a package is part of the requirement set of a list of packages,
// and false otherwise (and thus it is NOT relevant to the given list)
func (s *Solver) ConflictsWith(pack pkg.Package, lsp pkg.Packages) (bool, error) {
p, err := s.DefinitionDatabase.FindPackage(pack)
if err != nil {
p = pack //Relax search, otherwise we cannot compute solutions for packages not in definitions
@@ -210,14 +264,171 @@ func (s *Solver) ConflictsWithInstalled(p pkg.Package) (bool, error) {
return s.ConflictsWith(p, s.Installed())
}
func (s *Solver) Upgrade(checkconflicts bool) ([]pkg.Package, PackagesAssertions, error) {
// UninstallUniverse takes a list of candidate packages and returns a list of packages that would be removed
// in order to purge the candidate. Uses the solver to check constraints and nothing else
//
// It can be compared to the counterpart Uninstall, as this method acts like an uninstall --full:
// it removes all the packages and their deps, also taking into consideration other packages that
// might have revdeps
func (s *Solver) UninstallUniverse(toremove pkg.Packages) (pkg.Packages, error) {
if s.noRulesInstalled() {
return s.getList(s.InstalledDatabase, toremove)
}
// resolve to packages from the db
toRemove, err := s.getList(s.InstalledDatabase, toremove)
if err != nil {
return nil, errors.Wrap(err, "Package not found in definition db")
}
var formulas []bf.Formula
r, err := s.BuildInstalled()
if err != nil {
return nil, errors.Wrap(err, "Package not found in definition db")
}
// SAT encode the clauses against the world
for _, p := range toRemove.Unique() {
encodedP, err := p.Encode(s.InstalledDatabase)
if err != nil {
return nil, errors.Wrap(err, "Package not found in definition db")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(bf.Not(P), r))
}
markedForRemoval := pkg.Packages{}
model := bf.Solve(bf.And(formulas...))
if model == nil {
return nil, errors.New("Failed finding a solution")
}
assertion, err := DecodeModel(model, s.InstalledDatabase)
if err != nil {
return nil, errors.Wrap(err, "while decoding model from solution")
}
for _, a := range assertion {
if !a.Value {
if p, err := s.InstalledDatabase.FindPackage(a.Package); err == nil {
markedForRemoval = append(markedForRemoval, p)
}
}
}
return markedForRemoval, nil
}
// UpgradeUniverse marks packages for removal and returns a solution. It considers
// the Universe db as authoritative
// See also on the subject: https://arxiv.org/pdf/1007.1021.pdf
func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssertions, error) {
// we first figure out which packages aren't up-to-date,
// which have to be removed,
// and which need to be upgraded
notUptodate := pkg.Packages{}
removed := pkg.Packages{}
toUpgrade := pkg.Packages{}
// TODO: this is memory expensive, we need to optimize this
universe := pkg.NewInMemoryDatabase(false)
for _, p := range s.DefinitionDatabase.World() {
universe.CreatePackage(p)
}
for _, p := range s.Installed() {
universe.CreatePackage(p)
}
// Grab all the installed ones, see if they are eligible for update
for _, p := range s.Installed() {
available, err := universe.FindPackageVersions(p)
if err != nil {
removed = append(removed, p)
}
if len(available) == 0 {
continue
}
bestmatch := available.Best(nil)
// Found a better version available
if !bestmatch.Matches(p) {
notUptodate = append(notUptodate, p)
toUpgrade = append(toUpgrade, bestmatch)
}
}
// resolve to packages from the db to be able to encode correctly
oldPackages, err := s.getList(universe, notUptodate)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't get package marked for removal from universe")
}
updates, err := s.getList(universe, toUpgrade)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't get package marked for update from universe")
}
var formulas []bf.Formula
// Build constraints for the whole defdb
r, err := s.BuildWorld(true)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't build world constraints")
}
// Treat removed packages from universe as marked for deletion
if dropremoved {
oldPackages = append(oldPackages, removed...)
}
// SAT encode the clauses against the world
for _, p := range oldPackages.Unique() {
encodedP, err := p.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(bf.Not(P), r))
}
for _, p := range updates {
encodedP, err := p.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(P, r))
}
markedForRemoval := pkg.Packages{}
model := bf.Solve(bf.And(formulas...))
if model == nil {
return nil, nil, errors.New("Failed finding a solution")
}
assertion, err := DecodeModel(model, universe)
if err != nil {
return nil, nil, errors.Wrap(err, "while decoding model from solution")
}
for _, a := range assertion {
if !a.Value {
if p, err := s.InstalledDatabase.FindPackage(a.Package); err == nil {
markedForRemoval = append(markedForRemoval, p)
}
}
}
return markedForRemoval, assertion, nil
}
func (s *Solver) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAssertions, error) {
// First get the candidates that need to be upgraded..
toUninstall := []pkg.Package{}
toInstall := []pkg.Package{}
toUninstall := pkg.Packages{}
toInstall := pkg.Packages{}
availableCache := map[string][]pkg.Package{}
availableCache := map[string]pkg.Packages{}
for _, p := range s.DefinitionDatabase.World() {
// Each one, should be expanded
availableCache[p.GetName()+p.GetCategory()] = append(availableCache[p.GetName()+p.GetCategory()], p)
@@ -229,18 +440,26 @@ func (s *Solver) Upgrade(checkconflicts bool) ([]pkg.Package, PackagesAssertions
installedcopy.CreatePackage(p)
packages, ok := availableCache[p.GetName()+p.GetCategory()]
if ok && len(packages) != 0 {
best := pkg.Best(packages)
best := packages.Best(nil)
if best.GetVersion() != p.GetVersion() {
toUninstall = append(toUninstall, p)
toInstall = append(toInstall, best)
}
}
}
s2 := NewSolver(installedcopy, s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
s2.SetResolver(s.Resolver)
if !full {
ass := PackagesAssertions{}
for _, i := range toInstall {
ass = append(ass, PackageAssert{Package: i.(*pkg.DefaultPackage), Value: true})
}
}
// Then try to uninstall the versions in the system, and store that tree
for _, p := range toUninstall {
r, err := s.Uninstall(p, checkconflicts)
r, err := s.Uninstall(p, checkconflicts, false)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall selected candidate "+p.GetFingerPrint())
}
@@ -250,8 +469,8 @@ func (s *Solver) Upgrade(checkconflicts bool) ([]pkg.Package, PackagesAssertions
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't remove copy of package targetted for removal")
}
}
}
r, e := s2.Install(toInstall)
return toUninstall, r, e
// To that tree, ask to install the versions that should be upgraded, and try to solve
@@ -261,8 +480,8 @@ func (s *Solver) Upgrade(checkconflicts bool) ([]pkg.Package, PackagesAssertions
// Uninstall takes a candidate package and returns a list of packages that would be removed
// in order to purge the candidate. Returns an error if unsat.
func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) ([]pkg.Package, error) {
var res []pkg.Package
func (s *Solver) Uninstall(c pkg.Package, checkconflicts, full bool) (pkg.Packages, error) {
var res pkg.Packages
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
@@ -272,13 +491,23 @@ func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) ([]pkg.Package, e
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = pkg.Best(packages)
candidate = packages.Best(nil)
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
}
// Build a fake "Installed" - Candidate and its requires tree
var InstalledMinusCandidate []pkg.Package
var InstalledMinusCandidate pkg.Packages
// We are asked to not perform a full uninstall (checking all the possible requires that could
// be removed). Let's only check if we can remove the selected package
if !full && checkconflicts {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
} else {
return pkg.Packages{candidate}, nil
}
}
// TODO: Can be optimized
for _, i := range s.Installed() {
@@ -296,14 +525,14 @@ func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) ([]pkg.Package, e
s2 := NewSolver(pkg.NewInMemoryDatabase(false), s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
s2.SetResolver(s.Resolver)
// Get the requirements to install the candidate
-asserts, err := s2.Install([]pkg.Package{candidate})
+asserts, err := s2.Install(pkg.Packages{candidate})
if err != nil {
return nil, err
}
for _, a := range asserts {
if a.Value {
if !checkconflicts {
-res = append(res, a.Package.IsFlagged(false))
+res = append(res, a.Package)
continue
}
@@ -314,7 +543,7 @@ func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) ([]pkg.Package, e
// If it doesn't conflict with the installed set, we just consider it for removal and look for the next one
if !c {
-res = append(res, a.Package.IsFlagged(false))
+res = append(res, a.Package)
continue
}
@@ -324,7 +553,7 @@ func (s *Solver) Uninstall(c pkg.Package, checkconflicts bool) ([]pkg.Package, e
return nil, err
}
if !c {
-res = append(res, a.Package.IsFlagged(false))
+res = append(res, a.Package)
}
}
@@ -403,7 +632,7 @@ func (s *Solver) Solve() (PackagesAssertions, error) {
// Install, given a list of packages, returns package assertions to indicate the packages that must be installed in the system in order
// to satisfy all the constraints
-func (s *Solver) Install(c []pkg.Package) (PackagesAssertions, error) {
+func (s *Solver) Install(c pkg.Packages) (PackagesAssertions, error) {
coll, err := s.getList(s.DefinitionDatabase, c)
if err != nil {

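A recurring change in this diff is the move from bare `[]pkg.Package` slices to the named `pkg.Packages` type, which turns free helpers such as `pkg.Best(lst)` into methods (`lst.Best(nil)`). Below is a minimal, self-contained sketch of that pattern; the toy `Package` struct and the lexicographic version ordering are our simplifications, not luet's actual implementation (the real `Best` also takes a version-constraint argument, hence the `Best(nil)` call sites).

```go
package main

import (
	"fmt"
	"sort"
)

// Package is a hypothetical stand-in for luet's pkg.Package interface.
type Package struct {
	Name    string
	Version string
}

// Packages mirrors the named slice type this changeset introduces,
// so list helpers become methods instead of free functions.
type Packages []Package

// Best returns the entry with the highest version. For brevity we
// compare versions lexicographically; the real implementation uses
// proper version parsing and an optional constraint.
func (p Packages) Best() Package {
	sorted := append(Packages{}, p...) // copy, leave the receiver untouched
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Version < sorted[j].Version
	})
	return sorted[len(sorted)-1]
}

func main() {
	lst := Packages{{"A", "1.0"}, {"A", "1.2"}, {"A", "1.1"}}
	fmt.Println(lst.Best().Version) // prints "1.2"
}
```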
View File

@@ -360,6 +360,51 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
})
It("Install only package requires", func() {
E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{
Name: "A",
Version: ">=1.0",
}}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "1.9", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "1.1", []*pkg.DefaultPackage{
&pkg.DefaultPackage{
Name: "D",
Version: ">=0",
},
}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "1.2", []*pkg.DefaultPackage{
&pkg.DefaultPackage{
Name: "D",
Version: ">=1.0",
},
}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D, E} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Install([]pkg.Package{C})
Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(err).ToNot(HaveOccurred())
})
It("Selects best version", func() {
E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
@@ -529,57 +574,366 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
})
It("Uninstalls simple package correctly", func() {
Context("Uninstall", func() {
It("Uninstalls simple package correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(solution).To(ContainElement(A))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls simple package expanded correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "1.2", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"}, true, true)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(A.IsFlagged(false)))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls simple packages not in world correctly", func() {
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls simple package expanded correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "1.2", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(solution).To(ContainElement(A))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls complex packages not in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"}, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(A.IsFlagged(false)))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls complex packages correctly, even if shared deps are required by system packages", func() {
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).ToNot(ContainElement(B))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls complex packages in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{C}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(C))
Expect(len(solution)).To(Equal(2))
})
It("Uninstalls complex package correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
// C // installed
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(solution).To(ContainElement(D))
Expect(len(solution)).To(Equal(3))
})
It("UninstallUniverse simple package correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("UninstallUniverse simple package expanded correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "1.2", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
s = NewSolver(dbInstalled, dbDefinitions, db)
solution, err := s.UninstallUniverse(pkg.Packages{
&pkg.DefaultPackage{Name: "A", Version: ">1.0"}})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("UninstallUniverse simple packages not in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("UninstallUniverse complex packages not in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(len(solution)).To(Equal(2))
})
It("UninstallUniverse complex packages correctly, even if shared deps are required by system packages", func() {
// Here we differ a lot from standard Uninstall:
// all the packages that have reverse deps will be removed (aka --full)
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(solution).To(ContainElement(C))
Expect(len(solution)).To(Equal(3))
})
It("UninstallUniverse complex packages in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{C}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(C))
Expect(len(solution)).To(Equal(2))
})
It("UninstallUniverse complex package correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
// C // installed
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.UninstallUniverse(pkg.Packages{A})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(solution).To(ContainElement(D))
Expect(len(solution)).To(Equal(3))
})
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Find conflicts", func() {
@@ -672,14 +1026,16 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
Expect(val).ToNot(BeTrue())
})
It("Uninstalls simple packages not in world correctly", func() {
It("Find conflicts using revdeps", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{A}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
@@ -688,43 +1044,20 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
val, err := s.Conflicts(A, dbInstalled.World())
Expect(err.Error()).To(Equal("\n/B-"))
Expect(val).To(BeTrue())
// Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls complex packages not in world correctly", func() {
It("Find nested conflicts with revdeps", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls complex packages correctly, even if shared deps are required by system packages", func() {
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{A}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
@@ -735,46 +1068,42 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
Expect(solution).ToNot(ContainElement(B.IsFlagged(false)))
Expect(len(solution)).To(Equal(1))
val, err := s.Conflicts(D, dbInstalled.World())
Expect(err.Error()).To(Equal("\n/A-\n/B-"))
Expect(val).To(BeTrue())
})
It("Uninstalls complex packages in world correctly", func() {
It("Doesn't find nested conflicts with revdeps", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{C}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{A}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, C, D} {
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true)
val, err := s.Conflicts(C, dbInstalled.World())
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
Expect(solution).To(ContainElement(C.IsFlagged(false)))
Expect(len(solution)).To(Equal(2))
Expect(val).ToNot(BeTrue())
})
It("Uninstalls complex package correctly", func() {
It("Doesn't find conflicts with revdeps", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{D}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
// C // installed
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
@@ -785,16 +1114,9 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true)
val, err := s.Conflicts(C, dbInstalled.World())
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A.IsFlagged(false)))
Expect(solution).To(ContainElement(B.IsFlagged(false)))
Expect(solution).To(ContainElement(D.IsFlagged(false)))
Expect(len(solution)).To(Equal(3))
Expect(val).ToNot(BeTrue())
})
})
@@ -882,7 +1204,7 @@ var _ = Describe("Solver", func() {
Expect(lst).To(ContainElement(a03))
Expect(lst).ToNot(ContainElement(old))
Expect(len(lst)).To(Equal(5))
-p := pkg.Best(lst)
+p := lst.Best(nil)
Expect(p).To(Equal(a03))
})
})
@@ -907,7 +1229,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
-uninstall, solution, err := s.Upgrade(true)
+uninstall, solution, err := s.Upgrade(true, true)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(1))
@@ -920,5 +1242,30 @@ var _ = Describe("Solver", func() {
Expect(len(solution)).To(Equal(3))
})
It("UpgradeUniverse upgrades correctly", func() {
for _, p := range []pkg.Package{A1, B, C} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.UpgradeUniverse(true)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(1))
Expect(uninstall[0].GetName()).To(Equal("a"))
Expect(uninstall[0].GetVersion()).To(Equal("1.1"))
Expect(solution).To(ContainElement(PackageAssert{Package: A1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: false}))
Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
Expect(len(solution)).To(Equal(4))
})
})
})

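The new `UninstallUniverse` tests above exercise the "--full" behaviour: removing a package also drags in its dependencies and, transitively, every reverse dependency of anything removed (which is why uninstalling A in the shared-deps case removes A, B and C). A dependency-free sketch of that closure follows; the toy `requires` graph and the helper name are ours, and the real solver derives this from SAT assertions rather than a hand-rolled fixpoint.

```go
package main

import (
	"fmt"
	"sort"
)

// requires maps each installed package to its direct dependencies
// (toy graph standing in for the solver's installed database:
// A requires B, C requires B, D is independent).
var requires = map[string][]string{
	"A": {"B"},
	"C": {"B"},
	"B": {},
	"D": {},
}

// uninstallUniverse computes the removal set to a fixpoint:
// dependencies of removed packages go too, and so does any
// package that requires something already removed.
func uninstallUniverse(target string) []string {
	removed := map[string]bool{target: true}
	for changed := true; changed; {
		changed = false
		for p, deps := range requires {
			for _, d := range deps {
				if removed[p] && !removed[d] { // drag in dependencies
					removed[d] = true
					changed = true
				}
				if removed[d] && !removed[p] { // drag in reverse dependencies
					removed[p] = true
					changed = true
				}
			}
		}
	}
	out := []string{}
	for p := range removed {
		out = append(out, p)
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(uninstallUniverse("A")) // prints "[A B C]"; D survives
}
```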
View File

@@ -0,0 +1,112 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package spectooling
import (
pkg "github.com/mudler/luet/pkg/package"
"gopkg.in/yaml.v2"
)
type DefaultPackageSanitized struct {
Name string `json:"name" yaml:"name"`
Version string `json:"version" yaml:"version"`
Category string `json:"category" yaml:"category"`
UseFlags []string `json:"use_flags,omitempty" yaml:"use_flags,omitempty"`
PackageRequires []*DefaultPackageSanitized `json:"requires,omitempty" yaml:"requires,omitempty"`
PackageConflicts []*DefaultPackageSanitized `json:"conflicts,omitempty" yaml:"conflicts,omitempty"`
Provides []*DefaultPackageSanitized `json:"provides,omitempty" yaml:"provides,omitempty"`
Annotations map[string]string `json:"annotations,omitempty" yaml:"annotations,omitempty"`
// Path is set only internally when tree is loaded from disk
Path string `json:"path,omitempty" yaml:"path,omitempty"`
Description string `json:"description,omitempty" yaml:"description,omitempty"`
Uri []string `json:"uri,omitempty" yaml:"uri,omitempty"`
License string `json:"license,omitempty" yaml:"license,omitempty"`
Hidden bool `json:"hidden,omitempty" yaml:"hidden,omitempty"`
Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
}
func NewDefaultPackageSanitized(p pkg.Package) *DefaultPackageSanitized {
ans := &DefaultPackageSanitized{
Name: p.GetName(),
Version: p.GetVersion(),
Category: p.GetCategory(),
UseFlags: p.GetUses(),
Hidden: p.IsHidden(),
Path: p.GetPath(),
Description: p.GetDescription(),
Uri: p.GetURI(),
License: p.GetLicense(),
Labels: p.GetLabels(),
Annotations: p.GetAnnotations(),
}
if p.GetRequires() != nil && len(p.GetRequires()) > 0 {
ans.PackageRequires = []*DefaultPackageSanitized{}
for _, r := range p.GetRequires() {
// Avoid a recursive call of NewDefaultPackageSanitized
ans.PackageRequires = append(ans.PackageRequires,
&DefaultPackageSanitized{
Name: r.Name,
Version: r.Version,
Category: r.Category,
Hidden: r.IsHidden(),
},
)
}
}
if p.GetConflicts() != nil && len(p.GetConflicts()) > 0 {
ans.PackageConflicts = []*DefaultPackageSanitized{}
for _, c := range p.GetConflicts() {
// Avoid a recursive call of NewDefaultPackageSanitized
ans.PackageConflicts = append(ans.PackageConflicts,
&DefaultPackageSanitized{
Name: c.Name,
Version: c.Version,
Category: c.Category,
Hidden: c.IsHidden(),
},
)
}
}
if p.GetProvides() != nil && len(p.GetProvides()) > 0 {
ans.Provides = []*DefaultPackageSanitized{}
for _, prov := range p.GetProvides() {
// Avoid a recursive call of NewDefaultPackageSanitized
ans.Provides = append(ans.Provides,
&DefaultPackageSanitized{
Name: prov.Name,
Version: prov.Version,
Category: prov.Category,
Hidden: prov.IsHidden(),
},
)
}
}
return ans
}
func (p *DefaultPackageSanitized) Yaml() ([]byte, error) {
return yaml.Marshal(p)
}

View File

@@ -0,0 +1,87 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package spectooling_test
import (
pkg "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/spectooling"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Spec Tooling", func() {
Context("Conversion1", func() {
b := pkg.NewPackage("B", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
c := pkg.NewPackage("C", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
d := pkg.NewPackage("D", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
p1 := pkg.NewPackage("A", "1.0", []*pkg.DefaultPackage{b, c}, []*pkg.DefaultPackage{d})
virtual := pkg.NewPackage("E", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
virtual.SetCategory("virtual")
p1.Provides = []*pkg.DefaultPackage{virtual}
p1.AddLabel("label1", "value1")
p1.AddLabel("label2", "value2")
p1.SetDescription("Package1")
p1.SetCategory("cat1")
p1.SetLicense("GPL")
p1.AddURI("https://github.com/mudler/luet")
p1.AddUse("systemd")
It("Convert pkg1", func() {
res := NewDefaultPackageSanitized(p1)
expected_res := &DefaultPackageSanitized{
Name: "A",
Version: "1.0",
Category: "cat1",
PackageRequires: []*DefaultPackageSanitized{
&DefaultPackageSanitized{
Name: "B",
Version: "1.0",
},
&DefaultPackageSanitized{
Name: "C",
Version: "1.0",
},
},
PackageConflicts: []*DefaultPackageSanitized{
&DefaultPackageSanitized{
Name: "D",
Version: "1.0",
},
},
Provides: []*DefaultPackageSanitized{
&DefaultPackageSanitized{
Name: "E",
Category: "virtual",
Version: "1.0",
},
},
Labels: map[string]string{
"label1": "value1",
"label2": "value2",
},
Description: "Package1",
License: "GPL",
Uri: []string{"https://github.com/mudler/luet"},
UseFlags: []string{"systemd"},
}
Expect(res).To(Equal(expected_res))
})
})
})

View File

@@ -0,0 +1,33 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package spectooling_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestSolver(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Spec Tooling Suite")
}

View File

@@ -50,7 +50,7 @@ type GentooBuilder struct {
}
type EbuildParser interface {
-ScanEbuild(string) ([]pkg.Package, error)
+ScanEbuild(string) (pkg.Packages, error)
}
func (gb *GentooBuilder) scanEbuild(path string, db pkg.PackageDatabase) error {


@@ -27,8 +27,8 @@ import (
type FakeParser struct {
}
func (f *FakeParser) ScanEbuild(path string) ([]pkg.Package, error) {
return []pkg.Package{&pkg.DefaultPackage{Name: path}}, nil
func (f *FakeParser) ScanEbuild(path string) (pkg.Packages, error) {
return pkg.Packages{&pkg.DefaultPackage{Name: path}}, nil
}
var _ = Describe("GentooBuilder", func() {


@@ -323,7 +323,7 @@ func SourceFile(ctx context.Context, path string, pkg *_gentoo.GentooPackage) (m
}
// ScanEbuild returns a list of packages (always one with SimpleEbuildParser) decoded from an ebuild.
func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
func (ep *SimpleEbuildParser) ScanEbuild(path string) (pkg.Packages, error) {
Debug("Starting parsing of ebuild", path)
pkgstr := filepath.Base(path)
@@ -332,7 +332,7 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
gp, err := _gentoo.ParsePackageStr(pkgstr)
if err != nil {
return []pkg.Package{}, errors.New("Error on parsing package string")
return pkg.Packages{}, errors.New("Error on parsing package string")
}
pack := &pkg.DefaultPackage{
@@ -350,7 +350,13 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
vars, err := SourceFile(timeout, path, gp)
if err != nil {
Error("Error on source file ", pack.Name, ": ", err)
return []pkg.Package{}, err
return pkg.Packages{}, err
}
// Retrieve slot
slot, ok := vars["SLOT"]
if ok && slot.String() != "0" {
pack.SetCategory(fmt.Sprintf("%s-%s", gp.Category, slot.String()))
}
// TODO: Handle this a bit better
@@ -405,8 +411,8 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
gRDEPEND, err := ParseRDEPEND(rdepend.String())
if err != nil {
Warning("Error on parsing RDEPEND for package ", pack.Category+"/"+pack.Name, err)
return []pkg.Package{pack}, nil
// return []pkg.Package{}, err
return pkg.Packages{pack}, nil
// return pkg.Packages{}, err
}
pack.PackageConflicts = []*pkg.DefaultPackage{}
@@ -417,6 +423,7 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
for _, d := range gRDEPEND.GetDependencies() {
//TODO: Resolve to db or create a new one.
//TODO: handle SLOT too.
dep := &pkg.DefaultPackage{
Name: d.Dep.Name,
Version: d.Dep.Version + d.Dep.VersionSuffix,
@@ -436,5 +443,5 @@ func (ep *SimpleEbuildParser) ScanEbuild(path string) ([]pkg.Package, error) {
Debug("Finished processing ebuild", path, "deps ", len(pack.PackageRequires))
//TODO: Deps and conflicts
return []pkg.Package{pack}, nil
return pkg.Packages{pack}, nil
}
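The slot handling added in this hunk folds a non-default SLOT into the package category, so the same ebuild in different slots maps to distinct luet categories. A minimal sketch of that mapping (the helper name `categoryWithSlot` is hypothetical; the diff inlines this logic inside `ScanEbuild`):

```go
package main

import "fmt"

// categoryWithSlot reproduces the slot rule from the diff: a SLOT other
// than "0" is appended to the Gentoo category with a hyphen; slot "0"
// (the default) leaves the category untouched.
func categoryWithSlot(category, slot string) string {
	if slot != "" && slot != "0" {
		return fmt.Sprintf("%s-%s", category, slot)
	}
	return category
}

func main() {
	fmt.Println(categoryWithSlot("dev-lang", "2"))    // dev-lang-2
	fmt.Println(categoryWithSlot("app-editors", "0")) // app-editors
}
```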


@@ -21,6 +21,7 @@
package tree
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
@@ -70,10 +71,12 @@ func (r *InstallerRecipe) Save(path string) error {
func (r *InstallerRecipe) Load(path string) error {
// tmpfile, err := ioutil.TempFile("", "luet")
// if err != nil {
// return err
// }
if !helpers.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exist.", path,
))
}
r.SourcePath = append(r.SourcePath, path)
//r.Tree().SetPackageSet(pkg.NewBoltDatabase(tmpfile.Name()))


@@ -21,11 +21,14 @@
package tree
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
helpers "github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
spectooling "github.com/mudler/luet/pkg/spectooling"
"github.com/pkg/errors"
)
@@ -42,7 +45,7 @@ type Recipe struct {
}
func WriteDefinitionFile(p pkg.Package, definitionFilePath string) error {
data, err := p.Yaml()
data, err := spectooling.NewDefaultPackageSanitized(p).Yaml()
if err != nil {
return err
}
@@ -73,6 +76,12 @@ func (r *Recipe) Load(path string) error {
// if err != nil {
// return err
// }
if !helpers.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exist.", path,
))
}
r.SourcePath = append(r.SourcePath, path)
if r.Database == nil {


@@ -23,6 +23,7 @@ package tree_test
import (
"io/ioutil"
"os"
"regexp"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
@@ -65,7 +66,8 @@ var _ = Describe("Tree", func() {
solution, err := s.Install([]pkg.Package{pack})
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(generalRecipe.GetDatabase(), pack.GetFingerPrint())
solution, err = solution.Order(generalRecipe.GetDatabase(), pack.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
Expect(solution[0].Package.GetName()).To(Equal("a"))
Expect(solution[0].Value).To(BeFalse())
@@ -137,7 +139,8 @@ var _ = Describe("Tree", func() {
solution, err := s.Install([]pkg.Package{Dd})
Expect(err).ToNot(HaveOccurred())
solution = solution.Order(generalRecipe.GetDatabase(), Dd.GetFingerPrint())
solution, err = solution.Order(generalRecipe.GetDatabase(), Dd.GetFingerPrint())
Expect(err).ToNot(HaveOccurred())
pack, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -169,4 +172,23 @@ var _ = Describe("Tree", func() {
})
})
Context("Simple tree with annotations", func() {
It("Read tree with annotations", func() {
db := pkg.NewInMemoryDatabase(false)
generalRecipe := NewCompilerRecipe(db)
r := regexp.MustCompile("^label")
err := generalRecipe.Load("../../tests/fixtures/annotations")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().World())).To(Equal(1))
pack, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "pkgA", Category: "test", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
Expect(pack.HasAnnotation("label1")).To(Equal(true))
Expect(pack.HasAnnotation("label3")).To(Equal(false))
Expect(pack.MatchAnnotation(r)).To(Equal(true))
})
})
})


@@ -0,0 +1,27 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version
// Versioner is responsible for sanitizing versions,
// validating them, and ordering them by precedence
type Versioner interface {
Sanitize(string) string
Validate(string) error
Sort([]string) []string
ValidateSelector(version string, selector string) bool
}
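The new `Versioner` interface keeps the contract small: sanitize, validate, sort, and match against a selector. A stdlib-only toy implementation (the type `naiveVersioner` and its rules are illustrative assumptions, not part of the diff; the real `WrappedVersioner` delegates to go-version and go-deb-version) shows how a consumer would use it:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
	"sort"
	"strings"
)

// Versioner mirrors the interface introduced in pkg/versioner.
type Versioner interface {
	Sanitize(string) string
	Validate(string) error
	Sort([]string) []string
	ValidateSelector(version string, selector string) bool
}

// naiveVersioner is a hypothetical implementation for illustration only.
type naiveVersioner struct{}

var versionRe = regexp.MustCompile(`^[0-9]+(\.[0-9]+)*$`)

func (n naiveVersioner) Sanitize(s string) string {
	// Same rule the diff uses: underscores become hyphens.
	return strings.ReplaceAll(s, "_", "-")
}

func (n naiveVersioner) Validate(v string) error {
	if !versionRe.MatchString(v) {
		return errors.New("invalid version")
	}
	return nil
}

func (n naiveVersioner) Sort(vs []string) []string {
	out := append([]string{}, vs...)
	sort.Strings(out) // lexicographic only; the real Sort is semver/debian aware
	return out
}

func (n naiveVersioner) ValidateSelector(version, selector string) bool {
	// Toy rule: ">=X" admits versions lexicographically >= X.
	if strings.HasPrefix(selector, ">=") {
		return version >= strings.TrimPrefix(selector, ">=")
	}
	return version == selector
}

func main() {
	var v Versioner = naiveVersioner{}
	fmt.Println(v.Sanitize("1.0_1"))            // 1.0-1
	fmt.Println(v.Validate("1.0") == nil)       // true
	fmt.Println(v.Sort([]string{"1.1", "1.0"})) // [1.0 1.1]
}
```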


@@ -14,15 +14,14 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg
package version
import (
"errors"
"fmt"
"regexp"
"strings"
version "github.com/hashicorp/go-version"
semver "github.com/hashicorp/go-version"
)
// Package Selector Condition
@@ -162,7 +161,7 @@ func ParseVersion(v string) (PkgVersionSelector, error) {
}
// Check if build number is present
buildIdx := strings.Index(v, "+")
buildIdx := strings.LastIndex(v, "+")
buildVersion := ""
if buildIdx > 0 {
// <pre-release> ::= <dot-separated pre-release identifiers>
@@ -252,8 +251,8 @@ func ParseVersion(v string) (PkgVersionSelector, error) {
}
func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
var v1 *version.Version = nil
var v2 *version.Version = nil
var v1 *semver.Version = nil
var v2 *semver.Version = nil
var ans bool
var err error
var sanitizedSelectorVersion, sanitizedIVersion string
@@ -262,14 +261,14 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
// TODO: This is temporary!. I promise it.
sanitizedSelectorVersion = strings.ReplaceAll(selector.Version, "_", "-")
v1, err = version.NewVersion(sanitizedSelectorVersion)
v1, err = semver.NewVersion(sanitizedSelectorVersion)
if err != nil {
return false, err
}
}
if i.Version != "" {
sanitizedIVersion = strings.ReplaceAll(i.Version, "_", "-")
v2, err = version.NewVersion(sanitizedIVersion)
v2, err = semver.NewVersion(sanitizedIVersion)
if err != nil {
return false, err
}
@@ -307,7 +306,7 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
segments[len(segments)-1]++
}
nextVersion := strings.Trim(strings.Replace(fmt.Sprint(segments), " ", ".", -1), "[]")
constraints, err := version.NewConstraint(
constraints, err := semver.NewConstraint(
fmt.Sprintf(">= %s, < %s", sanitizedSelectorVersion, nextVersion),
)
if err != nil {
@@ -335,35 +334,3 @@ func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
return ans, nil
}
func (p *DefaultPackage) SelectorMatchVersion(v string) (bool, error) {
if !p.IsSelector() {
return false, errors.New("Package is not a selector")
}
vS, err := ParseVersion(p.GetVersion())
if err != nil {
return false, err
}
vSI, err := ParseVersion(v)
if err != nil {
return false, err
}
return PackageAdmit(vS, vSI)
}
func (p *DefaultPackage) VersionMatchSelector(selector string) (bool, error) {
vS, err := ParseVersion(selector)
if err != nil {
return false, err
}
vSI, err := ParseVersion(p.GetVersion())
if err != nil {
return false, err
}
return PackageAdmit(vS, vSI)
}
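The switch from `strings.Index` to `strings.LastIndex` in `ParseVersion` matters when a version string contains more than one `+`: only the last one should start the build metadata. A small sketch of the string behavior (the helper name `splitBuild` is illustrative, not from the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// splitBuild splits a version at the *last* "+", mirroring the
// buildIdx := strings.LastIndex(v, "+") change: everything after the
// last "+" is treated as the build part.
func splitBuild(v string) (version, build string) {
	if i := strings.LastIndex(v, "+"); i > 0 {
		return v[:i], v[i+1:]
	}
	return v, ""
}

func main() {
	ver, build := splitBuild("1.0+2+pre1")
	// With strings.Index this would have split as "1.0" / "2+pre1".
	fmt.Println(ver, build) // 1.0+2 pre1
}
```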


@@ -14,12 +14,12 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package pkg_test
package version_test
import (
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
. "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/versioner"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)

pkg/versioner/versioner.go Normal file

@@ -0,0 +1,148 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>,
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version
import (
"errors"
"regexp"
"sort"
"strconv"
"strings"
semver "github.com/hashicorp/go-version"
debversion "github.com/knqyf263/go-deb-version"
)
// WrappedVersioner uses different means to return a unique result that is understandable by Luet.
// It tries different approaches to sort, validate, and sanitize to a common versioning format
// that is understandable by the whole code
type WrappedVersioner struct{}
func DefaultVersioner() Versioner {
return &WrappedVersioner{}
}
func (w *WrappedVersioner) Validate(version string) error {
if !debversion.Valid(version) {
return errors.New("Invalid version")
}
return nil
}
func (w *WrappedVersioner) ValidateSelector(version string, selector string) bool {
vS, err := ParseVersion(selector)
if err != nil {
return false
}
vSI, err := ParseVersion(version)
if err != nil {
return false
}
ok, err := PackageAdmit(vS, vSI)
if err != nil {
return false
}
return ok
}
func (w *WrappedVersioner) Sanitize(s string) string {
return strings.ReplaceAll(s, "_", "-")
}
func (w *WrappedVersioner) IsSemver(v string) bool {
// Taken from https://github.com/hashicorp/go-version/blob/2b13044f5cdd3833370d41ce57d8bf3cec5e62b8/version.go#L44
// semver doesn't have a Validate method, so we must filter inputs
// before using it blindly (it panics otherwise)
semverRegexp := regexp.MustCompile("^" + semver.SemverRegexpRaw + "$")
// See https://github.com/hashicorp/go-version/blob/2b13044f5cdd3833370d41ce57d8bf3cec5e62b8/version.go#L61
matches := semverRegexp.FindStringSubmatch(v)
if matches == nil {
return false
}
segmentsStr := strings.Split(matches[1], ".")
segments := make([]int64, len(segmentsStr))
for i, str := range segmentsStr {
val, err := strconv.ParseInt(str, 10, 64)
if err != nil {
return false
}
segments[i] = int64(val)
}
return (len(segments) != 0)
}
func (w *WrappedVersioner) Sort(toSort []string) []string {
if len(toSort) == 0 {
return toSort
}
var versionsMap map[string]string = make(map[string]string)
versionsRaw := []string{}
result := []string{}
for _, v := range toSort {
sanitizedVersion := w.Sanitize(v)
versionsMap[sanitizedVersion] = v
versionsRaw = append(versionsRaw, sanitizedVersion)
}
versions := make([]*semver.Version, len(versionsRaw))
// Check if all of them are semver, otherwise we cannot do a good comparison
allSemverCompliant := true
for _, raw := range versionsRaw {
if !w.IsSemver(raw) {
allSemverCompliant = false
}
}
if allSemverCompliant {
for i, raw := range versionsRaw {
if w.IsSemver(raw) { // Make sure we include only semver, or go-version will panic
v, _ := semver.NewVersion(raw)
versions[i] = v
}
}
// Try first semver sorting
sort.Sort(semver.Collection(versions))
if len(versions) > 0 {
for _, v := range versions {
result = append(result, versionsMap[v.Original()])
}
return result
}
}
// Try with debian sorting
vs := make([]debversion.Version, len(versionsRaw))
for i, r := range versionsRaw {
v, _ := debversion.NewVersion(r)
vs[i] = v
}
sort.Slice(vs, func(i, j int) bool {
return vs[i].LessThan(vs[j])
})
for _, v := range vs {
result = append(result, versionsMap[v.String()])
}
return result
}
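`WrappedVersioner.Sort` above follows a two-stage strategy: sanitize every candidate, use strict semver ordering only when *all* inputs qualify, and fall back to Debian-style comparison otherwise. A stdlib-only sketch of that dispatch (the numeric dotted-segment comparison stands in for go-version and plain lexicographic order stands in for go-deb-version, both assumptions for illustration; unlike the real code, this sketch also returns the sanitized strings rather than mapping back to the originals):

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
	"strings"
)

var dotted = regexp.MustCompile(`^[0-9]+(\.[0-9]+)*$`)

// lessNumeric compares dotted versions segment by segment, so "1.9" < "1.10".
func lessNumeric(a, b string) bool {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) && i < len(bs); i++ {
		x, _ := strconv.Atoi(as[i])
		y, _ := strconv.Atoi(bs[i])
		if x != y {
			return x < y
		}
	}
	return len(as) < len(bs)
}

// sortVersions sketches the Sort dispatch: sanitize first, pick the strict
// ordering only when every input qualifies, otherwise use the fallback.
func sortVersions(in []string) []string {
	out := make([]string, len(in))
	for i, v := range in {
		out[i] = strings.ReplaceAll(v, "_", "-") // Sanitize step
	}
	all := true
	for _, v := range out {
		if !dotted.MatchString(v) {
			all = false
			break
		}
	}
	if all {
		sort.Slice(out, func(i, j int) bool { return lessNumeric(out[i], out[j]) })
	} else {
		sort.Strings(out) // fallback ordering
	}
	return out
}

func main() {
	fmt.Println(sortVersions([]string{"1.10", "1.9", "1.0"})) // [1.0 1.9 1.10]
}
```

The all-or-nothing semver check is the key design choice: mixing one non-semver string into a semver sort would either panic (go-version rejects it) or silently misplace it, so the whole batch degrades to the fallback comparator instead.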


@@ -0,0 +1,32 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version_test
import (
"testing"
. "github.com/mudler/luet/cmd"
config "github.com/mudler/luet/pkg/config"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func TestVersioner(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Versioner Suite")
}


@@ -0,0 +1,82 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package version_test
import (
. "github.com/mudler/luet/pkg/versioner"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Versioner", func() {
Context("Invalid version", func() {
versioner := DefaultVersioner()
It("Sanitize", func() {
sanitized := versioner.Sanitize("foo_bar")
Expect(sanitized).Should(Equal("foo-bar"))
})
})
Context("valid version", func() {
versioner := DefaultVersioner()
It("Validate", func() {
err := versioner.Validate("1.0")
Expect(err).ShouldNot(HaveOccurred())
})
})
Context("invalid version", func() {
versioner := DefaultVersioner()
It("Validate", func() {
err := versioner.Validate("1.0_##")
Expect(err).Should(HaveOccurred())
})
})
Context("Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0", "0.1"})
Expect(sorted).Should(Equal([]string{"0.1", "1.0"}))
})
})
Context("Sorting with invalid characters", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0_1", "0.1"})
Expect(sorted).Should(Equal([]string{"0.1", "1.0_1"}))
})
})
Context("Complex Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"1.0", "0.1", "0.22", "1.1", "1.9", "1.10", "11.1"})
Expect(sorted).Should(Equal([]string{"0.1", "0.22", "1.0", "1.1", "1.9", "1.10", "11.1"}))
})
})
// from: https://github.com/knqyf263/go-deb-version/blob/master/version_test.go#L8
Context("Debian Sorting", func() {
versioner := DefaultVersioner()
It("finds the correct ordering", func() {
sorted := versioner.Sort([]string{"2:7.4.052-1ubuntu3.1", "2:7.4.052-1ubuntu1", "2:7.4.052-1ubuntu2", "2:7.4.052-1ubuntu3"})
Expect(sorted).Should(Equal([]string{"2:7.4.052-1ubuntu1", "2:7.4.052-1ubuntu2", "2:7.4.052-1ubuntu3", "2:7.4.052-1ubuntu3.1"}))
})
})
})


@@ -0,0 +1,2 @@
image: "alpine"
unpack: true


@@ -0,0 +1,3 @@
category: "seed"
name: "alpine"
version: "1.0"


@@ -0,0 +1,3 @@
image: "alpine"
steps:
- echo "test" > /file1


@@ -0,0 +1,6 @@
category: "test"
name: "pkgA"
version: "0.1"
annotations:
label1: "value1"
label2: "value2"


@@ -0,0 +1,9 @@
image: "alpine"
prelude:
- apk add libcap
unpack: true
includes:
- /file1
steps:
- echo "test" > /file1
- setcap cap_net_raw+ep /file1


@@ -0,0 +1,3 @@
category: "test"
name: "caps"
version: "0.1"


@@ -0,0 +1,6 @@
image: "alpine"
prelude:
- apk add libcap
steps:
- echo "test" > /file2
- setcap cap_net_raw+ep /file2


@@ -0,0 +1,3 @@
category: "test"
name: "caps2"
version: "0.1"

Some files were not shown because too many files have changed in this diff Show More