Compare commits

..

73 Commits

Author SHA1 Message Date
Ettore Di Giacinto
fa9dc9da53 Tag 0.9.13 2020-12-08 15:49:40 +01:00
Ettore Di Giacinto
9911888d18 Stabilize test 2020-12-08 14:56:51 +01:00
Ettore Di Giacinto
2906180c43 Upgrade universe test expectations are changed 2020-12-08 14:15:52 +01:00
Ettore Di Giacinto
cf5e4e1305 Detect removed packages also when available ones aren't found 2020-12-08 12:28:20 +01:00
Ettore Di Giacinto
519586f6bc Search for removed in Def DB 2020-12-08 12:07:28 +01:00
Ettore Di Giacinto
6dbc422b8f Apply solver change to UpgradeUniverse also to the parallel variant and adapt tests
Similarly, we want to consider just what is being uninstalled and the
new rules of the package that is going to be upgraded
2020-12-08 11:43:38 +01:00
Ettore Di Giacinto
a3cfebf438 Create BuildFormula from installed with InstallDatabase
Instead of using the DefinitionDB, which supposedly contains only the
relations present in the online repositories. In this way the solver is
more consistent and tries to solve with only the internal definitions.

This also fixes quirks with luet upgrade --universe
2020-12-08 10:58:08 +01:00
Ettore Di Giacinto
24201b25ef Apply solver change also to the parallel variant 2020-12-08 10:41:03 +01:00
Ettore Di Giacinto
7c53296530 Adapt tests 2020-12-08 10:39:15 +01:00
Ettore Di Giacinto
a3cb0ed17f When attempting to uninstall, do it from the internal db so it can resolve the current versions 2020-12-08 02:04:54 +01:00
Ettore Di Giacinto
9ca5d24856 Tag 0.9.12 2020-12-07 20:18:49 +01:00
Ettore Di Giacinto
9a34296be0 Build step is always required for tagging images 2020-12-07 19:39:56 +01:00
Ettore Di Giacinto
ebd18ae22c Set builderTagged image afterwards 2020-12-07 18:58:14 +01:00
Ettore Di Giacinto
7f10a19be5 Don't hide build output 2020-12-07 18:56:39 +01:00
Ettore Di Giacinto
6bf7368993 Don't replace buildertaggedImage if there aren't build steps 2020-12-07 18:39:15 +01:00
Ettore Di Giacinto
338f310d67 Tag and push an image when virtual is supplied, to have a track of it in the image graph tree 2020-12-07 17:59:30 +01:00
Ettore Di Giacinto
3fd1bdbfc8 ADD automatically extracts as well 2020-12-07 17:21:06 +01:00
Ettore Di Giacinto
59d78c3f5c While upgrading always use nodeps while computing uninstall 2020-12-07 17:20:55 +01:00
Ettore Di Giacinto
86c256a062 Generate empty tar 2020-12-07 17:20:32 +01:00
Ettore Di Giacinto
876e3659fb Turn full off by default on upgrade 2020-12-07 00:48:28 +01:00
Ettore Di Giacinto
3c0dd2b71d Adapt test 2020-12-07 00:07:57 +01:00
Ettore Di Giacinto
e9b4d66a3e Retrieve should be rendered also for step images 2020-12-07 00:00:32 +01:00
Ettore Di Giacinto
5047316b70 Try to build only when strictly necessary 2020-12-06 23:50:51 +01:00
Ettore Di Giacinto
02edc10c58 Tag 0.9.11 2020-12-06 22:52:15 +01:00
Ettore Di Giacinto
d479ada402 Don't consider deps while uninstalling during package Swap
Besides being forced, it also doesn't need to look deep into the deps, as
we have already precalculated those
2020-12-06 22:48:48 +01:00
Ettore Di Giacinto
7b800c9a20 Pre-compute swap step
Otherwise, while upgrading, it could happen that package dependencies
aren't downloaded beforehand, and they would just be installed in the middle
of the installation, after the removal already happened.
2020-12-06 22:11:17 +01:00
Ettore Di Giacinto
18e6e085d5 Sort correctly also subfolders 2020-12-05 23:17:05 +01:00
Ettore Di Giacinto
6d19f8d2cc Tag 0.9.10 2020-12-03 21:02:57 +01:00
Ettore Di Giacinto
67c43eb936 Don't bail out if package is installed and we have a list 2020-12-03 20:03:37 +01:00
Ettore Di Giacinto
cf80e5fc09 Resolvers might omit packages 2020-12-03 18:53:57 +01:00
Ettore Di Giacinto
d668d8344b Accept selectors on uninstall and fixup failure logic 2020-12-03 18:32:24 +01:00
Ettore Di Giacinto
b17ac447f1 Display matched packages only, and check if they are available 2020-12-03 17:25:29 +01:00
Ettore Di Giacinto
c8bcd88f1f Add command usage in CLI
Add Long description for missing commands along with practical examples
2020-12-02 23:15:23 +01:00
Ettore Di Giacinto
034fb54c25 Update README 2020-12-02 21:18:21 +01:00
Ettore Di Giacinto
6dbf19f085 Use single image to build packages 2020-12-02 21:18:12 +01:00
Ettore Di Giacinto
43db64c089 Tag 0.9.9 2020-12-02 19:12:43 +01:00
Ettore Di Giacinto
9423b7c1e3 Add image build events, and add luet replace
Enhance also some commands descriptions
2020-12-02 18:24:35 +01:00
Ettore Di Giacinto
75dbc2dcb4 Adapt integration tests 2020-11-29 13:56:58 +01:00
Ettore Di Giacinto
f3e2e0a184 Add CLI tests 2020-11-29 11:51:27 +01:00
Ettore Di Giacinto
8237506bd3 Accept specific versions in cli input and avoid gentoo parser by default
This is a breaking change, as it changes the way packages can be given as
arguments to luet.

From this change, the following applies:

- If a package string contains @, the right part is parsed as version
  (e.g. foo/bar@1.1)
- If a package contains "/" and no "@", cat/name is applied (e.g.
  foo/bar)
- If a package contains neither, it is implied to be just a name
  without a category
- If a package starts with "=", the default gentoo parsing
  is used (e.g. =foo/bar-1.1)

Fixes #154
2020-11-29 11:48:49 +01:00
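
As a quick illustration of the rules above, here is a minimal sketch using the ParsePackageStr helper added in cmd/helpers as part of this compare; the accessors match the ones exercised by the CLI tests further down, and the output comments just restate the rules:

```go
package main

import (
	"fmt"

	helpers "github.com/mudler/luet/cmd/helpers"
)

func main() {
	// "foo/bar@1.1"  -> category "foo", name "bar", version "1.1"
	// "foo/bar"      -> category "foo", name "bar", version ">=0"
	// "bar"          -> no category,    name "bar", version ">=0"
	// "=foo/bar-1.1" -> handled by the gentoo-style parser
	for _, s := range []string{"foo/bar@1.1", "foo/bar", "bar", "=foo/bar-1.1"} {
		p, err := helpers.ParsePackageStr(s)
		if err != nil {
			fmt.Println("invalid package string:", s, err)
			continue
		}
		fmt.Printf("%-14s -> cat=%q name=%q version=%q\n",
			s, p.GetCategory(), p.GetName(), p.GetVersion())
	}
}
```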
Ettore Di Giacinto
9784d6192a Don't hide error on pulling image 2020-11-28 18:03:43 +01:00
Ettore Di Giacinto
87004c8e78 Tag 0.9.8 2020-11-28 16:29:38 +01:00
Ettore Di Giacinto
0fe30ddcfd Add ability to interpolate during build
Now build takes a --values argument, which is a yaml file that can be
used to interpolate the specs that are going to be compiled.
2020-11-28 15:47:29 +01:00
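
The interpolation engine itself isn't visible in this excerpt (go.mod pulls in helm.sh/helm/v3, which hints at Helm-style templates), so the sketch below only illustrates the idea of rendering a spec against a --values YAML file, using Go's text/template; the values keys and the spec contents are made up:

```go
package main

import (
	"os"
	"text/template"

	"gopkg.in/yaml.v2"
)

func main() {
	// Hypothetical contents of the file passed via --values.
	valuesYAML := []byte("version: \"1.32.0\"\nmirror: \"https://example.org/dist\"\n")

	var values map[string]interface{}
	if err := yaml.Unmarshal(valuesYAML, &values); err != nil {
		panic(err)
	}

	// Hypothetical templated build spec; the real spec syntax may differ.
	spec := "image: alpine\nsteps:\n- curl -O {{.mirror}}/pkg-{{.version}}.tar.gz\n"

	// Render the spec with the values and print the interpolated result.
	tpl := template.Must(template.New("build.yaml").Parse(spec))
	if err := tpl.Execute(os.Stdout, values); err != nil {
		panic(err)
	}
}
```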
Ettore Di Giacinto
44d33eceba Set workdir also on step image
Otherwise, with DOCKER_SQUASH=true it wouldn't be consistent about where to
find the package files
2020-11-28 12:07:07 +01:00
Ettore Di Giacinto
ca994b07ab Tag 0.9.7 2020-11-28 00:34:46 +01:00
Ettore Di Giacinto
8ce135fe12 Add DOCKER_SQUASH 2020-11-27 23:38:31 +01:00
Ettore Di Giacinto
18d9366bca Minor fixes 2020-11-24 18:27:49 +01:00
Ettore Di Giacinto
c0206e5849 Tag 0.9.6 2020-11-23 20:18:42 +01:00
Ettore Di Giacinto
9fab46aa9e Add also description 2020-11-23 19:15:54 +01:00
Ettore Di Giacinto
5b54aeb822 Update vendor 2020-11-23 19:14:07 +01:00
Ettore Di Giacinto
7a10ff2742 Enhance search output with tables and alias to '.' when no args are specified 2020-11-23 19:13:54 +01:00
Ettore Di Giacinto
db1b190fb5 Minor fixup and cleanups around the new prompt feature 2020-11-23 18:20:30 +01:00
Ettore Di Giacinto
b349665ff2 Add user prompts
Fixes #106
2020-11-22 23:43:29 +01:00
Ettore Di Giacinto
3959cfd623 Tag 0.9.5 2020-11-20 19:02:54 +01:00
Ettore Di Giacinto
53ab0e0dd2 Merge pull request #151 from mudler/download-progress-bar
Download progress bar
2020-11-20 19:00:25 +01:00
Daniele Rondina
651ea17548 Update vendor/ (progress bar deps) 2020-11-20 18:16:49 +01:00
Daniele Rondina
60d5c9dfd5 Add download progress bar 2020-11-20 18:12:23 +01:00
Ettore Di Giacinto
1f807f369a Move revdeps computation to db 2020-11-20 17:23:21 +01:00
Ettore Di Giacinto
4e1b006a08 Cleanup vendor 2020-11-19 18:53:08 +01:00
Ettore Di Giacinto
47f0049efa Tag 0.9.4 2020-11-19 18:52:22 +01:00
Ettore Di Giacinto
0cc2b72831 Drop converter code, will be in a separate extension 2020-11-19 18:10:16 +01:00
Ettore Di Giacinto
f2df3faee5 Now Uninstall takes multiple packages 2020-11-19 18:05:27 +01:00
Daniele Rondina
287098f101 Update vendor github.com/cavaliercoder/grab 2020-11-19 00:56:59 +01:00
Daniele Rondina
f9a7113ab9 client/http: Add experimental download info 2020-11-19 00:56:28 +01:00
Ettore Di Giacinto
c3559d952c Tag 0.9.3 2020-11-15 13:38:30 +01:00
Ettore Di Giacinto
fc863fc8e5 Add collections integration test 2020-11-15 13:22:21 +01:00
Ettore Di Giacinto
ac149e9336 Use candidate for search, as doesn't have a selector 2020-11-15 11:47:32 +01:00
Ettore Di Giacinto
b9c8e50e42 Allow to define multiple templated packages with collections
Collections, similarly to packages, have a `build.yaml` and
a `finalize.yaml` that are templated for each package.
They have a `collection.yaml` containing a list of
packages that are part of the tree.
2020-11-15 00:13:46 +01:00
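
As a rough sketch of what such a collection.yaml could look like — the commit only states that it holds the list of packages, so the schema and field names below are assumptions, not the actual format:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

// Hypothetical schema for a collection.yaml; only the "list of packages"
// part is stated by the commit, the field names are guesses.
type Collection struct {
	Packages []struct {
		Name     string `yaml:"name"`
		Category string `yaml:"category"`
		Version  string `yaml:"version"`
	} `yaml:"packages"`
}

func main() {
	data := []byte(`
packages:
- name: busybox
  category: utils
  version: "1.32.0"
- name: yq
  category: utils
  version: "3.4.1"
`)
	var c Collection
	if err := yaml.Unmarshal(data, &c); err != nil {
		panic(err)
	}
	// Each entry would be rendered against the collection's shared,
	// templated build.yaml and finalize.yaml.
	for _, p := range c.Packages {
		fmt.Printf("%s/%s-%s\n", p.Category, p.Name, p.Version)
	}
}
```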
Ettore Di Giacinto
cf7df00a65 Add luet tree images command to show images tree 2020-11-14 14:51:11 +01:00
Daniele Rondina
83f924da35 spectools: Add DefaultPackageSanitized.Clone() 2020-11-14 12:42:49 +01:00
Ettore Di Giacinto
c82d23f9f2 Update go-pluggable 2020-11-13 19:50:10 +01:00
Ettore Di Giacinto
0e46e763d5 Move bus implementation to a separate repo, hook to events in luet 2020-11-13 18:25:44 +01:00
Ettore Di Giacinto
a793b44e83 Wip 2020-11-12 23:21:10 +01:00
749 changed files with 101122 additions and 128204 deletions

View File

@@ -1,13 +1,14 @@
# luet - Container-based Package manager
[![Docker Repository on Quay](https://quay.io/repository/luet/base/status "Docker Repository on Quay")](https://quay.io/repository/luet/base)
[![Go Report Card](https://goreportcard.com/badge/github.com/mudler/luet)](https://goreportcard.com/report/github.com/mudler/luet)
[![Build Status](https://travis-ci.org/mudler/luet.svg?branch=master)](https://travis-ci.org/mudler/luet)
[![GoDoc](https://godoc.org/github.com/mudler/luet?status.svg)](https://godoc.org/github.com/mudler/luet)
[![codecov](https://codecov.io/gh/mudler/luet/branch/master/graph/badge.svg)](https://codecov.io/gh/mudler/luet)
Luet is a multi-platform Package Manager based off from containers - it uses Docker (and other tech) to sandbox your builds and generate packages from them. It has zero dependencies and it is well suitable for "from scratch" environments. It can also version entire rootfs and enables delivery of OTA-alike updates, making it a perfect fit for the Edge computing era and IoT embedded devices.
Luet is a multi-platform Package Manager based on containers - it uses Docker (and others) to build packages. It has zero dependencies and it is well suited for "from scratch" environments. It can also version entire rootfs and enables delivery of OTA-like updates, making it a perfect fit for the Edge computing era and IoT embedded devices.
It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/specfile/) in YAML notation to define both packages and rootfs. As it is based on containers, it can be used to build seed stages for Linux From Scratch installations and it can build and track updates for those systems.
It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/specfile/) in YAML notation to define both packages and rootfs. As it is based on containers, it can also be used to build stages for Linux From Scratch installations and it can build and track updates for those systems.
It is written entirely in Golang and where used as package manager, it can run in from scratch environment, with zero dependencies.
@@ -16,20 +17,35 @@ It is written entirely in Golang and where used as package manager, it can run i
- Luet can reuse Gentoo's portage tree hierarchy, and it is heavily inspired from it.
- It builds, installs, uninstalls and performs upgrades on machines
- Installer doesn't depend on anything ( 0 dep installer !), statically built
- You can install it aside also with your current distro package manager, and start building and distributing your packages
- Support for packages as "layers"
- It uses SAT solving techniques to solve the deptree ( Inspired by [OPIUM](https://ranjitjhala.github.io/static/opium.pdf) )
- [It uses SAT solving techniques to solve the deptree](https://luet-lab.github.io/docs/docs/concepts/constraints/) ( Inspired by [OPIUM](https://ranjitjhala.github.io/static/opium.pdf) )
- Support for collections and templated package definitions
- [Can be extended with Plugins and Extensions](https://luet-lab.github.io/docs/docs/plugins-and-extensions/)
## Install
To install luet, you can grab a release on the [Release page](https://github.com/mudler/luet/releases) or compile it in your machine (requires Golang installed):
To install luet, you can grab a release from the [Release page](https://github.com/mudler/luet/releases) or install it in your system:
$ git clone https://github.com/mudler/luet.git
$ cd luet
$ make build
```bash
$ curl https://get.mocaccino.org/luet/get_luet_root.sh | sudo sh
$ luet search ...
$ luet install ..
$ luet --help
```
## Status
## Build from source
Luet is not feature-complete yet, it can build, install/uninstall/upgrade packages - but it doesn't support yet all the features you would normally expect from a Package Manager nowadays.
```bash
$ git clone https://github.com/mudler/luet.git
$ cd luet
$ make build
```
## Documentation
[Documentation](https://luet-lab.github.io/docs) is available, or
run `luet --help`, any subcommand is documented as well, try e.g.: `luet build --help`.
# Dependency solving
@@ -48,10 +64,6 @@ when they arises while trying to validate your queries against the system model.
To leverage it, simply pass ```--solver-type qlearning``` to the subcommands that supports it ( you can check out by invoking ```--help``` ).
## Documentation
[Documentation](https://luet-lab.github.io/docs) is available, or
run `luet --help`, any subcommand is documented as well, try e.g.: `luet build --help`.
## Authors

View File

@@ -36,8 +36,30 @@ import (
var buildCmd = &cobra.Command{
Use: "build <package name> <package name> <package name> ...",
Short: "build a package or a tree",
Long: `build packages or trees from luet tree definitions. Packages are in [category]/[name]-[version] form`,
PreRun: func(cmd *cobra.Command, args []string) {
Long: `Builds one or more packages from a tree (current directory is implied):
$ luet build utils/busybox utils/yq ...
Builds all packages
$ luet build --all
Builds only the leaf packages:
$ luet build --full
Build package revdeps:
$ luet build --revdeps utils/yq
Build a package without dependencies (the images need to be already present on the host, or available online):
$ luet build --nodeps utils/yq ...
Build packages specifying multiple definition trees:
$ luet build --tree overlay/path --tree overlay/path2 utils/yq ...
`, PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("clean", cmd.Flags().Lookup("clean"))
viper.BindPFlag("tree", cmd.Flags().Lookup("tree"))
viper.BindPFlag("destination", cmd.Flags().Lookup("destination"))
@@ -49,6 +71,7 @@ var buildCmd = &cobra.Command{
viper.BindPFlag("compression", cmd.Flags().Lookup("compression"))
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("values", cmd.Flags().Lookup("values"))
viper.BindPFlag("image-repository", cmd.Flags().Lookup("image-repository"))
viper.BindPFlag("push", cmd.Flags().Lookup("push"))
@@ -75,6 +98,8 @@ var buildCmd = &cobra.Command{
databaseType := viper.GetString("database")
compressionType := viper.GetString("compression")
imageRepository := viper.GetString("image-repository")
values := viper.GetString("values")
push := viper.GetBool("push")
pull := viper.GetBool("pull")
keepImages := viper.GetBool("keep-images")
@@ -157,7 +182,7 @@ var buildCmd = &cobra.Command{
opts.KeepImageExport = keepExportedImages
opts.SkipIfMetadataExists = skip
opts.PackageTargetOnly = onlyTarget
opts.BuildValuesFile = values
var solverOpts solver.Options
if concurrent {
solverOpts = solver.Options{Type: solver.ParallelSimple, Concurrency: concurrency}
@@ -291,6 +316,7 @@ func init() {
buildCmd.Flags().Bool("revdeps", false, "Build with revdeps")
buildCmd.Flags().Bool("all", false, "Build all specfiles in the tree")
buildCmd.Flags().Bool("full", false, "Build all packages (optimized)")
buildCmd.Flags().String("values", "", "Build values file to interpolate with each package")
buildCmd.Flags().String("destination", path, "Destination folder")
buildCmd.Flags().String("compression", "none", "Compression alg: none, gzip")

View File

@@ -1,101 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"io/ioutil"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"
"github.com/mudler/luet/pkg/tree/builder/gentoo"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var convertCmd = &cobra.Command{
Use: "convert [portage-tree] [luet-tree]",
Short: "convert other package manager tree into luet",
Long: `Parses external PM and produces a luet parsable tree`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("type", cmd.Flags().Lookup("type"))
viper.BindPFlag("database", cmd.Flags().Lookup("database"))
},
Run: func(cmd *cobra.Command, args []string) {
t := viper.GetString("type")
databaseType := viper.GetString("database")
var db pkg.PackageDatabase
if len(args) != 2 {
Fatal("Incorrect number of arguments")
}
input := args[0]
output := args[1]
Info("Converting trees from " + input + " [" + t + "]")
var builder tree.Parser
switch t {
case "gentoo":
builder = gentoo.NewGentooBuilder(
&gentoo.SimpleEbuildParser{},
LuetCfg.GetGeneral().Concurrency,
gentoo.InMemory)
default: // dup
builder = gentoo.NewGentooBuilder(
&gentoo.SimpleEbuildParser{},
LuetCfg.GetGeneral().Concurrency,
gentoo.InMemory)
}
switch databaseType {
case "memory":
db = pkg.NewInMemoryDatabase(false)
case "boltdb":
tmpdir, err := ioutil.TempDir("", "package")
if err != nil {
Fatal(err)
}
db = pkg.NewBoltDatabase(tmpdir)
}
defer db.Clean()
packageTree, err := builder.Generate(input)
if err != nil {
Fatal("Error: " + err.Error())
}
defer packageTree.Clean()
Info("Tree generated")
generalRecipe := tree.NewGeneralRecipe(packageTree)
Info("Saving generated tree to " + output)
err = generalRecipe.Save(output)
if err != nil {
Fatal("Error: " + err.Error())
}
},
}
func init() {
convertCmd.Flags().String("type", "gentoo", "source type")
convertCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")
RootCmd.AddCommand(convertCmd)
}

View File

@@ -30,7 +30,26 @@ import (
var createrepoCmd = &cobra.Command{
Use: "create-repo",
Short: "Create a luet repository from a build",
Long: `Generate and renew repository metadata`,
Long: `Builds tree metadata from a set of packages and a tree definition:
$ luet create-repo
Provide specific paths for packages, tree, and the generated metadata output:
$ luet create-repo --packages my/packages/path --tree my/tree/path --output my/packages/path ...
Provide name and description of the repository:
$ luet create-repo --name "foo" --description "bar" ...
Change compression method:
$ luet create-repo --tree-compression gzip --meta-compression gzip
Create a repository from the metadata description defined in the luet.yaml config file:
$ luet create-repo --repo repository1
`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("packages", cmd.Flags().Lookup("packages"))
viper.BindPFlag("tree", cmd.Flags().Lookup("tree"))

View File

@@ -25,6 +25,10 @@ import (
var databaseGroupCmd = &cobra.Command{
Use: "database [command] [OPTIONS]",
Short: "Manage system database (dangerous commands ahead!)",
Long: `Allows manipulating Luet's internal database of installed packages. Use with caution!
Removing packages by hand from the database can result in a broken system, and thus it's not recommended.
`,
}
func init() {

View File

@@ -32,7 +32,18 @@ func NewDatabaseCreateCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "create <artifact_metadata1.yaml> <artifact_metadata1.yaml>",
Short: "Insert a package in the system DB",
Args: cobra.OnlyValidArgs,
Long: `Inserts a package in the system database:
$ luet database create foo.yaml
"luet database create" injects a package in the system database without actually installing it, use it with caution.
This commands takes multiple yaml input file representing package artifacts, that are usually generated while building packages.
The yaml must contain the package definition, and the file list at least.
For reference, inspect a "metadata.yaml" file generated while running "luet build"`,
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))

View File

@@ -31,7 +31,13 @@ func NewDatabaseRemoveCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "remove [package1] [package2] ...",
Short: "Remove a package from the system DB (forcefully - you normally don't want to do that)",
Args: cobra.OnlyValidArgs,
Long: `Removes a package from the system database without actually uninstalling it:
$ luet database remove foo/bar
This command takes multiple packages as arguments and prunes their entries from the system database.
`,
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))

View File

@@ -20,6 +20,7 @@ import (
"errors"
"fmt"
"regexp"
"strings"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
pkg "github.com/mudler/luet/pkg/package"
@@ -41,7 +42,41 @@ func CreateRegexArray(rgx []string) ([]*regexp.Regexp, error) {
return ans, nil
}
func packageData(p string) (string, string) {
cat := ""
name := ""
if strings.Contains(p, "/") {
packagedata := strings.Split(p, "/")
cat = packagedata[0]
name = packagedata[1]
} else {
name = p
}
return cat, name
}
func ParsePackageStr(p string) (*pkg.DefaultPackage, error) {
if !strings.HasPrefix(p, "=") {
ver := ">=0"
cat := ""
name := ""
if strings.Contains(p, "@") {
packageinfo := strings.Split(p, "@")
ver = packageinfo[1]
cat, name = packageData(packageinfo[0])
} else {
cat, name = packageData(p)
}
return &pkg.DefaultPackage{
Name: name,
Category: cat,
Version: ver,
Uri: make([]string, 0),
}, nil
}
gp, err := _gentoo.ParsePackageStr(p)
if err != nil {
return nil, err

View File

@@ -13,7 +13,7 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gentoo_test
package cmd_helpers_test
import (
"testing"
@@ -25,8 +25,8 @@ import (
. "github.com/onsi/gomega"
)
func TestGentooBuilder(t *testing.T) {
func TestSolver(t *testing.T) {
RegisterFailHandler(Fail)
LoadConfig(config.LuetCfg)
RunSpecs(t, "Gentoo Suite")
RunSpecs(t, "CLI helpers test Suite")
}

cmd/helpers/cli_test.go (new file, 70 lines)
View File

@@ -0,0 +1,70 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_helpers_test
import (
. "github.com/mudler/luet/cmd/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("CLI Helpers", func() {
Context("Can parse package strings correctly", func() {
It("accept single package names", func() {
pack, err := ParsePackageStr("foo")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal(""))
Expect(pack.GetVersion()).To(Equal(">=0"))
})
It("accept unversioned packages with category", func() {
pack, err := ParsePackageStr("cat/foo")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal(">=0"))
})
It("accept versioned packages with category", func() {
pack, err := ParsePackageStr("cat/foo@1.1")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal("1.1"))
})
It("accept versioned ranges with category", func() {
pack, err := ParsePackageStr("cat/foo@>=1.1")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal(">=1.1"))
})
It("accept gentoo regex parsing without versions", func() {
pack, err := ParsePackageStr("=cat/foo")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal(">=0"))
})
It("accept gentoo regex parsing with versions", func() {
pack, err := ParsePackageStr("=cat/foo-1.2")
Expect(err).ToNot(HaveOccurred())
Expect(pack.GetName()).To(Equal("foo"))
Expect(pack.GetCategory()).To(Equal("cat"))
Expect(pack.GetVersion()).To(Equal("1.2"))
})
})
})

View File

@@ -30,8 +30,24 @@ import (
)
var installCmd = &cobra.Command{
Use: "install <pkg1> <pkg2> ...",
Short: "Install a package",
Use: "install <pkg1> <pkg2> ...",
Short: "Install a package",
Long: `Installs one or more packages without asking questions:
$ luet install -y utils/busybox utils/yq ...
To install only deps of a package:
$ luet install --onlydeps utils/busybox ...
To not install deps of a package:
$ luet install --nodeps utils/busybox ...
To force install a package:
$ luet install --force utils/busybox ...
`,
Aliases: []string{"i"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
@@ -43,8 +59,8 @@ var installCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
LuetCfg.Viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Long: `Install packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
var toInstall pkg.Packages
var systemDB pkg.PackageDatabase
@@ -75,6 +91,7 @@ var installCmd = &cobra.Command{
nodeps := LuetCfg.Viper.GetBool("nodeps")
onlydeps := LuetCfg.Viper.GetBool("onlydeps")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -99,6 +116,7 @@ var installCmd = &cobra.Command{
Force: force,
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
Ask: !yes,
})
inst.Repositories(repos)
@@ -131,6 +149,7 @@ func init() {
installCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
installCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
installCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
installCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
RootCmd.AddCommand(installCmd)
}

View File

@@ -31,7 +31,16 @@ import (
var packCmd = &cobra.Command{
Use: "pack <package name>",
Short: "pack a custom package",
Long: `pack and creates metadata directly from a source path`,
Long: `Pack creates a package from a directory, generating the metadata required by a tree to generate a repository.
Pack can be used to manually replace what "luet build" does automatically by reading the packages' build.yaml files.
$ mkdir -p output/etc/foo
$ echo "my config" > output/etc/foo
$ luet pack foo/bar@1.1 --source output
Afterwards, you can use the generated content and associate it with a tree and a corresponding definition.yaml file with "luet create-repo".
`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("destination", cmd.Flags().Lookup("destination"))
viper.BindPFlag("compression", cmd.Flags().Lookup("compression"))

View File

@@ -35,8 +35,12 @@ var reclaimCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Add packages to the systemdb if files belonging to packages
in available repositories exists in the target root.`,
Long: `Reclaim tries to find associations between packages in the online repositories and those on the system.
$ luet reclaim
It scans the target file system and, if it finds a match with a package available in the repositories, it marks it as installed in the system database.
`,
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase

cmd/replace.go (new file, 156 lines)
View File

@@ -0,0 +1,156 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"os"
"path/filepath"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
helpers "github.com/mudler/luet/cmd/helpers"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/spf13/cobra"
)
var replaceCmd = &cobra.Command{
Use: "replace <pkg1> <pkg2> --for <pkg3> --for <pkg4> ...",
Short: "replace a set of packages",
Aliases: []string{"r"},
Long: `Replaces one or a group of packages without asking questions:
$ luet replace -y system/busybox ... --for shells/bash --for system/coreutils ...
`,
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
LuetCfg.Viper.BindPFlag("for", cmd.Flags().Lookup("for"))
LuetCfg.Viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Run: func(cmd *cobra.Command, args []string) {
var toUninstall pkg.Packages
var toAdd pkg.Packages
var systemDB pkg.PackageDatabase
f := LuetCfg.Viper.GetStringSlice("for")
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
force := LuetCfg.Viper.GetBool("force")
nodeps := LuetCfg.Viper.GetBool("nodeps")
onlydeps := LuetCfg.Viper.GetBool("onlydeps")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
toUninstall = append(toUninstall, pack)
}
for _, a := range f {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
toAdd = append(toAdd, pack)
}
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts
if concurrent {
LuetCfg.GetSolverOptions().Implementation = solver.ParallelSimple
} else {
LuetCfg.GetSolverOptions().Implementation = solver.SingleCoreSimple
}
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
// Load config protect configs
installer.LoadConfigProtectConfs(LuetCfg)
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: LuetCfg.GetGeneral().Concurrency,
SolverOptions: *LuetCfg.GetSolverOptions(),
NoDeps: nodeps,
Force: force,
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
Ask: !yes,
})
inst.Repositories(repos)
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err := inst.Swap(toUninstall, toAdd, system)
if err != nil {
Fatal("Error: " + err.Error())
}
},
}
func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
replaceCmd.Flags().String("system-dbpath", path, "System db path")
replaceCmd.Flags().String("system-target", path, "System rootpath")
replaceCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
replaceCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
replaceCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
replaceCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
replaceCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
replaceCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
replaceCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
replaceCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
replaceCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
replaceCmd.Flags().StringSlice("for", []string{}, "Packages that have to be installed in place of others")
RootCmd.AddCommand(replaceCmd)
}

View File

@@ -24,6 +24,8 @@ import (
"strings"
"github.com/marcsauter/single"
bus "github.com/mudler/luet/pkg/bus"
extensions "github.com/mudler/cobra-extensions"
config "github.com/mudler/luet/pkg/config"
helpers "github.com/mudler/luet/pkg/helpers"
@@ -38,7 +40,7 @@ var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}
const (
LuetCLIVersion = "0.9.2"
LuetCLIVersion = "0.9.13"
LuetEnvPrefix = "LUET"
)
@@ -52,9 +54,31 @@ var (
// RootCmd represents the base command when called without any subcommands
var RootCmd = &cobra.Command{
Use: "luet",
Short: "Package manager for the XXth century!",
Long: `Package manager which uses containers to build packages`,
Use: "luet",
Short: "Container based package manager",
Long: `Luet is a single-binary package manager that uses containers to build packages.
To install a package:
$ luet install package
To search for a package in the repositories:
$ luet search package
To list all packages installed in the system:
$ luet search --installed .
To show hidden packages:
$ luet search --hidden package
To build a package from a tree definition:
$ luet build --tree tree/path package
`,
Version: fmt.Sprintf("%s-g%s %s", LuetCLIVersion, BuildCommit, BuildTime),
PersistentPreRun: func(cmd *cobra.Command, args []string) {
err := LoadConfig(config.LuetCfg)
@@ -68,6 +92,18 @@ var RootCmd = &cobra.Command{
if err != nil {
Fatal("failed on init tmp basedir:", err.Error())
}
viper.BindPFlag("plugin", cmd.Flags().Lookup("plugin"))
plugin := viper.GetStringSlice("plugin")
bus.Manager.Load(plugin...).Register()
if len(bus.Manager.Plugins) != 0 {
Info(":lollipop:Enabled plugins:")
for _, p := range bus.Manager.Plugins {
Info("\t:arrow_right:", p.Name)
}
}
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
// Cleanup all tmp directories used by luet
@@ -156,6 +192,7 @@ func init() {
"Disable config protect analysis.")
pflags.StringP("logfile", "l", config.LuetCfg.GetLogging().Path,
"Logfile path. Empty value disable log to file.")
pflags.StringSlice("plugin", []string{}, "A list of runtime plugins to load")
// os/user doesn't work in from scratch environments.
// Check if i can retrieve user informations.
@@ -175,6 +212,8 @@ func init() {
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("debug"))
config.LuetCfg.Viper.BindPFlag("general.fatal_warnings", pflags.Lookup("fatal"))
config.LuetCfg.Viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
config.LuetCfg.Viper.BindPFlag("plugin", pflags.Lookup("plugin"))
// Currently I maintain this only from cli.
config.LuetCfg.Viper.BindPFlag("no_spinner", pflags.Lookup("no-spinner"))
config.LuetCfg.Viper.BindPFlag("config_protect_skip", pflags.Lookup("skip-config-protect"))

View File

@@ -18,8 +18,11 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/ghodss/yaml"
"github.com/jedib0t/go-pretty/table"
"github.com/jedib0t/go-pretty/v6/list"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"
@@ -44,10 +47,60 @@ func (r PackageResult) String() string {
return fmt.Sprintf("%s/%s-%s required for %s", r.Category, r.Name, r.Version, r.Target)
}
var rows table.Row = table.Row{"Package", "Category", "Name", "Version", "Repository", "Description", "License", "URI"}
func packageToRow(repo string, p pkg.Package) table.Row {
return table.Row{p.HumanReadableString(), p.GetCategory(), p.GetName(), p.GetVersion(), repo, p.GetDescription(), p.GetLicense(), strings.Join(p.GetURI(), "\n")}
}
func packageToList(l list.Writer, repo string, p pkg.Package) {
l.AppendItem(p.HumanReadableString())
l.Indent()
l.AppendItem(fmt.Sprintf("Category: %s", p.GetCategory()))
l.AppendItem(fmt.Sprintf("Name: %s", p.GetName()))
l.AppendItem(fmt.Sprintf("Version: %s", p.GetVersion()))
l.AppendItem(fmt.Sprintf("Description: %s", p.GetDescription()))
l.AppendItem(fmt.Sprintf("Repository: %s ", repo))
l.AppendItem(fmt.Sprintf("Uri: %s ", strings.Join(p.GetURI(), "\n")))
l.UnIndent()
}
var searchCmd = &cobra.Command{
Use: "search <term>",
Short: "Search packages",
Long: `Search for installed and available packages`,
Use: "search <term>",
Short: "Search packages",
Long: `Search for installed and available packages
To search a package in the repositories:
$ luet search <regex>
To search a package and display results in a table (wide screens):
$ luet search --table <regex>
To look into the installed packages:
$ luet search --installed <regex>
Note: the regex argument is optional; if omitted, it implies "all"
To search a package by label:
$ luet search --by-label <label>
or by regex against the label:
$ luet search --by-label-regex <label>
It can also show a package's revdeps:
$ luet search --revdeps <regex>
Search can also return results in different formats: as terminal output, as JSON, or as YAML.
$ luet search --json <regex> # JSON output
$ luet search --yaml <regex> # YAML output
`,
Aliases: []string{"s"},
PreRun: func(cmd *cobra.Command, args []string) {
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
@@ -61,10 +114,11 @@ var searchCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
var results Results
if len(args) != 1 {
if len(args) > 1 {
Fatal("Wrong number of arguments (expected 1)")
} else if len(args) == 0 {
args = []string{"."}
}
hidden, _ := cmd.Flags().GetBool("hidden")
installed := LuetCfg.Viper.GetBool("installed")
@@ -75,6 +129,7 @@ var searchCmd = &cobra.Command{
searchWithLabel, _ := cmd.Flags().GetBool("by-label")
searchWithLabelMatch, _ := cmd.Flags().GetBool("by-label-regex")
revdeps, _ := cmd.Flags().GetBool("revdeps")
tableMode, _ := cmd.Flags().GetBool("table")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
@@ -86,6 +141,9 @@ var searchCmd = &cobra.Command{
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts
l := list.NewWriter()
t := table.NewWriter()
t.AppendHeader(rows)
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())
if !installed {
@@ -124,7 +182,8 @@ var searchCmd = &cobra.Command{
for _, m := range matches {
if !revdeps {
if !m.Package.IsHidden() || m.Package.IsHidden() && hidden {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", m.Package.HumanReadableString()))
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package))
packageToList(l, m.Repo.GetName(), m.Package)
results.Packages = append(results.Packages,
PackageResult{
Name: m.Package.GetName(),
@@ -135,10 +194,11 @@ var searchCmd = &cobra.Command{
})
}
} else {
visited := make(map[string]interface{})
for _, revdep := range m.Package.ExpandedRevdeps(m.Repo.GetTree().GetDatabase(), visited) {
packs, _ := m.Repo.GetTree().GetDatabase().GetRevdeps(m.Package)
for _, revdep := range packs {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
Info(fmt.Sprintf(":file_folder:%s", m.Repo.GetName()), fmt.Sprintf(":package:%s", revdep.HumanReadableString()))
t.AppendRow(packageToRow(m.Repo.GetName(), revdep))
packageToList(l, m.Repo.GetName(), revdep)
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
@@ -178,7 +238,8 @@ var searchCmd = &cobra.Command{
for _, pack := range iMatches {
if !revdeps {
if !pack.IsHidden() || pack.IsHidden() && hidden {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
t.AppendRow(packageToRow("system", pack))
packageToList(l, "system", pack)
results.Packages = append(results.Packages,
PackageResult{
Name: pack.GetName(),
@@ -189,11 +250,11 @@ var searchCmd = &cobra.Command{
})
}
} else {
visited := make(map[string]interface{})
for _, revdep := range pack.ExpandedRevdeps(system.Database, visited) {
packs, _ := system.Database.GetRevdeps(pack)
for _, revdep := range packs {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
Info(fmt.Sprintf(":package:%s", pack.HumanReadableString()))
t.AppendRow(packageToRow("system", pack))
packageToList(l, "system", pack)
results.Packages = append(results.Packages,
PackageResult{
Name: revdep.GetName(),
@@ -208,6 +269,16 @@ var searchCmd = &cobra.Command{
}
}
t.AppendFooter(rows)
t.SetStyle(table.StyleColoredBright)
l.SetStyle(list.StyleConnectedRounded)
if tableMode {
Info(t.Render())
} else {
Info(l.Render())
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
@@ -245,6 +316,7 @@ func init() {
searchCmd.Flags().Bool("by-label-regex", false, "Search packages through label regex")
searchCmd.Flags().Bool("revdeps", false, "Search package reverse dependencies")
searchCmd.Flags().Bool("hidden", false, "Include hidden packages")
searchCmd.Flags().Bool("table", false, "show output in a table (wider screens)")
RootCmd.AddCommand(searchCmd)
}

View File

@@ -34,5 +34,6 @@ func init() {
NewTreePkglistCommand(),
NewTreeValidateCommand(),
NewTreeBumpCommand(),
NewTreeImageCommand(),
)
}

cmd/tree/images.go (new file, 136 lines)
View File

@@ -0,0 +1,136 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd_tree
import (
"fmt"
//. "github.com/mudler/luet/pkg/config"
"github.com/ghodss/yaml"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func NewTreeImageCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "images [OPTIONS]",
Short: "List of the images of a package",
PreRun: func(cmd *cobra.Command, args []string) {
t, _ := cmd.Flags().GetStringArray("tree")
if len(t) == 0 {
Fatal("Mandatory tree param missing.")
}
if len(args) != 1 {
Fatal("Expects one package as parameter")
}
viper.BindPFlag("image-repository", cmd.Flags().Lookup("image-repository"))
},
Run: func(cmd *cobra.Command, args []string) {
var results TreeResults
treePath, _ := cmd.Flags().GetStringArray("tree")
imageRepository := viper.GetString("image-repository")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
LuetCfg.GetLogging().SetLogLevel("error")
}
reciper := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
for _, t := range treePath {
err := reciper.Load(t)
if err != nil {
Fatal("Error on load tree ", err)
}
}
compilerBackend := backend.NewSimpleDockerBackend()
opts := compiler.NewDefaultCompilerOptions()
opts.SolverOptions = *LuetCfg.GetSolverOptions()
opts.ImageRepository = imageRepository
solverOpts := solver.Options{Type: solver.SingleCoreSimple, Concurrency: 1}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, reciper.GetDatabase(), opts, solverOpts)
a := args[0]
pack, err := helpers.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
spec, err := luetCompiler.FromPackage(pack)
if err != nil {
Fatal("Error: " + err.Error())
}
asserts, err := luetCompiler.ComputeDepTree(spec)
for _, assertion := range asserts { //highly dependent on the order
//buildImageHash := imageRepository + ":" + assertion.Hash.BuildHash
currentPackageImageHash := imageRepository + ":" + assertion.Hash.PackageHash
results.Packages = append(results.Packages, TreePackageResult{
Name: assertion.Package.GetName(),
Version: assertion.Package.GetVersion(),
Category: assertion.Package.GetCategory(),
Image: currentPackageImageHash,
})
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
switch out {
case "yaml":
fmt.Println(string(y))
case "json":
j2, err := yaml.YAMLToJSON(y)
if err != nil {
fmt.Printf("err: %v\n", err)
return
}
fmt.Println(string(j2))
default:
for _, p := range results.Packages {
fmt.Println(fmt.Sprintf("%s/%s-%s: %s", p.Category, p.Name, p.Version, p.Image))
}
}
},
}
ans.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
ans.Flags().StringArrayP("tree", "t", []string{}, "Path of the tree to use.")
ans.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
return ans
}

View File

@@ -37,6 +37,7 @@ type TreePackageResult struct {
Category string `json:"category"`
Version string `json:"version"`
Path string `json:"path"`
Image string `json:"image"`
}
type TreeResults struct {
@@ -167,9 +168,8 @@ func NewTreePkglistCommand() *cobra.Command {
if addPkg {
if revdeps {
visited := make(map[string]interface{})
for _, revdep := range p.ExpandedRevdeps(reciper.GetDatabase(), visited) {
packs, _ := reciper.GetDatabase().GetRevdeps(p)
for _, revdep := range packs {
if full {
pkgstr = pkgDetail(revdep)
} else if verbose {

View File

@@ -42,6 +42,7 @@ var uninstallCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
LuetCfg.Viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Run: func(cmd *cobra.Command, args []string) {
var systemDB pkg.PackageDatabase
@@ -63,6 +64,7 @@ var uninstallCmd = &cobra.Command{
checkconflicts, _ := cmd.Flags().GetBool("conflictscheck")
fullClean, _ := cmd.Flags().GetBool("full-clean")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -86,6 +88,7 @@ var uninstallCmd = &cobra.Command{
FullUninstall: full,
FullCleanUninstall: fullClean,
CheckConflicts: checkconflicts,
Ask: !yes,
})
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
@@ -120,6 +123,7 @@ func init() {
uninstallCmd.Flags().Bool("conflictscheck", true, "Check if the package marked for deletion is required by other packages")
uninstallCmd.Flags().Bool("full-clean", false, "(experimental) Uninstall packages and all the other deps/revdeps of it.")
uninstallCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
uninstallCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
RootCmd.AddCommand(uninstallCmd)
}

View File

@@ -39,6 +39,7 @@ var upgradeCmd = &cobra.Command{
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
LuetCfg.Viper.BindPFlag("force", cmd.Flags().Lookup("force"))
LuetCfg.Viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Long: `Upgrades packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
@@ -65,6 +66,7 @@ var upgradeCmd = &cobra.Command{
clean, _ := cmd.Flags().GetBool("clean")
sync, _ := cmd.Flags().GetBool("sync")
concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
yes := LuetCfg.Viper.GetBool("yes")
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
@@ -90,12 +92,9 @@ var upgradeCmd = &cobra.Command{
SolverUpgrade: universe,
RemoveUnavailableOnUpgrade: clean,
UpgradeNewRevisions: sync,
Ask: !yes,
})
inst.Repositories(repos)
_, err := inst.SyncRepositories(false)
if err != nil {
Fatal("Error: " + err.Error())
}
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
@@ -104,8 +103,8 @@ var upgradeCmd = &cobra.Command{
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err = inst.Upgrade(system)
if err != nil {
if err := inst.Upgrade(system); err != nil {
Fatal("Error: " + err.Error())
}
},
@@ -124,11 +123,12 @@ func init() {
upgradeCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
upgradeCmd.Flags().Bool("force", false, "Force upgrade by ignoring errors")
upgradeCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful! overrides checkconflicts and full!)")
upgradeCmd.Flags().Bool("full", true, "Attempts to remove as much packages as possible which aren't required (slow)")
upgradeCmd.Flags().Bool("full", false, "Attempts to remove as much packages as possible which aren't required (slow)")
upgradeCmd.Flags().Bool("universe", false, "Use ONLY the SAT solver to compute upgrades (experimental)")
upgradeCmd.Flags().Bool("clean", false, "Try to drop removed packages (experimental, only when --universe is enabled)")
upgradeCmd.Flags().Bool("sync", false, "Upgrade packages with new revisions (experimental)")
upgradeCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
upgradeCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
RootCmd.AddCommand(upgradeCmd)
}

go.mod (16 lines changed)
View File

@@ -7,13 +7,16 @@ require (
github.com/Sabayon/pkgs-checker v0.7.2
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/briandowns/spinner v1.7.0
github.com/cavaliercoder/grab v2.0.0+incompatible
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec
github.com/crillab/gophersat v1.3.2-0.20201023142334-3fc2ac466765
github.com/docker/docker v17.12.0-ce-rc1.0.20200417035958-130b0bc6032c+incompatible
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/fsouza/go-dockerclient v1.6.4
github.com/ghodss/yaml v1.0.0
github.com/hashicorp/go-version v1.2.0
github.com/jedib0t/go-pretty v4.3.0+incompatible
github.com/jedib0t/go-pretty/v6 v6.0.5
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
github.com/klauspost/pgzip v1.2.1
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d
@@ -21,17 +24,18 @@ require (
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/mattn/go-isatty v0.0.10 // indirect
github.com/moby/sys/mount v0.1.1-0.20200320164225-6154f11e6840 // indirect
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290
github.com/onsi/ginkgo v1.12.1
github.com/onsi/gomega v1.10.0
github.com/onsi/ginkgo v1.14.2
github.com/onsi/gomega v1.10.3
github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
github.com/pelletier/go-toml v1.6.0 // indirect
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
github.com/pkg/errors v0.9.1
github.com/schollz/progressbar/v3 v3.7.1
github.com/spf13/cobra v1.0.0
github.com/spf13/viper v1.6.3
go.etcd.io/bbolt v1.3.4
@@ -39,11 +43,9 @@ require (
go.uber.org/multierr v1.4.0 // indirect
go.uber.org/zap v1.13.0
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f // indirect
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 // indirect
gopkg.in/yaml.v2 v2.2.8
gopkg.in/yaml.v2 v2.3.0
gotest.tools/v3 v3.0.2 // indirect
helm.sh/helm/v3 v3.3.4
mvdan.cc/sh/v3 v3.0.0-beta1
)
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200605210607-749178b8f80d+incompatible

go.sum (80 lines changed)
View File

@@ -71,6 +71,7 @@ github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj
github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 h1:4daAzAu0S6Vi7/lbWECcX0j45yZReDZ56BQsrVBOEEY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154 h1:2lbe+CPe6eQf2EA3jjLdLFZKGv3cbYqVIDjKnzcyOXg=
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154/go.mod h1:cMLKpjHSP4q0P133fV15ojQgwWWB2IMv+hrFsmBF/wI=
@@ -97,17 +98,21 @@ github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8n
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cavaliercoder/grab v2.0.0+incompatible h1:wZHbBQx56+Yxjx2TCGDcenhh3cJn7cCLMfkEPmySTSE=
github.com/cavaliercoder/grab v2.0.0+incompatible/go.mod h1:tTBkfNqSBfuMmMBFaO2phgyhdYhiZQ/+iXCZDzcDsMI=
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec h1:4XvMn0XuV7qxCH22gbnR79r+xTUaLOSA0GW/egpO3SQ=
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec/go.mod h1:NbXoa59CCAGqtRm7kRrcZIk2dTCJMRVF8QI3BOD7isY=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/chuckpreslar/emission v0.0.0-20170206194824-a7ddd980baf9 h1:xz6Nv3zcwO2Lila35hcb0QloCQsc38Al13RNEzWRpX4=
github.com/chuckpreslar/emission v0.0.0-20170206194824-a7ddd980baf9/go.mod h1:2wSM9zJkl1UQEFZgSd68NfCgRz1VL1jzy/RjCg+ULrs=
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 h1:sDMmm+q/3+BukdIpxwO365v/Rbspp2Nt5XntgQRXq8Q=
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
@@ -201,6 +206,8 @@ github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVB
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsouza/go-dockerclient v1.6.4 h1:B+L+1lz1LUrNgEUUh8PSG76s70EYC49ssv2xvTefTMM=
github.com/fsouza/go-dockerclient v1.6.4/go.mod h1:GOdftxWLWIbIWKbIMDroKFJzPdg6Iw7r+jX1DDZdVsA=
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
@@ -224,6 +231,7 @@ github.com/go-openapi/analysis v0.19.2/go.mod h1:3P1osvZa9jKjb8ed2TPng3f0i/UY9sn
github.com/go-openapi/analysis v0.19.5/go.mod h1:hkEAkxagaIvIP7VTn8ygJNkd4kAYON2rCu0v0ObL0AU=
github.com/go-openapi/errors v0.17.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/errors v0.18.0/go.mod h1:LcZQpmvG4wyF5j4IhA73wkLFQg+QJXOQHVjmcZxhka0=
github.com/go-openapi/errors v0.19.2 h1:a2kIyV3w+OS3S97zxUndRVD46+FhGOUBDFY7nmu4CsY=
github.com/go-openapi/errors v0.19.2/go.mod h1:qX0BLWsyaKfvhluLejVpVNwNRdXZhEbTA4kxxpKBC94=
github.com/go-openapi/jsonpointer v0.0.0-20160704185906-46af16f9f7b1/go.mod h1:+35s3my2LFTysnkMfxsJBAMHj/DoqoB9knIWoYG/Vk0=
github.com/go-openapi/jsonpointer v0.17.0/go.mod h1:cOnomiV+CVVwFLk0A/MExoFMjwdsUdVpsRhURCKh+3M=
@@ -251,6 +259,7 @@ github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8
github.com/go-openapi/strfmt v0.17.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.18.0/go.mod h1:P82hnJI0CXkErkXi8IKjPbNBM6lV6+5pLP5l494TcyU=
github.com/go-openapi/strfmt v0.19.0/go.mod h1:+uW+93UVvGGq2qGaZxdDeJqSAqBqBdl+ZPMF/cC8nDY=
github.com/go-openapi/strfmt v0.19.3 h1:eRfyY5SkaNJCAwmmMcADjY31ow9+N7MCLW7oRkbsINA=
github.com/go-openapi/strfmt v0.19.3/go.mod h1:0yX7dbo8mKIvc3XSKp7MNfxw4JytCfCD6+bY1AVL9LU=
github.com/go-openapi/swag v0.0.0-20160704191624-1d0bd113de87/go.mod h1:DXUve3Dpr1UfpPtxFw+EFuQ41HhCWZfha5jSVRG7C7I=
github.com/go-openapi/swag v0.17.0/go.mod h1:AByQ+nYG6gQg71GINrmuDXCPWdL640yX49/kXLo40Tg=
@@ -262,6 +271,7 @@ github.com/go-openapi/validate v0.19.2/go.mod h1:1tRCw7m3jtI8eNWEEliiAqUIcBztB2K
github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85nY1c2z52x1Gk4=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobuffalo/envy v1.7.0/go.mod h1:n7DRkBerg/aorDM8kbduw5dN3oXGswK5liaSCx4T5NI=
github.com/gobuffalo/envy v1.7.1/go.mod h1:FurDp9+EDPE4aIUS3ZLyD+7/9fpx7YRt/ukY6jIHf0w=
@@ -294,6 +304,13 @@ github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5y
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db h1:woRePGFeVFfLKN/pOkfl+p/TAqKOfFu+7KPlMVpok/w=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golangplus/bytes v0.0.0-20160111154220-45c989fe5450/go.mod h1:Bk6SMAONeMXrxql8uvOKuAZSu8aM5RUGv+1C6IJaEho=
@@ -379,6 +396,10 @@ github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJ
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo=
github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag=
github.com/jedib0t/go-pretty/v6 v6.0.5 h1:oOo0/jSb3NEYKT6l1hhFXoX2UZnkanMuCE2DVT1mqnE=
github.com/jedib0t/go-pretty/v6 v6.0.5/go.mod h1:MTr6FgcfNdnN5wPVBzJ6mhJeDyiF0yBvS2TMXEV/XSU=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3 h1:sHsPfNMAG70QAvKbddQ0uScZCHQoZsT5NykGRCeeeIs=
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3/go.mod h1:yL958EeXv8Ylng6IfnvG4oflryUi3vgA3xPs9hmII1s=
@@ -398,6 +419,7 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
@@ -459,11 +481,13 @@ github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNx
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.10 h1:qxFzApOv4WsAL965uUPIsXzAKCZxN2p9UqdhFS4ZW10=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-oci8 v0.0.7/go.mod h1:wjDx6Xm9q7dFtHJvIlrI99JytznLw5wQ4R+9mNXJwGI=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.9 h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-shellwords v1.0.10/go.mod h1:EZzvwXDESEeg03EKmM+RmDnNOPKG4lLtQsUlTZDWQ8Y=
github.com/mattn/go-sqlite3 v1.9.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/go-sqlite3 v1.12.0 h1:u/x3mp++qUxvYfulZ4HKOvVO0JWhk7HtE8lWhbGz/Do=
@@ -472,6 +496,8 @@ github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
github.com/mitchellh/copystructure v1.0.0 h1:Laisrj+bAB6b/yJwB5Bt3ITZhGJdqmxquMKeZ+mmkFQ=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -504,6 +530,8 @@ github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d h1:fKh+rvw
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d/go.mod h1:puRUWSwyecW2V355tKncwPVPRAjQBduPsFjG0mrV/Nw=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87 h1:mGz7T8KvmHH0gLWPI5tQne8xl2cO3T8wrrb6Aa16Jxo=
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87/go.mod h1:1w4zI1LYXDeiUXqedPcrT5eQJnmKR6dbg5iJMgSIP/Y=
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82 h1:Hkefw2tzoKATVUTFsCtDlUnY180+OE851qGbq45ATxk=
github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82/go.mod h1:4P/ULate+2QxoAQtojaRjyO5VGMhV0KLnSdAS8nuBbo=
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290 h1:426hFyXMpXeqIeGJn2cGAW9ogvM2Jf+Jv23gtVPvBLM=
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290/go.mod h1:uP5BBgFxq2wNWo7n1vnY5SSbgL0WDshVJrOO12tZ/lA=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
@@ -534,6 +562,8 @@ github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1 h1:mFwc4LvZ0xpSvDZ3E+k8Yte0hLOMxXUlP+yXtJqkYfQ=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.2 h1:8mVmC9kjFFmA8H4pKMUhcblgifdkOIXPvbhN1T36q1M=
github.com/onsi/ginkgo v1.14.2/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
@@ -542,6 +572,9 @@ github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1Cpa
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.0 h1:Gwkk+PTu/nfOwNMtUB/mRUv0X7ewW5dO4AERT1ThVKo=
github.com/onsi/gomega v1.10.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.3 h1:gph6h/qe9GSUw1NhH1gp+qb+h8rXD8Cy60Z32Qw3ELA=
github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/openSUSE/umoci v0.1.1-0.20191030112807-c0dd46ae078f h1:G9hyzNrFbTgp9KEoGRcNYxAT41lo7hDy9oxXT1Y7WHI=
github.com/openSUSE/umoci v0.1.1-0.20191030112807-c0dd46ae078f/go.mod h1:3p4KA5nwyY65lVmQZxv7tm0YEylJ+t1fY91ORsVXv58=
@@ -592,7 +625,6 @@ github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f h1:WyCn68lTiy
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f/go.mod h1:/iRjX3DdSK956SzsUdV55J+wIsQ+2IBWmBrB4RvZfk4=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pkg/diff v0.0.0-20190930165518-531926345625/go.mod h1:kFj35MyHn14a6pIgWhm46KVjJr5CHys3eEYxkuKD1EI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
@@ -641,7 +673,6 @@ github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.4.0/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.5.0/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rootless-containers/proto v0.1.0 h1:gS1JOMEtk1YDYHCzBAf/url+olMJbac7MTrgSeP6zh4=
github.com/rootless-containers/proto v0.1.0/go.mod h1:vgkUFZbQd0gcE/K/ZwtE4MYjZPu0UNHLXIQxhyqAFh8=
github.com/rubenv/sql-migrate v0.0.0-20200616145509-8d140a17f351/go.mod h1:DCgfY80j8GYL7MLEfvcpSFvjD0L5yZq/aZUJmhZklyg=
@@ -652,6 +683,8 @@ github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/schollz/progressbar/v3 v3.7.1 h1:aQR/t6d+1nURSdoMn6c7n0vJi5xQ3KndpF0n7R5wrik=
github.com/schollz/progressbar/v3 v3.7.1/go.mod h1:CG/f0JmacksUc6TkZToO7tVq4t03zIQSQUtTd7F9GR4=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/sanitized_anchor_name v1.0.0 h1:PdmoCO6wvbs+7yrJyMORt4/BmY5IYyJwS/kOiWx8mHo=
@@ -717,6 +750,7 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
github.com/tj/go-elastic v0.0.0-20171221160941-36157cbbebc2/go.mod h1:WjeM0Oo1eNAjXGDx2yma7uG2XoyRZTq1uv3M/o7imD0=
@@ -757,6 +791,7 @@ go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.2 h1:jxcFYjlkl8xaERsgLo+RNquI0epW6zuy/ZRQs6jnrFA=
go.mongodb.org/mongo-driver v1.1.2/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
@@ -794,7 +829,6 @@ golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20190621222207-cc06ce4a13d4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975 h1:/Tl7pH94bvbAAHBdZJT947M/+gp0+CqQXDtMRC0fseo=
@@ -802,6 +836,8 @@ golang.org/x/crypto v0.0.0-20200220183623-bac4c82f6975/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200414173820-0848c9571904/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 h1:psW17arqaxU48Z5kZ0CQnkZWQJsqcURM6tKiBApRjXI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201112155050-0c6587e931a9 h1:umElSU9WZirRdgu2yFHY0ayQkEnKiOC1TtM3fWXFnoU=
golang.org/x/crypto v0.0.0-20201112155050-0c6587e931a9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -838,8 +874,9 @@ golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2 h1:4dVFTC832rPn4pomLSz1vA+are2+dU19w1H8OngV7nc=
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 h1:efeOvDhwQ29Dj3SdAV/MJf8oukgn+8D8WgaCaRMchF8=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0 h1:wBouT66WTYFXdxfVdz9sVWARVd/2vfGcmI45D2gj45M=
golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
@@ -852,6 +889,7 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180816055513-1c9583448a9c/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -878,19 +916,27 @@ golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191220142924-d4481acd189f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae h1:/WDfKMnPU+m5M4xB+6x4kaepxRw6jWvR5iDRdvjHgy8=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f h1:+Nyd8tzPX9R7BWHguqsrbFdRx3WQ/1ib8I44HXV5yTA=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201113135734-0a15ea8d9b02 h1:5Ftd3YbC/kANXWCBjvppvUmv1BMakgFcBKA7MpYYp4M=
golang.org/x/sys v0.0.0-20201113135734-0a15ea8d9b02/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -959,6 +1005,13 @@ google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.1 h1:zvIju4sqAGvwKspUQOhwnpcqSbzi7/H6QomNNjTL4sk=
google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0 h1:4MY060fB1DLGMB/7MBTLnwQUY6+F09GEiz6SsrNqyzM=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -991,6 +1044,8 @@ gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
@@ -1029,9 +1084,6 @@ k8s.io/kubernetes v1.13.0/go.mod h1:ocZa8+6APFNC2tX1DZASIbocyYT5jHzqFVsY5aoB7Jk=
k8s.io/metrics v0.18.8/go.mod h1:j7JzZdiyhLP2BsJm/Fzjs+j5Lb1Y7TySjhPWqBPwRXA=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89 h1:d4vVOjXm687F1iLSP2q3lyPPuyvTUt3aVoBpi2DqRsU=
k8s.io/utils v0.0.0-20200324210504-a9aa75ae1b89/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
mvdan.cc/editorconfig v0.1.1-0.20191109213504-890940e3f00e/go.mod h1:Ge4atmRUYqueGppvJ7JNrtqpqokoJEFxYbP0Z+WeKS8=
mvdan.cc/sh/v3 v3.0.0-beta1 h1:UqiwBEXEPzelaGxuvixaOtzc7WzKtrElePJ8HqvW7K8=
mvdan.cc/sh/v3 v3.0.0-beta1/go.mod h1:rBIndNJFYPp8xSppiZcGIk6B5d1g3OEARxEaXjPxwVI=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT7lCHcxMU+mDHEm+nx46H4zuuHZkDP6icnhu0=
sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/structured-merge-diff/v3 v3.0.0-20200116222232-67a7b8c61874/go.mod h1:PlARxl6Hbt/+BC80dRLi1qAmnMqwqDg62YvvVkZjemw=

pkg/bus/events.go Normal file
View File

@@ -0,0 +1,67 @@
package bus
import (
"github.com/mudler/go-pluggable"
)
var (
// Package events
// EventPackageInstall is the event fired when a new package is being installed
EventPackageInstall pluggable.EventType = "package.install"
// EventPackageUnInstall is the event fired when a new package is being uninstalled
EventPackageUnInstall pluggable.EventType = "package.uninstall"
// Package build
// EventPackagePreBuild is the event fired before a package is being built
EventPackagePreBuild pluggable.EventType = "package.pre.build"
// EventPackagePreBuildArtifact is the event fired before a package artifact is being built
EventPackagePreBuildArtifact pluggable.EventType = "package.pre.build_artifact"
// EventPackagePostBuildArtifact is the event fired after a package artifact was built
EventPackagePostBuildArtifact pluggable.EventType = "package.post.build_artifact"
// EventPackagePostBuild is the event fired after a package was built
EventPackagePostBuild pluggable.EventType = "package.post.build"
// Image build
// EventImagePreBuild is the event fired before an image is built
EventImagePreBuild pluggable.EventType = "image.pre.build"
// EventImagePrePull is the event fired before an image is pulled
EventImagePrePull pluggable.EventType = "image.pre.pull"
// EventImagePrePush is the event fired before an image is pushed
EventImagePrePush pluggable.EventType = "image.pre.push"
// EventImagePostBuild is the event fired after an image has been built
EventImagePostBuild pluggable.EventType = "image.post.build"
// EventImagePostPull is the event fired after an image has been pulled
EventImagePostPull pluggable.EventType = "image.post.pull"
// EventImagePostPush is the event fired after an image has been pushed
EventImagePostPush pluggable.EventType = "image.post.push"
// Repository events
// EventRepositoryPreBuild is the event fired before a repository is being built
EventRepositoryPreBuild pluggable.EventType = "repository.pre.build"
// EventRepositoryPostBuild is the event fired after a repository was built
EventRepositoryPostBuild pluggable.EventType = "repository.post.build"
)
// Manager is the bus instance manager, which subscribes plugins to events emitted by Luet
var Manager *pluggable.Manager = pluggable.NewManager(
[]pluggable.EventType{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
EventRepositoryPreBuild,
EventRepositoryPostBuild,
EventImagePreBuild,
EventImagePrePull,
EventImagePrePush,
EventImagePostBuild,
EventImagePostPull,
EventImagePostPush,
},
)
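
Not part of the changeset: a minimal sketch of how these events are meant to be emitted, mirroring the bus.Manager.Publish calls added elsewhere in this compare. The payload struct and package name are made up for illustration; go-pluggable presumably hands whatever payload is passed here to any plugins subscribed to the event.

package main

import (
	"github.com/mudler/luet/pkg/bus"
)

func main() {
	// Hypothetical payload, for illustration only; real callers in this
	// changeset publish artifacts and compile specs the same way.
	payload := struct {
		Name string
	}{Name: "app-admin/enman"}

	// Fire an event on the shared Manager; plugins subscribed to
	// "package.pre.build" would receive this payload.
	bus.Manager.Publish(bus.EventPackagePreBuild, payload)
}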

View File

@@ -34,6 +34,7 @@ import (
"strings"
"sync"
bus "github.com/mudler/luet/pkg/bus"
. "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
@@ -170,6 +171,8 @@ func (a *PackageArtifact) WriteYaml(dst string) error {
return errors.Wrap(err, "While marshalling for PackageArtifact YAML")
}
bus.Manager.Publish(bus.EventPackagePreBuildArtifact, a)
mangle, err := NewPackageArtifactFromYaml(data)
if err != nil {
return errors.Wrap(err, "Generated invalid artifact")
@@ -191,6 +194,7 @@ func (a *PackageArtifact) WriteYaml(dst string) error {
return errors.Wrap(err, "While writing PackageArtifact YAML")
}
//a.CompileSpec.GetPackage().SetPath(p)
bus.Manager.Publish(bus.EventPackagePostBuildArtifact, a)
return nil
}

View File

@@ -96,6 +96,7 @@ ENV PACKAGE_CATEGORY=app-admin`))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
WORKDIR /luetbuild
ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin

View File

@@ -24,6 +24,7 @@ import (
"path/filepath"
"strings"
docker "github.com/fsouza/go-dockerclient"
capi "github.com/mudler/docker-companion/api"
"github.com/mudler/luet/pkg/compiler"
@@ -45,6 +46,7 @@ func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
name := opts.ImageName
path := opts.SourcePath
dockerfileName := opts.DockerFileName
buildarg := []string{"build", "-f", dockerfileName, "-t", name, "."}
Debug(":whale2: Building image " + name)
@@ -56,6 +58,21 @@ func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
}
Info(":whale: Building image " + name + " done")
if os.Getenv("DOCKER_SQUASH") == "true" {
Info(":whale: Squashing image " + name)
var client *docker.Client
client, err = docker.NewClientFromEnv()
if err != nil {
return errors.Wrap(err, "could not connect to the Docker daemon")
}
err = capi.Squash(client, name, name)
if err != nil {
return errors.Wrap(err, "Failed squashing image")
}
Info(":whale: Squashing image " + name + " done")
}
if config.LuetCfg.GetGeneral().ShowBuildOutput {
Info(string(out))
} else {
@@ -83,7 +100,7 @@ func (*SimpleDocker) DownloadImage(opts compiler.CompilerBackendOptions) error {
cmd := exec.Command("docker", buildarg...)
out, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed building image: "+string(out))
return errors.Wrap(err, "Failed pulling image: "+string(out))
}
Info(":whale: Downloaded image:", name)
return nil

View File

@@ -87,6 +87,7 @@ ENV PACKAGE_CATEGORY=app-admin`))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
WORKDIR /luetbuild
ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin

View File

@@ -16,6 +16,7 @@
package compiler
import (
"archive/tar"
"fmt"
"io/ioutil"
"os"
@@ -26,6 +27,9 @@ import (
"sync"
"time"
bus "github.com/mudler/luet/pkg/bus"
yaml "gopkg.in/yaml.v2"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -36,6 +40,7 @@ import (
const BuildFile = "build.yaml"
const DefinitionFile = "definition.yaml"
const CollectionFile = "collection.yaml"
type LuetCompiler struct {
*tree.CompilerRecipe
@@ -300,13 +305,13 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
fp := p.GetPackage().HashFingerprint(packageImage)
if buildertaggedImage == "" {
buildertaggedImage = cs.ImageRepository + "-" + fp + "-builder"
buildertaggedImage = cs.ImageRepository + ":builder-" + fp
Debug(pkgTag, "Creating intermediary image", buildertaggedImage, "from", image)
}
// TODO: Cleanup, not actually hit
if packageImage == "" {
packageImage = cs.ImageRepository + "-" + fp
packageImage = cs.ImageRepository + ":builder-invalid" + fp
}
p.SetSeedImage(image) // In this case, we ignore the build deps as we suppose that the image has them - otherwise we recompose the tree with a solver,
@@ -336,13 +341,15 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
}
}
Info(pkgTag, ":whale: Generating 'builder' image definition from", image)
// First we create the builder image
if err := p.WriteBuildImageDefinition(filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+"-builder.dockerfile")); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not generate image definition")
}
if len(p.GetPreBuildSteps()) == 0 {
buildertaggedImage = image
}
// Then we write the step image, which uses the builder one
if err := p.WriteStepImageDefinition(buildertaggedImage, filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+".dockerfile")); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not generate image definition")
@@ -364,27 +371,42 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
buildAndPush := func(opts CompilerBackendOptions) error {
buildImage := true
if cs.Options.PullFirst {
if err := cs.Backend.DownloadImage(opts); err == nil {
bus.Manager.Publish(bus.EventImagePrePull, opts)
err := cs.Backend.DownloadImage(opts)
if err == nil {
buildImage = false
} else {
Warning("Failed to download '" + opts.ImageName + "'. Will keep going and build the image unless you use --fatal")
Warning(err.Error())
}
bus.Manager.Publish(bus.EventImagePostPull, opts)
}
if buildImage {
bus.Manager.Publish(bus.EventImagePreBuild, opts)
if err := cs.Backend.BuildImage(opts); err != nil {
return errors.Wrap(err, "Could not build image: "+image+" "+opts.DockerFileName)
}
bus.Manager.Publish(bus.EventImagePostBuild, opts)
if cs.Options.Push {
bus.Manager.Publish(bus.EventImagePrePush, opts)
if err = cs.Backend.Push(opts); err != nil {
return errors.Wrap(err, "Could not push image: "+image+" "+opts.DockerFileName)
}
bus.Manager.Publish(bus.EventImagePostPush, opts)
}
}
return nil
}
if err := buildAndPush(builderOpts); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
if len(p.GetPreBuildSteps()) != 0 {
Info(pkgTag, ":whale: Generating 'builder' image from", image, "as", buildertaggedImage, "with prelude steps")
if err := buildAndPush(builderOpts); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
}
}
// Even if there are no build steps, we still build and tag so that the image in use at this point
// can be cached in a registry or locally, effectively acting as a docker tag.
Info(pkgTag, ":whale: Generating 'package' image from", buildertaggedImage, "as", packageImage, "with build steps")
if err := buildAndPush(runnerOpts); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
}
@@ -407,6 +429,23 @@ func (cs *LuetCompiler) genArtifact(p CompilationSpec, builderOpts, runnerOpts C
unpack = true
}
if len(p.BuildSteps()) == 0 && len(p.GetPreBuildSteps()) == 0 && !unpack {
fakePackage := p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar")
// We can't generate a delta in this case: the package is virtual, and nothing really has to be done
file, err := os.Create(fakePackage)
if err != nil {
return nil, errors.Wrap(err, "Failed creating virtual package")
}
defer file.Close()
tw := tar.NewWriter(file)
defer tw.Close()
artifact := NewPackageArtifact(fakePackage)
artifact.SetCompressionType(cs.CompressionType)
return artifact, nil
}
// prepare folder content of the image with the package compiled inside
if err := cs.Backend.ExportImage(runnerOpts); err != nil {
return nil, errors.Wrap(err, "Failed exporting image")
@@ -596,6 +635,14 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
targetAssertion := p.GetSourceAssertion().Search(p.GetPackage().GetFingerPrint())
targetPackageHash := cs.ImageRepository + ":" + targetAssertion.Hash.PackageHash
bus.Manager.Publish(bus.EventPackagePreBuild, struct {
CompileSpec CompilationSpec
Assert solver.PackageAssert
}{
CompileSpec: p,
Assert: *targetAssertion,
})
// - If image is set we just generate a plain dockerfile
// Treat last case (easier) first. The image is provided and we just compute a plain dockerfile with the images listed as above
if p.GetImage() != "" {
@@ -636,6 +683,14 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
Debug(pkgTag, " :arrow_right_hook: :whale: Builder image from", buildImageHash)
Debug(pkgTag, " :arrow_right_hook: :whale: Package image name", currentPackageImageHash)
bus.Manager.Publish(bus.EventPackagePreBuild, struct {
CompileSpec CompilationSpec
Assert solver.PackageAssert
}{
CompileSpec: compileSpec,
Assert: assertion,
})
lastHash = currentPackageImageHash
if compileSpec.GetImage() != "" {
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from image")
@@ -655,6 +710,15 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
// deperrs = append(deperrs, err)
// break // stop at first error
}
bus.Manager.Publish(bus.EventPackagePostBuild, struct {
CompileSpec CompilationSpec
Artifact Artifact
}{
CompileSpec: compileSpec,
Artifact: artifact,
})
departifacts = append(departifacts, artifact)
Info(pkgTag, ":white_check_mark: Done")
}
@@ -673,6 +737,14 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p Compila
artifact.SetDependencies(departifacts)
artifact.SetSourceAssertion(p.GetSourceAssertion())
bus.Manager.Publish(bus.EventPackagePostBuild, struct {
CompileSpec CompilationSpec
Artifact Artifact
}{
CompileSpec: p,
Artifact: artifact,
})
return artifact, err
} else {
return departifacts[len(departifacts)-1], nil
@@ -688,12 +760,51 @@ func (cs *LuetCompiler) FromPackage(p pkg.Package) (CompilationSpec, error) {
return nil, err
}
out, err := helpers.RenderFiles(pack.Rel(BuildFile), pack.Rel(DefinitionFile))
if err != nil {
return nil, errors.Wrap(err, "rendering file "+pack.Rel(BuildFile))
var dataresult []byte
val := pack.Rel(DefinitionFile)
if _, err := os.Stat(pack.Rel(CollectionFile)); err == nil {
val = pack.Rel(CollectionFile)
data, err := ioutil.ReadFile(val)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+val)
}
dataBuild, err := ioutil.ReadFile(pack.Rel(BuildFile))
if err != nil {
return nil, errors.Wrap(err, "rendering file "+val)
}
packsRaw, err := pkg.GetRawPackages(data)
raw := packsRaw.Find(pack.GetName(), pack.GetCategory(), pack.GetVersion())
d := map[string]interface{}{}
if len(cs.Options.BuildValuesFile) > 0 {
defBuild, err := ioutil.ReadFile(cs.Options.BuildValuesFile)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+val)
}
err = yaml.Unmarshal(defBuild, &d)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+val)
}
}
dat, err := helpers.RenderHelm(string(dataBuild), raw, d)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+pack.Rel(BuildFile))
}
dataresult = []byte(dat)
} else {
out, err := helpers.RenderFiles(pack.Rel(BuildFile), val, cs.Options.BuildValuesFile)
if err != nil {
return nil, errors.Wrap(err, "rendering file "+pack.Rel(BuildFile))
}
dataresult = []byte(out)
}
return NewLuetCompilationSpec([]byte(out), pack)
return NewLuetCompilationSpec(dataresult, pack)
}
func (cs *LuetCompiler) GetBackend() CompilerBackend {

View File

@@ -56,6 +56,7 @@ type CompilerOptions struct {
NoDeps bool
SolverOptions config.LuetSolverOptions
SkipIfMetadataExists bool
BuildValuesFile string
PackageTargetOnly bool
}

View File

@@ -242,10 +242,24 @@ RUN ` + s
func (cs *LuetCompilationSpec) RenderStepImage(image string) (string, error) {
spec := `
FROM ` + image + `
WORKDIR /luetbuild
ENV PACKAGE_NAME=` + cs.Package.GetName() + `
ENV PACKAGE_VERSION=` + cs.Package.GetVersion() + `
ENV PACKAGE_CATEGORY=` + cs.Package.GetCategory()
if len(cs.Retrieve) > 0 {
for _, s := range cs.Retrieve {
//var file string
// if helpers.IsValidUrl(s) {
// file = s
// } else {
// file = cs.Rel(s)
// }
spec = spec + `
ADD ` + s + ` /luetbuild/`
}
}
for _, s := range cs.Env {
spec = spec + `
ENV ` + s

View File

@@ -96,6 +96,7 @@ ENV test=1`))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
WORKDIR /luetbuild
ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin
@@ -168,9 +169,12 @@ ENV test=1`))
Expect(dockerfile).To(Equal(`
FROM luet/base
WORKDIR /luetbuild
ENV PACKAGE_NAME=a
ENV PACKAGE_VERSION=1.0
ENV PACKAGE_CATEGORY=test
ADD test /luetbuild/
ADD http://www.google.com /luetbuild/
ENV test=1
RUN echo foo > /test
RUN echo bar > /test2`))

View File

@@ -71,6 +71,15 @@ type LuetSolverOptions struct {
Implementation solver.SolverType `mapstructure:"implementation"`
}
func (opts LuetSolverOptions) ResolverIsSet() bool {
switch opts.Type {
case solver.QLearningResolverType:
return true
default:
return false
}
}
func (opts LuetSolverOptions) Resolver() solver.PackageResolver {
switch opts.Type {
case solver.QLearningResolverType:

View File

@@ -19,6 +19,8 @@ import (
"io/ioutil"
"os"
"path/filepath"
"sort"
"strings"
"time"
copy "github.com/otiai10/copy"
@@ -41,6 +43,8 @@ func OrderFiles(target string, files []string) ([]string, []string) {
}
}
dirs := []string{}
for _, f := range files {
target := filepath.Join(target, f)
fi, err := os.Lstat(target)
@@ -48,11 +52,16 @@ func OrderFiles(target string, files []string) ([]string, []string) {
continue
}
if m := fi.Mode(); m.IsDir() {
newFiles = append(newFiles, f)
dirs = append(dirs, f)
}
}
return newFiles, notPresent
// Compare how many path components each directory has, and push the ones with fewer components to the end
sort.Slice(dirs, func(i, j int) bool {
return len(strings.Split(dirs[i], string(os.PathSeparator))) > len(strings.Split(dirs[j], string(os.PathSeparator)))
})
return append(newFiles, dirs...), notPresent
}
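
Not part of the changeset: a self-contained sketch of the depth-first ordering introduced above, applied to a hypothetical directory list (the Ginkgo spec further below exercises the real OrderFiles against a temporary directory tree):

package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	// Hypothetical directory list, as OrderFiles would collect it after the plain files.
	dirs := []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"}

	// Same comparator as above: entries with more path components sort first,
	// so nested directories are handled before the directories containing them.
	sort.Slice(dirs, func(i, j int) bool {
		return len(strings.Split(dirs[i], string(os.PathSeparator))) >
			len(strings.Split(dirs[j], string(os.PathSeparator)))
	})

	// Deepest first: foo/baz/fa precedes foo/bar and foo/baz, which precede
	// foo and bar; ordering within the same depth is not guaranteed by sort.Slice.
	fmt.Println(dirs)
}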
func ListDir(dir string) ([]string, error) {

View File

@@ -60,5 +60,27 @@ var _ = Describe("Helpers", func() {
Expect(ordered).To(Equal([]string{"baz", "bar/foo", "foo", "baz2/foo", "bar", "baz2"}))
Expect(notExisting).To(Equal([]string{"notexisting"}))
})
It("orders correctly when there are folders with folders", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
err = os.MkdirAll(filepath.Join(testDir, "bar"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
err = os.MkdirAll(filepath.Join(testDir, "foo"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
err = os.MkdirAll(filepath.Join(testDir, "foo", "bar"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
err = os.MkdirAll(filepath.Join(testDir, "foo", "baz"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
err = os.MkdirAll(filepath.Join(testDir, "foo", "baz", "fa"), os.ModePerm)
Expect(err).ToNot(HaveOccurred())
ordered, _ := OrderFiles(testDir, []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"})
Expect(ordered).To(Equal([]string{"foo/baz/fa", "foo/bar", "foo/baz", "foo", "bar"}))
})
})
})

View File

@@ -11,7 +11,7 @@ import (
)
// RenderHelm renders the template string with helm
func RenderHelm(template string, values map[string]interface{}) (string, error) {
func RenderHelm(template string, values, d map[string]interface{}) (string, error) {
c := &chart.Chart{
Metadata: &chart.Metadata{
Name: "",
@@ -23,7 +23,7 @@ func RenderHelm(template string, values map[string]interface{}) (string, error)
Values: map[string]interface{}{"Values": values},
}
v, err := chartutil.CoalesceValues(c, map[string]interface{}{})
v, err := chartutil.CoalesceValues(c, map[string]interface{}{"Values": d})
if err != nil {
return "", errors.Wrap(err, "while rendering template")
}
@@ -37,7 +37,7 @@ func RenderHelm(template string, values map[string]interface{}) (string, error)
type templatedata map[string]interface{}
func RenderFiles(toTemplate, valuesFile string) (string, error) {
func RenderFiles(toTemplate, valuesFile string, defaultFile string) (string, error) {
raw, err := ioutil.ReadFile(toTemplate)
if err != nil {
return "", errors.Wrap(err, "reading file "+toTemplate)
@@ -46,14 +46,26 @@ func RenderFiles(toTemplate, valuesFile string) (string, error) {
if !Exists(valuesFile) {
return "", errors.Wrap(err, "file not existing "+valuesFile)
}
def, err := ioutil.ReadFile(valuesFile)
val, err := ioutil.ReadFile(valuesFile)
if err != nil {
return "", errors.Wrap(err, "reading file "+valuesFile)
}
var values templatedata
if err = yaml.Unmarshal(def, &values); err != nil {
d := templatedata{}
if len(defaultFile) > 0 {
def, err := ioutil.ReadFile(defaultFile)
if err != nil {
return "", errors.Wrap(err, "reading file "+valuesFile)
}
if err = yaml.Unmarshal(def, &d); err != nil {
return "", errors.Wrap(err, "unmarshalling file "+toTemplate)
}
}
if err = yaml.Unmarshal(val, &values); err != nil {
return "", errors.Wrap(err, "unmarshalling file "+toTemplate)
}
return RenderHelm(string(raw), values)
return RenderHelm(string(raw), values, d)
}

View File

@@ -16,17 +16,132 @@
package helpers_test
import (
"io/ioutil"
"os"
"path/filepath"
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
func writeFile(path string, content string) {
err := ioutil.WriteFile(path, []byte(content), 0644)
Expect(err).ToNot(HaveOccurred())
}
var _ = Describe("Helpers", func() {
Context("RenderHelm", func() {
It("Renders templates", func() {
out, err := RenderHelm("{{.Values.Test}}", map[string]interface{}{"Test": "foo"})
out, err := RenderHelm("{{.Values.Test}}{{.Values.Bar}}", map[string]interface{}{"Test": "foo"}, map[string]interface{}{"Bar": "bar"})
Expect(err).ToNot(HaveOccurred())
Expect(out).To(Equal("foo"))
Expect(out).To(Equal("foobar"))
})
It("Renders templates with overrides", func() {
out, err := RenderHelm("{{.Values.Test}}{{.Values.Bar}}", map[string]interface{}{"Test": "foo", "Bar": "baz"}, map[string]interface{}{"Bar": "bar"})
Expect(err).ToNot(HaveOccurred())
Expect(out).To(Equal("foobar"))
})
It("Renders templates", func() {
out, err := RenderHelm("{{.Values.Test}}{{.Values.Bar}}", map[string]interface{}{"Test": "foo", "Bar": "bar"}, map[string]interface{}{})
Expect(err).ToNot(HaveOccurred())
Expect(out).To(Equal("foobar"))
})
It("Render files default overrides", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
toTemplate := filepath.Join(testDir, "totemplate.yaml")
values := filepath.Join(testDir, "values.yaml")
d := filepath.Join(testDir, "default.yaml")
writeFile(toTemplate, `{{.Values.foo}}`)
writeFile(values, `
foo: "bar"
`)
writeFile(d, `
foo: "baz"
`)
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, d)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("baz"))
})
It("Render files from values", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
toTemplate := filepath.Join(testDir, "totemplate.yaml")
values := filepath.Join(testDir, "values.yaml")
d := filepath.Join(testDir, "default.yaml")
writeFile(toTemplate, `{{.Values.foo}}`)
writeFile(values, `
foo: "bar"
`)
writeFile(d, `
faa: "baz"
`)
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, d)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("bar"))
})
It("Render files from values if no default", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
toTemplate := filepath.Join(testDir, "totemplate.yaml")
values := filepath.Join(testDir, "values.yaml")
writeFile(toTemplate, `{{.Values.foo}}`)
writeFile(values, `
foo: "bar"
`)
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, "")
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal("bar"))
})
It("doesn't interpolate if no one provides the values", func() {
testDir, err := ioutil.TempDir(os.TempDir(), "test")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(testDir)
toTemplate := filepath.Join(testDir, "totemplate.yaml")
values := filepath.Join(testDir, "values.yaml")
d := filepath.Join(testDir, "default.yaml")
writeFile(toTemplate, `{{.Values.foo}}`)
writeFile(values, `
foao: "bar"
`)
writeFile(d, `
faa: "baz"
`)
Expect(err).ToNot(HaveOccurred())
res, err := RenderFiles(toTemplate, values, d)
Expect(err).ToNot(HaveOccurred())
Expect(res).To(Equal(""))
})
})
})

View File

@@ -22,6 +22,7 @@ import (
"os"
"path"
"path/filepath"
"time"
. "github.com/mudler/luet/pkg/logger"
@@ -30,6 +31,8 @@ import (
"github.com/mudler/luet/pkg/helpers"
"github.com/cavaliercoder/grab"
"github.com/schollz/progressbar/v3"
)
type HttpClient struct {
@@ -101,20 +104,61 @@ func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Arti
}
resp := client.Do(req)
bar := progressbar.NewOptions64(
resp.Size(),
progressbar.OptionSetDescription(
fmt.Sprintf("[cyan] %s - [reset]",
filepath.Base(resp.Request.HTTPRequest.URL.RequestURI()))),
progressbar.OptionSetRenderBlankState(true),
progressbar.OptionEnableColorCodes(config.LuetCfg.GetLogging().Color),
progressbar.OptionClearOnFinish(),
progressbar.OptionShowBytes(true),
progressbar.OptionShowCount(),
progressbar.OptionSetPredictTime(true),
progressbar.OptionFullWidth(),
progressbar.OptionSetTheme(progressbar.Theme{
Saucer: "[white]=[reset]",
SaucerHead: "[white]>[reset]",
SaucerPadding: " ",
BarStart: "[",
BarEnd: "]",
}))
bar.Reset()
// start download loop
t := time.NewTicker(500 * time.Millisecond)
defer t.Stop()
download_loop:
for {
select {
case <-t.C:
bar.Set64(resp.BytesComplete())
case <-resp.Done:
// download is complete
break download_loop
}
}
if err = resp.Err(); err != nil {
continue
}
Info("Downloaded", artifactName, "of",
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
Debug("Copying file ", filepath.Join(temp, artifactName), "to", cacheFile)
err = helpers.CopyFile(filepath.Join(temp, artifactName), cacheFile)
if err != nil {
continue
}
Info("\nDownloaded", artifactName, "of",
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
Debug("\nCopying file ", filepath.Join(temp, artifactName), "to", cacheFile)
err = helpers.CopyFile(filepath.Join(temp, artifactName), cacheFile)
bar.Finish()
ok = true
break
}
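
Not part of the changeset: the hunk above interleaves old and new lines, so here is a condensed sketch of the ticker-driven progress pattern it implements. The URL and destination below are placeholders, and progressbar.DefaultBytes is used for brevity in place of the NewOptions64 configuration shown in the diff.

package main

import (
	"fmt"
	"time"

	"github.com/cavaliercoder/grab"
	"github.com/schollz/progressbar/v3"
)

func main() {
	// Placeholder request; luet builds these from the repository artifact URIs.
	client := grab.NewClient()
	req, err := grab.NewRequest("/tmp", "https://example.com/artifact.package.tar")
	if err != nil {
		panic(err)
	}

	resp := client.Do(req) // returns immediately, download continues in background

	bar := progressbar.DefaultBytes(resp.Size(), "downloading")

	// Poll the transfer every 500ms until grab closes resp.Done.
	t := time.NewTicker(500 * time.Millisecond)
	defer t.Stop()
loop:
	for {
		select {
		case <-t.C:
			bar.Set64(resp.BytesComplete())
		case <-resp.Done:
			break loop
		}
	}
	bar.Finish()

	if err := resp.Err(); err != nil {
		panic(err)
	}
	fmt.Printf("downloaded %.2f MB\n", float64(resp.BytesComplete())/1000/1000)
}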

View File

@@ -24,6 +24,8 @@ import (
"strings"
"sync"
. "github.com/logrusorgru/aurora"
"github.com/mudler/luet/pkg/bus"
compiler "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
@@ -44,6 +46,7 @@ type LuetInstallerOptions struct {
FullUninstall, FullCleanUninstall bool
CheckConflicts bool
SolverUpgrade, RemoveUnavailableOnUpgrade, UpgradeNewRevisions bool
Ask bool
}
type LuetInstaller struct {
@@ -62,7 +65,82 @@ func NewLuetInstaller(opts LuetInstallerOptions) Installer {
return &LuetInstaller{Options: opts}
}
// computeUpgrade returns the packages to be uninstalled and installed in a system to perform an upgrade
// based on the system repositories
func (l *LuetInstaller) computeUpgrade(syncedRepos Repositories, s *System) (pkg.Packages, pkg.Packages, error) {
toInstall := pkg.Packages{}
var uninstall pkg.Packages
var err error
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
// compute a "big" world
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
var solution solver.PackagesAssertions
if l.Options.SolverUpgrade {
uninstall, solution, err = solv.UpgradeUniverse(l.Options.RemoveUnavailableOnUpgrade)
if err != nil {
return uninstall, toInstall, errors.Wrap(err, "Failed solving solution for upgrade")
}
} else {
uninstall, solution, err = solv.Upgrade(l.Options.FullUninstall, true)
if err != nil {
return uninstall, toInstall, errors.Wrap(err, "Failed solving solution for upgrade")
}
}
for _, assertion := range solution {
// Be sure to filter out of the solution any packages already installed in the system
if _, err := s.Database.FindPackage(assertion.Package); err != nil && assertion.Value {
toInstall = append(toInstall, assertion.Package)
}
}
if l.Options.UpgradeNewRevisions {
for _, p := range s.Database.World() {
matches := syncedRepos.PackageMatches(pkg.Packages{p})
if len(matches) == 0 {
// Package missing. The user should run luet upgrade --universe
continue
}
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return uninstall, toInstall, errors.New("Package in compilespec empty")
}
if artefact.GetCompileSpec().GetPackage().Matches(p) && artefact.GetCompileSpec().GetPackage().GetBuildTimestamp() != p.GetBuildTimestamp() {
toInstall = append(toInstall, matches[0].Package).Unique()
uninstall = append(uninstall, p).Unique()
}
}
}
}
return uninstall, toInstall, nil
}
func packsToList(p pkg.Packages) string {
var packs []string
for _, pp := range p {
packs = append(packs, pp.HumanReadableString())
}
return strings.Join(packs, " ")
}
func matchesToList(artefacts map[string]ArtifactMatch) string {
var packs []string
for fingerprint, match := range artefacts {
packs = append(packs, fmt.Sprintf("%s (%s)", fingerprint, match.Repository.GetName()))
}
return strings.Join(packs, " ")
}
// Upgrade upgrades a System based on the Installer options. Returns error in case of failure
func (l *LuetInstaller) Upgrade(s *System) error {
syncedRepos, err := l.SyncRepositories(true)
if err != nil {
return err
@@ -72,83 +150,39 @@ func (l *LuetInstaller) Upgrade(s *System) error {
if l.Options.UpgradeNewRevisions {
Info(":memo: note: will consider new build revisions while upgrading")
}
Spinner(32)
defer SpinnerStop()
// First match packages against repositories by priority
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
// compute a "big" world
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
var uninstall pkg.Packages
var solution solver.PackagesAssertions
if l.Options.SolverUpgrade {
uninstall, solution, err = solv.UpgradeUniverse(l.Options.RemoveUnavailableOnUpgrade)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
}
} else {
uninstall, solution, err = solv.Upgrade(!l.Options.FullUninstall, l.Options.NoDeps)
if err != nil {
return errors.Wrap(err, "Failed solving solution for upgrade")
}
uninstall, toInstall, err := l.computeUpgrade(syncedRepos, s)
if err != nil {
return errors.Wrap(err, "failed computing upgrade")
}
SpinnerStop()
if len(uninstall) > 0 {
Info(":recycle: Packages marked for uninstall:")
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(uninstall)).BgBlack().String())
}
for _, p := range uninstall {
Info(fmt.Sprintf("- %s", p.HumanReadableString()))
if len(toInstall) > 0 {
Info(":zap:Packages that are going to be installed in the system:\n ", Green(packsToList(toInstall)).BgBlack().String())
}
if len(solution) > 0 {
Info(":zap: Packages marked for upgrade:")
if len(toInstall) == 0 && len(uninstall) == 0 {
Info("Nothing to do")
return nil
}
toInstall := pkg.Packages{}
for _, assertion := range solution {
// Be sure to filter out from the solution the packages already installed in the system
if _, err := s.Database.FindPackage(assertion.Package); err != nil && assertion.Value {
Info(fmt.Sprintf("- %s", assertion.Package.HumanReadableString()))
toInstall = append(toInstall, assertion.Package)
}
}
if l.Options.UpgradeNewRevisions {
Info(":mag: Checking packages with new revisions available")
for _, p := range s.Database.World() {
matches := syncedRepos.PackageMatches(pkg.Packages{p})
if len(matches) == 0 {
// Package missing. The user should run luet upgrade --universe
Info(":warning: Installed packages seems to be missing from remote repositories.")
Info(":warning: It is suggested to run 'luet upgrade --universe'")
continue
}
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
}
if artefact.GetCompileSpec().GetPackage().Matches(p) && artefact.GetCompileSpec().GetPackage().GetBuildTimestamp() != p.GetBuildTimestamp() {
toInstall = append(toInstall, matches[0].Package).Unique()
uninstall = append(uninstall, p).Unique()
Info(
fmt.Sprintf("- %s ( %s vs %s ) repo: %s (date: %s)",
p.HumanReadableString(),
artefact.GetCompileSpec().GetPackage().GetBuildTimestamp(),
p.GetBuildTimestamp(),
matches[0].Repo.GetName(),
matches[0].Repo.GetLastUpdate(),
))
}
}
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(syncedRepos, uninstall, toInstall, s)
} else {
return errors.New("Aborted by user")
}
}
Spinner(32)
defer SpinnerStop()
return l.swap(syncedRepos, uninstall, toInstall, s)
}
@@ -179,6 +213,25 @@ func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *S
if err != nil {
return err
}
if len(toRemove) > 0 {
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toRemove)).BgBlack().String())
}
if len(toInstall) > 0 {
Info(":zap:Packages that are going to be installed in the system:\n ", Green(packsToList(toInstall)).BgBlack().String())
}
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.swap(syncedRepos, toRemove, toInstall, s)
} else {
return errors.New("Aborted by user")
}
}
return l.swap(syncedRepos, toRemove, toInstall, s)
}
@@ -188,10 +241,6 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, to
syncedRepos.SyncDatabase(allRepos)
toInstall = syncedRepos.ResolveSelectors(toInstall)
if err := l.download(syncedRepos, toInstall); err != nil {
return errors.Wrap(err, "Pre-downloading packages")
}
// We don't want any conflicts with the installed packages to arise during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion
@@ -200,22 +249,55 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, to
// now the solver enforces the constraints and explicitly denies having two packages
// of the same version installed.
forced := l.Options.Force
nodeps := l.Options.NoDeps
l.Options.Force = true
l.Options.NoDeps = true
// First check what would have been done
installedtmp := pkg.NewInMemoryDatabase(false)
for _, i := range s.Database.World() {
_, err := installedtmp.CreatePackage(i)
if err != nil {
return errors.Wrap(err, "Failed create temporary in-memory db")
}
}
systemAfterChanges := &System{Database: installedtmp}
for _, u := range toRemove {
Info(":package:", u.HumanReadableString(), "Marked for deletion")
packs, err := l.computeUninstall(u, systemAfterChanges)
if err != nil && !l.Options.Force {
Error("Failed computing uninstall for ", u.HumanReadableString())
return errors.Wrap(err, "computing uninstall "+u.HumanReadableString())
}
for _, p := range packs {
err = systemAfterChanges.Database.RemovePackage(p)
if err != nil {
return errors.Wrap(err, "Failed removing package from database")
}
}
}
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, toInstall, systemAfterChanges)
if err != nil {
return errors.Wrap(err, "computing installation")
}
if err := l.download(syncedRepos, match); err != nil {
return errors.Wrap(err, "Pre-downloading packages")
}
for _, u := range toRemove {
err := l.Uninstall(u, s)
if err != nil && !l.Options.Force {
Error("Failed uninstall for ", u.HumanReadableString())
return errors.Wrap(err, "uninstalling "+u.HumanReadableString())
}
}
l.Options.Force = forced
return l.install(syncedRepos, toInstall, s)
l.Options.Force = forced
l.Options.NoDeps = nodeps
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
}
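The swap above follows a dry-run pattern: copy the installed database into a temporary in-memory database, compute the removals and the install plan against the copy, and only then touch the real system. A minimal sketch of the snapshot step, using only database calls already visible in this diff (the helper name is made up):
// snapshotSystem copies the installed world into a fresh in-memory
// database so the planned changes can be simulated without mutating
// the real system database.
func snapshotSystem(s *System) (*System, error) {
	installedtmp := pkg.NewInMemoryDatabase(false)
	for _, p := range s.Database.World() {
		if _, err := installedtmp.CreatePackage(p); err != nil {
			return nil, errors.Wrap(err, "failed creating temporary in-memory db")
		}
	}
	return &System{Database: installedtmp}, nil
}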
func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
@@ -223,35 +305,55 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
if err != nil {
return err
}
return l.install(syncedRepos, cp, s)
}
func (l *LuetInstaller) download(syncedRepos Repositories, cp pkg.Packages) error {
toDownload := map[string]ArtifactMatch{}
// FIXME: This can be optimized. We don't need to re-match this to the repository
// But we could just do it once
// Gathers things to download
for _, currentPack := range cp {
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, cp, s)
if err != nil {
return err
}
// Check if we have something to process, or return an error to the user
if len(match) == 0 {
Info("No packages to install")
return nil
}
// Resolvers might decide to remove some packages from being installed
if !l.Options.SolverOptions.ResolverIsSet() {
for _, p := range cp {
found := false
vers, _ := s.Database.FindPackageVersions(p) // If it was installed, it is found here, since it was filtered out earlier
if len(vers) >= 1 {
found = true
continue
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
toDownload[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
for _, m := range match {
if m.Package.GetName() == p.GetName() {
found = true
}
}
break A
if !found {
return fmt.Errorf("Package '%s' not found", p.HumanReadableString())
}
}
}
Info("Packages that are going to be installed in the system: \n ", Green(matchesToList(match)).BgBlack().String())
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
} else {
return errors.New("Aborted by user")
}
}
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
}
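A hedged usage sketch of the new confirmation flow: with Ask enabled, the installer prints the package list and waits for a y/N answer before proceeding. The installer and system values are assumed to be constructed elsewhere; they are not shown in this diff.
// installWithPrompt is illustrative only: it enables the interactive
// prompt and asks the installer to resolve and install one selector.
func installWithPrompt(inst *LuetInstaller, system *System) error {
	inst.Options.Ask = true
	toInstall := pkg.Packages{
		&pkg.DefaultPackage{Name: "b", Category: "test", Version: ">=1.0"},
	}
	return inst.Install(toInstall, system)
}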
func (l *LuetInstaller) download(syncedRepos Repositories, toDownload map[string]ArtifactMatch) error {
// Download packages into cache in parallel.
all := make(chan ArtifactMatch)
@@ -315,23 +417,25 @@ func (l *LuetInstaller) Reclaim(s *System) error {
return errors.Wrap(err, "Failed creating package")
}
s.Database.SetPackageFiles(&pkg.PackageFile{PackageFingerprint: pack.GetFingerPrint(), Files: match.Artifact.GetFiles()})
Info(":zap: Reclaimed package:", pack.HumanReadableString())
Info(":zap:Reclaimed package:", pack.HumanReadableString())
}
Info("Done!")
return nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *System) error {
func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
var p pkg.Packages
toInstall := map[string]ArtifactMatch{}
allRepos := pkg.NewInMemoryDatabase(false)
var solution solver.PackagesAssertions
// Check if the package is installed first
for _, pi := range cp {
vers, _ := s.Database.FindPackageVersions(pi)
if len(vers) >= 1 {
Warning("Filtering out package " + pi.HumanReadableString() + ", it has other versions already installed. Uninstall one of them first ")
// Warning("Filtering out package " + pi.HumanReadableString() + ", it has other versions already installed. Uninstall one of them first ")
continue
//return errors.New("Package " + pi.GetFingerPrint() + " has other versions already installed. Uninstall one of them first: " + strings.Join(vers, " "))
@@ -340,8 +444,7 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *Sy
}
if len(p) == 0 {
Warning("No package to install, bailing out with no errors")
return nil
return toInstall, p, solution, allRepos, nil
}
// First get metas from all repos (and decodes trees)
@@ -349,25 +452,19 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *Sy
// matches := syncedRepos.PackageMatches(p)
// compute a "big" world
allRepos := pkg.NewInMemoryDatabase(false)
syncedRepos.SyncDatabase(allRepos)
p = syncedRepos.ResolveSelectors(p)
toInstall := map[string]ArtifactMatch{}
var packagesToInstall pkg.Packages
var err error
var solution solver.PackagesAssertions
if !l.Options.NoDeps {
Info(":deciduous_tree: Computing installation, hang tight")
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
solution, err = solv.Install(p)
/// TODO: PackageAssertions needs to be a map[fingerprint]pack so lookup is in O(1)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed solving solution for package")
return toInstall, p, solution, allRepos, errors.Wrap(err, "Failed solving solution for package")
}
Info(":deciduous_tree: Finished calculating dependencies")
// Gathers things to install
Info(":deciduous_tree: Checking for packages already installed, and prepare for installation")
for _, assertion := range solution {
if assertion.Value {
if _, err := s.Database.FindPackage(assertion.Package); err == nil {
@@ -386,52 +483,41 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *Sy
packagesToInstall = append(packagesToInstall, currentPack)
}
}
Info(":deciduous_tree: Finding packages to install from :cloud:")
// Gathers things to install
for _, currentPack := range packagesToInstall {
// Check if package is already installed.
matches := syncedRepos.PackageMatches(pkg.Packages{currentPack})
if len(matches) == 0 {
return errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
return toInstall, p, solution, allRepos, errors.New("Failed matching solutions against repository for " + currentPack.HumanReadableString() + " where are definitions coming from?!")
}
A:
for _, artefact := range matches[0].Repo.GetIndex() {
if artefact.GetCompileSpec().GetPackage() == nil {
return errors.New("Package in compilespec empty")
return toInstall, p, solution, allRepos, errors.New("Package in compilespec empty")
}
if matches[0].Package.Matches(artefact.GetCompileSpec().GetPackage()) {
currentPack.SetBuildTimestamp(artefact.GetCompileSpec().GetPackage().GetBuildTimestamp())
// Filter out already installed
if _, err := s.Database.FindPackage(currentPack); err != nil {
toInstall[currentPack.GetFingerPrint()] = ArtifactMatch{Package: currentPack, Artifact: artefact, Repository: matches[0].Repo}
Info("\t:package:", currentPack.HumanReadableString(), ":cloud:", matches[0].Repo.GetName())
}
break A
}
}
}
return toInstall, p, solution, allRepos, nil
}
func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]ArtifactMatch, p pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase, s *System) error {
// Install packages into rootfs in parallel.
if err := l.download(syncedRepos, toInstall); err != nil {
return errors.Wrap(err, "Downloading packages")
}
all := make(chan ArtifactMatch)
var wg = new(sync.WaitGroup)
// Download first
for i := 0; i < l.Options.Concurrency; i++ {
wg.Add(1)
go l.downloadWorker(i, wg, all)
}
for _, c := range toInstall {
all <- c
}
close(all)
wg.Wait()
all = make(chan ArtifactMatch)
wg = new(sync.WaitGroup)
wg := new(sync.WaitGroup)
// Do the real install
for i := 0; i < l.Options.Concurrency; i++ {
@@ -451,6 +537,7 @@ func (l *LuetInstaller) install(syncedRepos Repositories, cp pkg.Packages, s *Sy
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Failed creating package")
}
bus.Manager.Publish(bus.EventPackageInstall, c)
}
var toFinalize []pkg.Package
if !l.Options.NoDeps {
@@ -540,9 +627,9 @@ func (l *LuetInstaller) downloadWorker(i int, wg *sync.WaitGroup, c <-chan Artif
return errors.Wrap(err, "Failed installing package "+p.Package.GetName())
}
if err == nil {
Info(":package: ", p.Package.HumanReadableString(), "downloaded")
Info("\n:package: Package ", p.Package.HumanReadableString(), "downloaded")
} else if err != nil && l.Options.Force {
Info(":package: ", p.Package.HumanReadableString(), "downloaded with failures (force download)")
Info("\n:package: ", p.Package.HumanReadableString(), "downloaded with failures (force download)")
}
}
@@ -561,9 +648,9 @@ func (l *LuetInstaller) installerWorker(i int, wg *sync.WaitGroup, c <-chan Arti
return errors.Wrap(err, "Failed installing package "+p.Package.GetName())
}
if err == nil {
Info(":package: ", p.Package.HumanReadableString(), "installed")
Info(":package: Package ", p.Package.HumanReadableString(), "installed")
} else if err != nil && l.Options.Force {
Info(":package: ", p.Package.HumanReadableString(), "installed with failures (force install)")
Info(":package: Package ", p.Package.HumanReadableString(), "installed with failures (forced install)")
}
}
@@ -655,16 +742,15 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
return errors.Wrap(err, "Failed removing package from database")
}
Info(":recycle:", p.GetFingerPrint(), "Removed :heavy_check_mark:")
bus.Manager.Publish(bus.EventPackageUnInstall, p)
Info(":recycle: ", p.GetFingerPrint(), "Removed :heavy_check_mark:")
return nil
}
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
Spinner(32)
defer SpinnerStop()
Info(":recycle: Uninstalling :package:", p.HumanReadableString(), "hang tight")
func (l *LuetInstaller) computeUninstall(p pkg.Package, s *System) (pkg.Packages, error) {
var toUninstall pkg.Packages
// compute uninstall from the whole world - remove packages in parallel - run uninstall finalizers (in order) - TODO: mark the uninstallation in the db
// Get installed definition
checkConflicts := l.Options.CheckConflicts
@@ -681,44 +767,81 @@ func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
for _, i := range s.Database.World() {
_, err := installedtmp.CreatePackage(i)
if err != nil {
return errors.Wrap(err, "Failed create temporary in-memory db")
return toUninstall, errors.Wrap(err, "Failed create temporary in-memory db")
}
}
if !l.Options.NoDeps {
Info(":mag: Finding :package:", p.HumanReadableString(), "dependency graph :deciduous_tree:")
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, installedtmp, installedtmp, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
var solution pkg.Packages
var err error
if l.Options.FullCleanUninstall {
solution, err = solv.UninstallUniverse(pkg.Packages{p})
if err != nil {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
} else {
solution, err = solv.Uninstall(p, checkConflicts, full)
solution, err = solv.Uninstall(checkConflicts, full, p)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
}
}
for _, p := range solution {
Info(":recycle: Uninstalling", p.HumanReadableString())
toUninstall = append(toUninstall, p)
}
} else {
toUninstall = append(toUninstall, p)
}
return toUninstall, nil
}
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
if p.IsSelector() {
if packs, _ := s.Database.FindPackages(p); len(packs) == 0 {
return errors.New("Package not found in the system")
}
} else {
if _, err := s.Database.FindPackage(p); err != nil {
return errors.Wrap(err, "package not found in the system")
}
}
Spinner(32)
toUninstall, err := l.computeUninstall(p, s)
if err != nil {
return errors.Wrap(err, "while computing uninstall")
}
SpinnerStop()
uninstall := func() error {
for _, p := range toUninstall {
err := l.uninstall(p, s)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Uninstall failed")
}
}
} else {
Info(":recycle: Uninstalling", p.HumanReadableString(), "without deps")
err := l.uninstall(p, s)
if err != nil && !l.Options.Force {
return errors.Wrap(err, "Uninstall failed")
}
Info(":recycle: :package:", p.HumanReadableString(), "uninstalled :heavy_check_mark:")
return nil
}
return nil
if len(toUninstall) == 0 {
Info("Nothing to do")
return nil
}
Info(":recycle: Packages that are going to be removed from the system:\n ", Yellow(packsToList(toUninstall)).BgBlack().String())
if l.Options.Ask {
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
if Ask() {
l.Options.Ask = false // Don't prompt anymore
return uninstall()
} else {
return errors.New("Aborted by user")
}
}
return uninstall()
}
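Uninstall now guards on whether the argument is a selector or an exact package before computing the removal set. A small sketch of that check, using only the calls visible above (the wrapper name is invented):
// isInstalled mirrors the guard at the top of Uninstall: selectors are
// resolved with FindPackages, exact packages with FindPackage.
func isInstalled(p pkg.Package, s *System) bool {
	if p.IsSelector() {
		packs, _ := s.Database.FindPackages(p)
		return len(packs) > 0
	}
	_, err := s.Database.FindPackage(p)
	return err == nil
}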
func (l *LuetInstaller) Repositories(r []Repository) { l.PackageRepositories = r }

View File

@@ -26,6 +26,7 @@ import (
"strings"
"time"
"github.com/mudler/luet/pkg/bus"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
@@ -426,6 +427,14 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
r.Name, r.Revision, r.LastUpdate,
))
bus.Manager.Publish(bus.EventRepositoryPreBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: dst,
})
// Create tree and repository file
archive, err := config.LuetCfg.GetSystem().TempDir("archive")
if err != nil {
@@ -506,6 +515,14 @@ func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
return err
}
bus.Manager.Publish(bus.EventRepositoryPostBuild, struct {
Repo LuetSystemRepository
Path string
}{
Repo: *r,
Path: dst,
})
return nil
}
@@ -694,9 +711,12 @@ func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
repo.SetPriority(r.GetPriority())
repo.SetName(r.GetName())
InfoC(
aurora.Bold(
aurora.Yellow(":information_source: Repository "+repo.GetName()+" priority: ")).String() +
aurora.Bold(aurora.Green(repo.GetPriority())).String() + " - type " +
aurora.Yellow(":information_source:").String() +
aurora.Magenta("Repository: ").String() +
aurora.Green(aurora.Bold(repo.GetName()).String()).String() +
aurora.Magenta(" Priority: ").String() +
aurora.Bold(aurora.Green(repo.GetPriority())).String() +
aurora.Magenta(" Type: ").String() +
aurora.Bold(aurora.Green(repo.GetType())).String(),
)
return repo, nil

View File

@@ -24,7 +24,7 @@ func (s *System) ExecuteFinalizers(packs []pkg.Package, force bool) error {
executedFinalizer := map[string]bool{}
for _, p := range packs {
if helpers.Exists(p.Rel(tree.FinalizerFile)) {
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile))
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile), "")
if err != nil && !force {
return errors.Wrap(err, "reading file "+p.Rel(tree.FinalizerFile))
}

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"os"
"regexp"
"strings"
. "github.com/mudler/luet/pkg/config"
@@ -36,6 +37,22 @@ func GetAurora() Aurora {
return aurora
}
func Ask() bool {
var input string
Info("Do you want to continue with this operation? [y/N]: ")
_, err := fmt.Scanln(&input)
if err != nil {
return false
}
input = strings.ToLower(input)
if input == "y" || input == "yes" {
return true
}
return false
}
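Ask reads a single answer from stdin and accepts only "y" or "yes" (case-insensitive); any other input, or a read error, counts as a refusal. An illustrative gate around a destructive step, assuming only Ask itself from this change:
// confirmThen runs action only after an explicit "y"/"yes" answer.
func confirmThen(action func() error) error {
	if !Ask() {
		return fmt.Errorf("aborted by user")
	}
	return action()
}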
func ZapLogger() error {
var err error
if z == nil {
@@ -183,9 +200,9 @@ func msg(level string, withoutColor bool, msg ...interface{}) {
case "debug":
levelMsg = White(message).BgBlack().String()
case "info":
levelMsg = Bold(White(message)).BgBlack().String()
levelMsg = message
case "error":
levelMsg = Bold(Red(":bomb: " + message + ":fire:")).BgBlack().String()
levelMsg = Red(message).String()
}
}
@@ -231,6 +248,6 @@ func Error(mess ...interface{}) {
}
func Fatal(mess ...interface{}) {
Error(mess)
Error(mess...)
os.Exit(1)
}

View File

@@ -28,6 +28,7 @@ type PackageDatabase interface {
}
type PackageSet interface {
GetRevdeps(p Package) (Packages, error)
GetPackages() []string //Ids
CreatePackage(pkg Package) (string, error)
GetPackage(ID string) (Package, error)

View File

@@ -86,6 +86,17 @@ func (db *BoltDatabase) Retrieve(ID string) ([]byte, error) {
return enc, nil
}
// GetRevdeps uses a new in-memory db to calculate revdeps
// TODO: Have a memory instance for boltdb, so we don't compute each time we get called,
// as this is REALLY expensive. But we don't usually perform those operations on a file db.
func (db *BoltDatabase) GetRevdeps(p Package) (Packages, error) {
memory := NewInMemoryDatabase(false)
for _, p := range db.World() {
memory.CreatePackage(p)
}
return memory.GetRevdeps(p)
}
func (db *BoltDatabase) FindPackage(tofind Package) (Package, error) {
// Provides: Return the replaced package here
if provided, err := db.getProvide(tofind); err == nil {

View File

@@ -31,6 +31,7 @@ var DBInMemoryInstance = &InMemoryDatabase{
Database: map[string]string{},
CacheNoVersion: map[string]map[string]interface{}{},
ProvidesDatabase: map[string]map[string]Package{},
RevDepsDatabase: map[string]map[string]Package{},
}
type InMemoryDatabase struct {
@@ -39,6 +40,7 @@ type InMemoryDatabase struct {
FileDatabase map[string][]string
CacheNoVersion map[string]map[string]interface{}
ProvidesDatabase map[string]map[string]Package
RevDepsDatabase map[string]map[string]Package
}
func NewInMemoryDatabase(singleton bool) PackageDatabase {
@@ -50,6 +52,7 @@ func NewInMemoryDatabase(singleton bool) PackageDatabase {
Database: map[string]string{},
CacheNoVersion: map[string]map[string]interface{}{},
ProvidesDatabase: map[string]map[string]Package{},
RevDepsDatabase: map[string]map[string]Package{},
}
}
return DBInMemoryInstance
@@ -125,6 +128,47 @@ func (db *InMemoryDatabase) GetAllPackages(packages chan Package) error {
return nil
}
func (db *InMemoryDatabase) getRevdeps(p Package, visited map[string]interface{}) (Packages, error) {
var versionsInWorld Packages
if _, ok := visited[p.HumanReadableString()]; ok {
return versionsInWorld, nil
}
visited[p.HumanReadableString()] = true
var res Packages
packs, err := db.FindPackages(p)
if err != nil {
return res, err
}
for _, pp := range packs {
// db.Lock()
list := db.RevDepsDatabase[pp.GetFingerPrint()]
// db.Unlock()
for _, revdep := range list {
dep, err := db.FindPackage(revdep)
if err != nil {
return res, err
}
res = append(res, dep)
packs, err := db.getRevdeps(dep, visited)
if err != nil {
return res, err
}
res = append(res, packs...)
}
}
return res.Unique(), nil
}
// GetRevdeps returns the package reverse dependencies,
// also matching selectors in versions (>, <, >=, <=)
// TODO: Code should use the db explicitly
func (db *InMemoryDatabase) GetRevdeps(p Package) (Packages, error) {
return db.getRevdeps(p, make(map[string]interface{}))
}
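A minimal usage sketch of the new reverse-dependency lookup, mirroring the updated tests: A requires B, so B's reverse dependencies contain A. Package names, versions, and the wrapper function are illustrative, written as a caller outside the pkg package.
// Illustrative only: exercise GetRevdeps on a tiny dependency graph.
func exampleRevdeps() (pkg.Packages, error) {
	db := pkg.NewInMemoryDatabase(false)
	B := pkg.NewPackage("B", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
	A := pkg.NewPackage("A", "1.0", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
	for _, p := range []pkg.Package{A, B} {
		if _, err := db.CreatePackage(p); err != nil {
			return nil, err
		}
	}
	// B's reverse dependencies are expected to contain A, since A requires B.
	return db.GetRevdeps(B)
}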
// Encode encodes the package to string.
// It returns an ID which can be used to retrieve the package later on.
func (db *InMemoryDatabase) CreatePackage(p Package) (string, error) {
@@ -143,9 +187,16 @@ func (db *InMemoryDatabase) CreatePackage(p Package) (string, error) {
return "", err
}
db.populateCaches(pd)
return ID, nil
}
func (db *InMemoryDatabase) populateCaches(p Package) {
pd, _ := p.(*DefaultPackage)
// Create extra cache between package -> []versions
db.Lock()
defer db.Unlock()
// Provides: Store package provides, we will reuse this when walking deps
for _, provide := range pd.Provides {
@@ -157,21 +208,41 @@ func (db *InMemoryDatabase) CreatePackage(p Package) (string, error) {
db.ProvidesDatabase[provide.GetPackageName()][provide.GetVersion()] = p
}
_, ok = db.CacheNoVersion[p.GetPackageName()]
_, ok := db.CacheNoVersion[p.GetPackageName()]
if !ok {
db.CacheNoVersion[p.GetPackageName()] = make(map[string]interface{})
}
db.CacheNoVersion[p.GetPackageName()][p.GetVersion()] = nil
db.Unlock()
return ID, nil
for _, re := range pd.GetRequires() {
packages, _ := db.FindPackages(re)
db.Lock()
for _, pa := range packages {
_, ok := db.RevDepsDatabase[pa.GetFingerPrint()]
if !ok {
db.RevDepsDatabase[pa.GetFingerPrint()] = make(map[string]Package)
}
db.RevDepsDatabase[pa.GetFingerPrint()][pd.GetFingerPrint()] = pd
}
_, ok := db.RevDepsDatabase[re.GetFingerPrint()]
if !ok {
db.RevDepsDatabase[re.GetFingerPrint()] = make(map[string]Package)
}
db.RevDepsDatabase[re.GetFingerPrint()][pd.GetFingerPrint()] = pd
db.Unlock()
}
}
func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
db.Lock()
pa, ok := db.ProvidesDatabase[p.GetPackageName()][p.GetVersion()]
if !ok {
versions, ok := db.ProvidesDatabase[p.GetPackageName()]
db.Unlock()
defer db.Unlock()
if !ok {
return nil, errors.New("No versions found for package")
@@ -195,6 +266,7 @@ func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
return nil, errors.New("No package provides this")
}
db.Unlock()
return db.FindPackage(pa)
}
@@ -229,8 +301,9 @@ func (db *InMemoryDatabase) FindPackageVersions(p Package) (Packages, error) {
if provided, err := db.getProvide(p); err == nil {
p = provided
}
db.Lock()
versions, ok := db.CacheNoVersion[p.GetPackageName()]
db.Unlock()
if !ok {
return nil, errors.New("No versions found for package")
}
@@ -247,29 +320,38 @@ func (db *InMemoryDatabase) FindPackageVersions(p Package) (Packages, error) {
// FindPackages returns the list of packages belonging to cat/name (any version in the requested range)
func (db *InMemoryDatabase) FindPackages(p Package) (Packages, error) {
if !p.IsSelector() {
pack, err := db.FindPackage(p)
if err != nil {
return []Package{}, err
}
return []Package{pack}, nil
}
// Provides: Treat as the replaced package here
if provided, err := db.getProvide(p); err == nil {
p = provided
}
db.Lock()
var matches []*DefaultPackage
versions, ok := db.CacheNoVersion[p.GetPackageName()]
for ve := range versions {
match, _ := p.SelectorMatchVersion(ve, nil)
if match {
matches = append(matches, &DefaultPackage{Name: p.GetName(), Category: p.GetCategory(), Version: ve})
}
}
db.Unlock()
if !ok {
return nil, errors.New(fmt.Sprintf("No versions found for: %s", p.HumanReadableString()))
}
var versionsInWorld []Package
for ve, _ := range versions {
match, err := p.SelectorMatchVersion(ve, nil)
for _, p := range matches {
w, err := db.FindPackage(p)
if err != nil {
return nil, errors.Wrap(err, "Error on match selector")
}
if match {
w, err := db.FindPackage(&DefaultPackage{Name: p.GetName(), Category: p.GetCategory(), Version: ve})
if err != nil {
return nil, errors.Wrap(err, "Cache mismatch - this shouldn't happen")
}
versionsInWorld = append(versionsInWorld, w)
return nil, errors.Wrap(err, "Cache mismatch - this shouldn't happen")
}
versionsInWorld = append(versionsInWorld, w)
}
return Packages(versionsInWorld), nil
}

View File

@@ -49,7 +49,6 @@ type Package interface {
Requires([]*DefaultPackage) Package
Conflicts([]*DefaultPackage) Package
Revdeps(PackageDatabase) Packages
ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages
LabelDeps(PackageDatabase, string) Packages
GetProvides() []*DefaultPackage
@@ -147,6 +146,60 @@ func DefaultPackageFromYaml(yml []byte) (DefaultPackage, error) {
return unescaped, nil
}
type rawPackages []map[string]interface{}
func (r rawPackages) Find(name, category, version string) map[string]interface{} {
for _, v := range r {
if v["name"] == name && v["category"] == category && v["version"] == version {
return v
}
}
return map[string]interface{}{}
}
func GetRawPackages(yml []byte) (rawPackages, error) {
var rawPackages struct {
Packages []map[string]interface{} `yaml:"packages"`
}
source, err := yaml.YAMLToJSON(yml)
if err != nil {
return []map[string]interface{}{}, err
}
rawIn := json.RawMessage(source)
bytes, err := rawIn.MarshalJSON()
if err != nil {
return []map[string]interface{}{}, err
}
err = json.Unmarshal(bytes, &rawPackages)
if err != nil {
return []map[string]interface{}{}, err
}
return rawPackages.Packages, nil
}
func DefaultPackagesFromYaml(yml []byte) ([]DefaultPackage, error) {
var unescaped struct {
Packages []DefaultPackage `json:"packages"`
}
source, err := yaml.YAMLToJSON(yml)
if err != nil {
return []DefaultPackage{}, err
}
rawIn := json.RawMessage(source)
bytes, err := rawIn.MarshalJSON()
if err != nil {
return []DefaultPackage{}, err
}
err = json.Unmarshal(bytes, &unescaped)
if err != nil {
return []DefaultPackage{}, err
}
return unescaped.Packages, nil
}
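The two helpers above decode a "packages" YAML list by going through YAML to JSON to struct. A hedged usage sketch from inside the same package; the YAML content and the wrapper function are invented for illustration:
// Illustrative only: parse a small collection and look one entry up
// both as a typed DefaultPackage slice and as its raw map form.
func exampleParseCollection() error {
	yml := []byte(`
packages:
- name: "a"
  category: "test"
  version: "1.0"
- name: "b"
  category: "test"
  version: "1.1"
`)
	typed, err := DefaultPackagesFromYaml(yml)
	if err != nil {
		return err
	}
	raw, err := GetRawPackages(yml)
	if err != nil {
		return err
	}
	_ = typed                        // []DefaultPackage decoded from the list
	_ = raw.Find("a", "test", "1.0") // raw map for the matching entry
	return nil
}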
// Major and minor get escaped when marshalling to JSON, making the compiler fail to recognize selectors for expansion
func (t *DefaultPackage) JSON() ([]byte, error) {
buffer := &bytes.Buffer{}
@@ -459,8 +512,7 @@ func walkPackage(p Package, definitiondb PackageDatabase, visited map[string]int
}
visited[p.HumanReadableString()] = true
revdepvisited := make(map[string]interface{})
revdeps := p.ExpandedRevdeps(definitiondb, revdepvisited)
revdeps, _ := definitiondb.GetRevdeps(p)
for _, r := range revdeps {
versionsInWorld = append(versionsInWorld, r)
}
@@ -494,52 +546,6 @@ func (p *DefaultPackage) Related(definitiondb PackageDatabase) Packages {
return walkPackage(p, definitiondb, map[string]interface{}{})
}
// ExpandedRevdeps returns the package reverse dependencies,
// matching also selectors in versions (>, <, >=, <=)
func (p *DefaultPackage) ExpandedRevdeps(definitiondb PackageDatabase, visited map[string]interface{}) Packages {
var versionsInWorld Packages
if _, ok := visited[p.HumanReadableString()]; ok {
return versionsInWorld
}
visited[p.HumanReadableString()] = true
for _, w := range definitiondb.World() {
if w.Matches(p) {
continue
}
match := false
for _, re := range w.GetRequires() {
if re.Matches(p) {
match = true
}
if !match {
packages, _ := re.Expand(definitiondb)
for _, pa := range packages {
if pa.Matches(p) {
match = true
}
}
}
// if ok, _ := w.RequiresContains(definitiondb, p); ok {
}
if match {
versionsInWorld = append(versionsInWorld, w)
versionsInWorld = append(versionsInWorld, w.ExpandedRevdeps(definitiondb, visited).Unique()...)
}
// }
}
//visited[p.HumanReadableString()] = true
return versionsInWorld.Unique()
}
func (p *DefaultPackage) LabelDeps(definitiondb PackageDatabase, labelKey string) Packages {
var pkgsWithLabelInWorld Packages
// TODO: check if integrating some index would improve this

View File

@@ -220,8 +220,8 @@ var _ = Describe("Package", func() {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
lst, err := definitions.GetRevdeps(a)
Expect(err).ToNot(HaveOccurred())
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
Expect(lst).To(ContainElement(e))
@@ -242,9 +242,9 @@ var _ = Describe("Package", func() {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
lst, err := definitions.GetRevdeps(a)
Expect(err).ToNot(HaveOccurred())
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))
@@ -266,9 +266,8 @@ var _ = Describe("Package", func() {
_, err := definitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
visited := make(map[string]interface{})
lst := a.ExpandedRevdeps(definitions, visited)
lst, err := definitions.GetRevdeps(a)
Expect(err).ToNot(HaveOccurred())
Expect(lst).To(ContainElement(b))
Expect(lst).To(ContainElement(c))
Expect(lst).To(ContainElement(d))

View File

@@ -76,7 +76,7 @@ func (s *Parallel) noRulesInstalled() bool {
return true
}
func (s *Parallel) buildParallelFormula(formulas []bf.Formula, packages pkg.Packages) (bf.Formula, error) {
func (s *Parallel) buildParallelFormula(db pkg.PackageDatabase, formulas []bf.Formula, packages pkg.Packages) (bf.Formula, error) {
var wg = new(sync.WaitGroup)
var wg2 = new(sync.WaitGroup)
@@ -87,7 +87,7 @@ func (s *Parallel) buildParallelFormula(formulas []bf.Formula, packages pkg.Pack
go func(wg *sync.WaitGroup, c <-chan pkg.Package) {
defer wg.Done()
for p := range c {
solvable, err := p.BuildFormula(s.DefinitionDatabase, s.ParallelDatabase)
solvable, err := p.BuildFormula(db, s.ParallelDatabase)
if err != nil {
panic(err)
}
@@ -126,13 +126,13 @@ func (s *Parallel) BuildInstalled() (bf.Formula, error) {
var packages pkg.Packages
for _, p := range s.Installed() {
packages = append(packages, p)
for _, dep := range p.Related(s.DefinitionDatabase) {
for _, dep := range p.Related(s.InstalledDatabase) {
packages = append(packages, dep)
}
}
return s.buildParallelFormula(formulas, packages)
return s.buildParallelFormula(s.InstalledDatabase, formulas, packages)
}
// BuildWorld builds the formula which holds the requirements from the package definitions
@@ -148,7 +148,7 @@ func (s *Parallel) BuildWorld(includeInstalled bool) (bf.Formula, error) {
//f = bf.And(f, solvable)
formulas = append(formulas, solvable)
}
return s.buildParallelFormula(formulas, s.World())
return s.buildParallelFormula(s.DefinitionDatabase, formulas, s.World())
}
// BuildWorld builds the formula which holds the requirements from the package definitions
@@ -200,7 +200,7 @@ func (s *Parallel) BuildPartialWorld(includeInstalled bool) (bf.Formula, error)
close(results)
wg2.Wait()
return s.buildParallelFormula(formulas, packages)
return s.buildParallelFormula(s.DefinitionDatabase, formulas, packages)
//return s.buildParallelFormula(formulas, s.World())
}
@@ -273,9 +273,11 @@ func (s *Parallel) Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error) {
for _, p := range ls {
temporarySet.CreatePackage(p)
}
visited := make(map[string]interface{})
revdeps := p.ExpandedRevdeps(temporarySet, visited)
revdeps, err := temporarySet.GetRevdeps(p)
if err != nil {
return false, errors.Wrap(err, "error scanning revdeps")
}
var revdepsErr error
for _, r := range revdeps {
if revdepsErr == nil {
@@ -469,23 +471,22 @@ func (s *Parallel) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAsse
go func(wg *sync.WaitGroup, c <-chan pkg.Package) {
defer wg.Done()
for p := range c {
available, err := universe.FindPackageVersions(p)
if err != nil {
removed = append(removed, p) /// FIXME: Racy
}
if len(available) == 0 {
available, err := s.DefinitionDatabase.FindPackageVersions(p)
if len(available) == 0 || err != nil {
removed = append(removed, p)
continue
}
bestmatch := available.Best(nil)
// Found a better version available
if !bestmatch.Matches(p) {
encodedP, _ := p.Encode(universe)
P := bf.Var(encodedP)
results <- bf.And(bf.Not(P), r)
encodedP, _ = bestmatch.Encode(universe)
P = bf.Var(encodedP)
results <- bf.And(P, r)
oldP, _ := p.Encode(universe)
toreplaceP := bf.Var(oldP)
best, _ := bestmatch.Encode(universe)
toUpgrade := bf.Var(best)
solvablenew, _ := bestmatch.BuildFormula(s.DefinitionDatabase, s.ParallelDatabase)
results <- bf.And(bf.Not(toreplaceP), bf.And(append(solvablenew, toUpgrade)...))
}
}
}(wg, all)
@@ -512,8 +513,7 @@ func (s *Parallel) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAsse
// Treat removed packages from universe as marked for deletion
if dropremoved {
// SAT encode the clauses against the world
for _, p := range removed {
for _, p := range removed.Unique() {
encodedP, err := p.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
@@ -615,23 +615,22 @@ func (s *Parallel) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAss
}
// Then try to uninstall the versions in the system, and store that tree
for _, p := range toUninstall {
r, err := s.Uninstall(p, checkconflicts, false)
r, err := s.Uninstall(checkconflicts, false, toUninstall...)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall candidates ")
}
for _, z := range r {
err = installedcopy.RemovePackage(z)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall selected candidate "+p.GetFingerPrint())
}
for _, z := range r {
err = installedcopy.RemovePackage(z)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't remove copy of package targetted for removal")
}
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't remove copy of package targetted for removal")
}
}
if len(toInstall) == 0 {
return toUninstall, PackagesAssertions{}, nil
}
r, e := s2.Install(toInstall)
return toUninstall, r, e
assertions, e := s2.Install(toInstall)
return toUninstall, assertions, e
// To that tree, ask to install the versions that should be upgraded, and try to solve
// Return the solution
@@ -639,21 +638,30 @@ func (s *Parallel) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAss
// Uninstall takes candidate packages and returns the list of packages that would be removed
// in order to purge the candidates. Returns an error if unsat.
func (s *Parallel) Uninstall(c pkg.Package, checkconflicts, full bool) (pkg.Packages, error) {
func (s *Parallel) Uninstall(checkconflicts, full bool, packs ...pkg.Package) (pkg.Packages, error) {
if len(packs) == 0 {
return pkg.Packages{}, nil
}
var res pkg.Packages
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
toRemove := pkg.Packages{}
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := c.Expand(s.InstalledDatabase)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = packages.Best(nil)
for _, c := range packs {
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := c.Expand(s.InstalledDatabase)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = packages.Best(nil)
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
toRemove = append(toRemove, candidate)
}
// Build a fake "Installed" - Candidate and its requires tree
var InstalledMinusCandidate pkg.Packages
@@ -661,30 +669,38 @@ func (s *Parallel) Uninstall(c pkg.Package, checkconflicts, full bool) (pkg.Pack
// We are asked not to perform a full uninstall (checking all the possible requires that could
// be removed). Let's only check whether we can remove the selected packages
if !full && checkconflicts {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
} else {
return pkg.Packages{candidate}, nil
for _, candidate := range toRemove {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
}
}
return toRemove, nil
}
// TODO: Can be optimized
for _, i := range s.Installed() {
if !i.Matches(candidate) {
contains, err := candidate.RequiresContains(s.ParallelDatabase, i)
if err != nil {
return nil, errors.Wrap(err, "Failed getting installed list")
}
if !contains {
InstalledMinusCandidate = append(InstalledMinusCandidate, i)
matched := false
for _, candidate := range toRemove {
if !i.Matches(candidate) {
contains, err := candidate.RequiresContains(s.ParallelDatabase, i)
if err != nil {
return nil, errors.Wrap(err, "Failed getting installed list")
}
if !contains {
matched = true
}
}
}
if matched {
InstalledMinusCandidate = append(InstalledMinusCandidate, i)
}
}
s2 := &Parallel{Concurrency: s.Concurrency, InstalledDatabase: pkg.NewInMemoryDatabase(false), DefinitionDatabase: s.DefinitionDatabase, ParallelDatabase: pkg.NewInMemoryDatabase(false)}
s2 := &Parallel{Concurrency: s.Concurrency, InstalledDatabase: pkg.NewInMemoryDatabase(false), DefinitionDatabase: s.InstalledDatabase, ParallelDatabase: pkg.NewInMemoryDatabase(false)}
s2.SetResolver(s.Resolver)
// Get the requirements to install the candidate
asserts, err := s2.Install(pkg.Packages{candidate})
asserts, err := s2.Install(toRemove)
if err != nil {
return nil, err
}

View File

@@ -401,7 +401,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(len(solution)).To(Equal(3))
Expect(err).ToNot(HaveOccurred())
})
@@ -529,7 +529,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(6))
Expect(len(solution)).To(Equal(5))
Expect(err).ToNot(HaveOccurred())
})
@@ -570,7 +570,7 @@ var _ = Describe("Parallel", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(6))
Expect(len(solution)).To(Equal(5))
Expect(err).ToNot(HaveOccurred())
})
@@ -593,7 +593,7 @@ var _ = Describe("Parallel", func() {
}
s = &Parallel{InstalledDatabase: dbInstalled, Concurrency: 4, DefinitionDatabase: dbDefinitions, ParallelDatabase: db}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -619,7 +619,7 @@ var _ = Describe("Parallel", func() {
}
s = &Parallel{InstalledDatabase: dbInstalled, Concurrency: 4, DefinitionDatabase: dbDefinitions, ParallelDatabase: db}
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"}, true, true)
solution, err := s.Uninstall(true, true, &pkg.DefaultPackage{Name: "A", Version: ">1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -643,7 +643,7 @@ var _ = Describe("Parallel", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -667,12 +667,13 @@ var _ = Describe("Parallel", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(len(solution)).To(Equal(1))
Expect(len(solution)).To(Equal(2))
})
It("Uninstalls complex packages correctly, even if shared deps are required by system packages", func() {
@@ -690,7 +691,7 @@ var _ = Describe("Parallel", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -699,6 +700,31 @@ var _ = Describe("Parallel", func() {
Expect(len(solution)).To(Equal(1))
})
It("Uninstalls multiple complex packages correctly, even if shared deps are required by system packages", func() {
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(true, true, A, C)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(C))
Expect(solution).To(ContainElement(A))
Expect(solution).ToNot(ContainElement(B))
Expect(len(solution)).To(Equal(2))
})
It("Uninstalls complex packages in world correctly", func() {
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
@@ -715,7 +741,7 @@ var _ = Describe("Parallel", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -741,7 +767,7 @@ var _ = Describe("Parallel", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -1070,7 +1096,7 @@ var _ = Describe("Parallel", func() {
}
val, err := s.Conflicts(D, dbInstalled.World())
Expect(err.Error()).To(Equal("\n/A-\n/B-"))
Expect(err.Error()).To(Or(Equal("\n/A-\n/B-"), Equal("\n/B-\n/A-")))
Expect(val).To(BeTrue())
})
@@ -1260,12 +1286,172 @@ var _ = Describe("Parallel", func() {
Expect(uninstall[0].GetName()).To(Equal("a"))
Expect(uninstall[0].GetVersion()).To(Equal("1.1"))
Expect(solution).To(ContainElement(PackageAssert{Package: A1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: false}))
Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: A, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(len(solution)).To(Equal(3))
})
It("UpgradeUniverse upgrades correctly", func() {
D := pkg.NewPackage("d", "1.5", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D.SetCategory("test")
C = pkg.NewPackage("c", "1.5", []*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"},
&pkg.DefaultPackage{Name: "d", Version: ">=1.0", Category: "test"},
}, []*pkg.DefaultPackage{})
C.SetCategory("test")
C1 := pkg.NewPackage("c", "1.6", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
C1.SetCategory("test")
B = pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B.SetCategory("test")
B1 := pkg.NewPackage("b", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "c", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
B1.SetCategory("test")
A = pkg.NewPackage("a", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: "1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
A1 = pkg.NewPackage("a", "1.2", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A1.SetCategory("test")
for _, p := range []pkg.Package{A1, B, B1, C, C1, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.UpgradeUniverse(false)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(2))
Expect(uninstall).To(ContainElement(A))
Expect(uninstall).To(ContainElement(B))
Expect(solution).To(ContainElement(PackageAssert{Package: A1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: B1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C1, Value: true}))
Expect(len(solution)).To(Equal(6))
})
It("UpgradeUniverse upgrades correctly", func() {
D := pkg.NewPackage("d", "1.5", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D.SetCategory("test")
D1 := pkg.NewPackage("d", "1.6", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D1.SetCategory("test")
C = pkg.NewPackage("c", "1.5", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C.SetCategory("test")
B = pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "c", Version: ">=1.0", Category: "test"},
}, []*pkg.DefaultPackage{})
B.SetCategory("test")
A = pkg.NewPackage("a", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: "1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
for _, p := range []pkg.Package{A, B, C, D, D1} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.UpgradeUniverse(false)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(1))
Expect(uninstall[0].GetName()).To(Equal("d"))
Expect(uninstall[0].GetVersion()).To(Equal("1.5"))
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: true}))
Expect(len(solution)).To(Equal(3))
})
It("Upgrade upgrades correctly", func() {
D := pkg.NewPackage("d", "1.5", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D.SetCategory("test")
C = pkg.NewPackage("c", "1.5", []*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"},
&pkg.DefaultPackage{Name: "d", Version: ">=1.0", Category: "test"},
}, []*pkg.DefaultPackage{})
C.SetCategory("test")
C1 := pkg.NewPackage("c", "1.6", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
C1.SetCategory("test")
B = pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
B.SetCategory("test")
B1 := pkg.NewPackage("b", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "c", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
B1.SetCategory("test")
A = pkg.NewPackage("a", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: "1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
A1 = pkg.NewPackage("a", "1.2", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A1.SetCategory("test")
for _, p := range []pkg.Package{A1, B, B1, C, C1, D} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.Upgrade(false, false)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(2))
Expect(uninstall).To(ContainElement(A))
Expect(uninstall).To(ContainElement(B))
Expect(solution).To(ContainElement(PackageAssert{Package: A1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: B1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C1, Value: true}))
Expect(len(solution)).To(Equal(6))
})
It("Upgrade upgrades correctly", func() {
D := pkg.NewPackage("d", "1.5", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D.SetCategory("test")
D1 := pkg.NewPackage("d", "1.6", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
D1.SetCategory("test")
C = pkg.NewPackage("c", "1.5", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
C.SetCategory("test")
B = pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{
&pkg.DefaultPackage{Name: "c", Version: ">=1.0", Category: "test"},
}, []*pkg.DefaultPackage{})
B.SetCategory("test")
A = pkg.NewPackage("a", "1.1", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "b", Version: "1.0", Category: "test"}}, []*pkg.DefaultPackage{})
A.SetCategory("test")
for _, p := range []pkg.Package{A, B, C, D, D1} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, D} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.Upgrade(false, false)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(1))
Expect(uninstall[0].GetName()).To(Equal("d"))
Expect(uninstall[0].GetVersion()).To(Equal("1.5"))
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: true}))
Expect(len(solution)).To(Equal(5))
})
})
})

View File

@@ -37,7 +37,7 @@ const (
type PackageSolver interface {
SetDefinitionDatabase(pkg.PackageDatabase)
Install(p pkg.Packages) (PackagesAssertions, error)
Uninstall(candidate pkg.Package, checkconflicts, full bool) (pkg.Packages, error)
Uninstall(checkconflicts, full bool, candidate ...pkg.Package) (pkg.Packages, error)
ConflictsWithInstalled(p pkg.Package) (bool, error)
ConflictsWith(p pkg.Package, ls pkg.Packages) (bool, error)
Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error)
@@ -134,13 +134,13 @@ func (s *Solver) BuildInstalled() (bf.Formula, error) {
var packages pkg.Packages
for _, p := range s.Installed() {
packages = append(packages, p)
for _, dep := range p.Related(s.DefinitionDatabase) {
for _, dep := range p.Related(s.InstalledDatabase) {
packages = append(packages, dep)
}
}
for _, p := range packages {
solvable, err := p.BuildFormula(s.DefinitionDatabase, s.SolverDatabase)
solvable, err := p.BuildFormula(s.InstalledDatabase, s.SolverDatabase)
if err != nil {
return nil, err
}
@@ -211,8 +211,8 @@ func (s *Solver) BuildPartialWorld(includeInstalled bool) (bf.Formula, error) {
if len(formulas) != 0 {
return bf.And(formulas...), nil
}
return bf.True, nil
return bf.True, nil
}
func (s *Solver) getList(db pkg.PackageDatabase, lsp pkg.Packages) (pkg.Packages, error) {
@@ -255,8 +255,11 @@ func (s *Solver) Conflicts(pack pkg.Package, lsp pkg.Packages) (bool, error) {
for _, p := range ls {
temporarySet.CreatePackage(p)
}
visited := make(map[string]interface{})
revdeps := p.ExpandedRevdeps(temporarySet, visited)
revdeps, err := temporarySet.GetRevdeps(p)
if err != nil {
return false, errors.Wrap(err, "error scanning revdeps")
}
var revdepsErr error
for _, r := range revdeps {
@@ -396,6 +399,7 @@ func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssert
notUptodate := pkg.Packages{}
removed := pkg.Packages{}
toUpgrade := pkg.Packages{}
replacements := map[pkg.Package]pkg.Package{}
// TODO: this is memory expensive, we need to optimize this
universe := pkg.NewInMemoryDatabase(false)
@@ -408,11 +412,9 @@ func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssert
// Grab all the installed ones, see if they are eligible for update
for _, p := range s.Installed() {
available, err := universe.FindPackageVersions(p)
if err != nil {
available, err := s.DefinitionDatabase.FindPackageVersions(p)
if len(available) == 0 || err != nil {
removed = append(removed, p)
}
if len(available) == 0 {
continue
}
@@ -421,6 +423,7 @@ func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssert
if !bestmatch.Matches(p) {
notUptodate = append(notUptodate, p)
toUpgrade = append(toUpgrade, bestmatch)
replacements[p] = bestmatch
}
}
@@ -434,28 +437,37 @@ func (s *Solver) UpgradeUniverse(dropremoved bool) (pkg.Packages, PackagesAssert
// Treat removed packages from universe as marked for deletion
if dropremoved {
notUptodate = append(notUptodate, removed...)
// SAT encode the clauses against the world
for _, p := range removed.Unique() {
encodedP, err := p.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(bf.Not(P), r))
}
}
// SAT encode the clauses against the world
for _, p := range notUptodate.Unique() {
encodedP, err := p.Encode(universe)
for old, new := range replacements {
oldP, err := old.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(bf.Not(P), r))
}
for _, p := range toUpgrade {
encodedP, err := p.Encode(universe)
oldencodedP := bf.Var(oldP)
newP, err := new.Encode(universe)
if err != nil {
return nil, nil, errors.Wrap(err, "couldn't encode package")
}
P := bf.Var(encodedP)
formulas = append(formulas, bf.And(P, r))
newEncodedP := bf.Var(newP)
//solvable, err := old.BuildFormula(s.DefinitionDatabase, s.SolverDatabase)
solvablenew, err := new.BuildFormula(s.DefinitionDatabase, s.SolverDatabase)
formulas = append(formulas, bf.And(bf.Not(oldencodedP), bf.And(append(solvablenew, newEncodedP)...)))
}
//formulas = append(formulas, r)
markedForRemoval := pkg.Packages{}
if len(formulas) == 0 {
@@ -518,23 +530,24 @@ func (s *Solver) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAsser
}
}
// Then try to uninstall the versions in the system, and store that tree
for _, p := range toUninstall {
r, err := s.Uninstall(p, checkconflicts, false)
r, err := s.Uninstall(checkconflicts, false, toUninstall.Unique()...)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall candidates ")
}
for _, z := range r {
err = installedcopy.RemovePackage(z)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't uninstall selected candidate "+p.GetFingerPrint())
}
for _, z := range r {
err = installedcopy.RemovePackage(z)
if err != nil {
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't remove copy of package targetted for removal")
}
return nil, nil, errors.Wrap(err, "Could not compute upgrade - couldn't remove copy of package targetted for removal")
}
}
if len(toInstall) == 0 {
return toUninstall, PackagesAssertions{}, nil
}
r, e := s2.Install(toInstall)
return toUninstall, r, e
assertions, err := s2.Install(toInstall.Unique())
return toUninstall, assertions, err
// To that tree, ask to install the versions that should be upgraded, and try to solve
// Return the solution
@@ -542,52 +555,72 @@ func (s *Solver) Upgrade(checkconflicts, full bool) (pkg.Packages, PackagesAsser
// Uninstall takes a candidate package and return a list of packages that would be removed
// in order to purge the candidate. Returns error if unsat.
func (s *Solver) Uninstall(c pkg.Package, checkconflicts, full bool) (pkg.Packages, error) {
var res pkg.Packages
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := c.Expand(s.InstalledDatabase)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = packages.Best(nil)
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
func (s *Solver) Uninstall(checkconflicts, full bool, packs ...pkg.Package) (pkg.Packages, error) {
if len(packs) == 0 {
return pkg.Packages{}, nil
}
var res pkg.Packages
toRemove := pkg.Packages{}
for _, c := range packs {
candidate, err := s.InstalledDatabase.FindPackage(c)
if err != nil {
// return nil, errors.Wrap(err, "Couldn't find required package in db definition")
packages, err := c.Expand(s.InstalledDatabase)
// Info("Expanded", packages, err)
if err != nil || len(packages) == 0 {
candidate = c
} else {
candidate = packages.Best(nil)
}
//Relax search, otherwise we cannot compute solutions for packages not in definitions
// return nil, errors.Wrap(err, "Package not found between installed")
}
toRemove = append(toRemove, candidate)
}
// Build a fake "Installed" - Candidate and its requires tree
var InstalledMinusCandidate pkg.Packages
// We are asked to not perform a full uninstall (checking all the possible requires that could
// be removed). Let's only check if we can remove the selected package
if !full && checkconflicts {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
} else {
return pkg.Packages{candidate}, nil
for _, candidate := range toRemove {
if conflicts, err := s.Conflicts(candidate, s.Installed()); conflicts {
return nil, err
}
}
return toRemove, nil
}
// TODO: Can be optimized
for _, i := range s.Installed() {
if !i.Matches(candidate) {
contains, err := candidate.RequiresContains(s.SolverDatabase, i)
if err != nil {
return nil, errors.Wrap(err, "Failed getting installed list")
}
if !contains {
InstalledMinusCandidate = append(InstalledMinusCandidate, i)
matched := false
for _, candidate := range toRemove {
if !i.Matches(candidate) {
contains, err := candidate.RequiresContains(s.SolverDatabase, i)
if err != nil {
return nil, errors.Wrap(err, "Failed getting installed list")
}
if !contains {
matched = true
}
}
}
if matched {
InstalledMinusCandidate = append(InstalledMinusCandidate, i)
}
}
s2 := NewSolver(Options{Type: SingleCoreSimple}, pkg.NewInMemoryDatabase(false), s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
s2 := NewSolver(Options{Type: SingleCoreSimple}, pkg.NewInMemoryDatabase(false), s.InstalledDatabase, pkg.NewInMemoryDatabase(false))
s2.SetResolver(s.Resolver)
// Get the requirements to install the candidate
asserts, err := s2.Install(pkg.Packages{candidate})
asserts, err := s2.Install(toRemove)
if err != nil {
return nil, err
}
@@ -634,6 +667,7 @@ func (s *Solver) BuildFormula() (bf.Formula, error) {
}
for _, wanted := range s.Wanted {
encodedW, err := wanted.Encode(s.SolverDatabase)
if err != nil {
return nil, err
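
A minimal sketch (not part of the diff) of how a call site adapts to the variadic Uninstall signature introduced above; the wrapper function and its name are hypothetical, while PackageSolver, pkg.Packages and the argument order come straight from the hunk.

package example

import (
	pkg "github.com/mudler/luet/pkg/package"
	"github.com/mudler/luet/pkg/solver"
)

// removeCandidates is a hypothetical helper: instead of looping and calling
// Uninstall once per candidate, callers now pass every candidate to a single
// variadic call (here checkconflicts=true, full=false).
func removeCandidates(s solver.PackageSolver, candidates pkg.Packages) (pkg.Packages, error) {
	return s.Uninstall(true, false, candidates...)
}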

View File

@@ -401,7 +401,7 @@ var _ = Describe("Solver", func() {
Expect(solution).ToNot(ContainElement(PackageAssert{Package: D, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(4))
Expect(len(solution)).To(Equal(3))
Expect(err).ToNot(HaveOccurred())
})
@@ -529,7 +529,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(6))
Expect(len(solution)).To(Equal(5))
Expect(err).ToNot(HaveOccurred())
})
@@ -570,7 +570,7 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: D1, Value: false}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: E, Value: true}))
Expect(len(solution)).To(Equal(6))
Expect(len(solution)).To(Equal(5))
Expect(err).ToNot(HaveOccurred())
})
@@ -593,7 +593,7 @@ var _ = Describe("Solver", func() {
}
s = NewSolver(Options{Type: SingleCoreSimple}, dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -619,7 +619,7 @@ var _ = Describe("Solver", func() {
}
s = NewSolver(Options{Type: SingleCoreSimple}, dbInstalled, dbDefinitions, db)
solution, err := s.Uninstall(&pkg.DefaultPackage{Name: "A", Version: ">1.0"}, true, true)
solution, err := s.Uninstall(true, true, &pkg.DefaultPackage{Name: "A", Version: ">1.0"})
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -643,7 +643,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -667,12 +667,13 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
Expect(solution).To(ContainElement(B))
Expect(len(solution)).To(Equal(1))
Expect(len(solution)).To(Equal(2))
})
It("Uninstalls complex packages correctly, even if shared deps are required by system packages", func() {
@@ -690,7 +691,7 @@ var _ = Describe("Solver", func() {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -715,7 +716,7 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -741,7 +742,7 @@ var _ = Describe("Solver", func() {
Expect(err).ToNot(HaveOccurred())
}
solution, err := s.Uninstall(A, true, true)
solution, err := s.Uninstall(true, true, A)
Expect(err).ToNot(HaveOccurred())
Expect(solution).To(ContainElement(A))
@@ -1070,7 +1071,7 @@ var _ = Describe("Solver", func() {
}
val, err := s.Conflicts(D, dbInstalled.World())
Expect(err.Error()).To(Equal("\n/A-\n/B-"))
Expect(err.Error()).To(Or(Equal("\n/A-\n/B-"), Equal("\n/B-\n/A-")))
Expect(val).To(BeTrue())
})
@@ -1209,6 +1210,8 @@ var _ = Describe("Solver", func() {
})
})
Context("Upgrades", func() {
E := pkg.NewPackage("e", "1.5", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
E.SetCategory("test")
C := pkg.NewPackage("c", "1.5", []*pkg.DefaultPackage{&pkg.DefaultPackage{Name: "a", Version: ">=1.0", Category: "test"}}, []*pkg.DefaultPackage{})
C.SetCategory("test")
B := pkg.NewPackage("b", "1.0", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
@@ -1296,10 +1299,37 @@ var _ = Describe("Solver", func() {
Expect(solution).To(ContainElement(PackageAssert{Package: A1, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: true}))
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: false}))
Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
Expect(len(solution)).To(Equal(4))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: C, Value: true}))
Expect(solution).ToNot(ContainElement(PackageAssert{Package: A, Value: true}))
Expect(len(solution)).To(Equal(3))
})
It("Suggests to remove untracked packages", func() {
for _, p := range []pkg.Package{E} {
_, err := dbDefinitions.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
for _, p := range []pkg.Package{A, B, C, E} {
_, err := dbInstalled.CreatePackage(p)
Expect(err).ToNot(HaveOccurred())
}
uninstall, solution, err := s.UpgradeUniverse(true)
Expect(err).ToNot(HaveOccurred())
Expect(len(uninstall)).To(Equal(3))
Expect(uninstall).To(ContainElement(B))
Expect(uninstall).To(ContainElement(A))
Expect(uninstall).To(ContainElement(C))
Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: false}))
Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: false}))
Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
Expect(len(solution)).To(Equal(3))
})
})
})
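
The new "Suggests to remove untracked packages" spec above exercises UpgradeUniverse with dropremoved=true. A minimal sketch of the same call outside Ginkgo, assuming the databases are populated as in the test; the wrapper function is hypothetical, the constructor and call are the ones used by the spec.

package example

import (
	pkg "github.com/mudler/luet/pkg/package"
	"github.com/mudler/luet/pkg/solver"
)

// upgradePlan mirrors the spec above: with dropremoved=true, installed
// packages that no longer exist in the definition database are returned as
// uninstall candidates, together with the solver assertions.
func upgradePlan(dbInstalled, dbDefinitions pkg.PackageDatabase) (pkg.Packages, solver.PackagesAssertions, error) {
	s := solver.NewSolver(solver.Options{Type: solver.SingleCoreSimple}, dbInstalled, dbDefinitions, pkg.NewInMemoryDatabase(false))
	return s.UpgradeUniverse(true)
}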

View File

@@ -44,6 +44,14 @@ type DefaultPackageSanitized struct {
Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
}
func NewDefaultPackageSanitizedFromYaml(data []byte) (*DefaultPackageSanitized, error) {
ans := &DefaultPackageSanitized{}
if err := yaml.Unmarshal(data, ans); err != nil {
return nil, err
}
return ans, nil
}
func NewDefaultPackageSanitized(p pkg.Package) *DefaultPackageSanitized {
ans := &DefaultPackageSanitized{
Name: p.GetName(),
@@ -110,3 +118,12 @@ func NewDefaultPackageSanitized(p pkg.Package) *DefaultPackageSanitized {
func (p *DefaultPackageSanitized) Yaml() ([]byte, error) {
return yaml.Marshal(p)
}
func (p *DefaultPackageSanitized) Clone() (*DefaultPackageSanitized, error) {
data, err := p.Yaml()
if err != nil {
return nil, err
}
return NewDefaultPackageSanitizedFromYaml(data)
}
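
A small usage sketch of the round-trip Clone performs (serialize with Yaml, decode with NewDefaultPackageSanitizedFromYaml). The helper is hypothetical and is meant to sit in the same package as the functions above, whose import path is not shown in this hunk.

// cloneExample is a hypothetical helper, placed in the same package as the
// functions above: Clone round-trips through YAML, so the copy can be
// mutated without touching the original sanitized package.
func cloneExample(p pkg.Package) (*DefaultPackageSanitized, error) {
	sanitized := NewDefaultPackageSanitized(p)
	clone, err := sanitized.Clone()
	if err != nil {
		return nil, err
	}
	clone.Labels = map[string]string{"cloned": "true"} // the original's labels are untouched
	return clone, nil
}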

View File

@@ -1,142 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gentoo
// NOTE: Look here as an example of the builder definition executor
// https://gist.github.com/adnaan/6ca68c7985c6f851def3
import (
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
. "github.com/mudler/luet/pkg/logger"
tree "github.com/mudler/luet/pkg/tree"
pkg "github.com/mudler/luet/pkg/package"
)
type MemoryDB int
const (
InMemory MemoryDB = iota
BoltDB MemoryDB = iota
)
func NewGentooBuilder(e EbuildParser, concurrency int, db MemoryDB) tree.Parser {
return &GentooBuilder{EbuildParser: e, Concurrency: concurrency}
}
type GentooBuilder struct {
EbuildParser EbuildParser
Concurrency int
DBType MemoryDB
}
type EbuildParser interface {
ScanEbuild(string) (pkg.Packages, error)
}
func (gb *GentooBuilder) scanEbuild(path string, db pkg.PackageDatabase) error {
defer func() {
if r := recover(); r != nil {
Error(r)
}
}()
pkgs, err := gb.EbuildParser.ScanEbuild(path)
if err != nil {
return err
}
for _, p := range pkgs {
_, err := db.FindPackage(p)
if err != nil {
_, err := db.CreatePackage(p)
if err != nil {
return err
}
}
}
return nil
}
func (gb *GentooBuilder) worker(i int, wg *sync.WaitGroup, s <-chan string, db pkg.PackageDatabase) {
defer wg.Done()
for path := range s {
Info("#"+strconv.Itoa(i), "parsing", path)
err := gb.scanEbuild(path, db)
if err != nil {
Error(path, ":", err.Error())
}
}
}
func (gb *GentooBuilder) Generate(dir string) (pkg.PackageDatabase, error) {
var toScan = make(chan string)
Spinner(27)
defer SpinnerStop()
var db pkg.PackageDatabase
// Support for the configured database backend (in-memory or BoltDB)
switch gb.DBType {
case InMemory:
db = pkg.NewInMemoryDatabase(false)
case BoltDB:
tmpfile, err := ioutil.TempFile("", "boltdb")
if err != nil {
return nil, err
}
db = pkg.NewBoltDatabase(tmpfile.Name())
default:
db = pkg.NewInMemoryDatabase(false)
}
Debug("Concurrency", gb.Concurrency)
// the waitgroup will allow us to wait for all the goroutines to finish at the end
var wg = new(sync.WaitGroup)
for i := 0; i < gb.Concurrency; i++ {
wg.Add(1)
go gb.worker(i, wg, toScan, db)
}
// TODO: Handle cleaning after? Cleanup implemented in GetPackageSet().Clean()
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if info.IsDir() {
return nil
}
// Ensure that only files with the .ebuild suffix are processed,
// and ignore .swp files or files that merely contain the string "ebuild" in their name.
if strings.HasSuffix(info.Name(), ".ebuild") {
toScan <- path
}
return nil
})
close(toScan)
wg.Wait()
if err != nil {
return db, err
}
return db, nil
}

View File

@@ -1,177 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gentoo_test
import (
"fmt"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pkg "github.com/mudler/luet/pkg/package"
. "github.com/mudler/luet/pkg/tree/builder/gentoo"
)
type FakeParser struct {
}
func (f *FakeParser) ScanEbuild(path string) (pkg.Packages, error) {
return pkg.Packages{&pkg.DefaultPackage{Name: path}}, nil
}
var _ = Describe("GentooBuilder", func() {
Context("Simple test", func() {
for _, dbType := range []MemoryDB{InMemory, BoltDB} {
It("parses correctly deps", func() {
gb := NewGentooBuilder(&FakeParser{}, 20, dbType)
tree, err := gb.Generate("../../../../tests/fixtures/overlay")
defer func() {
Expect(tree.Clean()).ToNot(HaveOccurred())
}()
Expect(err).ToNot(HaveOccurred())
Expect(len(tree.GetPackages())).To(Equal(10))
})
}
})
Context("Parse ebuild1", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/overlay/app-crypt/pinentry-gnome/pinentry-gnome-1.0.0-r2.ebuild")
It("parses correctly deps", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(pkgs[0].GetLicense()).To(Equal("GPL-2"))
Expect(pkgs[0].GetDescription()).To(Equal("GNOME 3 frontend for pinentry"))
})
})
Context("Parse ebuild2", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/mod_dav_svn-1.12.2.ebuild")
It("Parsing ebuild2", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(pkgs[0].GetLicense()).To(Equal("Subversion"))
Expect(pkgs[0].GetDescription()).To(Equal("Subversion WebDAV support"))
})
})
Context("Parse ebuild3", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/linux-sources-1.ebuild")
It("Check parsing of the ebuild3", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(0))
Expect(pkgs[0].GetLicense()).To(Equal(""))
Expect(pkgs[0].GetDescription()).To(Equal("Virtual for Linux kernel sources"))
})
})
Context("Parse ebuild4", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/sabayon-mce-1.1-r5.ebuild")
It("Check parsing of the ebuild4", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(2))
Expect(pkgs[0].GetLicense()).To(Equal("GPL-2"))
Expect(pkgs[0].GetDescription()).To(Equal("Sabayon Linux Media Center Infrastructure"))
})
})
Context("Parse ebuild5", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/libreoffice-l10n-meta-6.2.8.2.ebuild")
It("Check parsing of the ebuild5", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(146))
Expect(pkgs[0].GetLicense()).To(Equal("LGPL-2"))
Expect(pkgs[0].GetDescription()).To(Equal("LibreOffice.org localisation meta-package"))
})
})
Context("Parse ebuild6", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/pkgs-checker-0.2.0.ebuild")
It("Check parsing of the ebuild6", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(0))
Expect(pkgs[0].GetLicense()).To(Equal("GPL-3"))
Expect(pkgs[0].GetDescription()).To(Equal("Sabayon Packages Checker"))
})
})
Context("Parse ebuild7", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/calamares-sabayon-base-modules-1.15.ebuild")
It("Check parsing of the ebuild7", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(2))
Expect(pkgs[0].GetLicense()).To(Equal("CC-BY-SA-4.0"))
Expect(pkgs[0].GetDescription()).To(Equal("Sabayon Official Calamares base modules"))
})
})
Context("Parse ebuild8", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/subversion-1.12.0.ebuild")
It("Check parsing of the ebuild8", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(25))
Expect(pkgs[0].GetLicense()).To(Equal("Subversion GPL-2"))
Expect(pkgs[0].GetDescription()).To(Equal("Advanced version control system"))
})
})
Context("Parse ebuild9", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/kodi-raspberrypi-16.0.ebuild")
PIt("Check parsing of the ebuild9", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(66))
Expect(pkgs[0].GetLicense()).To(Equal("GPL-2"))
Expect(pkgs[0].GetDescription()).To(Equal("Kodi is a free and open source media-player and entertainment hub"))
})
})
Context("Parse ebuild10", func() {
parser := &SimpleEbuildParser{}
pkgs, err := parser.ScanEbuild("../../../../tests/fixtures/parser/tango-icon-theme-0.8.90-r1.ebuild")
It("Check parsing of the ebuild10", func() {
Expect(err).ToNot(HaveOccurred())
fmt.Println("PKG ", pkgs[0])
Expect(len(pkgs[0].GetRequires())).To(Equal(2))
Expect(pkgs[0].GetLicense()).To(Equal("public-domain"))
Expect(pkgs[0].GetDescription()).To(Equal("SVG and PNG icon theme from the Tango project"))
})
})
})

View File

@@ -1,447 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gentoo
// NOTE: Look here as an example of the builder definition executor
// https://gist.github.com/adnaan/6ca68c7985c6f851def3
import (
"context"
"errors"
"fmt"
"io/ioutil"
"path/filepath"
"regexp"
"strings"
"time"
. "github.com/mudler/luet/pkg/logger"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
pkg "github.com/mudler/luet/pkg/package"
"mvdan.cc/sh/v3/expand"
"mvdan.cc/sh/v3/shell"
"mvdan.cc/sh/v3/syntax"
)
const (
uriRegex = "(.*[.]tar[.].*|.*[.]zip|.*[.]run|.*[.]png|.*[.]rpm|.*[.]gz)"
)
// SimpleEbuildParser ignores USE flags and generates just 1-1 package
type SimpleEbuildParser struct {
World pkg.PackageDatabase
}
type GentooDependency struct {
Use string
UseCondition _gentoo.PackageCond
SubDeps []*GentooDependency
Dep *_gentoo.GentooPackage
}
type GentooRDEPEND struct {
Dependencies []*GentooDependency
}
func NewGentooDependency(pkg, use string) (*GentooDependency, error) {
var err error
ans := &GentooDependency{
Use: use,
SubDeps: make([]*GentooDependency, 0),
}
if strings.HasPrefix(use, "!") {
ans.Use = ans.Use[1:]
ans.UseCondition = _gentoo.PkgCondNot
}
if pkg != "" {
ans.Dep, err = _gentoo.ParsePackageStr(pkg)
if err != nil {
return nil, err
}
// TODO: Fix this in the parsing phase to handle ${PV} correctly
if strings.HasSuffix(ans.Dep.Name, "-") {
ans.Dep.Name = ans.Dep.Name[:len(ans.Dep.Name)-1]
}
}
return ans, nil
}
func (d *GentooDependency) String() string {
if d.Dep != nil {
return fmt.Sprintf("%s", d.Dep)
} else {
return fmt.Sprintf("%s %d %s", d.Use, d.UseCondition, d.SubDeps)
}
}
func (d *GentooDependency) GetDepsList() []*GentooDependency {
ans := make([]*GentooDependency, 0)
if len(d.SubDeps) > 0 {
for _, d2 := range d.SubDeps {
list := d2.GetDepsList()
ans = append(ans, list...)
}
}
if d.Dep != nil {
ans = append(ans, d)
}
return ans
}
func (d *GentooDependency) AddSubDependency(pkg, use string) (*GentooDependency, error) {
ans, err := NewGentooDependency(pkg, use)
if err != nil {
return nil, err
}
d.SubDeps = append(d.SubDeps, ans)
return ans, nil
}
func (r *GentooRDEPEND) GetDependencies() []*GentooDependency {
ans := make([]*GentooDependency, 0)
for _, d := range r.Dependencies {
list := d.GetDepsList()
ans = append(ans, list...)
}
// The same dependency could appear under multiple use flags,
// so duplicates need to be avoided.
m := make(map[string]*GentooDependency, 0)
for _, p := range ans {
m[p.String()] = p
}
ans = make([]*GentooDependency, 0)
for _, p := range m {
ans = append(ans, p)
}
return ans
}
func ParseRDEPEND(rdepend string) (*GentooRDEPEND, error) {
var lastdep []*GentooDependency = make([]*GentooDependency, 0)
var pendingDep = false
var orDep = false
var dep *GentooDependency
var err error
ans := &GentooRDEPEND{
Dependencies: make([]*GentooDependency, 0),
}
if rdepend != "" {
rdepends := strings.Split(rdepend, "\n")
for _, rr := range rdepends {
rr = strings.TrimSpace(rr)
if rr == "" {
continue
}
if strings.HasPrefix(rr, "|| (") {
orDep = true
continue
}
if orDep {
rr = strings.TrimSpace(rr)
if rr == ")" {
orDep = false
}
continue
}
if strings.Index(rr, "?") > 0 {
// use flag present
if pendingDep {
dep, err = lastdep[len(lastdep)-1].AddSubDependency("", rr[:strings.Index(rr, "?")])
if err != nil {
Debug("Ignoring subdependency ", rr[:strings.Index(rr, "?")])
}
} else {
dep, err = NewGentooDependency("", rr[:strings.Index(rr, "?")])
if err != nil {
Debug("Ignoring dep", rr)
} else {
ans.Dependencies = append(ans.Dependencies, dep)
}
}
if strings.Index(rr, ")") < 0 {
pendingDep = true
lastdep = append(lastdep, dep)
}
if strings.Index(rr, "|| (") >= 0 {
// Ignore dep in or
continue
}
fields := strings.Split(rr[strings.Index(rr, "?")+1:], " ")
for _, f := range fields {
f = strings.TrimSpace(f)
if f == ")" || f == "(" || f == "" {
continue
}
_, err = dep.AddSubDependency(f, "")
if err != nil {
Debug("Ignoring subdependency ", f)
}
}
} else if pendingDep {
fields := strings.Split(rr, " ")
for _, f := range fields {
f = strings.TrimSpace(f)
if f == ")" || f == "(" || f == "" {
continue
}
_, err = lastdep[len(lastdep)-1].AddSubDependency(f, "")
if err != nil {
return nil, err
}
}
if strings.Index(rr, ")") >= 0 {
lastdep = lastdep[:len(lastdep)-1]
if len(lastdep) == 0 {
pendingDep = false
}
}
} else {
rr = strings.TrimSpace(rr)
// Check if there are multiple deps in a single row
fields := strings.Split(rr, " ")
if len(fields) > 1 {
for _, rrr := range fields {
rrr = strings.TrimSpace(rrr)
if rrr == "" {
continue
}
dep, err := NewGentooDependency(rrr, "")
if err != nil {
Debug("Ignoring dep", rr)
} else {
ans.Dependencies = append(ans.Dependencies, dep)
}
}
} else {
dep, err := NewGentooDependency(rr, "")
if err != nil {
Debug("Ignoring dep", rr)
} else {
ans.Dependencies = append(ans.Dependencies, dep)
}
}
}
}
}
return ans, nil
}
func SourceFile(ctx context.Context, path string, pkg *_gentoo.GentooPackage) (map[string]expand.Variable, error) {
content, err := ioutil.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("could not open: %v", err)
}
scontent := string(content)
// Add default Gentoo variables
ebuild := fmt.Sprintf("P=%s\n", pkg.GetP()) +
fmt.Sprintf("PN=%s\n", pkg.GetPN()) +
fmt.Sprintf("PV=%s\n", pkg.GetPV()) +
fmt.Sprintf("PVR=%s\n", pkg.GetPVR())
// Disable inherit
scontent = strings.ReplaceAll(scontent, "inherit", "#inherit")
// Disable functions coming from eclasses (TODO: check how to handle this better)
scontent = strings.ReplaceAll(scontent, "need_apache", "#need_apache")
scontent = strings.ReplaceAll(scontent, "want_apache", "#want_apache")
regexFuncs := regexp.MustCompile(
"[a-zA-Z]+.*[_][a-z]+[(][)][\\s]{",
)
matches := regexFuncs.FindAllIndex([]byte(scontent), -1)
// Drop section after functions (src_*, *() {)
if len(matches) > 0 {
ebuild = ebuild + scontent[:matches[0][0]]
} else {
ebuild = ebuild + scontent
}
// [[ ${PV} == "9999" ]] is not supported. Workaround but we need a better solution.
regexDoubleBrakets := regexp.MustCompile(
//"[[][[].*",
"^[[][[].*",
//"^.*\[\[.*\]\]",
)
matchDB := regexDoubleBrakets.FindAllIndex([]byte(ebuild), -1)
if len(matchDB) > 0 {
ebuild = ebuild[:matchDB[0][0]] + "#" + ebuild[matchDB[0][0]:]
}
//fmt.Println("EBUILD ", ebuild)
file, err := syntax.NewParser().Parse(strings.NewReader(ebuild), path)
if err != nil {
return nil, fmt.Errorf("could not parse: %v", err)
}
return shell.SourceNode(ctx, file)
}
// ScanEbuild returns a list of packages (always one with SimpleEbuildParser) decoded from an ebuild.
func (ep *SimpleEbuildParser) ScanEbuild(path string) (pkg.Packages, error) {
Debug("Starting parsing of ebuild", path)
pkgstr := filepath.Base(path)
paths := strings.Split(filepath.Dir(path), "/")
pkgstr = paths[len(paths)-2] + "/" + strings.Replace(pkgstr, ".ebuild", "", -1)
gp, err := _gentoo.ParsePackageStr(pkgstr)
if err != nil {
return pkg.Packages{}, errors.New("Error on parsing package string")
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s", gp.Version, gp.VersionSuffix),
Category: gp.Category,
Uri: make([]string, 0),
}
Debug("Prepare package ", pack.Category+"/"+pack.Name+"-"+pack.Version)
// Adding a timeout of 60 seconds, as with some bash files it can hang indefinitely
timeout, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
vars, err := SourceFile(timeout, path, gp)
if err != nil {
Error("Error on source file ", pack.Name, ": ", err)
return pkg.Packages{}, err
}
// Retrieve slot
slot, ok := vars["SLOT"]
if ok && slot.String() != "0" {
pack.SetCategory(fmt.Sprintf("%s-%s", gp.Category, slot.String()))
}
// TODO: Handle this a bit better
iuse, ok := vars["IUSE"]
if ok {
uses := strings.Split(strings.TrimSpace(iuse.String()), " ")
for _, u := range uses {
pack.AddUse(u)
}
}
// Retrieve package description
descr, ok := vars["DESCRIPTION"]
if ok {
pack.SetDescription(descr.String())
}
// Retrieve package license
license, ok := vars["LICENSE"]
if ok {
pack.SetLicense(license.String())
}
uri, ok := vars["SRC_URI"]
if ok {
// TODO: handle mirror:
uris := strings.Split(uri.String(), "\n")
for _, u := range uris {
u = strings.TrimSpace(u)
if u == "" {
continue
}
if match, _ := regexp.Match(uriRegex, []byte(u)); match {
if strings.Index(u, "(") >= 0 {
regexUri := regexp.MustCompile("(http|ftp|mirror).*[ ]")
matches := regexUri.FindAllIndex([]byte(u), -1)
if len(matches) > 0 {
u = u[matches[0][0]:matches[0][1]]
} else {
continue
}
}
pack.AddURI(u)
Debug("Add uri ", u)
} else {
Debug("Skip uri ", u)
}
}
}
rdepend, ok := vars["RDEPEND"]
if ok {
gRDEPEND, err := ParseRDEPEND(rdepend.String())
if err != nil {
Warning("Error on parsing RDEPEND for package ", pack.Category+"/"+pack.Name, err)
return pkg.Packages{pack}, nil
// return pkg.Packages{}, err
}
pack.PackageConflicts = []*pkg.DefaultPackage{}
pack.PackageRequires = []*pkg.DefaultPackage{}
// TODO: See how to handle enabled use flags,
// and whether it is correct to get the list of deps directly.
for _, d := range gRDEPEND.GetDependencies() {
//TODO: Resolve to db or create a new one.
//TODO: handle SLOT too.
dep := &pkg.DefaultPackage{
Name: d.Dep.Name,
Version: d.Dep.Version + d.Dep.VersionSuffix,
Category: d.Dep.Category,
}
Debug(fmt.Sprintf("For package %s found dep: %s/%s %s",
gp, dep.Category, dep.Name, dep.Version))
if d.Dep.Condition == _gentoo.PkgCondNot {
pack.PackageConflicts = append(pack.PackageConflicts, dep)
} else {
pack.PackageRequires = append(pack.PackageRequires, dep)
}
}
}
Debug("Finished processing ebuild", path, "deps ", len(pack.PackageRequires))
//TODO: Deps and conflicts
return pkg.Packages{pack}, nil
}

View File

@@ -1,669 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gentoo_test
import (
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
. "github.com/mudler/luet/pkg/tree/builder/gentoo"
)
var _ = Describe("GentooBuilder", func() {
Context("Parse RDEPEND1", func() {
rdepend := `
app-crypt/sbsigntools
x11-themes/sabayon-artwork-grub
sys-boot/os-prober
app-arch/xz-utils
>=sys-libs/ncurses-5.2-r5:0=
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(5))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sabayon-artwork-grub",
Category: "x11-themes",
Slot: "0",
},
},
))
})
It("Check dep5", func() {
Expect(*gr.Dependencies[4]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
})
Context("Parse RDEPEND2", func() {
rdepend := `
app-crypt/sbsigntools
x11-themes/sabayon-artwork-grub
sys-boot/os-prober
app-arch/xz-utils
>=sys-libs/ncurses-5.2-r5:0=
mount? ( sys-fs/fuse )
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(6))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sabayon-artwork-grub",
Category: "x11-themes",
Slot: "0",
},
},
))
})
It("Check dep5", func() {
Expect(*gr.Dependencies[4]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
It("Check dep6", func() {
Expect(*gr.Dependencies[5]).Should(Equal(
GentooDependency{
Use: "mount",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "fuse",
Category: "sys-fs",
Slot: "0",
},
},
},
Dep: nil,
},
))
})
})
Context("Parse RDEPEND3", func() {
rdepend := `
app-crypt/sbsigntools
x11-themes/sabayon-artwork-grub
sys-boot/os-prober
app-arch/xz-utils
>=sys-libs/ncurses-5.2-r5:0=
mount? ( sys-fs/fuse =sys-apps/pmount-0.9.99_alpha-r5:= )
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(6))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sabayon-artwork-grub",
Category: "x11-themes",
Slot: "0",
},
},
))
})
It("Check dep5", func() {
Expect(*gr.Dependencies[4]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
It("Check dep6", func() {
Expect(*gr.Dependencies[5]).Should(Equal(
GentooDependency{
Use: "mount",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "fuse",
Category: "sys-fs",
Slot: "0",
},
},
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "pmount",
Category: "sys-apps",
Condition: _gentoo.PkgCondEqual,
Version: "0.9.99",
VersionSuffix: "_alpha-r5",
Slot: "=",
},
},
},
Dep: nil,
},
))
})
})
Context("Parse RDEPEND4", func() {
rdepend := `
app-crypt/sbsigntools
x11-themes/sabayon-artwork-grub
sys-boot/os-prober
app-arch/xz-utils
>=sys-libs/ncurses-5.2-r5:0=
!mount? ( sys-fs/fuse =sys-apps/pmount-0.9.99_alpha-r5:= )
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(6))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sabayon-artwork-grub",
Category: "x11-themes",
Slot: "0",
},
},
))
})
It("Check dep5", func() {
Expect(*gr.Dependencies[4]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
It("Check dep6", func() {
Expect(*gr.Dependencies[5]).Should(Equal(
GentooDependency{
Use: "mount",
UseCondition: _gentoo.PkgCondNot,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "fuse",
Category: "sys-fs",
Slot: "0",
},
},
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "pmount",
Category: "sys-apps",
Condition: _gentoo.PkgCondEqual,
Version: "0.9.99",
VersionSuffix: "_alpha-r5",
Slot: "=",
},
},
},
Dep: nil,
},
))
})
})
Context("Parse RDEPEND5", func() {
rdepend := `
app-crypt/sbsigntools
>=sys-libs/ncurses-5.2-r5:0=
mount? (
sys-fs/fuse
=sys-apps/pmount-0.9.99_alpha-r5:=
)
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(3))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
It("Check dep3", func() {
Expect(*gr.Dependencies[2]).Should(Equal(
GentooDependency{
Use: "mount",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "fuse",
Category: "sys-fs",
Slot: "0",
},
},
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "pmount",
Category: "sys-apps",
Condition: _gentoo.PkgCondEqual,
Version: "0.9.99",
VersionSuffix: "_alpha-r5",
Slot: "=",
},
},
},
Dep: nil,
},
))
})
})
Context("Parse RDEPEND6", func() {
rdepend := `
app-crypt/sbsigntools
>=sys-libs/ncurses-5.2-r5:0=
mount? (
sys-fs/fuse
=sys-apps/pmount-0.9.99_alpha-r5:= )
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(3))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
})
Context("Parse RDEPEND7", func() {
rdepend := `
app-crypt/sbsigntools
>=sys-libs/ncurses-5.2-r5:0=
mount? (
sys-fs/fuse
=sys-apps/pmount-0.9.99_alpha-r5:=
ext2? (
sys-fs/genext2fs
)
)
`
gr, err := ParseRDEPEND(rdepend)
It("Check error", func() {
Expect(err).Should(BeNil())
})
It("Check gr", func() {
Expect(gr).ShouldNot(BeNil())
})
It("Check deps #", func() {
Expect(len(gr.Dependencies)).Should(Equal(3))
})
It("Check dep1", func() {
Expect(*gr.Dependencies[0]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "sbsigntools",
Category: "app-crypt",
Slot: "0",
},
},
))
})
It("Check dep2", func() {
Expect(*gr.Dependencies[1]).Should(Equal(
GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "ncurses",
Category: "sys-libs",
Slot: "0=",
Version: "5.2",
VersionSuffix: "-r5",
Condition: _gentoo.PkgCondGreaterEqual,
},
},
))
})
It("Check dep3", func() {
Expect(*gr.Dependencies[2]).Should(Equal(
GentooDependency{
Use: "mount",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "fuse",
Category: "sys-fs",
Slot: "0",
},
},
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "pmount",
Category: "sys-apps",
Condition: _gentoo.PkgCondEqual,
Version: "0.9.99",
VersionSuffix: "_alpha-r5",
Slot: "=",
},
},
&GentooDependency{
Use: "ext2",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: []*GentooDependency{
&GentooDependency{
Use: "",
UseCondition: _gentoo.PkgCondInvalid,
SubDeps: make([]*GentooDependency, 0),
Dep: &_gentoo.GentooPackage{
Name: "genext2fs",
Category: "sys-fs",
Slot: "0",
},
},
},
Dep: nil,
},
},
},
))
})
})
Context("Simple test", func() {
for _, dbType := range []MemoryDB{InMemory, BoltDB} {
It("parses correctly deps", func() {
gb := NewGentooBuilder(&SimpleEbuildParser{}, 20, dbType)
tree, err := gb.Generate("../../../../tests/fixtures/overlay")
Expect(err).ToNot(HaveOccurred())
defer func() {
Expect(tree.Clean()).ToNot(HaveOccurred())
}()
Expect(len(tree.GetPackages())).To(Equal(10))
for _, p := range tree.World() {
Expect(p.GetName()).To(ContainSubstring("pinentry"))
Expect(p.GetVersion()).To(ContainSubstring("1."))
}
})
}
})
})

View File

@@ -74,39 +74,92 @@ func (r *CompilerRecipe) Load(path string) error {
return errors.Wrap(err, "Error on walk path "+currentpath)
}
if info.Name() != DefinitionFile {
if info.Name() != DefinitionFile && info.Name() != CollectionFile {
return nil // Skip with no errors
}
pack, err := ReadDefinitionFile(currentpath)
if err != nil {
return err
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
switch info.Name() {
case DefinitionFile:
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
dat, err := ioutil.ReadFile(compileDefPath)
pack, err := ReadDefinitionFile(currentpath)
if err != nil {
return errors.Wrap(err,
"Error reading file "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
return err
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
dat, err := helpers.RenderFiles(compileDefPath, currentpath, "")
if err != nil {
return errors.Wrap(err,
"Error templating file "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
packbuild, err := pkg.DefaultPackageFromYaml([]byte(dat))
if err != nil {
return errors.Wrap(err,
"Error reading yaml "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
pack.Requires(packbuild.GetRequires())
pack.Conflicts(packbuild.GetConflicts())
}
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
case CollectionFile:
dat, err := ioutil.ReadFile(currentpath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
packs, err := pkg.DefaultPackagesFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
packsRaw, err := pkg.GetRawPackages(dat)
for _, pack := range packs {
pack.SetPath(filepath.Dir(currentpath))
// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
raw := packsRaw.Find(pack.GetName(), pack.GetCategory(), pack.GetVersion())
buildyaml, err := ioutil.ReadFile(compileDefPath)
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
dat, err := helpers.RenderHelm(string(buildyaml), raw, map[string]interface{}{})
if err != nil {
return errors.Wrap(err,
"Error templating file "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
packbuild, err := pkg.DefaultPackageFromYaml([]byte(dat))
if err != nil {
return errors.Wrap(err,
"Error reading yaml "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
pack.Requires(packbuild.GetRequires())
pack.Conflicts(packbuild.GetConflicts())
}
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
}
packbuild, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err,
"Error reading yaml "+CompilerDefinitionFile+" from "+
filepath.Dir(currentpath))
}
pack.Requires(packbuild.GetRequires())
pack.Conflicts(packbuild.GetConflicts())
}
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
return nil

View File

@@ -85,7 +85,7 @@ func (r *InstallerRecipe) Load(path string) error {
// the function that handles each file or dir
var ff = func(currentpath string, info os.FileInfo, err error) error {
if info.Name() != DefinitionFile {
if info.Name() != DefinitionFile && info.Name() != CollectionFile {
return nil // Skip with no errors
}
@@ -93,16 +93,35 @@ func (r *InstallerRecipe) Load(path string) error {
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
switch info.Name() {
case DefinitionFile:
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
case CollectionFile:
packs, err := pkg.DefaultPackagesFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
for _, p := range packs {
// Path is set only internally when tree is loaded from disk
p.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&p)
if err != nil {
return errors.Wrap(err, "Error creating package "+p.GetName())
}
}
}
return nil

View File

@@ -34,6 +34,7 @@ import (
const (
DefinitionFile = "definition.yaml"
CollectionFile = "collection.yaml"
)
func NewGeneralRecipe(db pkg.PackageDatabase) Builder { return &Recipe{Database: db} }
@@ -94,7 +95,7 @@ func (r *Recipe) Load(path string) error {
// the function that handles each file or dir
var ff = func(currentpath string, info os.FileInfo, err error) error {
if info.Name() != DefinitionFile {
if info.Name() != DefinitionFile && info.Name() != CollectionFile {
return nil // Skip with no errors
}
@@ -102,16 +103,34 @@ func (r *Recipe) Load(path string) error {
if err != nil {
return errors.Wrap(err, "Error reading file "+currentpath)
}
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
switch info.Name() {
case DefinitionFile:
pack, err := pkg.DefaultPackageFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
// Path is set only internally when tree is loaded from disk
pack.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&pack)
if err != nil {
return errors.Wrap(err, "Error creating package "+pack.GetName())
}
case CollectionFile:
packs, err := pkg.DefaultPackagesFromYaml(dat)
if err != nil {
return errors.Wrap(err, "Error reading yaml "+currentpath)
}
for _, p := range packs {
// Path is set only internally when tree is loaded from disk
p.SetPath(filepath.Dir(currentpath))
_, err = r.Database.CreatePackage(&p)
if err != nil {
return errors.Wrap(err, "Error creating package "+p.GetName())
}
}
}
return nil
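
With definition.yaml and collection.yaml now handled by the same walker, loading a tree that contains collections produces one package per collection entry. A minimal sketch using the general recipe shown above; the wrapper function is hypothetical, while NewGeneralRecipe and Load are the ones from this tree package.

package example

import (
	pkg "github.com/mudler/luet/pkg/package"
	tree "github.com/mudler/luet/pkg/tree"
)

// loadCollections loads a tree from disk; with the CollectionFile branch
// above, fixtures like tests/fixtures/collections expand into one package
// per entry of collection.yaml, just as definition.yaml does for a single one.
func loadCollections(path string) (pkg.PackageDatabase, error) {
	db := pkg.NewInMemoryDatabase(false)
	recipe := tree.NewGeneralRecipe(db)
	if err := recipe.Load(path); err != nil {
		return nil, err
	}
	return db, nil
}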

View File

@@ -1,136 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
// Recipe is a builder implementation.
// It reads a Tree and writes it out in human readable form (YAML), called a recipe.
// It also loads a tree (recipe) from YAML into a db (e.g. BoltDB), allowing it to be queried
// with the solver, using the package object.
package tree_test
import (
"io/ioutil"
"os"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
gentoo "github.com/mudler/luet/pkg/tree/builder/gentoo"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
. "github.com/mudler/luet/pkg/tree"
)
type FakeParser struct {
}
var _ = Describe("Recipe", func() {
for _, dbType := range []gentoo.MemoryDB{gentoo.InMemory, gentoo.BoltDB} {
Context("Tree generation and storing", func() {
It("parses and writes a tree", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
gb := gentoo.NewGentooBuilder(&gentoo.SimpleEbuildParser{}, 20, dbType)
tree, err := gb.Generate("../../tests/fixtures/overlay")
Expect(err).ToNot(HaveOccurred())
defer func() {
Expect(tree.Clean()).ToNot(HaveOccurred())
}()
Expect(len(tree.GetPackages())).To(Equal(10))
generalRecipe := NewGeneralRecipe(tree)
err = generalRecipe.Save(tmpdir)
Expect(err).ToNot(HaveOccurred())
})
})
Context("Reloading trees", func() {
It("writes and reads back the same tree", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
gb := gentoo.NewGentooBuilder(&gentoo.SimpleEbuildParser{}, 20, dbType)
tree, err := gb.Generate("../../tests/fixtures/overlay")
Expect(err).ToNot(HaveOccurred())
defer func() {
Expect(tree.Clean()).ToNot(HaveOccurred())
}()
Expect(len(tree.GetPackages())).To(Equal(10))
generalRecipe := NewGeneralRecipe(tree)
err = generalRecipe.Save(tmpdir)
Expect(err).ToNot(HaveOccurred())
db := pkg.NewInMemoryDatabase(false)
generalRecipe = NewGeneralRecipe(db)
generalRecipe.WithDatabase(nil)
Expect(generalRecipe.GetDatabase()).To(BeNil())
err = generalRecipe.Load(tmpdir)
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(10))
for _, p := range tree.World() {
Expect(p.GetName()).To(ContainSubstring("pinentry"))
}
})
})
Context("Simple solving with the fixture tree", func() {
It("writes and reads back the same tree", func() {
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
gb := gentoo.NewGentooBuilder(&gentoo.SimpleEbuildParser{}, 20, dbType)
tree, err := gb.Generate("../../tests/fixtures/overlay")
Expect(err).ToNot(HaveOccurred())
defer func() {
Expect(tree.Clean()).ToNot(HaveOccurred())
}()
Expect(len(tree.GetPackages())).To(Equal(10))
pack, err := tree.FindPackage(&pkg.DefaultPackage{
Name: "pinentry",
Version: "1.0.0-r2",
Category: "app-crypt",
}) // Note: the definition depends on pinentry-base without an explicit version
Expect(err).ToNot(HaveOccurred())
s := solver.NewSolver(solver.Options{Type: solver.SingleCoreSimple}, pkg.NewInMemoryDatabase(false), tree, tree)
solution, err := s.Install([]pkg.Package{pack})
Expect(err).ToNot(HaveOccurred())
Expect(len(solution)).To(Equal(14))
var allSol string
for _, sol := range solution {
allSol = allSol + "\n" + sol.ToString()
}
Expect(allSol).To(ContainSubstring("app-crypt/pinentry-base 1.0.0 installed"))
Expect(allSol).To(ContainSubstring("app-crypt/pinentry 1.1.0-r2 not installed"))
Expect(allSol).To(ContainSubstring("app-crypt/pinentry 1.0.0-r2 installed"))
})
})
}
})

View File

@@ -0,0 +1,7 @@
image: quay.io/mocaccino/extra
steps:
- touch /{{.Values.name}}
- touch /build-extra-{{.Values.foo}}
- touch /{{.Values.name}}-{{.Values.bb}}
unpack: true

View File

@@ -0,0 +1,13 @@
packages:
- name: "a"
category: "distro"
version: "0.1"
foo: "baz"
- name: "b"
category: "distro"
version: "0.3"
foo: "f"
- name: "c"
category: "distro"
version: "0.3"
foo: "bar"

View File

@@ -0,0 +1,2 @@
install:
- touch /finalize-{{.Values.name}}

View File

@@ -0,0 +1,4 @@
image: quay.io/mocaccino/extra
steps:
- touch /{{.Values.name}}
- touch /{{.Values.name}}-{{.Values.bb}}

View File

@@ -0,0 +1,3 @@
name: foo
category: test
version: "1.1"

tests/fixtures/collections/build.yaml
View File

@@ -0,0 +1,6 @@
image: quay.io/mocaccino/extra
steps:
- touch /{{.Values.name}}
- touch /build-extra-{{.Values.foo}}
unpack: true

View File

@@ -0,0 +1,13 @@
packages:
- name: "a"
category: "distro"
version: "0.1"
foo: "baz"
- name: "b"
category: "distro"
version: "0.3"
foo: "f"
- name: "c"
category: "distro"
version: "0.3"
foo: "bar"

View File

@@ -0,0 +1,2 @@
install:
- touch /finalize-{{.Values.name}}

tests/fixtures/plugin/test-foo
View File

@@ -0,0 +1,3 @@
#!/bin/bash
echo "$1" >> $EVENT_FILE
echo "$2" >> $PAYLOAD_FILE

View File

@@ -1,6 +1,6 @@
steps:
- tar xvf a-test-1.0.package.* -C ./
- ls -liah /a
- mv a /b
requires:
- name: "a"

View File

@@ -57,22 +57,22 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/c
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/c
#luet install -y --config $tmpdir/luet.yaml test/c@1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testReInstall() {
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertContains 'contains warning' "$output" 'Filtering out'
assertContains 'contains warning' "$output" 'No packages to install'
}
testUnInstall() {
luet uninstall --config $tmpdir/luet.yaml test/c
luet uninstall -y --config $tmpdir/luet.yaml test/c
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -80,10 +80,10 @@ testUnInstall() {
testInstallAgain() {
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'Filtering out'
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

View File

@@ -12,7 +12,7 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c@1.0 > /dev/null
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
@@ -62,22 +62,22 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/c-1.0
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/c@1.0
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testReInstall() {
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml =test/c-1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertContains 'contains warning' "$output" 'Filtering out'
assertContains 'contains warning' "$output" 'No packages to install'
}
testUnInstall() {
luet uninstall --config $tmpdir/luet.yaml test/c-1.0
luet uninstall -y --config $tmpdir/luet.yaml =test/c-1.0
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -85,10 +85,10 @@ testUnInstall() {
testInstallAgain() {
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml =test/c-1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'Filtering out'
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

View File

@@ -12,7 +12,7 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c@1.0 > /dev/null
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
@@ -65,22 +65,22 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/c-1.0
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/c@1.0
#luet install -y --config $tmpdir/luet.yaml test/c@1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testReInstall() {
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertContains 'contains warning' "$output" 'Filtering out'
assertContains 'contains warning' "$output" 'No packages to install'
}
testUnInstall() {
luet uninstall --config $tmpdir/luet.yaml test/c-1.0
luet uninstall -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -88,10 +88,10 @@ testUnInstall() {
testInstallAgain() {
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'Filtering out'
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

View File

@@ -12,7 +12,7 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c@1.0 > /dev/null
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
@@ -55,22 +55,22 @@ testRepo() {
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/c-1.0
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/c@1.0
#luet install -y --config $tmpdir/luet.yaml test/c@1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testReInstall() {
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertContains 'contains warning' "$output" 'Filtering out'
assertContains 'contains warning' "$output" 'No packages to install'
}
testUnInstall() {
luet uninstall --config $tmpdir/luet.yaml test/c-1.0
luet uninstall -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -78,10 +78,10 @@ testUnInstall() {
testInstallAgain() {
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
output=$(luet install -y --config $tmpdir/luet.yaml test/c@1.0)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertNotContains 'contains warning' "$output" 'Filtering out'
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

View File

@@ -57,15 +57,15 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/c
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/c
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package C installed' "[ -e '$tmpdir/testrootfs/c' ]"
}
testFullInstall() {
output=$(luet install --config $tmpdir/luet.yaml test/d test/f test/e test/a)
output=$(luet install -y --config $tmpdir/luet.yaml test/d test/f test/e test/a)
installst=$?
assertEquals 'cannot install' "$installst" "1"
assertTrue 'package D installed' "[ ! -e '$tmpdir/testrootfs/d' ]"
@@ -73,11 +73,11 @@ testFullInstall() {
}
testInstallAgain() {
output=$(luet install --solver-type qlearning --config $tmpdir/luet.yaml test/d test/f test/e test/a)
output=$(luet install -y --solver-type qlearning --config $tmpdir/luet.yaml test/d test/f test/e test/a)
installst=$?
echo "$output"
assertEquals 'install test successfully' "0" "$installst"
assertNotContains 'contains warning' "$output" 'Filtering out'
assertNotContains 'contains warning' "$output" 'No packages to install'
assertTrue 'package D installed' "[ -e '$tmpdir/testrootfs/d' ]"
assertTrue 'package F installed' "[ -e '$tmpdir/testrootfs/f' ]"
assertTrue 'package E not installed' "[ ! -e '$tmpdir/testrootfs/e' ]"

View File

@@ -59,8 +59,8 @@ EOF
testInstall() {
luet install --config $tmpdir/luet.yaml test/b
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml test/b
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package B installed' "[ -e '$tmpdir/testrootfs/b' ]"
@@ -71,7 +71,7 @@ testInstall() {
testUnInstall() {
luet uninstall --full --config $tmpdir/luet.yaml test/b
luet uninstall -y --full --config $tmpdir/luet.yaml test/b
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/b' ]"

View File

@@ -12,35 +12,35 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.0
buildst=$?
assertTrue 'create package B 1.0' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
assertEquals 'builds successfully' "$buildst" "0"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B 1.1' "[ -e '$tmpdir/testbuild/b-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.0' "[ -e '$tmpdir/testbuild/a-test-1.0.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.1' "[ -e '$tmpdir/testbuild/a-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.2
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.2
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.2' "[ -e '$tmpdir/testbuild/a-test-1.2.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
@@ -85,31 +85,31 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/b-1.0
luet install -y --config $tmpdir/luet.yaml test/b@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/test5' ]"
luet install --config $tmpdir/luet.yaml test/a-1.0
luet install -y --config $tmpdir/luet.yaml test/a@1.0
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install --config $tmpdir/luet.yaml test/a-1.1
luet install -y --config $tmpdir/luet.yaml test/a@1.1
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package keeps old A' "[ -e '$tmpdir/testrootfs/testaa' ]"
assertTrue 'package new A was not installed' "[ ! -e '$tmpdir/testrootfs/testlatest' ]"
luet install --config $tmpdir/luet.yaml test/c-1.0
luet install -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"
}
testUpgrade() {
upgrade=$(luet --config $tmpdir/luet.yaml upgrade)
upgrade=$(luet --config $tmpdir/luet.yaml upgrade -y)
installst=$?
echo "$upgrade"
assertEquals 'install test successfully' "$installst" "0"
@@ -117,8 +117,8 @@ testUpgrade() {
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/newc' ]"
assertTrue 'package uninstalled A' "[ ! -e '$tmpdir/testrootfs/testaa' ]"
assertTrue 'package installed new A' "[ -e '$tmpdir/testrootfs/testlatest' ]"
assertNotContains 'does not contain test/c-1.0' "$upgrade" 'test/c-1.0'
assertNotContains 'does not attempt to download test/c-1.0' "$upgrade" 'test/c-1.0 downloaded'
assertNotContains 'does not contain test/c@1.0' "$upgrade" 'test/c-1.0'
assertNotContains 'does not attempt to download test/c@1.0' "$upgrade" 'test/c-1.0 downloaded'
}
# Load shUnit2.

View File

@@ -11,35 +11,35 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.0
buildst=$?
assertTrue 'create package B 1.0' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
assertEquals 'builds successfully' "$buildst" "0"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B 1.1' "[ -e '$tmpdir/testbuild/b-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.0' "[ -e '$tmpdir/testbuild/a-test-1.0.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.1' "[ -e '$tmpdir/testbuild/a-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.2
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.2
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.2' "[ -e '$tmpdir/testbuild/a-test-1.2.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
@@ -84,24 +84,24 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/b-1.0
luet install -y --config $tmpdir/luet.yaml test/b@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/test5' ]"
luet install --config $tmpdir/luet.yaml test/a-1.0
luet install -y --config $tmpdir/luet.yaml test/a@1.0
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install --config $tmpdir/luet.yaml test/a-1.1
luet install -y --config $tmpdir/luet.yaml test/a@1.1
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package keeps old A' "[ -e '$tmpdir/testrootfs/testaa' ]"
assertTrue 'package new A was not installed' "[ ! -e '$tmpdir/testrootfs/testlatest' ]"
luet install --config $tmpdir/luet.yaml test/c-1.0
luet install -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"

View File

@@ -12,7 +12,7 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/finalizers" --destination $tmpdir/testbuild --compression gzip --all > /dev/null
luet build --tree "$ROOT_DIR/tests/fixtures/finalizers" --destination $tmpdir/testbuild --compression gzip --all
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package' "[ -e '$tmpdir/testbuild/alpine-seed-1.0.package.tar.gz' ]"
@@ -56,8 +56,8 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml seed/alpine
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml seed/alpine
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"

View File

@@ -56,8 +56,8 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml seed/alpine
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml seed/alpine
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"

View File

@@ -21,7 +21,7 @@ testBuild() {
assertEquals 'builds successfully' "0" "$buildst"
luet build --tree "$ROOT_DIR/tests/fixtures/versioning" --destination $tmpdir/testbuild --compression gzip '>=dev-libs/libsigc++-2-0'
luet build --tree "$ROOT_DIR/tests/fixtures/versioning" --destination $tmpdir/testbuild --compression gzip '=dev-libs/libsigc++-2-2.10.1+1'
buildst=$?
assertEquals 'builds successfully' "0" "$buildst"
}
@@ -64,13 +64,13 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml media-libs/libsndfile
luet install -y --config $tmpdir/luet.yaml media-libs/libsndfile
installst=$?
assertEquals 'install test successfully' "0" "$installst"
}
testInstall2() {
luet install --config $tmpdir/luet.yaml '>=dev-libs/libsigc++-2-0'
luet install -y --config $tmpdir/luet.yaml '=dev-libs/libsigc++-2-2.10.1+1'
installst=$?
assertEquals 'install test successfully' "0" "$installst"
}

View File

@@ -63,7 +63,7 @@ testInstall() {
docker run --name luet-runtime-test \
-v /tmp:/tmp \
-v $tmpdir/luet.yaml:/etc/luet/luet.yaml:ro \
luet:test install seed/alpine
luet:test install -y seed/alpine
installst=$?
assertEquals 'install test successfully' "0" "$installst"

View File

@@ -12,35 +12,35 @@ oneTimeTearDown() {
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.0
buildst=$?
assertTrue 'create package B 1.0' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
assertEquals 'builds successfully' "$buildst" "0"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/b@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B 1.1' "[ -e '$tmpdir/testbuild/b-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.0' "[ -e '$tmpdir/testbuild/a-test-1.0.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.1
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.1
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.1' "[ -e '$tmpdir/testbuild/a-test-1.1.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a-1.2
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/a@1.2
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package A 1.2' "[ -e '$tmpdir/testbuild/a-test-1.2.package.tar.gz' ]"
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c-1.0
luet build --tree "$ROOT_DIR/tests/fixtures/upgrade_integration" --destination $tmpdir/testbuild --compression gzip test/c@1.0
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package C 1.0' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
@@ -85,31 +85,31 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/b-1.0
luet install -y --config $tmpdir/luet.yaml test/b@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/test5' ]"
luet install --config $tmpdir/luet.yaml test/a-1.0
luet install -y --config $tmpdir/luet.yaml test/a@1.0
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install --config $tmpdir/luet.yaml test/a-1.1
luet install -y --config $tmpdir/luet.yaml test/a@1.1
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/testaa' ]"
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package keeps old A' "[ -e '$tmpdir/testrootfs/testaa' ]"
assertTrue 'package new A was not installed' "[ ! -e '$tmpdir/testrootfs/testlatest' ]"
luet install --config $tmpdir/luet.yaml test/c-1.0
luet install -y --config $tmpdir/luet.yaml test/c@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"
}
testUpgrade() {
upgrade=$(luet --config $tmpdir/luet.yaml upgrade --universe --clean)
upgrade=$(luet --config $tmpdir/luet.yaml upgrade -y --universe --clean)
installst=$?
assertEquals 'install test successfully' "$installst" "0"
echo "$upgrade"
@@ -119,8 +119,8 @@ testUpgrade() {
assertTrue 'package installed new A' "[ -e '$tmpdir/testrootfs/testlatest' ]"
# It does remove C as well, no other package depends on it.
assertContains 'does contain test/c-1.0' "$upgrade" 'test/c-1.0'
assertNotContains 'does not attempt to download test/c-1.0' "$upgrade" 'test/c-1.0 downloaded'
#assertContains 'does contain test/c-1.0' "$upgrade" 'test/c-1.0'
#assertNotContains 'does not attempt to download test/c-1.0' "$upgrade" 'test/c-1.0 downloaded'
}
# Load shUnit2.

View File

@@ -74,7 +74,7 @@ testInstall() {
mkdir $tmpdir/testrootfs/etc/a -p
echo "fakeconf" > $tmpdir/testrootfs/etc/a/conf
luet install --config $tmpdir/luet.yaml test/a
luet install -y --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
@@ -86,7 +86,7 @@ testInstall() {
testUnInstall() {
luet uninstall --full --config $tmpdir/luet.yaml test/a
luet uninstall -y --full --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"

View File

@@ -74,7 +74,7 @@ testInstall() {
mkdir $tmpdir/testrootfs/opt/etc -p
echo "fakeconf" > $tmpdir/testrootfs/opt/etc/conf
luet install --config $tmpdir/luet.yaml test/a
luet install -y --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
@@ -86,7 +86,7 @@ testInstall() {
testUnInstall() {
luet uninstall --full --config $tmpdir/luet.yaml test/a
luet uninstall -y --full --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"

View File

@@ -75,7 +75,7 @@ EOF
}
testUpgrade() {
luet install --config $tmpdir/luet.yaml test/b-1.0
luet install -y --config $tmpdir/luet.yaml test/b@1.0
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/test5' ]"
@@ -100,13 +100,13 @@ EOF
res=$?
assertEquals 'config test successfully' "$res" "0"
luet upgrade --sync --config $tmpdir/luet.yaml
luet upgrade -y --sync --config $tmpdir/luet.yaml
installst=$?
assertEquals 'upgrade test successfully' "$installst" "0"
assertTrue 'package uninstalled B' "[ ! -e '$tmpdir/testrootfs/test5' ]"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/newc' ]"
content=$(luet upgrade --sync --config $tmpdir/luet.yaml)
content=$(luet upgrade -y --sync --config $tmpdir/luet.yaml)
installst=$?
assertNotContains 'did not upgrade' "$content" "Uninstalling"
}

View File

@@ -56,7 +56,7 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml test/pkgAsym test/pkgBsym
luet install -y --config $tmpdir/luet.yaml test/pkgAsym test/pkgBsym
installst=$?
assertEquals 'install test successfully' "$installst" "0"
ls -liah $tmpdir/testrootfs/

View File

@@ -56,7 +56,7 @@ EOF
}
testInstall() {
$ROOT_DIR/tests/integration/bin/luet install --config $tmpdir/luet.yaml test/caps-0.1 test/caps2-0.1
$ROOT_DIR/tests/integration/bin/luet install -y --config $tmpdir/luet.yaml test/caps@0.1 test/caps2@0.1
installst=$?
assertEquals 'install test successfully' "$installst" "0"

View File

@@ -75,7 +75,7 @@ testInstall() {
mkdir $tmpdir/testrootfs/etc/a -p
echo "fakeconf" > $tmpdir/testrootfs/etc/a/conf
luet install --config $tmpdir/luet.yaml test/a
luet install -y --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
@@ -87,7 +87,7 @@ testInstall() {
testUnInstall() {
luet uninstall --full --config $tmpdir/luet.yaml test/a
luet uninstall -y --full --config $tmpdir/luet.yaml test/a
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"

View File

@@ -58,7 +58,7 @@ EOF
testDatabase() {
luet database create --config $tmpdir/luet.yaml $tmpdir/testbuild/c-test-1.0.metadata.yaml
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
createst=$?
assertEquals 'created package successfully' "$createst" "0"
assertTrue 'package not installed' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -69,18 +69,18 @@ testDatabase() {
assertContains 'contains test/c-1.0' "$installed" 'test/c-1.0'
touch $tmpdir/testrootfs/c
luet database remove --config $tmpdir/luet.yaml test/c-1.0
luet database remove --config $tmpdir/luet.yaml test/c@1.0
removetest=$?
assertEquals 'package removed successfully' "$removetest" "0"
assertTrue 'file not touched' "[ -e '$tmpdir/testrootfs/c' ]"
luet database create --config $tmpdir/luet.yaml $tmpdir/testbuild/c-test-1.0.metadata.yaml
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
createst=$?
assertEquals 'created package successfully' "$createst" "0"
assertTrue 'file still present' "[ -e '$tmpdir/testrootfs/c' ]"
luet uninstall --config $tmpdir/luet.yaml test/c
luet uninstall -y --config $tmpdir/luet.yaml test/c
installst=$?
assertEquals 'uninstall test successfully' "$installst" "0"
assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"

View File

@@ -56,8 +56,8 @@ EOF
}
testInstall() {
luet install --config $tmpdir/luet.yaml seed/alpine
#luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
luet install -y --config $tmpdir/luet.yaml seed/alpine
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"

83
tests/integration/20_plugin.sh Executable file
View File

@@ -0,0 +1,83 @@
#!/bin/bash
export LUET_NOLOCK=true
export PATH=$PATH:$ROOT_DIR/tests/fixtures/plugin
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
export EVENT_FILE=$tmpdir/events.txt
export PAYLOAD_FILE=$tmpdir/payloads.txt
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
mkdir $tmpdir/testbuild
luet build --plugin test-foo --tree "$ROOT_DIR/tests/fixtures/templatedfinalizers" --destination $tmpdir/testbuild --compression gzip --all
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertContains 'event file contains corresponding event' "$(cat $EVENT_FILE)" 'package.pre.build'
assertContains 'event file contains corresponding event' "$(cat $PAYLOAD_FILE)" 'alpine'
}
testRepo() {
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet --plugin test-foo create-repo --tree "$ROOT_DIR/tests/fixtures/templatedfinalizers" \
--output $tmpdir/testbuild \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--type disk > /dev/null
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertContains 'event file contains corresponding event' "$(cat $EVENT_FILE)" 'repository.pre.build'
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "disk"
enable: true
urls:
- "$tmpdir/testbuild"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
luet --plugin test-foo install -y --config $tmpdir/luet.yaml seed/alpine
#luet install -y --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/bin/busybox' ]"
assertTrue 'finalizer runs' "[ -e '$tmpdir/testrootfs/tmp/foo' ]"
assertEquals 'finalizer printed used shell' "$(cat $tmpdir/testrootfs/tmp/foo)" 'alpine'
assertContains 'event file contains corresponding event' "$(cat $EVENT_FILE)" 'package.install'
}
testCleanup() {
luet cleanup --config $tmpdir/luet.yaml
installst=$?
assertEquals 'install test successfully' "$installst" "0"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -0,0 +1,121 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/collections" --destination $tmpdir/testbuild --compression gzip --all
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B' "[ -e '$tmpdir/testbuild/b-distro-0.3.package.tar.gz' ]"
assertTrue 'create package A' "[ -e '$tmpdir/testbuild/a-distro-0.1.package.tar.gz' ]"
assertTrue 'create package C' "[ -e '$tmpdir/testbuild/c-distro-0.3.package.tar.gz' ]"
}
testRepo() {
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet create-repo --tree "$ROOT_DIR/tests/fixtures/collections" \
--output $tmpdir/testbuild \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--type disk
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "disk"
enable: true
urls:
- "$tmpdir/testbuild"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
luet install -y --config $tmpdir/luet.yaml distro/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/a' ]"
# Build time can interpolate on fields which aren't package properties.
assertTrue 'extra field on A' "[ -e '$tmpdir/testrootfs/build-extra-baz' ]"
# Finalizers can interpolate only on package field. No extra fields are allowed at this time.
assertTrue 'finalizer executed on A' "[ -e '$tmpdir/testrootfs/finalize-a' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/a-0.1' "$installed" 'distro/a-0.1'
luet uninstall -y --config $tmpdir/luet.yaml distro/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
# We do the same check for the others
luet install -y --config $tmpdir/luet.yaml distro/b
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/b' ]"
assertTrue 'extra field on B' "[ -e '$tmpdir/testrootfs/build-extra-f' ]"
assertTrue 'finalizer executed on B' "[ -e '$tmpdir/testrootfs/finalize-b' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/b-0.3' "$installed" 'distro/b-0.3'
luet uninstall -y --config $tmpdir/luet.yaml distro/b
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install -y --config $tmpdir/luet.yaml distro/c
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'extra field on C' "[ -e '$tmpdir/testrootfs/build-extra-bar' ]"
assertTrue 'finalizer executed on C' "[ -e '$tmpdir/testrootfs/finalize-c' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/c-0.3' "$installed" 'distro/c-0.3'
luet uninstall -y --config $tmpdir/luet.yaml distro/c
installst=$?
assertEquals 'install test successfully' "$installst" "0"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -0,0 +1,133 @@
#!/bin/bash
export LUET_NOLOCK=true
oneTimeSetUp() {
export tmpdir="$(mktemp -d)"
}
oneTimeTearDown() {
rm -rf "$tmpdir"
}
testBuild() {
cat <<EOF > $tmpdir/default.yaml
bb: "ttt"
EOF
mkdir $tmpdir/testbuild
luet build --tree "$ROOT_DIR/tests/fixtures/build_values" --values $tmpdir/default.yaml --destination $tmpdir/testbuild --compression gzip --all
buildst=$?
assertEquals 'builds successfully' "$buildst" "0"
assertTrue 'create package B' "[ -e '$tmpdir/testbuild/b-distro-0.3.package.tar.gz' ]"
assertTrue 'create package A' "[ -e '$tmpdir/testbuild/a-distro-0.1.package.tar.gz' ]"
assertTrue 'create package C' "[ -e '$tmpdir/testbuild/c-distro-0.3.package.tar.gz' ]"
assertTrue 'create package foo' "[ -e '$tmpdir/testbuild/foo-test-1.1.package.tar.gz' ]"
}
testRepo() {
assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
luet create-repo --tree "$ROOT_DIR/tests/fixtures/build_values" \
--output $tmpdir/testbuild \
--packages $tmpdir/testbuild \
--name "test" \
--descr "Test Repo" \
--urls $tmpdir/testrootfs \
--type disk
createst=$?
assertEquals 'create repo successfully' "$createst" "0"
assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}
testConfig() {
mkdir $tmpdir/testrootfs
cat <<EOF > $tmpdir/luet.yaml
general:
debug: true
system:
rootfs: $tmpdir/testrootfs
database_path: "/"
database_engine: "boltdb"
config_from_host: true
repositories:
- name: "main"
type: "disk"
enable: true
urls:
- "$tmpdir/testbuild"
EOF
luet config --config $tmpdir/luet.yaml
res=$?
assertEquals 'config test successfully' "$res" "0"
}
testInstall() {
luet install -y --config $tmpdir/luet.yaml distro/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed A' "[ -e '$tmpdir/testrootfs/a' ]"
# Build time can interpolate on fields which aren't package properties.
assertTrue 'extra field on A' "[ -e '$tmpdir/testrootfs/build-extra-baz' ]"
assertTrue 'package installed A interpolated with values' "[ -e '$tmpdir/testrootfs/a-ttt' ]"
# Finalizers can interpolate only on package field. No extra fields are allowed at this time.
assertTrue 'finalizer executed on A' "[ -e '$tmpdir/testrootfs/finalize-a' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/a-0.1' "$installed" 'distro/a-0.1'
luet uninstall -y --config $tmpdir/luet.yaml distro/a
installst=$?
assertEquals 'install test successfully' "$installst" "0"
# We do the same check for the others
luet install -y --config $tmpdir/luet.yaml distro/b
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed B' "[ -e '$tmpdir/testrootfs/b' ]"
assertTrue 'package installed B interpolated with values' "[ -e '$tmpdir/testrootfs/b-ttt' ]"
assertTrue 'extra field on B' "[ -e '$tmpdir/testrootfs/build-extra-f' ]"
assertTrue 'finalizer executed on B' "[ -e '$tmpdir/testrootfs/finalize-b' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/b-0.3' "$installed" 'distro/b-0.3'
luet uninstall -y --config $tmpdir/luet.yaml distro/b
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install -y --config $tmpdir/luet.yaml distro/c
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed C' "[ -e '$tmpdir/testrootfs/c' ]"
assertTrue 'extra field on C' "[ -e '$tmpdir/testrootfs/build-extra-bar' ]"
assertTrue 'package installed C interpolated with values' "[ -e '$tmpdir/testrootfs/c-ttt' ]"
assertTrue 'finalizer executed on C' "[ -e '$tmpdir/testrootfs/finalize-c' ]"
installed=$(luet --config $tmpdir/luet.yaml search --installed .)
searchst=$?
assertEquals 'search exists successfully' "$searchst" "0"
assertContains 'contains distro/c-0.3' "$installed" 'distro/c-0.3'
luet uninstall -y --config $tmpdir/luet.yaml distro/c
installst=$?
assertEquals 'install test successfully' "$installst" "0"
luet install -y --config $tmpdir/luet.yaml test/foo
installst=$?
assertEquals 'install test successfully' "$installst" "0"
assertTrue 'package installed foo' "[ -e '$tmpdir/testrootfs/foo' ]"
assertTrue 'package installed foo interpolated with values' "[ -e '$tmpdir/testrootfs/foo-ttt' ]"
}
# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2

View File

@@ -1,6 +1,7 @@
#!/bin/bash
set -e
export LUET_YES=true
export ROOT_DIR="$(git rev-parse --show-toplevel)"
pushd $ROOT_DIR

15
vendor/github.com/asaskevich/govalidator/.gitignore generated vendored Normal file
View File

@@ -0,0 +1,15 @@
bin/
.idea/
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out

18
vendor/github.com/asaskevich/govalidator/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,18 @@
dist: bionic
language: go
env: GO111MODULE=on GOFLAGS='-mod vendor'
install: true
email: false
go:
- 1.10
- 1.11
- 1.12
- 1.13
- tip
before_script:
- go install github.com/golangci/golangci-lint/cmd/golangci-lint
script:
- golangci-lint run # run a bunch of code checkers/linters in parallel
- go test -v -race ./... # Run all the tests with the race detector enabled

View File

@@ -0,0 +1,63 @@
#### Support
If you do have a contribution to the package, feel free to create a Pull Request or an Issue.
#### What to contribute
If you don't know what to do, there are some features and functions that need to be done
- [ ] Refactor code
- [ ] Edit docs and [README](https://github.com/asaskevich/govalidator/README.md): spellcheck, grammar and typo check
- [ ] Create an up-to-date list of contributors and projects that are currently using this package
- [ ] Resolve [issues and bugs](https://github.com/asaskevich/govalidator/issues)
- [ ] Keep the [list of functions](https://github.com/asaskevich/govalidator#list-of-functions) up to date
- [ ] Update the [list of validators](https://github.com/asaskevich/govalidator#validatestruct-2) that are available for `ValidateStruct` and add new ones
- [ ] Implement new validators: `IsFQDN`, `IsIMEI`, `IsPostalCode`, `IsISIN`, `IsISRC` etc
- [x] Implement [validation by maps](https://github.com/asaskevich/govalidator/issues/224)
- [ ] Implement fuzzing testing
- [ ] Implement some struct/map/array utilities
- [ ] Implement map/array validation
- [ ] Implement benchmarking
- [ ] Implement batch of examples
- [ ] Look at forks for new features and fixes
#### Advice
Feel free to create what you want, but keep in mind when you implement new features:
- Code must be clear and readable, and names of variables/constants must clearly describe what they are doing
- Public functions must be documented in the source file and added to the list of available functions in README.md
- There must be unit tests for any new functions and improvements
## Financial contributions
We also welcome financial contributions in full transparency on our [open collective](https://opencollective.com/govalidator).
Anyone can file an expense. If the expense makes sense for the development of the community, it will be "merged" in the ledger of our open collective by the core contributors and the person who filed the expense will be reimbursed.
## Credits
### Contributors
Thank you to all the people who have already contributed to govalidator!
<a href="https://github.com/asaskevich/govalidator/graphs/contributors"><img src="https://opencollective.com/govalidator/contributors.svg?width=890" /></a>
### Backers
Thank you to all our backers! [[Become a backer](https://opencollective.com/govalidator#backer)]
<a href="https://opencollective.com/govalidator#backers" target="_blank"><img src="https://opencollective.com/govalidator/backers.svg?width=890"></a>
### Sponsors
Thank you to all our sponsors! (please ask your company to also support this open source project by [becoming a sponsor](https://opencollective.com/govalidator#sponsor))
<a href="https://opencollective.com/govalidator/sponsor/0/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/1/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/2/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/3/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/4/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/5/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/6/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/7/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/8/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/9/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/9/avatar.svg"></a>

21
vendor/github.com/asaskevich/govalidator/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Alex Saskevich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

616
vendor/github.com/asaskevich/govalidator/README.md generated vendored Normal file
View File

@@ -0,0 +1,616 @@
govalidator
===========
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/asaskevich/govalidator?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![GoDoc](https://godoc.org/github.com/asaskevich/govalidator?status.png)](https://godoc.org/github.com/asaskevich/govalidator) [![Coverage Status](https://img.shields.io/coveralls/asaskevich/govalidator.svg)](https://coveralls.io/r/asaskevich/govalidator?branch=master) [![wercker status](https://app.wercker.com/status/1ec990b09ea86c910d5f08b0e02c6043/s "wercker status")](https://app.wercker.com/project/bykey/1ec990b09ea86c910d5f08b0e02c6043)
[![Build Status](https://travis-ci.org/asaskevich/govalidator.svg?branch=master)](https://travis-ci.org/asaskevich/govalidator) [![Go Report Card](https://goreportcard.com/badge/github.com/asaskevich/govalidator)](https://goreportcard.com/report/github.com/asaskevich/govalidator) [![GoSearch](http://go-search.org/badge?id=github.com%2Fasaskevich%2Fgovalidator)](http://go-search.org/view?id=github.com%2Fasaskevich%2Fgovalidator) [![Backers on Open Collective](https://opencollective.com/govalidator/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/govalidator/sponsors/badge.svg)](#sponsors) [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator?ref=badge_shield)
A package of validators and sanitizers for strings, structs and collections. Based on [validator.js](https://github.com/chriso/validator.js).
#### Installation
Make sure that Go is installed on your computer.
Type the following command in your terminal:
go get github.com/asaskevich/govalidator
or you can get a specific release of the package with `gopkg.in`:
go get gopkg.in/asaskevich/govalidator.v10
After that, the package is ready to use.
#### Import package in your project
Add following line in your `*.go` file:
```go
import "github.com/asaskevich/govalidator"
```
If you don't want to write out the long `govalidator` name, you can alias the import like this:
```go
import (
valid "github.com/asaskevich/govalidator"
)
```
#### Activate behavior to require all fields have a validation tag by default
`SetFieldsRequiredByDefault` causes validation to fail when struct fields do not include validations or are not explicitly marked as exempt (using `valid:"-"` or `valid:"email,optional"`). A good place to activate this is a package init function or the main() function.
`SetNilPtrAllowedByRequired` causes validation to pass when struct fields marked by `required` are set to nil. This is disabled by default for consistency, but some packages that need to distinguish between the `nil` and `zero value` states can use this. If disabled, both `nil` and `zero` values cause validation errors.
```go
import "github.com/asaskevich/govalidator"
func init() {
govalidator.SetFieldsRequiredByDefault(true)
}
```
Here's some code to explain it:
```go
// this struct definition will fail govalidator.ValidateStruct() (and the field values do not matter):
type exampleStruct struct {
Name string ``
Email string `valid:"email"`
}
// this, however, will only fail when Email is empty or an invalid email address:
type exampleStruct2 struct {
Name string `valid:"-"`
Email string `valid:"email"`
}
// lastly, this will only fail when Email is an invalid email address but not when it's empty:
type exampleStruct3 struct {
Name string `valid:"-"`
Email string `valid:"email,optional"`
}
```
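As a minimal sketch of the other switch described above, `SetNilPtrAllowedByRequired` (listed in the function list below) can be enabled the same way; the struct and field names here are illustrative only:
```go
import "github.com/asaskevich/govalidator"

func init() {
	// Assumption for this sketch: allow `required` pointer fields to stay nil
	// instead of failing validation (this behavior is off by default).
	govalidator.SetNilPtrAllowedByRequired(true)
}

// Illustrative struct: with the flag above, a nil Nickname no longer fails `required`.
type profile struct {
	Nickname *string `valid:"required"`
}
```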
#### Recent breaking changes (see [#123](https://github.com/asaskevich/govalidator/pull/123))
##### Custom validator function signature
A context was added as the second parameter; for structs this is the object being validated, which makes dependent validation possible.
```go
import "github.com/asaskevich/govalidator"
// old signature
func(i interface{}) bool
// new signature
func(i interface{}, o interface{}) bool
```
##### Adding a custom validator
This was changed to prevent data races when accessing custom validators.
```go
import "github.com/asaskevich/govalidator"
// before
govalidator.CustomTypeTagMap["customByteArrayValidator"] = func(i interface{}, o interface{}) bool {
// ...
}
// after
govalidator.CustomTypeTagMap.Set("customByteArrayValidator", func(i interface{}, o interface{}) bool {
// ...
})
```
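As a hedged usage sketch (the struct and field names are made up), a custom type validator registered this way is then referenced by its registered name inside a `valid` tag:
```go
type image struct {
	Raw []byte `valid:"customByteArrayValidator"`
}

// Validate an instance; ValidateStruct returns (bool, error) as in the list below.
ok, err := govalidator.ValidateStruct(image{Raw: []byte{0x1}})
if err != nil {
	println("error: " + err.Error())
}
println(ok)
```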
#### List of functions:
```go
func Abs(value float64) float64
func BlackList(str, chars string) string
func ByteLength(str string, params ...string) bool
func CamelCaseToUnderscore(str string) string
func Contains(str, substring string) bool
func Count(array []interface{}, iterator ConditionIterator) int
func Each(array []interface{}, iterator Iterator)
func ErrorByField(e error, field string) string
func ErrorsByField(e error) map[string]string
func Filter(array []interface{}, iterator ConditionIterator) []interface{}
func Find(array []interface{}, iterator ConditionIterator) interface{}
func GetLine(s string, index int) (string, error)
func GetLines(s string) []string
func HasLowerCase(str string) bool
func HasUpperCase(str string) bool
func HasWhitespace(str string) bool
func HasWhitespaceOnly(str string) bool
func InRange(value interface{}, left interface{}, right interface{}) bool
func InRangeFloat32(value, left, right float32) bool
func InRangeFloat64(value, left, right float64) bool
func InRangeInt(value, left, right interface{}) bool
func IsASCII(str string) bool
func IsAlpha(str string) bool
func IsAlphanumeric(str string) bool
func IsBase64(str string) bool
func IsByteLength(str string, min, max int) bool
func IsCIDR(str string) bool
func IsCRC32(str string) bool
func IsCRC32b(str string) bool
func IsCreditCard(str string) bool
func IsDNSName(str string) bool
func IsDataURI(str string) bool
func IsDialString(str string) bool
func IsDivisibleBy(str, num string) bool
func IsEmail(str string) bool
func IsExistingEmail(email string) bool
func IsFilePath(str string) (bool, int)
func IsFloat(str string) bool
func IsFullWidth(str string) bool
func IsHalfWidth(str string) bool
func IsHash(str string, algorithm string) bool
func IsHexadecimal(str string) bool
func IsHexcolor(str string) bool
func IsHost(str string) bool
func IsIP(str string) bool
func IsIPv4(str string) bool
func IsIPv6(str string) bool
func IsISBN(str string, version int) bool
func IsISBN10(str string) bool
func IsISBN13(str string) bool
func IsISO3166Alpha2(str string) bool
func IsISO3166Alpha3(str string) bool
func IsISO4217(str string) bool
func IsISO693Alpha2(str string) bool
func IsISO693Alpha3b(str string) bool
func IsIn(str string, params ...string) bool
func IsInRaw(str string, params ...string) bool
func IsInt(str string) bool
func IsJSON(str string) bool
func IsLatitude(str string) bool
func IsLongitude(str string) bool
func IsLowerCase(str string) bool
func IsMAC(str string) bool
func IsMD4(str string) bool
func IsMD5(str string) bool
func IsMagnetURI(str string) bool
func IsMongoID(str string) bool
func IsMultibyte(str string) bool
func IsNatural(value float64) bool
func IsNegative(value float64) bool
func IsNonNegative(value float64) bool
func IsNonPositive(value float64) bool
func IsNotNull(str string) bool
func IsNull(str string) bool
func IsNumeric(str string) bool
func IsPort(str string) bool
func IsPositive(value float64) bool
func IsPrintableASCII(str string) bool
func IsRFC3339(str string) bool
func IsRFC3339WithoutZone(str string) bool
func IsRGBcolor(str string) bool
func IsRequestURI(rawurl string) bool
func IsRequestURL(rawurl string) bool
func IsRipeMD128(str string) bool
func IsRipeMD160(str string) bool
func IsRsaPub(str string, params ...string) bool
func IsRsaPublicKey(str string, keylen int) bool
func IsSHA1(str string) bool
func IsSHA256(str string) bool
func IsSHA384(str string) bool
func IsSHA512(str string) bool
func IsSSN(str string) bool
func IsSemver(str string) bool
func IsTiger128(str string) bool
func IsTiger160(str string) bool
func IsTiger192(str string) bool
func IsTime(str string, format string) bool
func IsType(v interface{}, params ...string) bool
func IsURL(str string) bool
func IsUTFDigit(str string) bool
func IsUTFLetter(str string) bool
func IsUTFLetterNumeric(str string) bool
func IsUTFNumeric(str string) bool
func IsUUID(str string) bool
func IsUUIDv3(str string) bool
func IsUUIDv4(str string) bool
func IsUUIDv5(str string) bool
func IsUnixTime(str string) bool
func IsUpperCase(str string) bool
func IsVariableWidth(str string) bool
func IsWhole(value float64) bool
func LeftTrim(str, chars string) string
func Map(array []interface{}, iterator ResultIterator) []interface{}
func Matches(str, pattern string) bool
func MaxStringLength(str string, params ...string) bool
func MinStringLength(str string, params ...string) bool
func NormalizeEmail(str string) (string, error)
func PadBoth(str string, padStr string, padLen int) string
func PadLeft(str string, padStr string, padLen int) string
func PadRight(str string, padStr string, padLen int) string
func PrependPathToErrors(err error, path string) error
func Range(str string, params ...string) bool
func RemoveTags(s string) string
func ReplacePattern(str, pattern, replace string) string
func Reverse(s string) string
func RightTrim(str, chars string) string
func RuneLength(str string, params ...string) bool
func SafeFileName(str string) string
func SetFieldsRequiredByDefault(value bool)
func SetNilPtrAllowedByRequired(value bool)
func Sign(value float64) float64
func StringLength(str string, params ...string) bool
func StringMatches(s string, params ...string) bool
func StripLow(str string, keepNewLines bool) string
func ToBoolean(str string) (bool, error)
func ToFloat(str string) (float64, error)
func ToInt(value interface{}) (res int64, err error)
func ToJSON(obj interface{}) (string, error)
func ToString(obj interface{}) string
func Trim(str, chars string) string
func Truncate(str string, length int, ending string) string
func TruncatingErrorf(str string, args ...interface{}) error
func UnderscoreToCamelCase(s string) string
func ValidateMap(inputMap map[string]interface{}, validationMap map[string]interface{}) (bool, error)
func ValidateStruct(s interface{}) (bool, error)
func WhiteList(str, chars string) string
type ConditionIterator
type CustomTypeValidator
type Error
func (e Error) Error() string
type Errors
func (es Errors) Error() string
func (es Errors) Errors() []error
type ISO3166Entry
type ISO693Entry
type InterfaceParamValidator
type Iterator
type ParamValidator
type ResultIterator
type UnsupportedTypeError
func (e *UnsupportedTypeError) Error() string
type Validator
```
#### Examples
###### IsURL
```go
println(govalidator.IsURL(`http://user@pass:domain.com/path/page`))
```
###### IsType
```go
println(govalidator.IsType("Bob", "string"))
println(govalidator.IsType(1, "int"))
i := 1
println(govalidator.IsType(&i, "*int"))
```
IsType can be used through the tag `type` which is essential for map validation:
```go
type User struct {
    Name string `valid:"type(string)"`
    Age int `valid:"type(int)"`
    Meta interface{} `valid:"type(string)"`
}
result, err := govalidator.ValidateStruct(User{"Bob", 20, "meta"})
if err != nil {
    println("error: " + err.Error())
}
println(result)
```
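Because a plain `map[string]interface{}` carries no static type information, the `type` tag is what pins down the expected type during map validation. A minimal sketch, assuming the keys and template below (they are illustrative, not part of the package):
```go
mapTemplate := map[string]interface{}{
    "name": "type(string),required",
    "note": "type(string)",
}
input := map[string]interface{}{
    "name": "Bob",
    "note": "free-form text",
}
ok, err := govalidator.ValidateMap(input, mapTemplate)
if err != nil {
    println("error: " + err.Error())
}
println(ok)
```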
###### ToString
```go
type User struct {
    FirstName string
    LastName string
}
str := govalidator.ToString(&User{"John", "Juan"})
println(str)
```
###### Each, Map, Filter, Count for slices
Each iterates over the slice/array and calls Iterator for every item
```go
data := []interface{}{1, 2, 3, 4, 5}
var fn govalidator.Iterator = func(value interface{}, index int) {
    println(value.(int))
}
govalidator.Each(data, fn)
```
```go
data := []interface{}{1, 2, 3, 4, 5}
var fn govalidator.ResultIterator = func(value interface{}, index int) interface{} {
    return value.(int) * 3
}
_ = govalidator.Map(data, fn) // result = []interface{}{3, 6, 9, 12, 15}
```
```go
data := []interface{}{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
var fn govalidator.ConditionIterator = func(value interface{}, index int) bool {
    return value.(int)%2 == 0
}
_ = govalidator.Filter(data, fn) // result = []interface{}{2, 4, 6, 8, 10}
_ = govalidator.Count(data, fn) // result = 5
```
###### ValidateStruct [#2](https://github.com/asaskevich/govalidator/pull/2)
If you want to validate structs, you can use the `valid` tag on any field in your structure. All validators applied to a field are listed in one tag, separated by commas. If you want to skip validation, place `-` in your tag. If you need a validator that is not on the list below, you can add it like this:
```go
govalidator.TagMap["duck"] = govalidator.Validator(func(str string) bool {
    return str == "duck"
})
```
For completely custom validators (interface-based), see below.
Here is a list of available validators for struct fields (validator - used function):
```go
"email": IsEmail,
"url": IsURL,
"dialstring": IsDialString,
"requrl": IsRequestURL,
"requri": IsRequestURI,
"alpha": IsAlpha,
"utfletter": IsUTFLetter,
"alphanum": IsAlphanumeric,
"utfletternum": IsUTFLetterNumeric,
"numeric": IsNumeric,
"utfnumeric": IsUTFNumeric,
"utfdigit": IsUTFDigit,
"hexadecimal": IsHexadecimal,
"hexcolor": IsHexcolor,
"rgbcolor": IsRGBcolor,
"lowercase": IsLowerCase,
"uppercase": IsUpperCase,
"int": IsInt,
"float": IsFloat,
"null": IsNull,
"uuid": IsUUID,
"uuidv3": IsUUIDv3,
"uuidv4": IsUUIDv4,
"uuidv5": IsUUIDv5,
"creditcard": IsCreditCard,
"isbn10": IsISBN10,
"isbn13": IsISBN13,
"json": IsJSON,
"multibyte": IsMultibyte,
"ascii": IsASCII,
"printableascii": IsPrintableASCII,
"fullwidth": IsFullWidth,
"halfwidth": IsHalfWidth,
"variablewidth": IsVariableWidth,
"base64": IsBase64,
"datauri": IsDataURI,
"ip": IsIP,
"port": IsPort,
"ipv4": IsIPv4,
"ipv6": IsIPv6,
"dns": IsDNSName,
"host": IsHost,
"mac": IsMAC,
"latitude": IsLatitude,
"longitude": IsLongitude,
"ssn": IsSSN,
"semver": IsSemver,
"rfc3339": IsRFC3339,
"rfc3339WithoutZone": IsRFC3339WithoutZone,
"ISO3166Alpha2": IsISO3166Alpha2,
"ISO3166Alpha3": IsISO3166Alpha3,
```
Validators with parameters
```go
"range(min|max)": Range,
"length(min|max)": ByteLength,
"runelength(min|max)": RuneLength,
"stringlength(min|max)": StringLength,
"matches(pattern)": StringMatches,
"in(string1|string2|...|stringN)": IsIn,
"rsapub(keylength)" : IsRsaPub,
```
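For instance, these parameterized validators can be combined in struct tags just like the plain ones; a minimal sketch (the `Signup` struct and its values are illustrative, not part of the package):
```go
type Signup struct {
    Username string `valid:"stringlength(3|20),required"`
    Role string `valid:"in(admin|editor|viewer)"`
    Age string `valid:"range(18|130)"`
}
ok, err := govalidator.ValidateStruct(Signup{Username: "bob", Role: "editor", Age: "42"})
if err != nil {
    println("error: " + err.Error())
}
println(ok) // true when every field satisfies its parameterized validators
```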
Validators with parameters for any type
```go
"type(type)": IsType,
```
And here is a small example of usage:
```go
type Post struct {
    Title string `valid:"alphanum,required"`
    Message string `valid:"duck,ascii"`
    Message2 string `valid:"animal(dog)"`
    AuthorIP string `valid:"ipv4"`
    Date string `valid:"-"`
}
post := &Post{
Title: "My Example Post",
Message: "duck",
Message2: "dog",
AuthorIP: "123.234.54.3",
}
// Add your own struct validation tags
govalidator.TagMap["duck"] = govalidator.Validator(func(str string) bool {
    return str == "duck"
})
// Add your own struct validation tags with parameter
govalidator.ParamTagMap["animal"] = govalidator.ParamValidator(func(str string, params ...string) bool {
    species := params[0]
    return str == species
})
govalidator.ParamTagRegexMap["animal"] = regexp.MustCompile("^animal\\((\\w+)\\)$")
result, err := govalidator.ValidateStruct(post)
if err != nil {
println("error: " + err.Error())
}
println(result)
```
###### ValidateMap [#338](https://github.com/asaskevich/govalidator/pull/338)
If you want to validate maps, you pass both the map to be validated and a validation map that contains the same tags used in ValidateStruct; both maps have to be in the form `map[string]interface{}`.
Here is a small example of usage:
```go
var mapTemplate = map[string]interface{}{
"name":"required,alpha",
"family":"required,alpha",
"email":"required,email",
"cell-phone":"numeric",
"address":map[string]interface{}{
"line1":"required,alphanum",
"line2":"alphanum",
"postal-code":"numeric",
},
}
var inputMap = map[string]interface{}{
"name":"Bob",
"family":"Smith",
"email":"foo@bar.baz",
"address":map[string]interface{}{
"line1":"",
"line2":"",
"postal-code":"",
},
}
result, err := govalidator.ValidateMap(inputMap, mapTemplate)
if err != nil {
println("error: " + err.Error())
}
println(result)
```
###### WhiteList
```go
// Keep only the characters between "a" and "z", removing everything else
println(govalidator.WhiteList("a3a43a5a4a3a2a23a4a5a4a3a4", "a-z") == "aaaaaaaaaaaa")
```
###### Custom validation functions
Custom validation using your own domain-specific validators is also available - here's an example of how to use it:
```go
import "github.com/asaskevich/govalidator"
type CustomByteArray [6]byte // custom types are supported and can be validated
type StructWithCustomByteArray struct {
    ID CustomByteArray `valid:"customByteArrayValidator,customMinLengthValidator"` // multiple custom validators are possible as well and will be evaluated in sequence
    Email string `valid:"email"`
    CustomMinLength int `valid:"-"`
}
govalidator.CustomTypeTagMap.Set("customByteArrayValidator", func(i interface{}, context interface{}) bool {
    switch context.(type) { // you can type switch on the context interface being validated
    case StructWithCustomByteArray:
        // you can check and validate against some other field in the context,
        // return early or not validate against the context at all, your choice
    case SomeOtherType:
        // ... (SomeOtherType is just a placeholder for another context type you expect)
    default:
        // expecting some other type? Throw/panic here or continue
    }
    switch v := i.(type) { // type switch on the struct field being validated
    case CustomByteArray:
        for _, e := range v { // this validator checks that the byte array is not empty, i.e. not all zeroes
            if e != 0 {
                return true
            }
        }
    }
    return false
})
govalidator.CustomTypeTagMap.Set("customMinLengthValidator", func(i interface{}, context interface{}) bool {
    switch v := context.(type) { // this validates a field against the value in another field, i.e. dependent validation
    case StructWithCustomByteArray:
        return len(v.ID) >= v.CustomMinLength
    }
    return false
})
```
###### Loop over Error()
By default, `.Error()` returns all errors as a single string. To access each error individually you can do this:
```go
if err != nil {
    errs := err.(govalidator.Errors).Errors()
    for _, e := range errs {
        fmt.Println(e.Error())
    }
}
```
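Assuming the package's `ErrorsByField` helper (which should flatten a validation error into a `map[string]string` keyed by field name), a per-field lookup might look like this, with `err` again coming from a `ValidateStruct` call:
```go
if err != nil {
    for field, msg := range govalidator.ErrorsByField(err) {
        fmt.Printf("%s: %s\n", field, msg)
    }
}
```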
###### Custom error messages
Custom error messages are supported via annotations by adding the `~` separator - here's an example of how to use it:
```go
type Ticket struct {
    Id int64 `json:"id"`
    FirstName string `json:"firstname" valid:"required~First name is blank"`
}
```
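As a quick check of what the annotation produces, a hedged sketch of validating a `Ticket` with a blank `FirstName` (the exact wording around the custom part may differ between versions):
```go
_, err := govalidator.ValidateStruct(Ticket{Id: 1})
if err != nil {
    // The error for FirstName should carry the custom message "First name is blank".
    println(err.Error())
}
```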
#### Notes
Documentation is available here: [godoc.org](https://godoc.org/github.com/asaskevich/govalidator).
Full information about code coverage is also available here: [govalidator on gocover.io](http://gocover.io/github.com/asaskevich/govalidator).
#### Support
If you do have a contribution to the package, feel free to create a Pull Request or an Issue.
#### What to contribute
If you don't know what to do, here are some features and tasks that still need to be done:
- [ ] Refactor code
- [ ] Edit docs and [README](https://github.com/asaskevich/govalidator/README.md): spellcheck, grammar and typo check
- [ ] Create an up-to-date list of contributors and projects that are currently using this package
- [ ] Resolve [issues and bugs](https://github.com/asaskevich/govalidator/issues)
- [ ] Keep the [list of functions](https://github.com/asaskevich/govalidator#list-of-functions) up to date
- [ ] Update the [list of validators](https://github.com/asaskevich/govalidator#validatestruct-2) that are available for `ValidateStruct` and add new ones
- [ ] Implement new validators: `IsFQDN`, `IsIMEI`, `IsPostalCode`, `IsISIN`, `IsISRC`, etc.
- [x] Implement [validation by maps](https://github.com/asaskevich/govalidator/issues/224)
- [ ] Implement fuzzing testing
- [ ] Implement some struct/map/array utilities
- [ ] Implement map/array validation
- [ ] Implement benchmarking
- [ ] Implement batch of examples
- [ ] Look at forks for new features and fixes
#### Advice
Feel free to create what you want, but keep in mind when you implement new features:
- Code must be clear and readable, with names of variables/constants that clearly describe what they do
- Public functions must be documented and described in the source file and added to the list of available functions in README.md
- There must be unit tests for any new functions and improvements
## Credits
### Contributors
This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
#### Special thanks to [contributors](https://github.com/asaskevich/govalidator/graphs/contributors)
* [Daniel Lohse](https://github.com/annismckenzie)
* [Attila Oláh](https://github.com/attilaolah)
* [Daniel Korner](https://github.com/Dadie)
* [Steven Wilkin](https://github.com/stevenwilkin)
* [Deiwin Sarjas](https://github.com/deiwin)
* [Noah Shibley](https://github.com/slugmobile)
* [Nathan Davies](https://github.com/nathj07)
* [Matt Sanford](https://github.com/mzsanford)
* [Simon ccl1115](https://github.com/ccl1115)
<a href="https://github.com/asaskevich/govalidator/graphs/contributors"><img src="https://opencollective.com/govalidator/contributors.svg?width=890" /></a>
### Backers
Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/govalidator#backer)]
<a href="https://opencollective.com/govalidator#backers" target="_blank"><img src="https://opencollective.com/govalidator/backers.svg?width=890"></a>
### Sponsors
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/govalidator#sponsor)]
<a href="https://opencollective.com/govalidator/sponsor/0/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/0/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/1/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/1/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/2/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/2/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/3/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/3/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/4/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/4/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/5/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/5/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/6/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/6/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/7/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/7/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/8/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/8/avatar.svg"></a>
<a href="https://opencollective.com/govalidator/sponsor/9/website" target="_blank"><img src="https://opencollective.com/govalidator/sponsor/9/avatar.svg"></a>
## License
[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator.svg?type=large)](https://app.fossa.io/projects/git%2Bgithub.com%2Fasaskevich%2Fgovalidator?ref=badge_large)
