Mirror of https://github.com/mudler/luet.git (synced 2025-09-02 15:54:39 +00:00)
Compare commits
116 Commits
Commit SHA1s (author, date and message columns were not captured in this view):

68a5604d8c, fcec6c5699, d527eaed60, c7253ac8ad, fffed79767, f329e1d5e0, 20bc250470, 05b13c6c1e,
0fb3497a3b, 3563636ea9, d6cdb8ea42, d7d05de9fe, a756c802f9, d6f7c47eae, b2a5de9222, 3cd87abafe,
7e388c6fed, 07a154474b, 6aa353edb2, 8951203165, d330fedcc4, dfb6dab9dc, 4f33eca263, 54b0dce54b,
ea2a60a853, c8f4ba0a47, 33da68c2ff, c9090ef1fd, 7e0ea34b81, 2f6bef14d5, 711c039296, 7ce522110e,
ac6554c291, d4255b086b, 1b90407475, 6f6e2bf15f, 6d450d3af0, 33b442a832, f068bfdb9b, 4dc4205868,
50ec17e738, 4c80d70512, b826288037, 821ac20fa2, 97edac4aa1, a118c7f98b, fcd05a57d3, 255aecf20b,
5594844971, f813370501, c353ab4978, 1653a60428, de2afe8ed0, 298de447b8, 40687c3072, 2528fe0fb4,
78b5963a4f, 524bbf990e, 96f4a6c0e3, 0e30e6a1ad, 0147b2cf99, eee0136156, c6fe34b059, 7ad767a81b,
4c5f6f9f8d, c9b684523f, aeea0cc5fe, 850b3f1c50, 091e51e426, f498dfc692, 22bc53ba13, b6dba27a4a,
07633dc307, d5fd14bceb, 9b6f4a094d, e013412832, 60ed9e0a04, e751b989e0, 6012e0081e, d3bd78d618,
6af62b5851, 7ec36da059, c284d3e4bf, f28e8deb96, 4433fc72ac, 20654d5dbb, b986c613ab, cd0e588fa9,
5358475069, 716d404307, b12410edb7, db7301f7bf, ebcf6075d0, 98248432d1, bbd811a6f2, 9af733370a,
ce888a2f40, 01e66ee0b4, a71e1a6f1d, 3b266fd600, 0d02eccc6c, ee210851f0, 7f160a7a89, 8b66127016,
16453bd09f, 358b39b5dd, da11a84d23, 0cb49a40c0, bbc9574745, 6f837c8c26, 4dffc658db, a7e262cc48,
91d05b071d, bbeb800611, ffcac1d03e, 4c62f714c4
README.md (48 changed lines)

@@ -4,18 +4,60 @@
[](https://godoc.org/github.com/mudler/luet)
[](https://codecov.io/gh/mudler/luet)

Luet is a Package Manager based off from containers - it uses Docker (and other tech) to sandbox your builds and generate packages from them. It has no dependencies and it is well suitable for "from scratch" environments.
Luet is a multi-platform Package Manager based off from containers - it uses Docker (and other tech) to sandbox your builds and generate packages from them. It has zero dependencies and it is well suitable for "from scratch" environments. It can also version entire rootfs and enables delivery of OTA-alike updates, making it a perfect fit for the Edge computing era and IoT embedded devices.

It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/specfile/) in YAML notation to define both packages and rootfs. As it is based on containers, it can be used to build seed stages for Linux From Scratch installations and it can build and track updates for those systems.

It is written entirely in Golang and where used as package manager, it can run in from scratch environment, with zero dependencies.

## In a glance

- Luet can reuse Gentoo's portage tree hierarchy, and it is heavily inspired from it.
- It builds, installs, uninstalls and perform upgrades on machines
- Installer doesn't depend on anything
- Installer doesn't depend on anything ( 0 dep installer !), statically built
- Support for packages as "layers"
- It uses SAT solving techniques to solve the deptree ( Inspired by [OPIUM](https://ranjitjhala.github.io/static/opium.pdf) )

## Install

To install luet, you can grab a release on the [Release page](https://github.com/mudler/luet/releases) or compile it in your machine (requires Golang installed):

    $ git clone https://github.com/mudler/luet.git
    $ cd luet
    $ make build

## Status

Luet is not feature-complete yet, it can build, install/uninstall/upgrade packages - but it doesn't support yet all the features you would normally expect from a Package Manager nowadays.

Check out the [Wiki](https://github.com/mudler/luet/wiki) for more informations.

# Dependency solving

Luet uses SAT and Reinforcement learning engine for dependency solving.
It encodes the package requirements into a SAT problem, using gophersat to solve the dependency tree and give a concrete model as result.

## SAT encoding

Each package and its constraints are encoded and built around [OPIUM](https://ranjitjhala.github.io/static/opium.pdf). Additionally, Luet treats
also selectors seamlessly while building the model, adding *ALO* ( *At least one* ) and *AMO* ( *At most one* ) rules to guarantee coherence within the installed system.
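As a sketch of what those rules look like (an illustrative example, not taken from the Luet sources; the package name and versions are made up), a selector such as `>=app/foo-1.0` that matches the candidates foo-1.0, foo-1.1 and foo-1.2 gets one boolean variable per candidate, a single ALO clause and pairwise AMO clauses:

```
ALO:  foo-1.0 ∨ foo-1.1 ∨ foo-1.2
      (at least one matching candidate must be part of the model)

AMO:  (¬foo-1.0 ∨ ¬foo-1.1) ∧ (¬foo-1.0 ∨ ¬foo-1.2) ∧ (¬foo-1.1 ∨ ¬foo-1.2)
      (no two versions of the same package may be installed together)
```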
## Reinforcement learning

Luet also implements a small and portable qlearning agent that will try to solve conflict on your behalf
when they arises while trying to validate your queries against the system model.

To leverage it, simply pass ```--solver-type qlearning``` to the subcommands that supports it ( you can check out by invoking ```--help``` ).
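For instance, with a hypothetical package atom `app/foo` (the flag is the part to note here), an install run that lets the qlearning agent handle conflicts would look like:

```
$ luet install --solver-type qlearning app/foo
```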
## Documentation

[Documentation](https://luet-lab.github.io/docs) is available, or
run `luet --help`, any subcommand is documented as well, try e.g.: `luet build --help`.

## Authors

Luet is here thanks to our amazing [contributors](https://github.com/mudler/luet/graphs/contributors)!.

Luet was originally created by Ettore Di Giacinto, mudler@sabayon.org, mudler@gentoo.org.

## License

Luet is distributed under the terms of GPLv3, check out the LICENSE file.
cmd/build.go (76 changed lines)

@@ -15,17 +15,18 @@
package cmd

import (
"fmt"
"io/ioutil"
"os"
"regexp"
"runtime"

"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"

_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)

@@ -39,25 +40,38 @@ var buildCmd = &cobra.Command{
viper.BindPFlag("tree", cmd.Flags().Lookup("tree"))
viper.BindPFlag("destination", cmd.Flags().Lookup("destination"))
viper.BindPFlag("backend", cmd.Flags().Lookup("backend"))
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
viper.BindPFlag("privileged", cmd.Flags().Lookup("privileged"))
viper.BindPFlag("database", cmd.Flags().Lookup("database"))
viper.BindPFlag("revdeps", cmd.Flags().Lookup("revdeps"))
viper.BindPFlag("all", cmd.Flags().Lookup("all"))
viper.BindPFlag("compression", cmd.Flags().Lookup("compression"))

viper.BindPFlag("image-repository", cmd.Flags().Lookup("image-repository"))
viper.BindPFlag("push", cmd.Flags().Lookup("push"))
viper.BindPFlag("pull", cmd.Flags().Lookup("pull"))
viper.BindPFlag("keep-images", cmd.Flags().Lookup("keep-images"))

LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
},
Run: func(cmd *cobra.Command, args []string) {

clean := viper.GetBool("clean")
src := viper.GetString("tree")
dst := viper.GetString("destination")
concurrency := viper.GetInt("concurrency")
concurrency := LuetCfg.GetGeneral().Concurrency
backendType := viper.GetString("backend")
privileged := viper.GetBool("privileged")
revdeps := viper.GetBool("revdeps")
all := viper.GetBool("all")
databaseType := viper.GetString("database")
compressionType := viper.GetString("compression")
imageRepository := viper.GetString("image-repository")
push := viper.GetBool("push")
pull := viper.GetBool("pull")
keepImages := viper.GetBool("keep-images")

compilerSpecs := compiler.NewLuetCompilationspecs()
var compilerBackend compiler.CompilerBackend

@@ -92,22 +106,52 @@ var buildCmd = &cobra.Command{
if err != nil {
Fatal("Error: " + err.Error())
}

stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")

LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts

Debug("Solver", LuetCfg.GetSolverOptions().CompactString())

opts := compiler.NewDefaultCompilerOptions()
opts.SolverOptions = *LuetCfg.GetSolverOptions()
opts.ImageRepository = imageRepository
opts.Clean = clean
opts.PullFirst = pull
opts.KeepImg = keepImages
opts.Push = push
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(), opts)
luetCompiler.SetConcurrency(concurrency)
luetCompiler.SetCompressionType(compiler.CompressionImplementation(compressionType))
if !all {
for _, a := range args {
decodepackage, err := regexp.Compile(`^([<>]?\~?=?)((([^\/]+)\/)?(?U)(\S+))(-(\d+(\.\d+)*[a-z]?(_(alpha|beta|pre|rc|p)\d*)*(-r\d+)?))?$`)
gp, err := _gentoo.ParsePackageStr(a)
if err != nil {
Fatal("Error: " + err.Error())
Fatal("Invalid package string ", a, ": ", err.Error())
}
packageInfo := decodepackage.FindAllStringSubmatch(a, -1)
category := packageInfo[0][4]
name := packageInfo[0][5]
version := packageInfo[0][1] + packageInfo[0][7]
spec, err := luetCompiler.FromPackage(&pkg.DefaultPackage{Name: name, Category: category, Version: version})

if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}

pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}
spec, err := luetCompiler.FromPackage(pack)
if err != nil {
Fatal("Error: " + err.Error())
}

@@ -158,13 +202,21 @@ func init() {
buildCmd.Flags().Bool("clean", true, "Build all packages without considering the packages present in the build directory")
buildCmd.Flags().String("tree", path, "Source luet tree")
buildCmd.Flags().String("backend", "docker", "backend used (docker,img)")
buildCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
buildCmd.Flags().Bool("privileged", false, "Privileged (Keep permissions)")
buildCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")
buildCmd.Flags().Bool("revdeps", false, "Build with revdeps")
buildCmd.Flags().Bool("all", false, "Build all packages in the tree")
buildCmd.Flags().String("destination", path, "Destination folder")
buildCmd.Flags().String("compression", "none", "Compression alg: none, gzip")
buildCmd.Flags().String("image-repository", "luet/cache", "Default base image string for generated image")
buildCmd.Flags().Bool("push", false, "Push images to a hub")
buildCmd.Flags().Bool("pull", false, "Pull images from a hub")
buildCmd.Flags().Bool("keep-images", true, "Keep built docker images in the host")

buildCmd.Flags().String("solver-type", "", "Solver strategy")
buildCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
buildCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
buildCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")

RootCmd.AddCommand(buildCmd)
}
cmd/cleanup.go (new file, 70 lines)

@@ -0,0 +1,70 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd

import (
"io/ioutil"
"os"
"path/filepath"

config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
. "github.com/mudler/luet/pkg/logger"

"github.com/spf13/cobra"
)

var cleanupCmd = &cobra.Command{
Use: "cleanup",
Short: "Clean packages cache.",
Long: `remove downloaded packages tarballs and clean cache directory`,
Run: func(cmd *cobra.Command, args []string) {
var cleaned int = 0

// Check if cache dir exists
if helpers.Exists(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) {

files, err := ioutil.ReadDir(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath())
if err != nil {
Fatal("Error on read cachedir ", err.Error())
}

for _, file := range files {
if file.IsDir() {
continue
}

if config.LuetCfg.GetGeneral().Debug {
Info("Removing ", file.Name())
}

err := os.RemoveAll(
filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), file.Name()))
if err != nil {
Fatal("Error on removing", file.Name())
}
cleaned++
}
}

Info("Cleaned: ", cleaned, "packages.")

},
}

func init() {
RootCmd.AddCommand(cleanupCmd)
}
cmd/config.go (new file, 59 lines)

@@ -0,0 +1,59 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd

import (
"fmt"

config "github.com/mudler/luet/pkg/config"

"github.com/spf13/cobra"
)

var configCmd = &cobra.Command{
Use: "config",
Short: "Print config",
Long: `Show luet configuration`,
Run: func(cmd *cobra.Command, args []string) {
fmt.Println(config.LuetCfg.GetLogging())
fmt.Println(config.LuetCfg.GetGeneral())
fmt.Println(config.LuetCfg.GetSystem())
if len(config.LuetCfg.CacheRepositories) > 0 {
fmt.Println("repetitors:")
for _, r := range config.LuetCfg.CacheRepositories {
fmt.Println(" - ", r.String())
}
}
if len(config.LuetCfg.SystemRepositories) > 0 {
fmt.Println("repositories:")
for _, r := range config.LuetCfg.SystemRepositories {
fmt.Println(" - ", r.String())
}
}

if len(config.LuetCfg.RepositoriesConfDir) > 0 {
fmt.Println("repos_confdir:")
for _, dir := range config.LuetCfg.RepositoriesConfDir {
fmt.Println(" - ", dir)
}
}

},
}

func init() {
RootCmd.AddCommand(configCmd)
}
@@ -16,8 +16,8 @@ package cmd

import (
"io/ioutil"
"runtime"

. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
tree "github.com/mudler/luet/pkg/tree"

@@ -33,13 +33,11 @@ var convertCmd = &cobra.Command{
Long: `Parses external PM and produces a luet parsable tree`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("type", cmd.Flags().Lookup("type"))
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
viper.BindPFlag("database", cmd.Flags().Lookup("database"))
},
Run: func(cmd *cobra.Command, args []string) {

t := viper.GetString("type")
c := viper.GetInt("concurrency")
databaseType := viper.GetString("database")
var db pkg.PackageDatabase

@@ -54,9 +52,15 @@ var convertCmd = &cobra.Command{
var builder tree.Parser
switch t {
case "gentoo":
builder = gentoo.NewGentooBuilder(&gentoo.SimpleEbuildParser{}, c, gentoo.InMemory)
builder = gentoo.NewGentooBuilder(
&gentoo.SimpleEbuildParser{},
LuetCfg.GetGeneral().Concurrency,
gentoo.InMemory)
default: // dup
builder = gentoo.NewGentooBuilder(&gentoo.SimpleEbuildParser{}, c, gentoo.InMemory)
builder = gentoo.NewGentooBuilder(
&gentoo.SimpleEbuildParser{},
LuetCfg.GetGeneral().Concurrency,
gentoo.InMemory)
}

switch databaseType {

@@ -91,7 +95,6 @@ var convertCmd = &cobra.Command{

func init() {
convertCmd.Flags().String("type", "gentoo", "source type")
convertCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
convertCmd.Flags().String("database", "memory", "database used for solving (memory,boltdb)")

RootCmd.AddCommand(convertCmd)
@@ -17,8 +17,9 @@ package cmd
import (
"os"

"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"

. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"

@@ -35,23 +36,71 @@ var createrepoCmd = &cobra.Command{
viper.BindPFlag("tree", cmd.Flags().Lookup("tree"))
viper.BindPFlag("output", cmd.Flags().Lookup("output"))
viper.BindPFlag("name", cmd.Flags().Lookup("name"))
viper.BindPFlag("uri", cmd.Flags().Lookup("uri"))
viper.BindPFlag("descr", cmd.Flags().Lookup("descr"))
viper.BindPFlag("urls", cmd.Flags().Lookup("urls"))
viper.BindPFlag("type", cmd.Flags().Lookup("type"))
viper.BindPFlag("tree-compression", cmd.Flags().Lookup("tree-compression"))
viper.BindPFlag("tree-path", cmd.Flags().Lookup("tree-path"))
viper.BindPFlag("reset-revision", cmd.Flags().Lookup("reset-revision"))
viper.BindPFlag("repo", cmd.Flags().Lookup("repo"))
},
Run: func(cmd *cobra.Command, args []string) {
var err error
var repo installer.Repository

tree := viper.GetString("tree")
dst := viper.GetString("output")
packages := viper.GetString("packages")
name := viper.GetString("name")
uri := viper.GetString("uri")
descr := viper.GetString("descr")
urls := viper.GetStringSlice("urls")
t := viper.GetString("type")
reset := viper.GetBool("reset-revision")
treetype := viper.GetString("tree-compression")
treepath := viper.GetString("tree-path")
source_repo := viper.GetString("repo")

if source_repo != "" {
// Search for system repository
lrepo, err := LuetCfg.GetSystemRepository(source_repo)
if err != nil {
Fatal("Error: " + err.Error())
}

if tree == "" {
tree = lrepo.TreePath
}

if t == "" {
t = lrepo.Type
}

repo, err = installer.GenerateRepository(lrepo.Name,
lrepo.Description, t,
lrepo.Urls,
lrepo.Priority,
packages,
tree,
pkg.NewInMemoryDatabase(false))

} else {
repo, err = installer.GenerateRepository(name, descr, t, urls, 1, packages,
tree, pkg.NewInMemoryDatabase(false))
}

repo, err := installer.GenerateRepository(name, uri, t, 1, packages, tree, pkg.NewInMemoryDatabase(false))
if err != nil {
Fatal("Error: " + err.Error())
}
err = repo.Write(dst)

if treetype != "" {
repo.SetTreeCompressionType(compiler.CompressionImplementation(treetype))
}

if treepath != "" {
repo.SetTreePath(treepath)
}

err = repo.Write(dst, reset)
if err != nil {
Fatal("Error: " + err.Error())
}

@@ -67,8 +116,14 @@ func init() {
createrepoCmd.Flags().String("tree", path, "Source luet tree")
createrepoCmd.Flags().String("output", path, "Destination folder")
createrepoCmd.Flags().String("name", "luet", "Repository name")
createrepoCmd.Flags().String("uri", path, "Repository uri")
createrepoCmd.Flags().String("type", "local", "Repository type (local)")
createrepoCmd.Flags().String("descr", "luet", "Repository description")
createrepoCmd.Flags().StringSlice("urls", []string{}, "Repository URLs")
createrepoCmd.Flags().String("type", "disk", "Repository type (disk)")
createrepoCmd.Flags().Bool("reset-revision", false, "Reset repository revision.")
createrepoCmd.Flags().String("repo", "", "Use repository defined in configuration.")

createrepoCmd.Flags().String("tree-compression", "none", "Compression alg: none, gzip")
createrepoCmd.Flags().String("tree-path", installer.TREE_TARBALL, "Repository tree filename")

RootCmd.AddCommand(createrepoCmd)
}
@@ -15,70 +15,93 @@
package cmd

import (
"fmt"
"os"
"path/filepath"
"regexp"
"runtime"

installer "github.com/mudler/luet/pkg/installer"

. "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"

_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)

var installCmd = &cobra.Command{
Use: "install <pkg1> <pkg2> ...",
Short: "Install a package",
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("system-dbpath", cmd.Flags().Lookup("system-dbpath"))
viper.BindPFlag("system-target", cmd.Flags().Lookup("system-target"))
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
},
Long: `Install packages in parallel`,
Run: func(cmd *cobra.Command, args []string) {
c := []*installer.LuetRepository{}
err := viper.UnmarshalKey("system-repositories", &c)
if err != nil {
Fatal("Error: " + err.Error())
}

var toInstall []pkg.Package
var systemDB pkg.PackageDatabase

for _, a := range args {
decodepackage, err := regexp.Compile(`^([<>]?\~?=?)((([^\/]+)\/)?(?U)(\S+))(-(\d+(\.\d+)*[a-z]?(_(alpha|beta|pre|rc|p)\d*)*(-r\d+)?))?$`)
gp, err := _gentoo.ParsePackageStr(a)
if err != nil {
Fatal("Error: " + err.Error())
Fatal("Invalid package string ", a, ": ", err.Error())
}
packageInfo := decodepackage.FindAllStringSubmatch(a, -1)

category := packageInfo[0][4]
name := packageInfo[0][5]
version := packageInfo[0][1] + packageInfo[0][7]
toInstall = append(toInstall, &pkg.DefaultPackage{Name: name, Category: category, Version: version})
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}

pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}
toInstall = append(toInstall, pack)
}

// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
synced := installer.Repositories{}
for _, toSync := range c {
s, err := toSync.Sync()
if err != nil {
Fatal("Error: " + err.Error())
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
synced = append(synced, s)
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}

inst := installer.NewLuetInstaller(viper.GetInt("concurrency"))
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")

inst.Repositories(synced)
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts

os.MkdirAll(viper.GetString("system-dbpath"), os.ModePerm)
systemDB := pkg.NewBoltDatabase(filepath.Join(viper.GetString("system-dbpath"), "luet.db"))
system := &installer.System{Database: systemDB, Target: viper.GetString("system-target")}
err = inst.Install(toInstall, system)
Debug("Solver", LuetCfg.GetSolverOptions().CompactString())

inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
inst.Repositories(repos)

if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err := inst.Install(toInstall, system)
if err != nil {
Fatal("Error: " + err.Error())
}

@@ -92,7 +115,10 @@ func init() {
}
installCmd.Flags().String("system-dbpath", path, "System db path")
installCmd.Flags().String("system-target", path, "System rootpath")
installCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
installCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
installCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
installCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
installCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")

RootCmd.AddCommand(installCmd)
}
cmd/repo.go (new file, 37 lines)

@@ -0,0 +1,37 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package cmd

import (
. "github.com/mudler/luet/cmd/repo"

"github.com/spf13/cobra"
)

var repoGroupCmd = &cobra.Command{
Use: "repo [command] [OPTIONS]",
Short: "Manage repositories",
}

func init() {
RootCmd.AddCommand(repoGroupCmd)

repoGroupCmd.AddCommand(
NewRepoListCommand(),
NewRepoUpdateCommand(),
)
}
cmd/repo/list.go (new file, 101 lines)

@@ -0,0 +1,101 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package cmd_repo

import (
"fmt"
"path/filepath"
"strconv"
"time"

. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"

. "github.com/logrusorgru/aurora"
"github.com/spf13/cobra"
)

func NewRepoListCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "list [OPTIONS]",
Short: "List of the configured repositories.",
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
},
Run: func(cmd *cobra.Command, args []string) {
var repoColor, repoText, repoRevision string

enable, _ := cmd.Flags().GetBool("enabled")
quiet, _ := cmd.Flags().GetBool("quiet")
repoType, _ := cmd.Flags().GetString("type")

for _, repo := range LuetCfg.SystemRepositories {
if enable && !repo.Enable {
continue
}

if repoType != "" && repo.Type != repoType {
continue
}

repoRevision = ""

if quiet {
fmt.Println(repo.Name)
} else {
if repo.Enable {
repoColor = Bold(BrightGreen(repo.Name)).String()
} else {
repoColor = Bold(BrightRed(repo.Name)).String()
}
if repo.Description != "" {
repoText = Yellow(repo.Description).String()
} else {
repoText = Yellow(repo.Urls[0]).String()
}

repobasedir := LuetCfg.GetSystem().GetRepoDatabaseDirPath(repo.Name)
if repo.Cached {

r := installer.NewSystemRepository(repo)
localRepo, _ := r.(*installer.LuetSystemRepository).ReadSpecFile(filepath.Join(repobasedir,
installer.REPOSITORY_SPECFILE), false)
if localRepo != nil {
tsec, _ := strconv.ParseInt(localRepo.GetLastUpdate(), 10, 64)
repoRevision = Bold(Red(localRepo.GetRevision())).String() +
" - " + Bold(Green(time.Unix(tsec, 0).String())).String()
}
}

if repoRevision != "" {
fmt.Println(
fmt.Sprintf("%s\n %s\n Revision %s", repoColor, repoText, repoRevision))
} else {
fmt.Println(
fmt.Sprintf("%s\n %s", repoColor, repoText))
}
}
}
},
}

ans.Flags().Bool("enabled", false, "Show only enable repositories.")
ans.Flags().BoolP("quiet", "q", false, "Show only name of the repositories.")
ans.Flags().StringP("type", "t", "", "Filter repositories of a specific type")

return ans
}
cmd/repo/update.go (new file, 83 lines)

@@ -0,0 +1,83 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package cmd_repo

import (
. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"
. "github.com/mudler/luet/pkg/logger"

"github.com/spf13/cobra"
)

func NewRepoUpdateCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "update [repo1] [repo2] [OPTIONS]",
Short: "Update a specific cached repository or all cached repositories.",
Example: `
# Update all cached repositories:
$> luet repo update

# Update only repo1 and repo2
$> luet repo update repo1 repo2
`,
PreRun: func(cmd *cobra.Command, args []string) {
},
Run: func(cmd *cobra.Command, args []string) {

ignore, _ := cmd.Flags().GetBool("ignore-errors")
force, _ := cmd.Flags().GetBool("force")

if len(args) > 0 {
for _, rname := range args {
repo, err := LuetCfg.GetSystemRepository(rname)
if err != nil && !ignore {
Fatal(err.Error())
} else if err != nil {
continue
}

r := installer.NewSystemRepository(*repo)
Spinner(32)
_, err = r.Sync(force)
if err != nil && !ignore {
Fatal("Error on sync repository " + rname + ": " + err.Error())
}
SpinnerStop()
}

} else {
for _, repo := range LuetCfg.SystemRepositories {
if repo.Cached {
r := installer.NewSystemRepository(repo)
Spinner(32)
_, err := r.Sync(force)
if err != nil && !ignore {
Fatal("Error on sync repository " + r.GetName() + ": " + err.Error())
}
SpinnerStop()
}
}
}
},
}

ans.Flags().BoolP("ignore-errors", "i", false, "Ignore errors on sync repositories.")
ans.Flags().BoolP("force", "f", false, "Force resync.")

return ans
}
cmd/root.go (97 changed lines)

@@ -17,11 +17,15 @@ package cmd

import (
"os"
"path"
"os/user"
"path/filepath"
"runtime"
"strings"

"github.com/marcsauter/single"
config "github.com/mudler/luet/pkg/config"
. "github.com/mudler/luet/pkg/logger"
repo "github.com/mudler/luet/pkg/repository"

"github.com/spf13/cobra"
"github.com/spf13/viper"

@@ -30,7 +34,10 @@ import (
var cfgFile string
var Verbose bool

const LuetCLIVersion = "0.4"
const (
LuetCLIVersion = "0.6"
LuetEnvPrefix = "LUET"
)

// RootCmd represents the base command when called without any subcommands
var RootCmd = &cobra.Command{

@@ -38,6 +45,44 @@ var RootCmd = &cobra.Command{
Short: "Package manager for the XXth century!",
Long: `Package manager which uses containers to build packages`,
Version: LuetCLIVersion,
PersistentPreRun: func(cmd *cobra.Command, args []string) {
err := LoadConfig(config.LuetCfg)
if err != nil {
Fatal("failed to load configuration:", err.Error())
}
},
}

func LoadConfig(c *config.LuetConfig) error {
// If a config file is found, read it in.
if err := c.Viper.ReadInConfig(); err != nil {
Warning(err)
}

err := c.Viper.Unmarshal(&config.LuetCfg)
if err != nil {
return err
}

Debug("Using config file:", c.Viper.ConfigFileUsed())

NewSpinner()

if c.GetLogging().Path != "" {
// Init zap logger
err = ZapLogger()
if err != nil {
return err
}
}

// Load repositories
err = repo.LoadRepositories(c)
if err != nil {
return err
}

return nil
}

// Execute adds all child commands to the root command sets flags appropriately.

@@ -62,8 +107,26 @@ func Execute() {

func init() {
cobra.OnInitialize(initConfig)
RootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is $HOME/.luet.yaml)")
RootCmd.PersistentFlags().BoolVarP(&Verbose, "verbose", "v", false, "verbose output")
pflags := RootCmd.PersistentFlags()
pflags.StringVar(&cfgFile, "config", "", "config file (default is $HOME/.luet.yaml)")
pflags.BoolVarP(&Verbose, "verbose", "v", false, "verbose output")
pflags.Bool("fatal", false, "Enables Warnings to exit")

u, err := user.Current()
if err != nil {
Fatal("failed to retrieve user identity:", err.Error())
}
sameOwner := false
if u.Uid == "0" {
sameOwner = true
}
pflags.Bool("same-owner", sameOwner, "Maintain same owner on uncompress.")
pflags.Int("concurrency", runtime.NumCPU(), "Concurrency")

config.LuetCfg.Viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
config.LuetCfg.Viper.BindPFlag("general.debug", pflags.Lookup("verbose"))
config.LuetCfg.Viper.BindPFlag("general.concurrency", pflags.Lookup("concurrency"))
config.LuetCfg.Viper.BindPFlag("general.fatal_warnings", pflags.Lookup("fatal"))
}

// initConfig reads in config file and ENV variables if set.

@@ -74,29 +137,23 @@ func initConfig() {
Error(err)
os.Exit(1)
}
viper.SetEnvPrefix(LuetEnvPrefix)
viper.SetConfigType("yaml")
viper.SetConfigName(".luet") // name of config file (without extension)
if cfgFile != "" { // enable ability to specify config file via flag
Info(">>> cfgFile: ", cfgFile)
viper.SetConfigFile(cfgFile)
configDir := path.Dir(cfgFile)
if configDir != "." && configDir != dir {
viper.AddConfigPath(configDir)
}
} else {
viper.AddConfigPath(dir)
viper.AddConfigPath(".")
viper.AddConfigPath("$HOME")
viper.AddConfigPath("/etc/luet")
}

viper.AddConfigPath(dir)
viper.AddConfigPath(".")
viper.AddConfigPath("$HOME")
viper.AddConfigPath("/etc/luet")

viper.AutomaticEnv() // read in environment variables that match

// If a config file is found, read it in.
if err := viper.ReadInConfig(); err == nil {
Info("Using config file:", viper.ConfigFileUsed())
} else {
Warning(err)
}

// Create EnvKey Replacer for handle complex structure
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetTypeByDefaultValue(true)
}
@@ -18,15 +18,13 @@ import (
"os"
"path/filepath"
"regexp"
"runtime"

. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"

. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"

"github.com/spf13/cobra"
"github.com/spf13/viper"
)

var searchCmd = &cobra.Command{

@@ -34,43 +32,68 @@ var searchCmd = &cobra.Command{
Short: "Search packages",
Long: `Search for installed and available packages`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("system-dbpath", cmd.Flags().Lookup("system-dbpath"))
viper.BindPFlag("system-target", cmd.Flags().Lookup("system-target"))
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
viper.BindPFlag("installed", cmd.Flags().Lookup("installed"))
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("installed", cmd.Flags().Lookup("installed"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
},
Run: func(cmd *cobra.Command, args []string) {
c := []*installer.LuetRepository{}
err := viper.UnmarshalKey("system-repositories", &c)
if err != nil {
Fatal("Error: " + err.Error())
}
var systemDB pkg.PackageDatabase

if len(args) != 1 {
Fatal("Wrong number of arguments (expected 1)")
}
installed := viper.GetBool("installed")
installed := LuetCfg.Viper.GetBool("installed")
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")

LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts

Debug("Solver", LuetCfg.GetSolverOptions().CompactString())

if !installed {
synced := installer.Repositories{}

for _, toSync := range c {
s, err := toSync.Sync()
if err != nil {
Fatal("Error: " + err.Error())
repos := installer.Repositories{}
for _, repo := range LuetCfg.SystemRepositories {
if !repo.Enable {
continue
}
synced = append(synced, s)
r := installer.NewSystemRepository(repo)
repos = append(repos, r)
}

inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})

inst.Repositories(repos)
synced, err := inst.SyncRepositories(false)
if err != nil {
Fatal("Error: " + err.Error())
}

Info("--- Search results: ---")

matches := synced.Search(args[0])
for _, m := range matches {
Info(":package:", m.Package.GetCategory(), m.Package.GetName(), m.Package.GetVersion(), "repository:", m.Repo.GetName())
Info(":package:", m.Package.GetCategory(), m.Package.GetName(),
m.Package.GetVersion(), "repository:", m.Repo.GetName())
}
} else {
os.MkdirAll(viper.GetString("system-dbpath"), os.ModePerm)
systemDB := pkg.NewBoltDatabase(filepath.Join(viper.GetString("system-dbpath"), "luet.db"))
system := &installer.System{Database: systemDB, Target: viper.GetString("system-target")}

if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
var term = regexp.MustCompile(args[0])

for _, k := range system.Database.GetPackages() {

@@ -91,7 +114,10 @@ func init() {
}
searchCmd.Flags().String("system-dbpath", path, "System db path")
searchCmd.Flags().String("system-target", path, "System rootpath")
searchCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
searchCmd.Flags().Bool("installed", false, "Search between system packages")
searchCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
searchCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
searchCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
searchCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
RootCmd.AddCommand(searchCmd)
}
cmd/serve-repo.go (new file, 59 lines)

@@ -0,0 +1,59 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd

import (
"net/http"
"os"

. "github.com/mudler/luet/pkg/logger"

"github.com/spf13/cobra"
"github.com/spf13/viper"
)

var serverepoCmd = &cobra.Command{
Use: "serve-repo",
Short: "Embedded micro-http server",
Long: `Embedded mini http server for serving local repositories`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("dir", cmd.Flags().Lookup("dir"))
viper.BindPFlag("address", cmd.Flags().Lookup("address"))
viper.BindPFlag("port", cmd.Flags().Lookup("port"))
},
Run: func(cmd *cobra.Command, args []string) {

dir := viper.GetString("dir")
port := viper.GetString("port")
address := viper.GetString("address")

http.Handle("/", http.FileServer(http.Dir(dir)))

Info("Serving ", dir, " on HTTP port: ", port)
Fatal(http.ListenAndServe(address+":"+port, nil))
},
}

func init() {
path, err := os.Getwd()
if err != nil {
Fatal(err)
}
serverepoCmd.Flags().String("dir", path, "Packages folder (output from build)")
serverepoCmd.Flags().String("port", "9090", "Listening port")
serverepoCmd.Flags().String("address", "0.0.0.0", "Listening address")

RootCmd.AddCommand(serverepoCmd)
}
@@ -15,52 +15,79 @@
package cmd

import (
"fmt"
"os"
"path/filepath"
"regexp"
"runtime"

. "github.com/mudler/luet/pkg/config"
installer "github.com/mudler/luet/pkg/installer"

. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"

_gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)

var uninstallCmd = &cobra.Command{
Use: "uninstall <pkg>",
Short: "Uninstall a package",
Use: "uninstall <pkg> <pkg2> ...",
Short: "Uninstall a package or a list of packages",
Long: `Uninstall packages`,
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("system-dbpath", cmd.Flags().Lookup("system-dbpath"))
viper.BindPFlag("system-target", cmd.Flags().Lookup("system-target"))
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
LuetCfg.Viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
LuetCfg.Viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
},
Run: func(cmd *cobra.Command, args []string) {
if len(args) != 1 {
Fatal("Wrong number of args")
}
var systemDB pkg.PackageDatabase

a := args[0]
decodepackage, err := regexp.Compile(`^([<>]?\~?=?)((([^\/]+)\/)?(?U)(\S+))(-(\d+(\.\d+)*[a-z]?(_(alpha|beta|pre|rc|p)\d*)*(-r\d+)?))?$`)
if err != nil {
Fatal("Error: " + err.Error())
}
packageInfo := decodepackage.FindAllStringSubmatch(a, -1)
for _, a := range args {
gp, err := _gentoo.ParsePackageStr(a)
if err != nil {
Fatal("Invalid package string ", a, ": ", err.Error())
}
if gp.Version == "" {
gp.Version = "0"
gp.Condition = _gentoo.PkgCondGreaterEqual
}
pack := &pkg.DefaultPackage{
Name: gp.Name,
Version: fmt.Sprintf("%s%s%s",
pkg.PkgSelectorConditionFromInt(gp.Condition.Int()).String(),
gp.Version,
gp.VersionSuffix,
),
Category: gp.Category,
Uri: make([]string, 0),
}

category := packageInfo[0][4]
name := packageInfo[0][5]
version := packageInfo[0][7]
stype := LuetCfg.Viper.GetString("solver.type")
discount := LuetCfg.Viper.GetFloat64("solver.discount")
rate := LuetCfg.Viper.GetFloat64("solver.rate")
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")

inst := installer.NewLuetInstaller(viper.GetInt("concurrency"))
os.MkdirAll(viper.GetString("system-dbpath"), os.ModePerm)
systemDB := pkg.NewBoltDatabase(filepath.Join(viper.GetString("system-dbpath"), "luet.db"))
system := &installer.System{Database: systemDB, Target: viper.GetString("system-target")}
err = inst.Uninstall(&pkg.DefaultPackage{Name: name, Category: category, Version: version}, system)
if err != nil {
Fatal("Error: " + err.Error())
LuetCfg.GetSolverOptions().Type = stype
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
LuetCfg.GetSolverOptions().Discount = float32(discount)
LuetCfg.GetSolverOptions().MaxAttempts = attempts

Debug("Solver", LuetCfg.GetSolverOptions().CompactString())

inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})

if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
systemDB = pkg.NewBoltDatabase(
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
} else {
systemDB = pkg.NewInMemoryDatabase(true)
}
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
err = inst.Uninstall(pack, system)
if err != nil {
Fatal("Error: " + err.Error())
}
}
},
}

@@ -72,6 +99,9 @@ func init() {
}
uninstallCmd.Flags().String("system-dbpath", path, "System db path")
uninstallCmd.Flags().String("system-target", path, "System rootpath")
uninstallCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
uninstallCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
uninstallCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
uninstallCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
uninstallCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
RootCmd.AddCommand(uninstallCmd)
}
@@ -17,50 +17,66 @@ package cmd
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
|
||||
. "github.com/mudler/luet/pkg/config"
|
||||
installer "github.com/mudler/luet/pkg/installer"
|
||||
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/spf13/viper"
|
||||
)
|
||||
|
||||
var upgradeCmd = &cobra.Command{
|
||||
Use: "upgrade",
|
||||
Short: "Upgrades the system",
|
||||
PreRun: func(cmd *cobra.Command, args []string) {
|
||||
viper.BindPFlag("system-dbpath", cmd.Flags().Lookup("system-dbpath"))
|
||||
viper.BindPFlag("system-target", cmd.Flags().Lookup("system-target"))
|
||||
viper.BindPFlag("concurrency", cmd.Flags().Lookup("concurrency"))
|
||||
LuetCfg.Viper.BindPFlag("system.database_path", installCmd.Flags().Lookup("system-dbpath"))
|
||||
LuetCfg.Viper.BindPFlag("system.rootfs", installCmd.Flags().Lookup("system-target"))
|
||||
LuetCfg.Viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
|
||||
LuetCfg.Viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
|
||||
LuetCfg.Viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
|
||||
LuetCfg.Viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
|
||||
},
|
||||
Long: `Upgrades packages in parallel`,
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
c := []*installer.LuetRepository{}
|
||||
err := viper.UnmarshalKey("system-repositories", &c)
|
||||
var systemDB pkg.PackageDatabase
|
||||
|
||||
repos := installer.Repositories{}
|
||||
for _, repo := range LuetCfg.SystemRepositories {
|
||||
if !repo.Enable {
|
||||
continue
|
||||
}
|
||||
|
||||
r := installer.NewSystemRepository(repo)
|
||||
repos = append(repos, r)
|
||||
}
|
||||
|
||||
stype := LuetCfg.Viper.GetString("solver.type")
|
||||
discount := LuetCfg.Viper.GetFloat64("solver.discount")
|
||||
rate := LuetCfg.Viper.GetFloat64("solver.rate")
|
||||
attempts := LuetCfg.Viper.GetInt("solver.max_attempts")
|
||||
|
||||
LuetCfg.GetSolverOptions().Type = stype
|
||||
LuetCfg.GetSolverOptions().LearnRate = float32(rate)
|
||||
LuetCfg.GetSolverOptions().Discount = float32(discount)
|
||||
LuetCfg.GetSolverOptions().MaxAttempts = attempts
|
||||
|
||||
Debug("Solver", LuetCfg.GetSolverOptions().String())
|
||||
|
||||
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{Concurrency: LuetCfg.GetGeneral().Concurrency, SolverOptions: *LuetCfg.GetSolverOptions()})
|
||||
inst.Repositories(repos)
|
||||
_, err := inst.SyncRepositories(false)
|
||||
if err != nil {
|
||||
Fatal("Error: " + err.Error())
|
||||
}
|
||||
|
||||
// This shouldn't be necessary, but we need to unmarshal the repositories to a concrete struct, thus we need to port them back to the Repositories type
|
||||
synced := installer.Repositories{}
|
||||
for _, toSync := range c {
|
||||
s, err := toSync.Sync()
|
||||
if err != nil {
|
||||
Fatal("Error: " + err.Error())
|
||||
}
|
||||
synced = append(synced, s)
|
||||
if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
|
||||
systemDB = pkg.NewBoltDatabase(
|
||||
filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
|
||||
} else {
|
||||
systemDB = pkg.NewInMemoryDatabase(true)
|
||||
}
|
||||
|
||||
inst := installer.NewLuetInstaller(viper.GetInt("concurrency"))
|
||||
|
||||
inst.Repositories(synced)
|
||||
|
||||
os.MkdirAll(viper.GetString("system-dbpath"), os.ModePerm)
|
||||
systemDB := pkg.NewBoltDatabase(filepath.Join(viper.GetString("system-dbpath"), "luet.db"))
|
||||
system := &installer.System{Database: systemDB, Target: viper.GetString("system-target")}
|
||||
system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
|
||||
err = inst.Upgrade(system)
|
||||
if err != nil {
|
||||
Fatal("Error: " + err.Error())
|
||||
@@ -75,7 +91,9 @@ func init() {
|
||||
}
|
||||
upgradeCmd.Flags().String("system-dbpath", path, "System db path")
|
||||
upgradeCmd.Flags().String("system-target", path, "System rootpath")
|
||||
upgradeCmd.Flags().Int("concurrency", runtime.NumCPU(), "Concurrency")
|
||||
|
||||
upgradeCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+AvailableResolvers+" )")
|
||||
upgradeCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
|
||||
upgradeCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
|
||||
upgradeCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
|
||||
RootCmd.AddCommand(upgradeCmd)
|
||||
}
|
||||
|
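The uninstall and upgrade commands above converge on the same wiring: pick the system database from the configured engine, then build an installer that carries the solver options coming from the configuration. The sketch below condenses that pattern; it is illustrative only (the `upgradeSketch` helper and the `main` wrapper are not part of the diff), while the `config`, `installer`, and `pkg` calls are the ones used in this changeset.

```go
package main

import (
	"path/filepath"

	. "github.com/mudler/luet/pkg/config"
	installer "github.com/mudler/luet/pkg/installer"
	pkg "github.com/mudler/luet/pkg/package"
)

// upgradeSketch mirrors the shared command flow: choose the database backend,
// build an installer with the configured solver options, then act on the
// system rooted at the configured rootfs. Repository syncing
// (inst.Repositories / inst.SyncRepositories) is omitted for brevity.
func upgradeSketch() error {
	var systemDB pkg.PackageDatabase
	if LuetCfg.GetSystem().DatabaseEngine == "boltdb" {
		// persistent database under <rootfs>/<database_path>/luet.db
		systemDB = pkg.NewBoltDatabase(
			filepath.Join(LuetCfg.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
	} else {
		// in-memory database, handy for tests and throwaway runs
		systemDB = pkg.NewInMemoryDatabase(true)
	}

	inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
		Concurrency:   LuetCfg.GetGeneral().Concurrency,
		SolverOptions: *LuetCfg.GetSolverOptions(),
	})
	system := &installer.System{Database: systemDB, Target: LuetCfg.GetSystem().Rootfs}
	return inst.Upgrade(system)
}

func main() {
	if err := upgradeSketch(); err != nil {
		panic(err)
	}
}
```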
128 contrib/config/luet.yaml Normal file
@@ -0,0 +1,128 @@
# Luet Configuration File
#
# ---------------------------------------------
# Logging configuration section:
# ---------------------------------------------
# logging:
#   Leave empty to skip logging to file.
#   path: ""
#
#   Set the logging level: error|warning|info|debug
#   level: "info"
#
#   Enable JSON log format instead of console mode.
#   json_format: false
#
# ---------------------------------------------
# General configuration section:
# ---------------------------------------------
# general:
#   Define the maximum number of concurrent processes. Defaults to runtime.NumCPU().
#   concurrency: 1
#
#   Enable debug. When debug is active the spinner is disabled.
#   debug: false
#
#   Show the output of build execution (docker, img, etc.)
#   show_build_output: false
#
#   Define the spinner update interval in milliseconds.
#   spinner_ms: 200
#
#   Define the spinner charset. See https://github.com/briandowns/spinner
#   spinner_charset: 22
#
#   Treat warnings as fatal and exit.
#   fatal_warnings: false
#
#   Try extracting tree/packages with the same ownership as exists in the archive (default for superuser).
#   same_owner: false
#
# ---------------------------------------------
# System configuration section:
# ---------------------------------------------
# system:
#
#   Rootfs path of the luet system. Default is /.
#   A specific path can be used to test an installation in a chroot environment.
#   rootfs: "/"
#
#   Choose the database engine used for the luet database.
#   Supported values: boltdb|memory
#   database_engine: boltdb
#
#   Directory where the luet database is stored.
#   The path is appended to the rootfs option path.
#   database_path: "/var/cache/luet"
#
# ---------------------------------------------
# Repositories configuration directories.
# ---------------------------------------------
# Define the list of directories where luet
# looks for files with the .yml extension that
# define a luet repository.
# repos_confdir:
#   - /etc/luet/repos.conf.d
#
#
# ---------------------------------------------
# System repositories
# ---------------------------------------------
# As an alternative to defining repository files
# through the repos_confdir option, the list of
# repositories can be defined directly.
#
# repositories:
#
#   Name of the repository. It should be unique. Mandatory.
#   - name: "repo1"
#
#     A user-friendly description of the repository.
#     description: "My luet repo"
#
#     Type of the repository. Supported types are: dir|http. Mandatory.
#     type: "dir"
#
#     Define the priority of the repository when searching for packages. Default is 9999.
#     priority: 9999
#
#     Enable/disable the repository.
#     enable: false
#
#     Cached repository. If true, a local cache of the remote repository tree is maintained
#     in $tree_path; otherwise a temporary directory is used and removed once package
#     installation completes. A cached repository reduces search/install time.
#     Caching is disabled by default.
#     cached: false
#
#     Path where the tree of specifications is stored. Default is $database_path/repos/$repo_name.
#     tree_path: "/var/cache/luet/repos/local"
#
#     Define the list of URLs from which the tree and packages are retrieved.
#     urls:
#       - https://mydomain.local/luet/repo1
#
#     auth:
#       Define the Basic authentication header.
#       basic: "mybasicauth"
#       Define the token authentication header.
#       token: "mytoken"
# ---------------------------------------------
# Solver parameter configuration:
# ---------------------------------------------
# solver:
#
#   Solver strategy to solve possible conflicts during dependency
#   solving. Defaults to empty (none). Available: qlearning
#   type: ""
#
#   Solver agent learning rate. 0.1 to 1.0
#   rate: 0.7
#
#   Learning discount factor.
#   discount: 1.0
#
#   Number of overall attempts that the solver has available before bailing out.
#   max_attempts: 9000
#
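For context, the commented `repositories:` template above maps directly onto the `LuetRepository` type introduced later in this changeset (pkg/config/config.go). The snippet below is only an illustration of that mapping; the values are the sample ones from the template, and the `main` wrapper is not part of the diff.

```go
package main

import (
	"fmt"

	config "github.com/mudler/luet/pkg/config"
)

func main() {
	// Build the repository described by the YAML template and register it
	// in the global configuration via AddSystemRepository.
	repo := config.NewLuetRepository(
		"repo1",        // name: should be unique, mandatory
		"dir",          // type: dir|http
		"My luet repo", // description
		[]string{"https://mydomain.local/luet/repo1"}, // urls
		9999,  // priority used when searching for packages
		true,  // enable
		false, // cached
	)
	config.LuetCfg.AddSystemRepository(*repo)
	fmt.Println(repo.String())
}
```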
22 go.mod
@@ -4,18 +4,18 @@ go 1.12
|
||||
|
||||
require (
|
||||
github.com/DataDog/zstd v1.4.4 // indirect
|
||||
github.com/MottainaiCI/simplestreams-builder v0.0.0-20190710131531-efb382161f56 // indirect
|
||||
github.com/Sabayon/pkgs-checker v0.4.1
|
||||
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7
|
||||
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
|
||||
github.com/briandowns/spinner v1.7.0
|
||||
github.com/cavaliercoder/grab v2.0.0+incompatible
|
||||
github.com/crillab/gophersat v1.1.7
|
||||
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3
|
||||
github.com/docker/docker v0.7.3-0.20180827131323-0c5f8d2b9b23
|
||||
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
|
||||
github.com/ghodss/yaml v1.0.0
|
||||
github.com/go-yaml/yaml v2.1.0+incompatible // indirect
|
||||
github.com/hashicorp/go-version v1.2.0
|
||||
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
|
||||
github.com/klauspost/pgzip v1.2.1
|
||||
github.com/kr/pretty v0.2.0 // indirect
|
||||
github.com/kyokomi/emoji v2.1.0+incompatible
|
||||
github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
|
||||
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
|
||||
@@ -33,9 +33,15 @@ require (
|
||||
github.com/spf13/viper v1.5.0
|
||||
github.com/stevenle/topsort v0.0.0-20130922064739-8130c1d7596b
|
||||
go.etcd.io/bbolt v1.3.3
|
||||
golang.org/x/crypto v0.0.0-20191028145041-f83a4685e152 // indirect
|
||||
golang.org/x/sys v0.0.0-20191110163157-d32e6e3b99c4 // indirect
|
||||
gopkg.in/yaml.v2 v2.2.5
|
||||
mvdan.cc/sh v2.6.4+incompatible // indirect
|
||||
go.uber.org/atomic v1.5.1 // indirect
|
||||
go.uber.org/multierr v1.4.0 // indirect
|
||||
go.uber.org/zap v1.13.0
|
||||
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876 // indirect
|
||||
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f // indirect
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 // indirect
|
||||
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c // indirect
|
||||
golang.org/x/tools v0.0.0-20200102200121-6de373a2766c // indirect
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 // indirect
|
||||
gopkg.in/yaml.v2 v2.2.7
|
||||
mvdan.cc/sh/v3 v3.0.0-beta1
|
||||
)
|
||||
|
63 go.sum
@@ -9,13 +9,11 @@ github.com/Microsoft/go-winio v0.4.11 h1:zoIOcVf0xPN1tnMVbTtEdI+P8OofVk3NObnwOQ6
|
||||
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
|
||||
github.com/Microsoft/hcsshim v0.8.6 h1:ZfF0+zZeYdzMIVMZHKtDKJvLHj76XCuVae/jNkjj0IA=
|
||||
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
|
||||
github.com/MottainaiCI/simplestreams-builder v0.0.0-20190710131531-efb382161f56 h1:XCZM9J5KqLsr5NqtrZuXiD3X5fe5IfgU7IIUZzpeFBk=
|
||||
github.com/MottainaiCI/simplestreams-builder v0.0.0-20190710131531-efb382161f56/go.mod h1:+Gbv6dg6TPHWq4oDjZY1vn978PLCEZ2hOu8kvn+S7t4=
|
||||
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
|
||||
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
|
||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||
github.com/Sabayon/pkgs-checker v0.4.1 h1:NImZhA5Z9souyr9Ff3nDzP0Bs9SGuDLxRzduVqci3dQ=
|
||||
github.com/Sabayon/pkgs-checker v0.4.1/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
|
||||
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7 h1:Vf80sSLu1ZWjjMmUKhw0FqM43lEOvT8O5B22NaHB6AQ=
|
||||
github.com/Sabayon/pkgs-checker v0.4.2-0.20200101193228-1d500105afb7/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
|
||||
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3 h1:Xu7z47ZiE/J+sKXHZMGxEor/oY2q6dq51fkO0JqdSwY=
|
||||
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
@@ -57,6 +55,8 @@ github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:ma
|
||||
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
|
||||
github.com/crillab/gophersat v1.1.7 h1:f2Phe0W9jGyN1OefygKdcTdNM99q/goSjbWrFRjZGWc=
|
||||
github.com/crillab/gophersat v1.1.7/go.mod h1:S91tHga1PCZzYhCkStwZAhvp1rCc+zqtSi55I+vDWGc=
|
||||
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3 h1:HO63LCf9kTXQgUnlvFeS2qSDQhZ/cLP8DAJO89CythY=
|
||||
github.com/crillab/gophersat v1.1.9-0.20200211102949-9a8bf7f2f0a3/go.mod h1:S91tHga1PCZzYhCkStwZAhvp1rCc+zqtSi55I+vDWGc=
|
||||
github.com/cyphar/filepath-securejoin v0.2.2 h1:jCwT2GTP+PY5nBz3c/YL5PAIbusElVrPujOBSCj8xRg=
|
||||
github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
@@ -77,6 +77,8 @@ github.com/docker/libnetwork v0.8.0-dev.2.0.20180608203834-19279f049241 h1:+ebE/
|
||||
github.com/docker/libnetwork v0.8.0-dev.2.0.20180608203834-19279f049241/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
|
||||
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 h1:UhxFibDNY/bfvqU5CAUmr9zpesgbU6SWc8/B4mflAE4=
|
||||
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
|
||||
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd h1:JWEotl+g5uCCn37eVAYLF3UjBqO5HJ0ezZ5Zgnsdoqc=
|
||||
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd/go.mod h1:y0+kb0ORo7mC8lQbUzC4oa7ufu565J6SyUgWd39Z1Ic=
|
||||
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
|
||||
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||
@@ -89,8 +91,6 @@ github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/go-yaml/yaml v2.1.0+incompatible h1:RYi2hDdss1u4YE7GwixGzWwVo47T8UQwnTLB6vQiq+o=
|
||||
github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
|
||||
@@ -107,6 +107,7 @@ github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8l
|
||||
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
|
||||
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
|
||||
@@ -146,6 +147,8 @@ github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxv
|
||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
|
||||
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
|
||||
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
|
||||
@@ -197,7 +200,9 @@ github.com/opencontainers/runtime-spec v1.0.1 h1:wY4pOY8fBdSIvs9+IDHC55thBuEulhz
|
||||
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
|
||||
github.com/otiai10/copy v1.0.2 h1:DDNipYy6RkIkjMwy+AWzgKiNTyj2RUI9yEMeETEpVyc=
|
||||
github.com/otiai10/copy v1.0.2/go.mod h1:c7RpqBkwMom4bYTSkLSym4VSJz/XtncWRAj/J4PEIMY=
|
||||
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95 h1:+OLn68pqasWca0z5ryit9KGfp3sUsW4Lqg32iRMJyzs=
|
||||
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE=
|
||||
github.com/otiai10/mint v1.3.0 h1:Ady6MKVezQwHBkGzLFbrsywyp09Ah7rkmfjV3Bcr5uc=
|
||||
github.com/otiai10/mint v1.3.0/go.mod h1:F5AjcsTsWUqX+Na9fpHb52P8pcRX2CI6A3ctIT91xUo=
|
||||
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
|
||||
github.com/pelletier/go-toml v1.6.0 h1:aetoXYr0Tv7xRU/V4B4IZJ2QcbtMUFoNb3ORp7TzIK4=
|
||||
@@ -222,6 +227,7 @@ github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40T
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
github.com/rogpeppe/fastuuid v1.1.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
|
||||
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
|
||||
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
|
||||
github.com/rogpeppe/go-internal v1.5.0/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
|
||||
github.com/rootless-containers/proto v0.1.0 h1:gS1JOMEtk1YDYHCzBAf/url+olMJbac7MTrgSeP6zh4=
|
||||
github.com/rootless-containers/proto v0.1.0/go.mod h1:vgkUFZbQd0gcE/K/ZwtE4MYjZPu0UNHLXIQxhyqAFh8=
|
||||
@@ -257,8 +263,6 @@ github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnIn
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
|
||||
github.com/spf13/viper v1.4.0 h1:yXHLWeravcrgGyFSyCgdYpXQ9dR9c/WED3pg1RhxqEU=
|
||||
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
|
||||
github.com/spf13/viper v1.5.0 h1:GpsTwfsQ27oS/Aha/6d1oD7tpKIqWnOA6tgOX9HHkt4=
|
||||
github.com/spf13/viper v1.5.0/go.mod h1:AkYRkVJF8TkSG/xet6PzXX+l39KhhXa2pdqVSxnTcn4=
|
||||
github.com/stevenle/topsort v0.0.0-20130922064739-8130c1d7596b h1:wJSBFlabo96ySlmSX0a02WAPyGxagzTo9c5sk3sHP3E=
|
||||
@@ -296,19 +300,36 @@ go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
|
||||
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/atomic v1.5.1 h1:rsqfU5vBkVknbhUGbAUwQKR2H4ItV8tjJ+6kJX4cxHM=
|
||||
go.uber.org/atomic v1.5.1/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
|
||||
go.uber.org/multierr v1.4.0 h1:f3WCSC2KzAcBXGATIxAB1E2XuCpNU255wNKZ505qi3E=
|
||||
go.uber.org/multierr v1.4.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.13.0 h1:nR6NoDBgAf67s68NhaXbsojM+2gxp3S1hWkHDl27pVU=
|
||||
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
|
||||
golang.org/x/crypto v0.0.0-20180820150726-614d502a4dac/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191028145041-f83a4685e152 h1:ZC1Xn5A1nlpSmQCIva4bZ3ob3lmhYIefc+GU+DLg1Ow=
|
||||
golang.org/x/crypto v0.0.0-20191028145041-f83a4685e152/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876 h1:sKJQZMuxjOAR/Uo2LBfU90onWEf1dF4C+0hPJCc9Mpc=
|
||||
golang.org/x/crypto v0.0.0-20191227163750-53104e6ec876/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNTEKDvWeuc1yieZ8qUzUE=
|
||||
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
|
||||
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
@@ -319,6 +340,8 @@ golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2 h1:4dVFTC832rPn4pomLSz1vA+are2+dU19w1H8OngV7nc=
|
||||
golang.org/x/net v0.0.0-20190912160710-24e19bdeb0f2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553 h1:efeOvDhwQ29Dj3SdAV/MJf8oukgn+8D8WgaCaRMchF8=
|
||||
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
@@ -341,8 +364,8 @@ golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7w
|
||||
golang.org/x/sys v0.0.0-20190913121621-c3b328c6e5a7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191008105621-543471e840be h1:QAcqgptGM8IQBC9K/RC4o+O9YmqEm0diQn9QmZw/0mU=
|
||||
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191110163157-d32e6e3b99c4 h1:Hynbrlo6LbYI3H1IqXpkVDOcX/3HiPdhVEuyj5a59RM=
|
||||
golang.org/x/sys v0.0.0-20191110163157-d32e6e3b99c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c h1:OYFUffxXPezb7BVTx9AaD4Vl0qtxmklBIkwCKH1YwDY=
|
||||
golang.org/x/sys v0.0.0-20200102141924-c96a22e43c9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
@@ -351,10 +374,18 @@ golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGm
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190913181337-0240832f5c3d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200102200121-6de373a2766c h1:PBxLbymhzlh6kZuAXmeh8JK2tAJR0GM5Q/W71G2QJ40=
|
||||
golang.org/x/tools v0.0.0-20200102200121-6de373a2766c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898 h1:/atklqdjdhuosWIl6AIbOeHJjicWYPqR9bpxqxYG2pA=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/appengine v1.1.0 h1:igQkv0AAhEIvTEpD5LIpAfav2eeVO9HBTjvKHVJPRSs=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
@@ -378,13 +409,13 @@ gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
|
||||
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
|
||||
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.7 h1:VUgggvou5XRW9mHwD/yXxIYSMtY0zoKQf/v226p2nyo=
|
||||
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gotest.tools v2.1.0+incompatible h1:5USw7CrJBYKqjg9R7QlA6jzqZKEAtvW82aNmsxxGPxw=
|
||||
gotest.tools v2.1.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
|
||||
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
mvdan.cc/editorconfig v0.1.1-0.20191109213504-890940e3f00e/go.mod h1:Ge4atmRUYqueGppvJ7JNrtqpqokoJEFxYbP0Z+WeKS8=
|
||||
mvdan.cc/sh v2.6.4+incompatible h1:eD6tDeh0pw+/TOTI1BBEryZ02rD2nMcFsgcvde7jffM=
|
||||
mvdan.cc/sh v2.6.4+incompatible/go.mod h1:IeeQbZq+x2SUGBensq/jge5lLQbS3XT2ktyp3wrt4x8=
|
||||
mvdan.cc/sh/v3 v3.0.0-beta1 h1:UqiwBEXEPzelaGxuvixaOtzc7WzKtrElePJ8HqvW7K8=
|
||||
mvdan.cc/sh/v3 v3.0.0-beta1/go.mod h1:rBIndNJFYPp8xSppiZcGIk6B5d1g3OEARxEaXjPxwVI=
|
||||
|
@@ -31,6 +31,7 @@ import (
	"strings"
	"sync"

	. "github.com/mudler/luet/pkg/config"
	"github.com/mudler/luet/pkg/helpers"
	. "github.com/mudler/luet/pkg/logger"
	"github.com/mudler/luet/pkg/solver"
@@ -112,6 +113,12 @@ func (a *PackageArtifact) SetCompressionType(t CompressionImplementation) {
	a.CompressionType = t
}

func (a *PackageArtifact) GetChecksums() Checksums {
	return a.Checksums
}
func (a *PackageArtifact) SetChecksums(c Checksums) {
	a.Checksums = c
}
func (a *PackageArtifact) Hash() error {
	return a.Checksums.Generate(a)
}
@@ -288,14 +295,15 @@ func (a *PackageArtifact) Unpack(dst string, keepPerms bool) error {
			return errors.Wrap(err, "Cannot copy to "+a.GetPath()+".uncompressed")
		}

		err = helpers.Untar(a.GetPath()+".uncompressed", dst, keepPerms)
		err = helpers.Untar(a.GetPath()+".uncompressed", dst,
			LuetCfg.GetGeneral().SameOwner)
		if err != nil {
			return err
		}
		return nil
	// Defaults to tar only (covers when "none" is supplied)
	default:
		return helpers.Untar(a.GetPath(), dst, keepPerms)
		return helpers.Untar(a.GetPath(), dst, LuetCfg.GetGeneral().SameOwner)
	}
	return errors.New("Compression type must be supplied")
}
@@ -459,3 +467,38 @@ func ExtractArtifactFromDelta(src, dst string, layers []ArtifactLayer, concurren
	}
	return a, nil
}

func ComputeArtifactLayerSummary(diffs []ArtifactLayer) ArtifactLayersSummary {

	ans := ArtifactLayersSummary{
		Layers: make([]ArtifactLayerSummary, 0),
	}

	for _, layer := range diffs {
		sum := ArtifactLayerSummary{
			FromImage:   layer.FromImage,
			ToImage:     layer.ToImage,
			AddFiles:    0,
			AddSizes:    0,
			DelFiles:    0,
			DelSizes:    0,
			ChangeFiles: 0,
			ChangeSizes: 0,
		}
		for _, a := range layer.Diffs.Additions {
			sum.AddFiles++
			sum.AddSizes += int64(a.Size)
		}
		for _, d := range layer.Diffs.Deletions {
			sum.DelFiles++
			sum.DelSizes += int64(d.Size)
		}
		for _, c := range layer.Diffs.Changes {
			sum.ChangeFiles++
			sum.ChangeSizes += int64(c.Size)
		}
		ans.Layers = append(ans.Layers, sum)
	}

	return ans
}
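A hedged usage sketch for the new `ComputeArtifactLayerSummary` helper follows: it aggregates per-layer added/deleted/changed file counts and byte sizes from the layer diff data. The literal values and the `main` wrapper are made up; the field names are the ones used in this changeset (only `Size` is assumed to be a settable numeric field on `ArtifactNode`).

```go
package main

import (
	"fmt"

	compiler "github.com/mudler/luet/pkg/compiler"
)

func main() {
	// Two additions and one deletion on a single builder -> package layer pair.
	diffs := []compiler.ArtifactLayer{
		{
			FromImage: "luet/cache-foo-builder",
			ToImage:   "luet/cache-foo",
			Diffs: compiler.ArtifactDiffs{
				Additions: []compiler.ArtifactNode{{Size: 1024}, {Size: 2048}},
				Deletions: []compiler.ArtifactNode{{Size: 512}},
			},
		},
	}

	summary := compiler.ComputeArtifactLayerSummary(diffs)
	for _, l := range summary.Layers {
		fmt.Printf("%s -> %s: +%d files (%d bytes), -%d files (%d bytes)\n",
			l.FromImage, l.ToImage, l.AddFiles, l.AddSizes, l.DelFiles, l.DelSizes)
	}
}
```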
@@ -18,11 +18,15 @@ package backend_test
import (
	"testing"

	. "github.com/mudler/luet/cmd"
	config "github.com/mudler/luet/pkg/config"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestSolver(t *testing.T) {
	RegisterFailHandler(Fail)
	LoadConfig(config.LuetCfg)
	RunSpecs(t, "Backend Suite")
}
@@ -17,6 +17,7 @@ package backend

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
@@ -26,6 +27,7 @@ import (
	capi "github.com/mudler/docker-companion/api"

	"github.com/mudler/luet/pkg/compiler"
	"github.com/mudler/luet/pkg/config"
	"github.com/mudler/luet/pkg/helpers"
	. "github.com/mudler/luet/pkg/logger"

@@ -54,7 +56,12 @@ func (*SimpleDocker) BuildImage(opts compiler.CompilerBackendOptions) error {
	}
	Info(":whale: Building image " + name + " done")

	//Info(string(out))
	if config.LuetCfg.GetGeneral().ShowBuildOutput {
		Info(string(out))
	} else {
		Debug(string(out))
	}

	return nil
}

@@ -94,6 +101,18 @@ func (*SimpleDocker) RemoveImage(opts compiler.CompilerBackendOptions) error {
	return nil
}

func (*SimpleDocker) Push(opts compiler.CompilerBackendOptions) error {
	name := opts.ImageName
	pusharg := []string{"push", name}
	out, err := exec.Command("docker", pusharg...).CombinedOutput()
	if err != nil {
		return errors.Wrap(err, "Failed pushing image: "+string(out))
	}
	Info(":whale: Pushed image:", name)
	//Info(string(out))
	return nil
}

func (s *SimpleDocker) ImageDefinitionToTar(opts compiler.CompilerBackendOptions) error {
	if err := s.BuildImage(opts); err != nil {
		return errors.Wrap(err, "Failed building image")
@@ -229,11 +248,27 @@ func (*SimpleDocker) Changes(fromImage, toImage string) ([]compiler.ArtifactLaye
		return []compiler.ArtifactLayer{}, errors.Wrap(err, "Failed Resolving layer diffs: "+string(out))
	}

	if config.LuetCfg.GetGeneral().ShowBuildOutput {
		Info(string(out))
	}

	var diffs []compiler.ArtifactLayer

	err = json.Unmarshal(out, &diffs)
	if err != nil {
		return []compiler.ArtifactLayer{}, errors.Wrap(err, "Failed unmarshalling json response: "+string(out))
	}

	if config.LuetCfg.GetLogging().Level == "debug" {
		summary := compiler.ComputeArtifactLayerSummary(diffs)
		for _, l := range summary.Layers {
			Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",
				l.FromImage, l.ToImage,
				l.AddFiles, l.AddSizes,
				l.DelFiles, l.DelSizes,
				l.ChangeFiles, l.ChangeSizes))
		}
	}

	return diffs, nil
}
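The new backend `Push()` method pairs with the `Push` flag added to the compiler options later in this diff: when enabled, the compiler pushes the builder and package images right after exporting them. A minimal sketch of enabling it, assuming the constructors shown elsewhere in this changeset (the `nil` backend mirrors the test suites; `quay.io/me/luet-cache` is a hypothetical repository):

```go
package main

import (
	compiler "github.com/mudler/luet/pkg/compiler"
	pkg "github.com/mudler/luet/pkg/package"
)

func main() {
	opts := compiler.NewDefaultCompilerOptions()
	opts.Push = true                               // push images after ExportImage
	opts.ImageRepository = "quay.io/me/luet-cache" // hypothetical target repository

	// A real build would pass a concrete backend (e.g. the SimpleDocker one) instead of nil,
	// then derive specs with FromPackage and compile them.
	c := compiler.NewLuetCompiler(nil, pkg.NewInMemoryDatabase(false), opts)
	_ = c
}
```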
@@ -145,3 +145,15 @@ func (*SimpleImg) ExtractRootfs(opts compiler.CompilerBackendOptions, keepPerms
func (*SimpleImg) Changes(fromImage, toImage string) ([]compiler.ArtifactLayer, error) {
	return NewSimpleDockerBackend().Changes(fromImage, toImage)
}

func (*SimpleImg) Push(opts compiler.CompilerBackendOptions) error {
	name := opts.ImageName
	pusharg := []string{"push", name}
	out, err := exec.Command("img", pusharg...).CombinedOutput()
	if err != nil {
		return errors.Wrap(err, "Failed pushing image: "+string(out))
	}
	Info(":tea: Pushed image:", name)
	//Info(string(out))
	return nil
}
@@ -42,6 +42,7 @@ type LuetCompiler struct {
|
||||
PullFirst, KeepImg, Clean bool
|
||||
Concurrency int
|
||||
CompressionType CompressionImplementation
|
||||
Options CompilerOptions
|
||||
}
|
||||
|
||||
func NewLuetCompiler(backend CompilerBackend, db pkg.PackageDatabase, opt *CompilerOptions) Compiler {
|
||||
@@ -58,6 +59,7 @@ func NewLuetCompiler(backend CompilerBackend, db pkg.PackageDatabase, opt *Compi
|
||||
KeepImg: opt.KeepImg,
|
||||
Concurrency: opt.Concurrency,
|
||||
Clean: opt.Clean,
|
||||
Options: *opt,
|
||||
}
|
||||
}
|
||||
|
||||
@@ -254,6 +256,15 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
|
||||
return nil, errors.Wrap(err, "Could not copy package sources")
|
||||
|
||||
}
|
||||
|
||||
// Copy file into the build context, the compilespec might have requested to do so.
|
||||
if len(p.GetRetrieve()) > 0 {
|
||||
err := p.CopyRetrieves(buildDir)
|
||||
if err != nil {
|
||||
Warning("Failed copying retrieves", err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
if buildertaggedImage == "" {
|
||||
buildertaggedImage = cs.ImageRepository + "-" + p.GetPackage().GetFingerPrint() + "-builder"
|
||||
}
|
||||
@@ -261,7 +272,7 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
|
||||
packageImage = cs.ImageRepository + "-" + p.GetPackage().GetFingerPrint()
|
||||
}
|
||||
|
||||
if cs.PullFirst {
|
||||
if cs.Options.PullFirst {
|
||||
//Best effort pull
|
||||
cs.Backend.DownloadImage(CompilerBackendOptions{ImageName: buildertaggedImage})
|
||||
cs.Backend.DownloadImage(CompilerBackendOptions{ImageName: packageImage})
|
||||
@@ -288,6 +299,12 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
|
||||
return nil, errors.Wrap(err, "Could not export image")
|
||||
}
|
||||
|
||||
if cs.Options.Push {
|
||||
err = cs.Backend.Push(builderOpts)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
|
||||
}
|
||||
}
|
||||
// Then we write the step image, which uses the builder one
|
||||
p.WriteStepImageDefinition(buildertaggedImage, filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+".dockerfile"))
|
||||
runnerOpts := CompilerBackendOptions{
|
||||
@@ -309,6 +326,13 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
|
||||
if err := cs.Backend.ExportImage(runnerOpts); err != nil {
|
||||
return nil, errors.Wrap(err, "Failed exporting image")
|
||||
}
|
||||
|
||||
if cs.Options.Push {
|
||||
err = cs.Backend.Push(runnerOpts)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Could not push image: "+image+" "+builderOpts.DockerFileName)
|
||||
}
|
||||
}
|
||||
// }
|
||||
|
||||
var diffs []ArtifactLayer
|
||||
@@ -341,13 +365,11 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage, packageImage
|
||||
// TODO: Handle caching and optionally do not remove things
|
||||
err = cs.Backend.RemoveImage(builderOpts)
|
||||
if err != nil {
|
||||
// TODO: Have a --fatal flag which enables Warnings to exit.
|
||||
Warning("Could not remove image ", builderOpts.ImageName)
|
||||
// return nil, errors.Wrap(err, "Could not remove image")
|
||||
}
|
||||
err = cs.Backend.RemoveImage(runnerOpts)
|
||||
if err != nil {
|
||||
// TODO: Have a --fatal flag which enables Warnings to exit.
|
||||
Warning("Could not remove image ", builderOpts.ImageName)
|
||||
// return nil, errors.Wrap(err, "Could not remove image")
|
||||
}
|
||||
@@ -445,7 +467,6 @@ func (cs *LuetCompiler) packageFromImage(p CompilationSpec, tag string, keepPerm
|
||||
// TODO: Handle caching and optionally do not remove things
|
||||
err = cs.Backend.RemoveImage(builderOpts)
|
||||
if err != nil {
|
||||
// TODO: Have a --fatal flag which enables Warnings to exit.
|
||||
Warning("Could not remove image ", builderOpts.ImageName)
|
||||
// return nil, errors.Wrap(err, "Could not remove image")
|
||||
}
|
||||
@@ -462,7 +483,7 @@ func (cs *LuetCompiler) packageFromImage(p CompilationSpec, tag string, keepPerm
|
||||
|
||||
func (cs *LuetCompiler) ComputeDepTree(p CompilationSpec) (solver.PackagesAssertions, error) {
|
||||
|
||||
s := solver.NewSolver(pkg.NewInMemoryDatabase(false), cs.Database, pkg.NewInMemoryDatabase(false))
|
||||
s := solver.NewResolver(pkg.NewInMemoryDatabase(false), cs.Database, pkg.NewInMemoryDatabase(false), cs.Options.SolverOptions.Resolver())
|
||||
|
||||
solution, err := s.Install([]pkg.Package{p.GetPackage()})
|
||||
if err != nil {
|
||||
|
@@ -18,11 +18,15 @@ package compiler_test
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSolver(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Compiler Suite")
|
||||
}
|
||||
|
@@ -18,6 +18,7 @@ package compiler
|
||||
import (
|
||||
"runtime"
|
||||
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/solver"
|
||||
)
|
||||
@@ -43,17 +44,20 @@ type CompilerBackendOptions struct {
|
||||
}
|
||||
|
||||
type CompilerOptions struct {
|
||||
ImageRepository string
|
||||
PullFirst, KeepImg bool
|
||||
Concurrency int
|
||||
CompressionType CompressionImplementation
|
||||
Clean bool
|
||||
ImageRepository string
|
||||
PullFirst, KeepImg, Push bool
|
||||
Concurrency int
|
||||
CompressionType CompressionImplementation
|
||||
Clean bool
|
||||
|
||||
SolverOptions config.LuetSolverOptions
|
||||
}
|
||||
|
||||
func NewDefaultCompilerOptions() *CompilerOptions {
|
||||
return &CompilerOptions{
|
||||
ImageRepository: "luet/cache",
|
||||
PullFirst: true,
|
||||
PullFirst: false,
|
||||
Push: false,
|
||||
CompressionType: None,
|
||||
KeepImg: true,
|
||||
Concurrency: runtime.NumCPU(),
|
||||
@@ -71,6 +75,8 @@ type CompilerBackend interface {
|
||||
|
||||
CopyImage(string, string) error
|
||||
DownloadImage(opts CompilerBackendOptions) error
|
||||
|
||||
Push(opts CompilerBackendOptions) error
|
||||
}
|
||||
|
||||
type Artifact interface {
|
||||
@@ -90,6 +96,9 @@ type Artifact interface {
|
||||
FileList() ([]string, error)
|
||||
Hash() error
|
||||
Verify() error
|
||||
|
||||
GetChecksums() Checksums
|
||||
SetChecksums(c Checksums)
|
||||
}
|
||||
|
||||
type ArtifactNode struct {
|
||||
@@ -106,6 +115,19 @@ type ArtifactLayer struct {
|
||||
ToImage string `json:"Image2"`
|
||||
Diffs ArtifactDiffs `json:"Diff"`
|
||||
}
|
||||
type ArtifactLayerSummary struct {
|
||||
FromImage string `json:"image1"`
|
||||
ToImage string `json:"image2"`
|
||||
AddFiles int `json:"add_files"`
|
||||
AddSizes int64 `json:"add_sizes"`
|
||||
DelFiles int `json:"del_files"`
|
||||
DelSizes int64 `json:"del_sizes"`
|
||||
ChangeFiles int `json:"change_files"`
|
||||
ChangeSizes int64 `json:"change_sizes"`
|
||||
}
|
||||
type ArtifactLayersSummary struct {
|
||||
Layers []ArtifactLayerSummary `json:"summary"`
|
||||
}
|
||||
|
||||
// CompilationSpec represent a compilation specification derived from a package
|
||||
type CompilationSpec interface {
|
||||
@@ -135,6 +157,9 @@ type CompilationSpec interface {
|
||||
|
||||
GetSourceAssertion() solver.PackagesAssertions
|
||||
SetSourceAssertion(as solver.PackagesAssertions)
|
||||
|
||||
GetRetrieve() []string
|
||||
CopyRetrieves(dest string) error
|
||||
}
|
||||
|
||||
type CompilationSpecs interface {
|
||||
|
@@ -21,6 +21,7 @@ import (
|
||||
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/solver"
|
||||
"github.com/otiai10/copy"
|
||||
yaml "gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
@@ -95,6 +96,8 @@ type LuetCompilationSpec struct {
|
||||
Package *pkg.DefaultPackage `json:"package"`
|
||||
SourceAssertion solver.PackagesAssertions `json:"-"`
|
||||
|
||||
Retrieve []string `json:"retrieve"`
|
||||
|
||||
OutputPath string `json:"-"` // Where the build processfiles go
|
||||
Unpack bool `json:"unpack"`
|
||||
Includes []string `json:"includes"`
|
||||
@@ -136,6 +139,10 @@ func (cs *LuetCompilationSpec) GetIncludes() []string {
|
||||
return cs.Includes
|
||||
}
|
||||
|
||||
func (cs *LuetCompilationSpec) GetRetrieve() []string {
|
||||
return cs.Retrieve
|
||||
}
|
||||
|
||||
func (cs *LuetCompilationSpec) GetSeedImage() string {
|
||||
return cs.Seed
|
||||
}
|
||||
@@ -164,6 +171,24 @@ func (cs *LuetCompilationSpec) SetSeedImage(s string) {
|
||||
cs.Seed = s
|
||||
}
|
||||
|
||||
func (cs *LuetCompilationSpec) CopyRetrieves(dest string) error {
|
||||
var err error
|
||||
if len(cs.Retrieve) > 0 {
|
||||
for _, s := range cs.Retrieve {
|
||||
matches, err := filepath.Glob(cs.Rel(s))
|
||||
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
for _, m := range matches {
|
||||
err = copy.Copy(m, filepath.Join(dest, filepath.Base(m)))
|
||||
}
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// TODO: docker build image first. Then a backend can be used to actually spin up a container with it and run the steps within
|
||||
func (cs *LuetCompilationSpec) RenderBuildImage() (string, error) {
|
||||
spec := `
|
||||
@@ -174,6 +199,19 @@ ENV PACKAGE_NAME=` + cs.Package.GetName() + `
|
||||
ENV PACKAGE_VERSION=` + cs.Package.GetVersion() + `
|
||||
ENV PACKAGE_CATEGORY=` + cs.Package.GetCategory()
|
||||
|
||||
if len(cs.Retrieve) > 0 {
|
||||
for _, s := range cs.Retrieve {
|
||||
//var file string
|
||||
// if helpers.IsValidUrl(s) {
|
||||
// file = s
|
||||
// } else {
|
||||
// file = cs.Rel(s)
|
||||
// }
|
||||
spec = spec + `
|
||||
ADD ` + s + ` /luetbuild/`
|
||||
}
|
||||
}
|
||||
|
||||
for _, s := range cs.Env {
|
||||
spec = spec + `
|
||||
ENV ` + s
|
||||
|
@@ -105,4 +105,74 @@ RUN echo bar > /test2`))
|
||||
})
|
||||
|
||||
})
|
||||
|
||||
It("Renders retrieve and env fields", func() {
|
||||
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/retrieve")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
|
||||
|
||||
compiler := NewLuetCompiler(nil, generalRecipe.GetDatabase(), NewDefaultCompilerOptions())
|
||||
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.0"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
lspec, ok := spec.(*LuetCompilationSpec)
|
||||
Expect(ok).To(BeTrue())
|
||||
|
||||
Expect(lspec.Steps).To(Equal([]string{"echo foo > /test", "echo bar > /test2"}))
|
||||
Expect(lspec.Image).To(Equal("luet/base"))
|
||||
Expect(lspec.Seed).To(Equal("alpine"))
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(tmpdir) // clean up
|
||||
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
COPY . /luetbuild
|
||||
WORKDIR /luetbuild
|
||||
ENV PACKAGE_NAME=a
|
||||
ENV PACKAGE_VERSION=1.0
|
||||
ENV PACKAGE_CATEGORY=test
|
||||
ADD test /luetbuild/
|
||||
ADD http://www.google.com /luetbuild/
|
||||
ENV test=1`))
|
||||
|
||||
lspec.SetOutputPath("/foo/bar")
|
||||
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
COPY . /luetbuild
|
||||
WORKDIR /luetbuild
|
||||
ENV PACKAGE_NAME=a
|
||||
ENV PACKAGE_VERSION=1.0
|
||||
ENV PACKAGE_CATEGORY=test
|
||||
ADD test /luetbuild/
|
||||
ADD http://www.google.com /luetbuild/
|
||||
ENV test=1`))
|
||||
|
||||
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM luet/base
|
||||
ENV PACKAGE_NAME=a
|
||||
ENV PACKAGE_VERSION=1.0
|
||||
ENV PACKAGE_CATEGORY=test
|
||||
ENV test=1
|
||||
RUN echo foo > /test
|
||||
RUN echo bar > /test2`))
|
||||
|
||||
})
|
||||
})
|
||||
|
326 pkg/config/config.go Normal file
@@ -0,0 +1,326 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package config
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/user"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
solver "github.com/mudler/luet/pkg/solver"
|
||||
v "github.com/spf13/viper"
|
||||
)
|
||||
|
||||
var LuetCfg = NewLuetConfig(v.GetViper())
|
||||
var AvailableResolvers = strings.Join([]string{solver.QLearningResolverType}, " ")
|
||||
|
||||
type LuetLoggingConfig struct {
|
||||
Path string `mapstructure:"path"`
|
||||
JsonFormat bool `mapstructure:"json_format"`
|
||||
Level string `mapstructure:"level"`
|
||||
}
|
||||
|
||||
type LuetGeneralConfig struct {
|
||||
SameOwner bool `mapstructure:"same_owner"`
|
||||
Concurrency int `mapstructure:"concurrency"`
|
||||
Debug bool `mapstructure:"debug"`
|
||||
ShowBuildOutput bool `mapstructure:"show_build_output"`
|
||||
SpinnerMs int `mapstructure:"spinner_ms"`
|
||||
SpinnerCharset int `mapstructure:"spinner_charset"`
|
||||
FatalWarns bool `mapstructure:"fatal_warnings"`
|
||||
}
|
||||
|
||||
type LuetSolverOptions struct {
|
||||
Type string `mapstructure:"type"`
|
||||
LearnRate float32 `mapstructure:"rate"`
|
||||
Discount float32 `mapstructure:"discount"`
|
||||
MaxAttempts int `mapstructure:"max_attempts"`
|
||||
}
|
||||
|
||||
func (opts LuetSolverOptions) Resolver() solver.PackageResolver {
|
||||
switch opts.Type {
|
||||
case solver.QLearningResolverType:
|
||||
if opts.LearnRate != 0.0 {
|
||||
return solver.NewQLearningResolver(opts.LearnRate, opts.Discount, opts.MaxAttempts, 999999)
|
||||
|
||||
}
|
||||
return solver.SimpleQLearningSolver()
|
||||
}
|
||||
|
||||
return &solver.DummyPackageResolver{}
|
||||
}
|
||||
|
||||
func (opts *LuetSolverOptions) CompactString() string {
|
||||
return fmt.Sprintf("type: %s rate: %f, discount: %f, attempts: %d, initialobserved: %d",
|
||||
opts.Type, opts.LearnRate, opts.Discount, opts.MaxAttempts, 999999)
|
||||
}
|
||||
|
||||
type LuetSystemConfig struct {
|
||||
DatabaseEngine string `yaml:"database_engine" mapstructure:"database_engine"`
|
||||
DatabasePath string `yaml:"database_path" mapstructure:"database_path"`
|
||||
Rootfs string `yaml:"rootfs" mapstructure:"rootfs"`
|
||||
PkgsCachePath string `yaml:"pkgs_cache_path" mapstructure:"pkgs_cache_path"`
|
||||
}
|
||||
|
||||
func (sc LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
|
||||
dbpath := filepath.Join(sc.Rootfs, sc.DatabasePath)
|
||||
dbpath = filepath.Join(dbpath, "repos/"+name)
|
||||
err := os.MkdirAll(dbpath, os.ModePerm)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return dbpath
|
||||
}
|
||||
|
||||
func (sc LuetSystemConfig) GetSystemRepoDatabaseDirPath() string {
|
||||
dbpath := filepath.Join(sc.Rootfs,
|
||||
sc.DatabasePath)
|
||||
err := os.MkdirAll(dbpath, os.ModePerm)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return dbpath
|
||||
}
|
||||
|
||||
func (sc LuetSystemConfig) GetSystemPkgsCacheDirPath() (ans string) {
|
||||
var cachepath string
|
||||
if sc.PkgsCachePath != "" {
|
||||
cachepath = sc.PkgsCachePath
|
||||
} else {
|
||||
// Create dynamic cache for test suites
|
||||
cachepath, _ = ioutil.TempDir(os.TempDir(), "cachepkgs")
|
||||
}
|
||||
|
||||
if filepath.IsAbs(cachepath) {
|
||||
ans = cachepath
|
||||
} else {
|
||||
ans = filepath.Join(sc.GetSystemRepoDatabaseDirPath(), cachepath)
|
||||
}
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
type LuetRepository struct {
|
||||
Name string `json:"name" yaml:"name" mapstructure:"name"`
|
||||
Description string `json:"description,omitempty" yaml:"description,omitempty" mapstructure:"description"`
|
||||
Urls []string `json:"urls" yaml:"urls" mapstructure:"urls"`
|
||||
Type string `json:"type" yaml:"type" mapstructure:"type"`
|
||||
Mode string `json:"mode,omitempty" yaml:"mode,omitempty" mapstructure:"mode,omitempty"`
|
||||
Priority int `json:"priority,omitempty" yaml:"priority,omitempty" mapstructure:"priority"`
|
||||
Enable bool `json:"enable" yaml:"enable" mapstructure:"enable"`
|
||||
Cached bool `json:"cached,omitempty" yaml:"cached,omitempty" mapstructure:"cached,omitempty"`
|
||||
Authentication map[string]string `json:"auth,omitempty" yaml:"auth,omitempty" mapstructure:"auth,omitempty"`
|
||||
TreePath string `json:"tree_path,omitempty" yaml:"tree_path,omitempty" mapstructure:"tree_path"`
|
||||
|
||||
// Serialized options not used in repository configuration
|
||||
|
||||
// Incremented value that identify revision of the repository in a user-friendly way.
|
||||
Revision int `json:"revision,omitempty" yaml:"-,omitempty" mapstructure:"-,omitempty"`
|
||||
// Epoch time in seconds
|
||||
LastUpdate string `json:"last_update,omitempty" yaml:"-,omitempty" mapstructure:"-,omitempty"`
|
||||
}
|
||||
|
||||
func NewLuetRepository(name, t, descr string, urls []string, priority int, enable, cached bool) *LuetRepository {
|
||||
return &LuetRepository{
|
||||
Name: name,
|
||||
Description: descr,
|
||||
Urls: urls,
|
||||
Type: t,
|
||||
// Used in cached repositories
|
||||
Mode: "",
|
||||
Priority: priority,
|
||||
Enable: enable,
|
||||
Cached: cached,
|
||||
Authentication: make(map[string]string, 0),
|
||||
TreePath: "",
|
||||
}
|
||||
}
|
||||
|
||||
func NewEmptyLuetRepository() *LuetRepository {
|
||||
return &LuetRepository{
|
||||
Name: "",
|
||||
Description: "",
|
||||
Urls: []string{},
|
||||
Type: "",
|
||||
Priority: 9999,
|
||||
TreePath: "",
|
||||
Enable: false,
|
||||
Cached: false,
|
||||
Authentication: make(map[string]string, 0),
|
||||
}
|
||||
}
|
||||
|
||||
func (r *LuetRepository) String() string {
|
||||
return fmt.Sprintf("[%s] prio: %d, type: %s, enable: %t, cached: %t",
|
||||
r.Name, r.Priority, r.Type, r.Enable, r.Cached)
|
||||
}
|
||||
|
||||
type LuetConfig struct {
|
||||
Viper *v.Viper
|
||||
|
||||
Logging LuetLoggingConfig `mapstructure:"logging"`
|
||||
General LuetGeneralConfig `mapstructure:"general"`
|
||||
System LuetSystemConfig `mapstructure:"system"`
|
||||
Solver LuetSolverOptions `mapstructure:"solver"`
|
||||
|
||||
RepositoriesConfDir []string `mapstructure:"repos_confdir"`
|
||||
CacheRepositories []LuetRepository `mapstructure:"repetitors"`
|
||||
SystemRepositories []LuetRepository `mapstructure:"repositories"`
|
||||
}
|
||||
|
||||
func NewLuetConfig(viper *v.Viper) *LuetConfig {
|
||||
if viper == nil {
|
||||
viper = v.New()
|
||||
}
|
||||
|
||||
GenDefault(viper)
|
||||
return &LuetConfig{Viper: viper}
|
||||
}
|
||||
|
||||
func GenDefault(viper *v.Viper) {
|
||||
viper.SetDefault("logging.level", "info")
|
||||
viper.SetDefault("logging.path", "")
|
||||
viper.SetDefault("logging.json_format", false)
|
||||
|
||||
viper.SetDefault("general.concurrency", runtime.NumCPU())
|
||||
viper.SetDefault("general.debug", false)
|
||||
viper.SetDefault("general.show_build_output", false)
|
||||
viper.SetDefault("general.spinner_ms", 100)
|
||||
viper.SetDefault("general.spinner_charset", 22)
|
||||
viper.SetDefault("general.fatal_warnings", false)
|
||||
|
||||
u, _ := user.Current()
|
||||
if u.Uid == "0" {
|
||||
viper.SetDefault("general.same_owner", true)
|
||||
} else {
|
||||
viper.SetDefault("general.same_owner", false)
|
||||
}
|
||||
|
||||
viper.SetDefault("system.database_engine", "boltdb")
|
||||
viper.SetDefault("system.database_path", "/var/cache/luet")
|
||||
viper.SetDefault("system.rootfs", "/")
|
||||
viper.SetDefault("system.pkgs_cache_path", "packages")
|
||||
|
||||
viper.SetDefault("repos_confdir", []string{"/etc/luet/repos.conf.d"})
|
||||
viper.SetDefault("cache_repositories", []string{})
|
||||
viper.SetDefault("system_repositories", []string{})
|
||||
|
||||
viper.SetDefault("solver.type", "")
|
||||
viper.SetDefault("solver.rate", 0.7)
|
||||
viper.SetDefault("solver.discount", 1.0)
|
||||
viper.SetDefault("solver.max_attempts", 9000)
|
||||
}
|
||||
|
||||
func (c *LuetConfig) AddSystemRepository(r LuetRepository) {
    c.SystemRepositories = append(c.SystemRepositories, r)
}

func (c *LuetConfig) GetLogging() *LuetLoggingConfig {
    return &c.Logging
}

func (c *LuetConfig) GetGeneral() *LuetGeneralConfig {
    return &c.General
}

func (c *LuetConfig) GetSystem() *LuetSystemConfig {
    return &c.System
}

func (c *LuetConfig) GetSolverOptions() *LuetSolverOptions {
    return &c.Solver
}

func (c *LuetConfig) GetSystemRepository(name string) (*LuetRepository, error) {
    var ans *LuetRepository = nil

    for idx, repo := range c.SystemRepositories {
        if repo.Name == name {
            ans = &c.SystemRepositories[idx]
            break
        }
    }
    if ans == nil {
        return nil, errors.New("Repository " + name + " not found")
    }

    return ans, nil
}
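// Illustrative sketch (assumed helper, not from the changeset above): adding a
// repository to the configuration and fetching it back by name with the
// accessors defined above. The repository name "main" is a placeholder.
func exampleRepositoryLookup(c *LuetConfig) {
    repo := NewEmptyLuetRepository()
    repo.Name = "main"
    c.AddSystemRepository(*repo)

    if r, err := c.GetSystemRepository("main"); err == nil {
        fmt.Println(r.String()) // "[main] prio: 9999, type: , enable: false, cached: false"
    }
}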
func (c *LuetSolverOptions) String() string {
    ans := fmt.Sprintf(`
solver:
  type: %s
  rate: %f
  discount: %f
  max_attempts: %d`, c.Type, c.LearnRate, c.Discount,
        c.MaxAttempts)

    return ans
}

func (c *LuetGeneralConfig) String() string {
    ans := fmt.Sprintf(`
general:
  concurrency: %d
  same_owner: %t
  debug: %t
  fatal_warnings: %t
  show_build_output: %t
  spinner_ms: %d
  spinner_charset: %d`, c.Concurrency, c.SameOwner, c.Debug,
        c.FatalWarns, c.ShowBuildOutput,
        c.SpinnerMs, c.SpinnerCharset)

    return ans
}

func (c *LuetGeneralConfig) GetSpinnerMs() time.Duration {
    duration, err := time.ParseDuration(fmt.Sprintf("%dms", c.SpinnerMs))
    if err != nil {
        return 100 * time.Millisecond
    }
    return duration
}

func (c *LuetLoggingConfig) String() string {
    ans := fmt.Sprintf(`
logging:
  path: %s
  json_format: %t
  level: %s`, c.Path, c.JsonFormat, c.Level)

    return ans
}

func (c *LuetSystemConfig) String() string {
    ans := fmt.Sprintf(`
system:
  database_engine: %s
  database_path: %s
  pkgs_cache_path: %s
  rootfs: %s`,
        c.DatabaseEngine, c.DatabasePath, c.PkgsCachePath, c.Rootfs)

    return ans
}
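// Illustrative sketch (assumed helper, not from the changeset above): the
// String() methods above each return a YAML-like fragment, so a readable dump
// of the whole configuration can be assembled by concatenating the sections.
func exampleConfigDump(c *LuetConfig) string {
    return c.GetGeneral().String() +
        c.GetLogging().String() +
        c.GetSystem().String() +
        c.GetSolverOptions().String()
}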
@@ -16,8 +16,10 @@
package helpers

import (
    "archive/tar"
    "io"
    "os"
    "path/filepath"

    "github.com/docker/docker/pkg/archive"
)
@@ -49,14 +51,82 @@ func Tar(src, dest string) error {

// Untar is just a wrapper around the docker functions
func Untar(src, dest string, sameOwner bool) error {
    var ans error

    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()

    return archive.Untar(in, dest, &archive.TarOptions{
        NoLchown:        !sameOwner,
        ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
    })
    if sameOwner {
        // PRE: we have root privileges.

        opts := &archive.TarOptions{
            // NOTE: NoLchown boolean is used for chmod of the symlink
            // It probably needs to always be set to true.
            NoLchown:        true,
            ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
        }

        ans = archive.Untar(in, dest, opts)
    } else {

        var fileReader io.ReadCloser = in

        tr := tar.NewReader(fileReader)
        for {
            header, err := tr.Next()

            switch {
            case err == io.EOF:
                goto tarEof
            case err != nil:
                return err
            case header == nil:
                continue
            }

            // the target location where the dir/file should be created
            target := filepath.Join(dest, header.Name)

            // Check the file type
            switch header.Typeflag {

            // if it's a dir and it doesn't exist, create it
            case tar.TypeDir:
                if _, err := os.Stat(target); err != nil {
                    if err := os.MkdirAll(target, 0755); err != nil {
                        return err
                    }
                }

            // handle creation of file
            case tar.TypeReg:
                f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
                if err != nil {
                    return err
                }

                // copy over contents
                if _, err := io.Copy(f, tr); err != nil {
                    return err
                }

                // manually close here after each file operation; deferring would cause each
                // file close to wait until all operations have completed.
                f.Close()

            case tar.TypeSymlink:
                source := header.Linkname
                err := os.Symlink(source, target)
                if err != nil {
                    return err
                }
            }
        }
    tarEof:
    }

    return ans
}
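// Illustrative sketch (assumed helper and paths, not from the changeset above):
// calling the Untar helper. With sameOwner set (typically when running as
// root) the docker archive package preserves ownership; otherwise the plain
// tar.Reader path above extracts files as the current user.
func exampleUntar() error {
    sameOwner := os.Getuid() == 0
    return Untar("/tmp/example.package.tar", "/tmp/example-rootfs", sameOwner)
}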
@@ -18,11 +18,15 @@ package helpers_test
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSolver(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Helpers Suite")
|
||||
}
|
||||
|
pkg/helpers/math.go (new file, 24 lines)
@@ -0,0 +1,24 @@
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package helpers

func Factorial(n uint64) (result uint64) {
    if n > 0 {
        result = n * Factorial(n-1)
        return result
    }
    return 1
}
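// Note on the helper above: Factorial recurses down to zero, e.g.
// Factorial(5) == 120. The uint64 result overflows silently for n > 20, so
// callers are expected to keep inputs small.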
@@ -18,11 +18,19 @@ package client_test
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestClient(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
// Set temporary directory for rootfs
|
||||
config.LuetCfg.GetSystem().Rootfs = "/tmp/luet-root"
|
||||
// Force dynamic path for packages cache
|
||||
config.LuetCfg.GetSystem().PkgsCachePath = ""
|
||||
RunSpecs(t, "Client Suite")
|
||||
}
|
||||
|
@@ -16,7 +16,9 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"net/url"
|
||||
"os"
|
||||
"path"
|
||||
@@ -25,6 +27,7 @@ import (
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
|
||||
"github.com/cavaliercoder/grab"
|
||||
@@ -38,61 +41,151 @@ func NewHttpClient(r RepoData) *HttpClient {
|
||||
return &HttpClient{RepoData: r}
|
||||
}
|
||||
|
||||
func (c *HttpClient) PrepareReq(dst, url string) (*grab.Request, error) {
|
||||
|
||||
req, err := grab.NewRequest(dst, url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if val, ok := c.RepoData.Authentication["token"]; ok {
|
||||
req.HTTPRequest.Header.Set("Authorization", "token "+val)
|
||||
} else if val, ok := c.RepoData.Authentication["basic"]; ok {
|
||||
req.HTTPRequest.Header.Set("Authorization", "Basic "+val)
|
||||
}
|
||||
|
||||
return req, err
|
||||
}
|
||||
|
||||
func Round(input float64) float64 {
|
||||
if input < 0 {
|
||||
return math.Ceil(input - 0.5)
|
||||
}
|
||||
return math.Floor(input + 0.5)
|
||||
}
|
||||
|
||||
func (c *HttpClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
|
||||
var u *url.URL = nil
|
||||
var err error
|
||||
var req *grab.Request
|
||||
var temp string
|
||||
|
||||
artifactName := path.Base(artifact.GetPath())
|
||||
Info("Downloading artifact", artifactName, "from", c.RepoData.Uri)
|
||||
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
|
||||
ok := false
|
||||
|
||||
temp, err := ioutil.TempDir(os.TempDir(), "tree")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
// Check if file is already in cache
|
||||
if helpers.Exists(cacheFile) {
|
||||
Info("Use artifact", artifactName, "from cache.")
|
||||
} else {
|
||||
|
||||
temp, err = ioutil.TempDir(os.TempDir(), "tree")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer os.RemoveAll(temp)
|
||||
|
||||
client := grab.NewClient()
|
||||
|
||||
for _, uri := range c.RepoData.Urls {
|
||||
Info("Downloading artifact", artifactName, "from", uri)
|
||||
|
||||
u, err = url.Parse(uri)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
u.Path = path.Join(u.Path, artifactName)
|
||||
|
||||
req, err = c.PrepareReq(temp, u.String())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
resp := client.Do(req)
|
||||
if err = resp.Err(); err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
Info("Downloaded", artifactName, "of",
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
|
||||
|
||||
Debug("Copying file ", filepath.Join(temp, artifactName), "to", cacheFile)
|
||||
err = helpers.CopyFile(filepath.Join(temp, artifactName), cacheFile)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
|
||||
if !ok {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
file, err := ioutil.TempFile(temp, "HttpClient")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
u, err := url.Parse(c.RepoData.Uri)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
u.Path = path.Join(u.Path, artifactName)
|
||||
|
||||
_, err = grab.Get(temp, u.String())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(temp, artifactName), file.Name())
|
||||
newart := artifact
|
||||
newart.SetPath(file.Name())
|
||||
newart.SetPath(cacheFile)
|
||||
return newart, nil
|
||||
}
|
||||
|
||||
func (c *HttpClient) DownloadFile(name string) (string, error) {
|
||||
temp, err := ioutil.TempDir(os.TempDir(), "tree")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
file, err := ioutil.TempFile(os.TempDir(), "HttpClient")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
//defer os.Remove(file.Name())
|
||||
u, err := url.Parse(c.RepoData.Uri)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
u.Path = path.Join(u.Path, name)
|
||||
var file *os.File = nil
|
||||
var u *url.URL = nil
|
||||
var err error
|
||||
var req *grab.Request
|
||||
var temp string
|
||||
|
||||
Info("Downloading", u.String())
|
||||
ok := false
|
||||
|
||||
_, err = grab.Get(temp, u.String())
|
||||
temp, err = ioutil.TempDir(os.TempDir(), "tree")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
client := grab.NewClient()
|
||||
|
||||
for _, uri := range c.RepoData.Urls {
|
||||
|
||||
file, err = ioutil.TempFile(os.TempDir(), "HttpClient")
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
u, err = url.Parse(uri)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
u.Path = path.Join(u.Path, name)
|
||||
|
||||
Info("Downloading", u.String())
|
||||
|
||||
req, err = c.PrepareReq(temp, u.String())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
resp := client.Do(req)
|
||||
if err = resp.Err(); err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
Info("Downloaded", filepath.Base(resp.Filename), "of",
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
|
||||
if !ok {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return file.Name(), err
|
||||
}
|
||||
|
@@ -44,7 +44,7 @@ var _ = Describe("Http client", func() {
|
||||
err = ioutil.WriteFile(filepath.Join(tmpdir, "test.txt"), []byte(`test`), os.ModePerm)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
c := NewHttpClient(RepoData{Uri: ts.URL})
|
||||
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
|
||||
path, err := c.DownloadFile("test.txt")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path)).To(Equal("test"))
|
||||
@@ -62,7 +62,7 @@ var _ = Describe("Http client", func() {
|
||||
err = ioutil.WriteFile(filepath.Join(tmpdir, "test.txt"), []byte(`test`), os.ModePerm)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
c := NewHttpClient(RepoData{Uri: ts.URL})
|
||||
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
|
||||
path, err := c.DownloadArtifact(&compiler.PackageArtifact{Path: "test.txt"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path.GetPath())).To(Equal("test"))
|
||||
|
@@ -16,5 +16,6 @@
package client

type RepoData struct {
    Uri string
}
    Urls           []string
    Authentication map[string]string
}
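// Illustrative sketch (assumed helper and values, not from the changeset
// above): building a RepoData by hand. Urls are tried in order by the clients,
// and Authentication is consumed by HttpClient.PrepareReq, which recognizes
// the "token" and "basic" keys shown earlier in this changeset.
func exampleRepoData() RepoData {
    return RepoData{
        Urls: []string{"https://example.org/luet/repo"},
        Authentication: map[string]string{
            "token": "s3cr3t", // sent as "Authorization: token s3cr3t"
        },
    }
}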
|
||||
|
@@ -21,6 +21,7 @@ import (
|
||||
"path"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler"
|
||||
@@ -36,29 +37,62 @@ func NewLocalClient(r RepoData) *LocalClient {
|
||||
}
|
||||
|
||||
func (c *LocalClient) DownloadArtifact(artifact compiler.Artifact) (compiler.Artifact, error) {
|
||||
artifactName := path.Base(artifact.GetPath())
|
||||
Info("Downloading artifact", artifactName, "from", c.RepoData.Uri)
|
||||
file, err := ioutil.TempFile(os.TempDir(), "localclient")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
//defer os.Remove(file.Name())
|
||||
var err error
|
||||
|
||||
artifactName := path.Base(artifact.GetPath())
|
||||
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
|
||||
|
||||
// Check if file is already in cache
|
||||
if helpers.Exists(cacheFile) {
|
||||
Info("Use artifact", artifactName, "from cache.")
|
||||
} else {
|
||||
ok := false
|
||||
for _, uri := range c.RepoData.Urls {
|
||||
Info("Downloading artifact", artifactName, "from", uri)
|
||||
|
||||
//defer os.Remove(file.Name())
|
||||
err = helpers.CopyFile(filepath.Join(uri, artifactName), cacheFile)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
|
||||
if !ok {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(c.RepoData.Uri, artifactName), file.Name())
|
||||
newart := artifact
|
||||
newart.SetPath(file.Name())
|
||||
newart.SetPath(cacheFile)
|
||||
return newart, nil
|
||||
}
|
||||
|
||||
func (c *LocalClient) DownloadFile(name string) (string, error) {
|
||||
Info("Downloading file", name, "from", c.RepoData.Uri)
|
||||
var err error
|
||||
var file *os.File = nil
|
||||
|
||||
file, err := ioutil.TempFile(os.TempDir(), "localclient")
|
||||
if err != nil {
|
||||
return "", err
|
||||
ok := false
|
||||
for _, uri := range c.RepoData.Urls {
|
||||
Info("Downloading file", name, "from", uri)
|
||||
file, err = ioutil.TempFile(os.TempDir(), "localclient")
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
//defer os.Remove(file.Name())
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(uri, name), file.Name())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
//defer os.Remove(file.Name())
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(c.RepoData.Uri, name), file.Name())
|
||||
if ok {
|
||||
return file.Name(), nil
|
||||
}
|
||||
|
||||
return file.Name(), err
|
||||
return "", err
|
||||
}
|
||||
|
@@ -39,7 +39,7 @@ var _ = Describe("Local client", func() {
|
||||
err = ioutil.WriteFile(filepath.Join(tmpdir, "test.txt"), []byte(`test`), os.ModePerm)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
c := NewLocalClient(RepoData{Uri: tmpdir})
|
||||
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
|
||||
path, err := c.DownloadFile("test.txt")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path)).To(Equal("test"))
|
||||
@@ -55,7 +55,7 @@ var _ = Describe("Local client", func() {
|
||||
err = ioutil.WriteFile(filepath.Join(tmpdir, "test.txt"), []byte(`test`), os.ModePerm)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
c := NewLocalClient(RepoData{Uri: tmpdir})
|
||||
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
|
||||
path, err := c.DownloadArtifact(&compiler.PackageArtifact{Path: "test.txt"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path.GetPath())).To(Equal("test"))
|
||||
|
@@ -25,6 +25,7 @@ import (
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
compiler "github.com/mudler/luet/pkg/compiler"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
@@ -34,9 +35,15 @@ import (
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type LuetInstallerOptions struct {
|
||||
SolverOptions config.LuetSolverOptions
|
||||
Concurrency int
|
||||
}
|
||||
|
||||
type LuetInstaller struct {
|
||||
PackageRepositories Repositories
|
||||
Concurrency int
|
||||
|
||||
Options LuetInstallerOptions
|
||||
}
|
||||
|
||||
type ArtifactMatch struct {
|
||||
@@ -86,32 +93,21 @@ func NewLuetFinalizerFromYaml(data []byte) (*LuetFinalizer, error) {
|
||||
return &p, err
|
||||
}
|
||||
|
||||
func NewLuetInstaller(concurrency int) Installer {
|
||||
return &LuetInstaller{Concurrency: concurrency}
|
||||
func NewLuetInstaller(opts LuetInstallerOptions) Installer {
|
||||
return &LuetInstaller{Options: opts}
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) Upgrade(s *System) error {
|
||||
Spinner(32)
|
||||
defer SpinnerStop()
|
||||
syncedRepos := Repositories{}
|
||||
for _, r := range l.PackageRepositories {
|
||||
repo, err := r.Sync()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Failed syncing repository: "+r.GetName())
|
||||
}
|
||||
syncedRepos = append(syncedRepos, repo)
|
||||
syncedRepos, err := l.SyncRepositories(true)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// compute what to install and from where
|
||||
sort.Sort(syncedRepos)
|
||||
|
||||
// First match packages against repositories by priority
|
||||
// matches := syncedRepos.PackageMatches(p)
|
||||
|
||||
// compute a "big" world
|
||||
allRepos := pkg.NewInMemoryDatabase(false)
|
||||
syncedRepos.SyncDatabase(allRepos)
|
||||
solv := solver.NewSolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false))
|
||||
// compute a "big" world
|
||||
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
|
||||
uninstall, solution, err := solv.Upgrade()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Failed solving solution for upgrade")
|
||||
@@ -134,8 +130,29 @@ func (l *LuetInstaller) Upgrade(s *System) error {
|
||||
return l.Install(toInstall, s)
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
|
||||
func (l *LuetInstaller) SyncRepositories(inMemory bool) (Repositories, error) {
|
||||
Spinner(32)
|
||||
defer SpinnerStop()
|
||||
syncedRepos := Repositories{}
|
||||
for _, r := range l.PackageRepositories {
|
||||
repo, err := r.Sync(false)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Failed syncing repository: "+r.GetName())
|
||||
}
|
||||
syncedRepos = append(syncedRepos, repo)
|
||||
}
|
||||
|
||||
// compute what to install and from where
|
||||
sort.Sort(syncedRepos)
|
||||
|
||||
if !inMemory {
|
||||
l.PackageRepositories = syncedRepos
|
||||
}
|
||||
|
||||
return syncedRepos, nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
|
||||
var p []pkg.Package
|
||||
|
||||
// Check if the package is installed first
|
||||
@@ -156,35 +173,26 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
|
||||
Warning("No package to install, bailing out with no errors")
|
||||
return nil
|
||||
}
|
||||
|
||||
// First get metas from all repos (and decodes trees)
|
||||
|
||||
Spinner(32)
|
||||
defer SpinnerStop()
|
||||
syncedRepos := Repositories{}
|
||||
for _, r := range l.PackageRepositories {
|
||||
repo, err := r.Sync()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Failed syncing repository: "+r.GetName())
|
||||
}
|
||||
syncedRepos = append(syncedRepos, repo)
|
||||
syncedRepos, err := l.SyncRepositories(true)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// compute what to install and from where
|
||||
sort.Sort(syncedRepos)
|
||||
|
||||
// First match packages against repositories by priority
|
||||
// matches := syncedRepos.PackageMatches(p)
|
||||
|
||||
// compute a "big" world
|
||||
allRepos := pkg.NewInMemoryDatabase(false)
|
||||
syncedRepos.SyncDatabase(allRepos)
|
||||
p = syncedRepos.ResolveSelectors(p)
|
||||
|
||||
solv := solver.NewSolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false))
|
||||
solv := solver.NewResolver(s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
|
||||
solution, err := solv.Install(p)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Failed solving solution for package")
|
||||
}
|
||||
|
||||
// Gathers things to install
|
||||
toInstall := map[string]ArtifactMatch{}
|
||||
for _, assertion := range solution {
|
||||
@@ -214,7 +222,7 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
|
||||
all := make(chan ArtifactMatch)
|
||||
|
||||
var wg = new(sync.WaitGroup)
|
||||
for i := 0; i < l.Concurrency; i++ {
|
||||
for i := 0; i < l.Options.Concurrency; i++ {
|
||||
wg.Add(1)
|
||||
go l.installerWorker(i, wg, all, s)
|
||||
}
|
||||
@@ -284,10 +292,10 @@ func (l *LuetInstaller) Install(cp []pkg.Package, s *System) error {
|
||||
|
||||
func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
|
||||
|
||||
Info("Installing", a.Package.GetName())
|
||||
|
||||
artifact, err := a.Repository.Client().DownloadArtifact(a.Artifact)
|
||||
defer os.Remove(artifact.GetPath())
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error on download artifact")
|
||||
}
|
||||
|
||||
err = artifact.Verify()
|
||||
if err != nil {
|
||||
@@ -296,7 +304,7 @@ func (l *LuetInstaller) installPackage(a ArtifactMatch, s *System) error {
|
||||
|
||||
files, err := artifact.FileList()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Could not get file list")
|
||||
return errors.Wrap(err, "Could not open package archive")
|
||||
}
|
||||
|
||||
err = artifact.Unpack(s.Target, true)
|
||||
@@ -356,8 +364,7 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
func (l *LuetInstaller) Uninstall(p pkg.Package, s *System) error {
|
||||
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) - mark the uninstallation in db
|
||||
// Get installed definition
|
||||
|
||||
solv := solver.NewSolver(s.Database, s.Database, pkg.NewInMemoryDatabase(false))
|
||||
solv := solver.NewResolver(s.Database, s.Database, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
|
||||
solution, err := solv.Uninstall(p)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Uninstall failed")
|
||||
|
@@ -18,11 +18,19 @@ package installer_test
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestInstaller(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
// Set temporary directory for rootfs
|
||||
config.LuetCfg.GetSystem().Rootfs = "/tmp/luet-root"
|
||||
// Force dynamic path for packages cache
|
||||
config.LuetCfg.GetSystem().PkgsCachePath = ""
|
||||
RunSpecs(t, "Installer Suite")
|
||||
}
|
||||
|
@@ -34,7 +34,7 @@ import (
|
||||
var _ = Describe("Installer", func() {
|
||||
Context("Writes a repository definition", func() {
|
||||
It("Writes a repo and can install packages from it", func() {
|
||||
//repo:=NewLuetRepository()
|
||||
//repo:=NewLuetSystemRepository()
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -82,34 +82,35 @@ var _ = Describe("Installer", func() {
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
repo, err := GenerateRepository("test", tmpdir, "local", 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir)
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).To(BeTrue())
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
fakeroot, err := ioutil.TempDir("", "fakeroot")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(fakeroot) // clean up
|
||||
|
||||
inst := NewLuetInstaller(1)
|
||||
repo2, err := NewLuetRepositoryFromYaml([]byte(`
|
||||
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
|
||||
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
|
||||
name: "test"
|
||||
type: "local"
|
||||
uri: "`+tmpdir+`"
|
||||
type: "disk"
|
||||
urls:
|
||||
- "`+tmpdir+`"
|
||||
`), pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
inst.Repositories(Repositories{repo2})
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
systemDB := pkg.NewInMemoryDatabase(false)
|
||||
system := &System{Database: systemDB, Target: fakeroot}
|
||||
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
|
||||
@@ -146,7 +147,7 @@ uri: "`+tmpdir+`"
|
||||
|
||||
Context("Installation", func() {
|
||||
It("Installs in a system with a persistent db", func() {
|
||||
//repo:=NewLuetRepository()
|
||||
//repo:=NewLuetSystemRepository()
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -194,34 +195,35 @@ uri: "`+tmpdir+`"
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
repo, err := GenerateRepository("test", tmpdir, "local", 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir)
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).To(BeTrue())
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
fakeroot, err := ioutil.TempDir("", "fakeroot")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(fakeroot) // clean up
|
||||
|
||||
inst := NewLuetInstaller(1)
|
||||
repo2, err := NewLuetRepositoryFromYaml([]byte(`
|
||||
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
|
||||
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
|
||||
name: "test"
|
||||
type: "local"
|
||||
uri: "`+tmpdir+`"
|
||||
type: "disk"
|
||||
urls:
|
||||
- "`+tmpdir+`"
|
||||
`), pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
inst.Repositories(Repositories{repo2})
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
bolt, err := ioutil.TempDir("", "db")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -264,7 +266,7 @@ uri: "`+tmpdir+`"
|
||||
|
||||
Context("Simple upgrades", func() {
|
||||
It("Installs packages and Upgrades a system with a persistent db", func() {
|
||||
//repo:=NewLuetRepository()
|
||||
//repo:=NewLuetSystemRepository()
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -301,34 +303,35 @@ uri: "`+tmpdir+`"
|
||||
|
||||
Expect(errs).To(BeEmpty())
|
||||
|
||||
repo, err := GenerateRepository("test", tmpdir, "local", 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
|
||||
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir)
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).To(BeTrue())
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
fakeroot, err := ioutil.TempDir("", "fakeroot")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(fakeroot) // clean up
|
||||
|
||||
inst := NewLuetInstaller(1)
|
||||
repo2, err := NewLuetRepositoryFromYaml([]byte(`
|
||||
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
|
||||
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
|
||||
name: "test"
|
||||
type: "local"
|
||||
uri: "`+tmpdir+`"
|
||||
type: "disk"
|
||||
urls:
|
||||
- "`+tmpdir+`"
|
||||
`), pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
inst.Repositories(Repositories{repo2})
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
bolt, err := ioutil.TempDir("", "db")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -377,7 +380,7 @@ uri: "`+tmpdir+`"
|
||||
|
||||
Context("Compressed packages", func() {
|
||||
It("Installs", func() {
|
||||
//repo:=NewLuetRepository()
|
||||
//repo:=NewLuetSystemRepository()
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -413,36 +416,37 @@ uri: "`+tmpdir+`"
|
||||
|
||||
Expect(errs).To(BeEmpty())
|
||||
|
||||
repo, err := GenerateRepository("test", tmpdir, "local", 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
|
||||
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/upgrade", pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir)
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).To(BeTrue())
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
fakeroot, err := ioutil.TempDir("", "fakeroot")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(fakeroot) // clean up
|
||||
|
||||
inst := NewLuetInstaller(1)
|
||||
repo2, err := NewLuetRepositoryFromYaml([]byte(`
|
||||
inst := NewLuetInstaller(LuetInstallerOptions{Concurrency: 1})
|
||||
repo2, err := NewLuetSystemRepositoryFromYaml([]byte(`
|
||||
name: "test"
|
||||
type: "local"
|
||||
uri: "`+tmpdir+`"
|
||||
type: "disk"
|
||||
urls:
|
||||
- "`+tmpdir+`"
|
||||
`), pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
inst.Repositories(Repositories{repo2})
|
||||
Expect(repo.GetUri()).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("local"))
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
bolt, err := ioutil.TempDir("", "db")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
@@ -27,6 +27,7 @@ type Installer interface {
    Uninstall(pkg.Package, *System) error
    Upgrade(s *System) error
    Repositories([]Repository)
    SyncRepositories(bool) (Repositories, error)
}

type Client interface {
@@ -38,17 +39,30 @@ type Repositories []Repository

type Repository interface {
    GetName() string
    GetUri() string
    SetUri(string)
    GetDescription() string
    GetUrls() []string
    SetUrls([]string)
    AddUrl(string)
    GetPriority() int
    GetIndex() compiler.ArtifactIndex
    GetTree() tree.Builder
    SetTree(tree.Builder)
    Write(path string) error
    Sync() (Repository, error)
    Write(path string, resetRevision bool) error
    Sync(bool) (Repository, error)
    GetTreePath() string
    SetTreePath(string)
    GetType() string
    SetType(string)
    SetAuthentication(map[string]string)
    GetAuthentication() map[string]string
    GetRevision() int
    IncrementRevision()
    GetLastUpdate() string
    SetLastUpdate(string)
    Client() Client

    GetTreeChecksums() compiler.Checksums
    GetTreeCompressionType() compiler.CompressionImplementation
    SetTreeCompressionType(c compiler.CompressionImplementation)
    SetTreeChecksums(c compiler.Checksums)
}
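// Illustrative sketch (assumed helper, not from the changeset above): the
// typical flow through the widened Repository interface. Sync(false) honours
// the cached revision check introduced in this changeset, while Sync(true)
// forces a refresh.
func exampleSync(r Repository) (Repository, error) {
    synced, err := r.Sync(false)
    if err != nil {
        return nil, err
    }
    // After a sync the repository exposes revision metadata and a Client()
    // able to download the artifacts listed in GetIndex().
    _ = synced.GetRevision()
    _ = synced.GetLastUpdate()
    return synced, nil
}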
|
||||
|
@@ -16,42 +16,60 @@
|
||||
package installer
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/mudler/luet/pkg/installer/client"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
"github.com/mudler/luet/pkg/compiler"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
"github.com/mudler/luet/pkg/installer/client"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
tree "github.com/mudler/luet/pkg/tree"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
. "github.com/logrusorgru/aurora"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type LuetRepository struct {
|
||||
Name string `json:"name"`
|
||||
Uri string `json:"uri"`
|
||||
Priority int `json:"priority"`
|
||||
Index compiler.ArtifactIndex `json:"index"`
|
||||
Tree tree.Builder `json:"-"`
|
||||
TreePath string `json:"-"`
|
||||
Type string `json:"type"`
|
||||
const (
|
||||
REPOSITORY_SPECFILE = "repository.yaml"
|
||||
TREE_TARBALL = "tree.tar"
|
||||
)
|
||||
|
||||
type LuetSystemRepository struct {
|
||||
*config.LuetRepository
|
||||
|
||||
Index compiler.ArtifactIndex `json:"index"`
|
||||
Tree tree.Builder `json:"-"`
|
||||
TreePath string `json:"treepath"`
|
||||
TreeCompressionType compiler.CompressionImplementation `json:"treecompressiontype"`
|
||||
TreeChecksums compiler.Checksums `json:"treechecksums"`
|
||||
}
|
||||
|
||||
type LuetRepositorySerialized struct {
|
||||
Name string `json:"name"`
|
||||
Uri string `json:"uri"`
|
||||
Priority int `json:"priority"`
|
||||
Index []*compiler.PackageArtifact `json:"index"`
|
||||
Type string `json:"type"`
|
||||
type LuetSystemRepositorySerialized struct {
|
||||
Name string `json:"name"`
|
||||
Description string `json:"description,omitempty"`
|
||||
Urls []string `json:"urls"`
|
||||
Priority int `json:"priority"`
|
||||
Index []*compiler.PackageArtifact `json:"index"`
|
||||
Type string `json:"type"`
|
||||
Revision int `json:"revision,omitempty"`
|
||||
LastUpdate string `json:"last_update,omitempty"`
|
||||
TreePath string `json:"treepath"`
|
||||
TreeCompressionType compiler.CompressionImplementation `json:"treecompressiontype"`
|
||||
TreeChecksums compiler.Checksums `json:"treechecksums"`
|
||||
}
|
||||
|
||||
func GenerateRepository(name, uri, t string, priority int, src, treeDir string, db pkg.PackageDatabase) (Repository, error) {
|
||||
func GenerateRepository(name, descr, t string, urls []string, priority int, src, treeDir string, db pkg.PackageDatabase) (Repository, error) {
|
||||
|
||||
art, err := buildPackageIndex(src)
|
||||
if err != nil {
|
||||
@@ -63,24 +81,51 @@ func GenerateRepository(name, uri, t string, priority int, src, treeDir string,
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return NewLuetRepository(name, uri, t, priority, art, tr), nil
|
||||
return NewLuetSystemRepository(
|
||||
config.NewLuetRepository(name, t, descr, urls, priority, true, false),
|
||||
art, tr), nil
|
||||
}
|
||||
|
||||
func NewLuetRepository(name, uri, t string, priority int, art []compiler.Artifact, builder tree.Builder) Repository {
|
||||
return &LuetRepository{Index: art, Type: t, Tree: builder, Name: name, Uri: uri, Priority: priority}
|
||||
func NewSystemRepository(repo config.LuetRepository) Repository {
|
||||
return &LuetSystemRepository{
|
||||
LuetRepository: &repo,
|
||||
}
|
||||
}
|
||||
|
||||
func NewLuetRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repository, error) {
|
||||
var p *LuetRepositorySerialized
|
||||
r := &LuetRepository{}
|
||||
func NewLuetSystemRepository(repo *config.LuetRepository, art []compiler.Artifact, builder tree.Builder) Repository {
|
||||
return &LuetSystemRepository{
|
||||
LuetRepository: repo,
|
||||
Index: art,
|
||||
Tree: builder,
|
||||
}
|
||||
}
|
||||
|
||||
func NewLuetSystemRepositoryFromYaml(data []byte, db pkg.PackageDatabase) (Repository, error) {
|
||||
var p *LuetSystemRepositorySerialized
|
||||
err := yaml.Unmarshal(data, &p)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
r.Name = p.Name
|
||||
r.Uri = p.Uri
|
||||
r.Priority = p.Priority
|
||||
r.Type = p.Type
|
||||
r := &LuetSystemRepository{
|
||||
LuetRepository: config.NewLuetRepository(
|
||||
p.Name,
|
||||
p.Type,
|
||||
p.Description,
|
||||
p.Urls,
|
||||
p.Priority,
|
||||
true,
|
||||
false,
|
||||
),
|
||||
TreeCompressionType: p.TreeCompressionType,
|
||||
TreeChecksums: p.TreeChecksums,
|
||||
TreePath: p.TreePath,
|
||||
}
|
||||
if p.Revision > 0 {
|
||||
r.Revision = p.Revision
|
||||
}
|
||||
if p.LastUpdate != "" {
|
||||
r.LastUpdate = p.LastUpdate
|
||||
}
|
||||
i := compiler.ArtifactIndex{}
|
||||
for _, ii := range p.Index {
|
||||
i = append(i, ii)
|
||||
@@ -122,56 +167,129 @@ func buildPackageIndex(path string) ([]compiler.Artifact, error) {
|
||||
return art, nil
|
||||
}
|
||||
|
||||
func (r *LuetRepository) GetName() string {
|
||||
return r.Name
|
||||
func (r *LuetSystemRepository) GetName() string {
|
||||
return r.LuetRepository.Name
|
||||
}
|
||||
func (r *LuetRepository) GetTreePath() string {
|
||||
func (r *LuetSystemRepository) GetDescription() string {
|
||||
return r.LuetRepository.Description
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) GetAuthentication() map[string]string {
|
||||
return r.LuetRepository.Authentication
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) GetTreeCompressionType() compiler.CompressionImplementation {
|
||||
return r.TreeCompressionType
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) GetTreeChecksums() compiler.Checksums {
|
||||
return r.TreeChecksums
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) SetTreeCompressionType(c compiler.CompressionImplementation) {
|
||||
r.TreeCompressionType = c
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) SetTreeChecksums(c compiler.Checksums) {
|
||||
r.TreeChecksums = c
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) GetType() string {
|
||||
return r.LuetRepository.Type
|
||||
}
|
||||
func (r *LuetSystemRepository) SetType(p string) {
|
||||
r.LuetRepository.Type = p
|
||||
}
|
||||
func (r *LuetSystemRepository) AddUrl(p string) {
|
||||
r.LuetRepository.Urls = append(r.LuetRepository.Urls, p)
|
||||
}
|
||||
func (r *LuetSystemRepository) GetUrls() []string {
|
||||
return r.LuetRepository.Urls
|
||||
}
|
||||
func (r *LuetSystemRepository) SetUrls(urls []string) {
|
||||
r.LuetRepository.Urls = urls
|
||||
}
|
||||
func (r *LuetSystemRepository) GetPriority() int {
|
||||
return r.LuetRepository.Priority
|
||||
}
|
||||
func (r *LuetSystemRepository) GetTreePath() string {
|
||||
return r.TreePath
|
||||
}
|
||||
func (r *LuetRepository) SetTreePath(p string) {
|
||||
func (r *LuetSystemRepository) SetTreePath(p string) {
|
||||
r.TreePath = p
|
||||
}
|
||||
|
||||
func (r *LuetRepository) SetTree(b tree.Builder) {
|
||||
func (r *LuetSystemRepository) SetTree(b tree.Builder) {
|
||||
r.Tree = b
|
||||
}
|
||||
|
||||
func (r *LuetRepository) GetType() string {
|
||||
return r.Type
|
||||
}
|
||||
func (r *LuetRepository) SetType(p string) {
|
||||
r.Type = p
|
||||
}
|
||||
|
||||
func (r *LuetRepository) SetUri(p string) {
|
||||
r.Uri = p
|
||||
}
|
||||
func (r *LuetRepository) GetUri() string {
|
||||
return r.Uri
|
||||
}
|
||||
func (r *LuetRepository) GetPriority() int {
|
||||
return r.Priority
|
||||
}
|
||||
func (r *LuetRepository) GetIndex() compiler.ArtifactIndex {
|
||||
func (r *LuetSystemRepository) GetIndex() compiler.ArtifactIndex {
|
||||
return r.Index
|
||||
}
|
||||
func (r *LuetRepository) GetTree() tree.Builder {
|
||||
func (r *LuetSystemRepository) GetTree() tree.Builder {
|
||||
return r.Tree
|
||||
}
|
||||
func (r *LuetSystemRepository) GetRevision() int {
|
||||
return r.LuetRepository.Revision
|
||||
}
|
||||
func (r *LuetSystemRepository) GetLastUpdate() string {
|
||||
return r.LuetRepository.LastUpdate
|
||||
}
|
||||
func (r *LuetSystemRepository) SetLastUpdate(u string) {
|
||||
r.LuetRepository.LastUpdate = u
|
||||
}
|
||||
func (r *LuetSystemRepository) IncrementRevision() {
|
||||
r.LuetRepository.Revision++
|
||||
}
|
||||
|
||||
func (r *LuetRepository) Write(dst string) error {
|
||||
func (r *LuetSystemRepository) SetAuthentication(auth map[string]string) {
|
||||
r.LuetRepository.Authentication = auth
|
||||
}
|
||||
|
||||
os.MkdirAll(dst, os.ModePerm)
|
||||
func (r *LuetSystemRepository) ReadSpecFile(file string, removeFile bool) (Repository, error) {
|
||||
dat, err := ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error reading file "+file)
|
||||
}
|
||||
if removeFile {
|
||||
defer os.Remove(file)
|
||||
}
|
||||
|
||||
var repo Repository
|
||||
repo, err = NewLuetSystemRepositoryFromYaml(dat, pkg.NewInMemoryDatabase(false))
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error reading repository from file "+file)
|
||||
}
|
||||
|
||||
return repo, err
|
||||
}
|
||||
|
||||
func (r *LuetSystemRepository) Write(dst string, resetRevision bool) error {
|
||||
err := os.MkdirAll(dst, os.ModePerm)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
r.Index = r.Index.CleanPath()
|
||||
r.LastUpdate = strconv.FormatInt(time.Now().Unix(), 10)
|
||||
|
||||
data, err := yaml.Marshal(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ioutil.WriteFile(filepath.Join(dst, "repository.yaml"), data, os.ModePerm)
|
||||
if err != nil {
|
||||
return err
|
||||
repospec := filepath.Join(dst, REPOSITORY_SPECFILE)
|
||||
if resetRevision {
|
||||
r.Revision = 0
|
||||
} else {
|
||||
if _, err := os.Stat(repospec); !os.IsNotExist(err) {
|
||||
// Read existing file for retrieve revision
|
||||
spec, err := r.ReadSpecFile(repospec, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
r.Revision = spec.GetRevision()
|
||||
}
|
||||
}
|
||||
r.Revision++
|
||||
|
||||
Info(fmt.Sprintf(
|
||||
"For repository %s creating revision %d and last update %s...",
|
||||
r.Name, r.Revision, r.LastUpdate,
|
||||
))
|
||||
|
||||
archive, err := ioutil.TempDir(os.TempDir(), "archive")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error met while creating tempdir for archive")
|
||||
@@ -181,67 +299,148 @@ func (r *LuetRepository) Write(dst string) error {
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error met while saving the tree")
|
||||
}
|
||||
err = helpers.Tar(archive, filepath.Join(dst, "tree.tar"))
|
||||
tpath := r.GetTreePath()
|
||||
if tpath == "" {
|
||||
tpath = TREE_TARBALL
|
||||
}
|
||||
|
||||
a := compiler.NewPackageArtifact(filepath.Join(dst, tpath))
|
||||
a.SetCompressionType(r.TreeCompressionType)
|
||||
err = a.Compress(archive, 1)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error met while creating package archive")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (r *LuetRepository) Client() Client {
|
||||
switch r.GetType() {
|
||||
case "local":
|
||||
return client.NewLocalClient(client.RepoData{Uri: r.GetUri()})
|
||||
case "http":
|
||||
return client.NewHttpClient(client.RepoData{Uri: r.GetUri()})
|
||||
r.TreePath = path.Base(a.GetPath())
|
||||
err = a.Hash()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Failed generating checksums for tree")
|
||||
}
|
||||
r.TreeChecksums = a.GetChecksums()
|
||||
|
||||
data, err := yaml.Marshal(r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = ioutil.WriteFile(repospec, data, os.ModePerm)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
func (r *LuetRepository) Sync() (Repository, error) {
|
||||
|
||||
func (r *LuetSystemRepository) Client() Client {
|
||||
switch r.GetType() {
|
||||
case "disk":
|
||||
return client.NewLocalClient(client.RepoData{Urls: r.GetUrls()})
|
||||
case "http":
|
||||
return client.NewHttpClient(
|
||||
client.RepoData{
|
||||
Urls: r.GetUrls(),
|
||||
Authentication: r.GetAuthentication(),
|
||||
})
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
func (r *LuetSystemRepository) Sync(force bool) (Repository, error) {
|
||||
var repoUpdated bool = false
|
||||
var treefs string
|
||||
|
||||
Debug("Sync of the repository", r.Name, "in progress...")
|
||||
c := r.Client()
|
||||
if c == nil {
|
||||
return nil, errors.New("No client could be generated from repository.")
|
||||
}
|
||||
file, err := c.DownloadFile("repository.yaml")
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "While downloading repository.yaml from "+r.GetUri())
|
||||
}
|
||||
dat, err := ioutil.ReadFile(file)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error reading file "+file)
|
||||
}
|
||||
defer os.Remove(file)
|
||||
|
||||
// TODO: make it swappable
|
||||
repo, err := NewLuetRepositoryFromYaml(dat, pkg.NewInMemoryDatabase(false))
|
||||
// Retrieve remote repository.yaml for retrieve revision and date
|
||||
file, err := c.DownloadFile(REPOSITORY_SPECFILE)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error reading repository from file "+file)
|
||||
|
||||
return nil, errors.Wrap(err, "While downloading "+REPOSITORY_SPECFILE)
|
||||
}
|
||||
|
||||
archivetree, err := c.DownloadFile("tree.tar")
|
||||
repobasedir := config.LuetCfg.GetSystem().GetRepoDatabaseDirPath(r.GetName())
|
||||
repo, err := r.ReadSpecFile(file, false)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "While downloading repository.yaml from "+r.GetUri())
|
||||
return nil, err
|
||||
}
|
||||
defer os.RemoveAll(archivetree) // clean up
|
||||
// Remove temporary file that contains repository.html.
|
||||
// Example: /tmp/HttpClient236052003
|
||||
defer os.RemoveAll(file)
|
||||
|
||||
treefs, err := ioutil.TempDir(os.TempDir(), "treefs")
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
|
||||
if r.Cached {
|
||||
if !force {
|
||||
localRepo, _ := r.ReadSpecFile(filepath.Join(repobasedir, REPOSITORY_SPECFILE), false)
|
||||
if localRepo != nil {
|
||||
if localRepo.GetRevision() == repo.GetRevision() &&
|
||||
localRepo.GetLastUpdate() == repo.GetLastUpdate() {
|
||||
repoUpdated = true
|
||||
}
|
||||
}
|
||||
}
|
||||
if r.GetTreePath() == "" {
|
||||
treefs = filepath.Join(repobasedir, "treefs")
|
||||
} else {
|
||||
treefs = r.GetTreePath()
|
||||
}
|
||||
|
||||
} else {
|
||||
treefs, err = ioutil.TempDir(os.TempDir(), "treefs")
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
|
||||
}
|
||||
}
|
||||
//defer os.RemoveAll(treefs) // clean up
|
||||
|
||||
// TODO: Following as option if archive as output?
|
||||
// archive, err := ioutil.TempDir(os.TempDir(), "archive")
|
||||
// if err != nil {
|
||||
// return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
|
||||
// }
|
||||
// defer os.RemoveAll(archive) // clean up
|
||||
if !repoUpdated {
|
||||
tpath := repo.GetTreePath()
|
||||
if tpath == "" {
|
||||
tpath = TREE_TARBALL
|
||||
}
|
||||
a := compiler.NewPackageArtifact(tpath)
|
||||
|
||||
err = helpers.Untar(archivetree, treefs, false)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error met while unpacking rootfs")
|
||||
artifact, err := c.DownloadArtifact(a)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "While downloading "+tpath)
|
||||
}
|
||||
defer os.Remove(artifact.GetPath())
|
||||
|
||||
artifact.SetChecksums(repo.GetTreeChecksums())
|
||||
artifact.SetCompressionType(repo.GetTreeCompressionType())
|
||||
|
||||
err = artifact.Verify()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Tree integrity check failure")
|
||||
}
|
||||
|
||||
Debug("Tree tarball for the repository " + r.GetName() + " downloaded correctly.")
|
||||
|
||||
if r.Cached {
|
||||
// Copy updated repository.yaml file to repo dir now that the tree is synced.
|
||||
err = helpers.CopyFile(file, filepath.Join(repobasedir, REPOSITORY_SPECFILE))
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error on update "+REPOSITORY_SPECFILE)
|
||||
}
|
||||
// Remove previous tree
|
||||
os.RemoveAll(treefs)
|
||||
}
|
||||
Debug("Decompress tree of the repository " + r.Name + "...")
|
||||
|
||||
err = artifact.Unpack(treefs, true)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error met while unpacking tree")
|
||||
}
|
||||
|
||||
tsec, _ := strconv.ParseInt(repo.GetLastUpdate(), 10, 64)
|
||||
|
||||
InfoC(
|
||||
Bold(Red(":house: Repository "+r.GetName()+" revision: ")).String() +
|
||||
Bold(Green(repo.GetRevision())).String() + " - " +
|
||||
Bold(Green(time.Unix(tsec, 0).String())).String(),
|
||||
)
|
||||
|
||||
} else {
|
||||
Info("Repository", r.GetName(), "is already up to date.")
|
||||
}
|
||||
|
||||
reciper := tree.NewInstallerRecipe(pkg.NewInMemoryDatabase(false))
|
||||
@@ -249,9 +448,11 @@ func (r *LuetRepository) Sync() (Repository, error) {
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error met while unpacking rootfs")
|
||||
}
|
||||
|
||||
repo.SetTree(reciper)
|
||||
repo.SetTreePath(treefs)
|
||||
repo.SetUri(r.GetUri())
|
||||
repo.SetUrls(r.GetUrls())
|
||||
repo.SetAuthentication(r.GetAuthentication())
|
||||
|
||||
return repo, nil
|
||||
}
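// Note on Sync above: for cached repositories the tree tarball is only
// re-downloaded when the remote repository.yaml advertises a revision or
// last_update different from the copy kept under the local repository
// database directory; otherwise the repository is reported as already up to
// date and the previously unpacked tree is reused. Sync(true) skips that
// comparison and always refreshes.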
|
||||
@@ -323,6 +524,29 @@ PACKAGE:

}

func (re Repositories) ResolveSelectors(p []pkg.Package) []pkg.Package {
    // If a selector is given, get the best from each repo
    sort.Sort(re) // respect prio
    var matches []pkg.Package
PACKAGE:
    for _, pack := range p {
        for _, r := range re {
            if pack.IsSelector() {
                c, err := r.GetTree().GetDatabase().FindPackageCandidate(pack)
                if err == nil {
                    matches = append(matches, c)
                    continue PACKAGE
                }
            } else {
                matches = append(matches, pack)
            }
        }
    }

    return matches

}
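// Illustrative sketch (assumed helper and package values, not from the
// changeset above): how the installer uses ResolveSelectors. Repositories are
// walked in priority order and the first tree that yields a concrete
// candidate for a selector wins; non-selector packages pass through as-is.
func exampleResolveSelectors(repos Repositories) []pkg.Package {
    selector := &pkg.DefaultPackage{Name: "b", Category: "test", Version: ">=1.0"}
    return repos.ResolveSelectors([]pkg.Package{selector})
}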

func (re Repositories) Search(s string) []PackageMatch {
    sort.Sort(re)
    var term = regexp.MustCompile(s)
|
||||
|
@@ -23,6 +23,7 @@ import (
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler"
|
||||
backend "github.com/mudler/luet/pkg/compiler/backend"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
. "github.com/mudler/luet/pkg/installer"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
@@ -81,16 +82,16 @@ var _ = Describe("Repository", func() {
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
repo, err := GenerateRepository("test", tmpdir, "local", 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
repo, err := GenerateRepository("test", "description", "disk", []string{tmpdir}, 1, tmpdir, "../../tests/fixtures/buildable", pkg.NewInMemoryDatabase(false))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir)
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("tree.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
})
|
||||
})
|
||||
Context("Matching packages", func() {
|
||||
@@ -105,8 +106,8 @@ var _ = Describe("Repository", func() {
|
||||
|
||||
_, err = builder2.GetDatabase().CreatePackage(package2)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
repo1 := &LuetRepository{Name: "test1", Tree: builder1}
|
||||
repo2 := &LuetRepository{Name: "test2", Tree: builder2}
|
||||
repo1 := &LuetSystemRepository{LuetRepository: &config.LuetRepository{Name: "test1"}, Tree: builder1}
|
||||
repo2 := &LuetSystemRepository{LuetRepository: &config.LuetRepository{Name: "test2"}, Tree: builder2}
|
||||
repositories := Repositories{repo1, repo2}
|
||||
matches := repositories.PackageMatches([]pkg.Package{package1})
|
||||
Expect(matches).To(Equal([]PackageMatch{{Repo: repo1, Package: package1}}))
|
||||
|
@@ -3,17 +3,54 @@ package logger
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
. "github.com/mudler/luet/pkg/config"
|
||||
|
||||
"github.com/briandowns/spinner"
|
||||
"github.com/kyokomi/emoji"
|
||||
. "github.com/logrusorgru/aurora"
|
||||
"go.uber.org/zap"
|
||||
"go.uber.org/zap/zapcore"
|
||||
)
|
||||
|
||||
var s *spinner.Spinner = spinner.New(spinner.CharSets[22], 100*time.Millisecond)
|
||||
var s *spinner.Spinner = nil
|
||||
var z *zap.Logger = nil
|
||||
|
||||
// TODO: handle this from configuration
|
||||
var debug = false
|
||||
func NewSpinner() {
|
||||
if s == nil {
|
||||
s = spinner.New(
|
||||
spinner.CharSets[LuetCfg.GetGeneral().SpinnerCharset],
|
||||
LuetCfg.GetGeneral().GetSpinnerMs())
|
||||
}
|
||||
}
|
||||
|
||||
func ZapLogger() error {
|
||||
var err error
|
||||
if z == nil {
|
||||
// TODO: test permission for open logfile.
|
||||
cfg := zap.NewProductionConfig()
|
||||
cfg.OutputPaths = []string{LuetCfg.GetLogging().Path}
|
||||
cfg.Level = level2AtomicLevel(LuetCfg.GetLogging().Level)
|
||||
cfg.ErrorOutputPaths = []string{}
|
||||
if LuetCfg.GetLogging().JsonFormat {
|
||||
cfg.Encoding = "json"
|
||||
} else {
|
||||
cfg.Encoding = "console"
|
||||
}
|
||||
cfg.DisableCaller = true
|
||||
cfg.DisableStacktrace = true
|
||||
cfg.EncoderConfig.TimeKey = "time"
|
||||
cfg.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
|
||||
|
||||
z, err = cfg.Build()
|
||||
if err != nil {
|
||||
fmt.Fprint(os.Stderr, "Error initializing file logger: "+err.Error()+"\n")
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
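A brief, hedged usage sketch of the new file logger (assuming the logging section of LuetCfg has already been populated by the configuration loader; the exact config key names are an assumption, the fields used are Path, Level and JsonFormat): ZapLogger is initialized once, and every subsequent msg call is mirrored to the configured log file via log2File.

```go
// Sketch only: LuetCfg.GetLogging() is assumed to carry the path/level/format
// loaded from the configuration file.
if err := ZapLogger(); err != nil {
	Fatal("cannot initialize the file logger:", err.Error())
}
Info("repository synced") // printed to the console and, through log2File, appended to the configured file
```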
|
||||
func Spinner(i int) {
|
||||
|
||||
@@ -21,7 +58,7 @@ func Spinner(i int) {
|
||||
i = 43
|
||||
}
|
||||
|
||||
if !debug && !s.Active() {
|
||||
if !LuetCfg.GetGeneral().Debug && !s.Active() {
|
||||
// s.UpdateCharSet(spinner.CharSets[i])
|
||||
s.Start() // Start the spinner
|
||||
}
|
||||
@@ -30,7 +67,7 @@ func Spinner(i int) {
|
||||
func SpinnerText(suffix, prefix string) {
|
||||
s.Lock()
|
||||
defer s.Unlock()
|
||||
if debug {
|
||||
if LuetCfg.GetGeneral().Debug {
|
||||
fmt.Println(fmt.Sprintf("%s %s",
|
||||
Bold(Cyan(prefix)).String(),
|
||||
Bold(Magenta(suffix)).BgBlack().String(),
|
||||
@@ -42,59 +79,119 @@ func SpinnerText(suffix, prefix string) {
|
||||
}
|
||||
|
||||
func SpinnerStop() {
|
||||
if !debug {
|
||||
if !LuetCfg.GetGeneral().Debug {
|
||||
s.Stop()
|
||||
}
|
||||
}
|
||||
|
||||
func msg(level string, msg ...interface{}) {
|
||||
func level2Number(level string) int {
|
||||
switch level {
|
||||
case "error":
|
||||
return 0
|
||||
case "warning":
|
||||
return 1
|
||||
case "info":
|
||||
return 2
|
||||
default:
|
||||
return 3
|
||||
}
|
||||
}
|
||||
|
||||
func log2File(level, msg string) {
|
||||
switch level {
|
||||
case "error":
|
||||
z.Error(msg)
|
||||
case "warning":
|
||||
z.Warn(msg)
|
||||
case "info":
|
||||
z.Info(msg)
|
||||
default:
|
||||
z.Debug(msg)
|
||||
}
|
||||
}
|
||||
|
||||
func level2AtomicLevel(level string) zap.AtomicLevel {
|
||||
switch level {
|
||||
case "error":
|
||||
return zap.NewAtomicLevelAt(zap.ErrorLevel)
|
||||
case "warning":
|
||||
return zap.NewAtomicLevelAt(zap.WarnLevel)
|
||||
case "info":
|
||||
return zap.NewAtomicLevelAt(zap.InfoLevel)
|
||||
default:
|
||||
return zap.NewAtomicLevelAt(zap.DebugLevel)
|
||||
}
|
||||
}
|
||||
|
||||
func msg(level string, withoutColor bool, msg ...interface{}) {
|
||||
var message string
|
||||
var confLevel, msgLevel int
|
||||
|
||||
if LuetCfg.GetGeneral().Debug {
|
||||
confLevel = 3
|
||||
} else {
|
||||
confLevel = level2Number(LuetCfg.GetLogging().Level)
|
||||
}
|
||||
msgLevel = level2Number(level)
|
||||
if msgLevel > confLevel {
|
||||
return
|
||||
}
|
||||
|
||||
for _, m := range msg {
|
||||
message += " " + fmt.Sprintf("%v", m)
|
||||
}
|
||||
|
||||
var levelMsg string
|
||||
switch level {
|
||||
case "warning":
|
||||
levelMsg = Bold(Yellow(":construction: " + message)).BgBlack().String()
|
||||
case "debug":
|
||||
levelMsg = White(message).BgBlack().String()
|
||||
case "info":
|
||||
levelMsg = Bold(White(message)).BgBlack().String()
|
||||
case "error":
|
||||
levelMsg = Bold(Red(":bomb: " + message + ":fire:")).BgBlack().String()
|
||||
|
||||
if withoutColor {
|
||||
levelMsg = message
|
||||
} else {
|
||||
switch level {
|
||||
case "warning":
|
||||
levelMsg = Bold(Yellow(":construction: " + message)).BgBlack().String()
|
||||
case "debug":
|
||||
levelMsg = White(message).BgBlack().String()
|
||||
case "info":
|
||||
levelMsg = Bold(White(message)).BgBlack().String()
|
||||
case "error":
|
||||
levelMsg = Bold(Red(":bomb: " + message + ":fire:")).BgBlack().String()
|
||||
}
|
||||
}
|
||||
|
||||
levelMsg = emoji.Sprint(levelMsg)
|
||||
|
||||
// if s.Active() {
|
||||
// SpinnerText(levelMsg, "")
|
||||
// return
|
||||
// }
|
||||
|
||||
cmd := []interface{}{}
|
||||
for _, f := range msg {
|
||||
cmd = append(cmd, f)
|
||||
if z != nil {
|
||||
log2File(level, message)
|
||||
}
|
||||
|
||||
fmt.Println(levelMsg)
|
||||
//fmt.Println(cmd...)
|
||||
}
|
||||
|
||||
func Warning(mess ...interface{}) {
|
||||
msg("warning", mess...)
|
||||
msg("warning", false, mess...)
|
||||
if LuetCfg.GetGeneral().FatalWarns {
|
||||
os.Exit(2)
|
||||
}
|
||||
}
|
||||
|
||||
func Debug(mess ...interface{}) {
|
||||
msg("debug", mess...)
|
||||
msg("debug", false, mess...)
|
||||
}
|
||||
|
||||
func DebugC(mess ...interface{}) {
|
||||
msg("debug", true, mess...)
|
||||
}
|
||||
|
||||
func Info(mess ...interface{}) {
|
||||
msg("info", mess...)
|
||||
msg("info", false, mess...)
|
||||
}
|
||||
|
||||
func InfoC(mess ...interface{}) {
|
||||
msg("info", true, mess...)
|
||||
}
|
||||
|
||||
func Error(mess ...interface{}) {
|
||||
msg("error", mess...)
|
||||
msg("error", false, mess...)
|
||||
}
|
||||
|
||||
func Fatal(mess ...interface{}) {
|
||||
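To make the intent of the new uncolored variants explicit, here is a hedged sketch of how callers are expected to use them; the FatalWarns behaviour shown in the comment comes from the flag checked in Warning above.

```go
Info("installing", "foo/bar")  // colored, emoji-decorated console line
InfoC("installing", "foo/bar") // same message without colors, useful when output is piped or captured
Warning("deprecated option")   // additionally calls os.Exit(2) when LuetCfg.GetGeneral().FatalWarns is set
```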
|
@@ -22,7 +22,6 @@ import (
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
version "github.com/hashicorp/go-version"
|
||||
"github.com/pkg/errors"
|
||||
|
||||
storm "github.com/asdine/storm"
|
||||
@@ -233,16 +232,11 @@ func (db *BoltDatabase) getProvide(p Package) (Package, error) {
|
||||
|
||||
for ve, _ := range versions {
|
||||
|
||||
v, err := version.NewVersion(p.GetVersion())
|
||||
match, err := p.VersionMatchSelector(ve)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, "Error on match version")
|
||||
}
|
||||
constraints, err := version.NewConstraint(ve)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if constraints.Check(v) {
|
||||
if match {
|
||||
pa, ok := db.ProvidesDatabase[p.GetPackageName()][ve]
|
||||
if !ok {
|
||||
return nil, errors.New("No versions found for package")
|
||||
@@ -367,15 +361,11 @@ func (db *BoltDatabase) FindPackages(p Package) ([]Package, error) {
|
||||
continue
|
||||
}
|
||||
|
||||
v, err := version.NewVersion(w.GetVersion())
|
||||
match, err := p.SelectorMatchVersion(w.GetVersion())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, "Error on match selector")
|
||||
}
|
||||
constraints, err := version.NewConstraint(p.GetVersion())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if constraints.Check(v) {
|
||||
if match {
|
||||
versionsInWorld = append(versionsInWorld, w)
|
||||
}
|
||||
}
|
||||
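A hedged sketch of the two helpers that replace the direct go-version constraint checks in the database lookups: a selector package can be matched against a concrete version string, and a concrete package can be matched against a selector string.

```go
// Sketch only: both calls below are expected to report a match.
sel := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
ok, err := sel.SelectorMatchVersion("1.1") // "1.1" satisfies the ">=1.0" selector
inst := NewPackage("A", "1.1", []*DefaultPackage{}, []*DefaultPackage{})
ok2, err2 := inst.VersionMatchSelector(">=1.0") // same check, from the concrete package's side
fmt.Println(ok, err, ok2, err2)
```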
|
@@ -20,7 +20,6 @@ import (
|
||||
"encoding/json"
|
||||
"sync"
|
||||
|
||||
version "github.com/hashicorp/go-version"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
@@ -178,16 +177,11 @@ func (db *InMemoryDatabase) getProvide(p Package) (Package, error) {
|
||||
|
||||
for ve, _ := range versions {
|
||||
|
||||
v, err := version.NewVersion(p.GetVersion())
|
||||
match, err := p.VersionMatchSelector(ve)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, "Error on match version")
|
||||
}
|
||||
constraints, err := version.NewConstraint(ve)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if constraints.Check(v) {
|
||||
if match {
|
||||
pa, ok := db.ProvidesDatabase[p.GetPackageName()][ve]
|
||||
if !ok {
|
||||
return nil, errors.New("No versions found for package")
|
||||
@@ -257,15 +251,12 @@ func (db *InMemoryDatabase) FindPackages(p Package) ([]Package, error) {
|
||||
}
|
||||
var versionsInWorld []Package
|
||||
for ve, _ := range versions {
|
||||
v, err := version.NewVersion(ve)
|
||||
match, err := p.SelectorMatchVersion(ve)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.Wrap(err, "Error on match selector")
|
||||
}
|
||||
constraints, err := version.NewConstraint(p.GetVersion())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if constraints.Check(v) {
|
||||
|
||||
if match {
|
||||
w, err := db.FindPackage(&DefaultPackage{Name: p.GetName(), Category: p.GetCategory(), Version: ve})
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Cache mismatch - this shouldn't happen")
|
||||
|
@@ -77,6 +77,27 @@ var _ = Describe("Database", func() {
|
||||
|
||||
})
|
||||
|
||||
It("Find specific package candidate", func() {
|
||||
db := NewInMemoryDatabase(false)
|
||||
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
a1 := NewPackage("A", "1.1", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
a3 := NewPackage("A", "1.3", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
_, err := db.CreatePackage(a)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
_, err = db.CreatePackage(a1)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
_, err = db.CreatePackage(a3)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
s := NewPackage("A", "=1.0", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
|
||||
pack, err := db.FindPackageCandidate(s)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(pack).To(Equal(a))
|
||||
|
||||
})
|
||||
|
||||
It("Provides replaces definitions", func() {
|
||||
db := NewInMemoryDatabase(false)
|
||||
a := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
|
@@ -83,6 +83,10 @@ type Package interface {
|
||||
GetLicense() string
|
||||
|
||||
IsSelector() bool
|
||||
VersionMatchSelector(string) (bool, error)
|
||||
SelectorMatchVersion(string) (bool, error)
|
||||
|
||||
String() string
|
||||
}
|
||||
|
||||
type Tree interface {
|
||||
@@ -126,11 +130,11 @@ func (t *DefaultPackage) JSON() ([]byte, error) {
|
||||
|
||||
// DefaultPackage represent a standard package definition
|
||||
type DefaultPackage struct {
|
||||
ID int `json:"-" storm:"id,increment"` // primary key with auto increment
|
||||
Name string `json:"name"` // Affects YAML field names too.
|
||||
Version string `json:"version"` // Affects YAML field names too.
|
||||
Category string `json:"category"` // Affects YAML field names too.
|
||||
UseFlags []string `json:"use_flags"` // Affects YAML field names too.
|
||||
ID int `storm:"id,increment" json:"id"` // primary key with auto increment
|
||||
Name string `json:"name"` // Affects YAML field names too.
|
||||
Version string `json:"version"` // Affects YAML field names too.
|
||||
Category string `json:"category"` // Affects YAML field names too.
|
||||
UseFlags []string `json:"use_flags"` // Affects YAML field names too.
|
||||
State State
|
||||
PackageRequires []*DefaultPackage `json:"requires"` // Affects YAML field names too.
|
||||
PackageConflicts []*DefaultPackage `json:"conflicts"` // Affects YAML field names too.
|
||||
@@ -158,7 +162,7 @@ func NewPackage(name, version string, requires []*DefaultPackage, conflicts []*D
|
||||
func (p *DefaultPackage) String() string {
|
||||
b, err := p.JSON()
|
||||
if err != nil {
|
||||
return fmt.Sprintf("{ id: \"%d\", name: \"%s\" }", p.ID, p.Name)
|
||||
return fmt.Sprintf("{ id: \"%d\", name: \"%s\", version: \"%s\", category: \"%s\" }", p.ID, p.Name, p.Version, p.Category)
|
||||
}
|
||||
return fmt.Sprintf("%s", string(b))
|
||||
}
|
||||
@@ -169,6 +173,16 @@ func (p *DefaultPackage) GetFingerPrint() string {
|
||||
return fmt.Sprintf("%s-%s-%s", p.Name, p.Category, p.Version)
|
||||
}
|
||||
|
||||
func FromString(s string) Package {
|
||||
var unescaped DefaultPackage
|
||||
|
||||
err := json.Unmarshal([]byte(s), &unescaped)
|
||||
if err != nil {
|
||||
return &unescaped
|
||||
}
|
||||
return &unescaped
|
||||
}
|
||||
|
||||
func (p *DefaultPackage) GetPackageName() string {
|
||||
return fmt.Sprintf("%s-%s", p.Name, p.Category)
|
||||
}
|
||||
@@ -322,15 +336,11 @@ func (p *DefaultPackage) Expand(definitiondb PackageDatabase) ([]Package, error)
|
||||
return nil, err
|
||||
}
|
||||
for _, w := range all {
|
||||
v, err := version.NewVersion(w.GetVersion())
|
||||
match, err := p.SelectorMatchVersion(w.GetVersion())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
constraints, err := version.NewConstraint(p.GetVersion())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if constraints.Check(v) {
|
||||
if match {
|
||||
versionsInWorld = append(versionsInWorld, w)
|
||||
}
|
||||
}
|
||||
@@ -538,20 +548,25 @@ func (pack *DefaultPackage) BuildFormula(definitiondb PackageDatabase, db Packag
|
||||
if err != nil || len(packages) == 0 {
|
||||
required = requiredDef
|
||||
} else {
|
||||
for _, p := range packages {
|
||||
encodedB, err := p.Encode(db)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
B := bf.Var(encodedB)
|
||||
formulas = append(formulas, bf.Or(bf.Not(A),
|
||||
bf.Not(B)))
|
||||
if len(packages) == 1 {
|
||||
required = packages[0]
|
||||
} else {
|
||||
for _, p := range packages {
|
||||
encodedB, err := p.Encode(db)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
B := bf.Var(encodedB)
|
||||
formulas = append(formulas, bf.Or(bf.Not(A),
|
||||
bf.Not(B)))
|
||||
|
||||
f, err := p.BuildFormula(definitiondb, db)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
f, err := p.BuildFormula(definitiondb, db)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
formulas = append(formulas, f...)
|
||||
}
|
||||
formulas = append(formulas, f...)
|
||||
continue
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@@ -18,11 +18,15 @@ package pkg_test
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSolver(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Package Suite")
|
||||
}
|
||||
|
@@ -23,6 +23,15 @@ import (
|
||||
|
||||
var _ = Describe("Package", func() {
|
||||
|
||||
Context("Encoding/Decoding", func() {
|
||||
a := &DefaultPackage{Name: "test", Version: "1", Category: "t"}
|
||||
It("Encodes and decodes correctly", func() {
|
||||
Expect(a.String()).ToNot(Equal(""))
|
||||
p := FromString(a.String())
|
||||
Expect(p).To(Equal(a))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Simple package", func() {
|
||||
a := NewPackage("A", ">=1.0", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
a1 := NewPackage("A", "1.0", []*DefaultPackage{}, []*DefaultPackage{})
|
||||
|
333
pkg/package/version.go
Normal file
@@ -0,0 +1,333 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>,
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package pkg
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
version "github.com/hashicorp/go-version"
|
||||
)
|
||||
|
||||
// Package Selector Condition
|
||||
type PkgSelectorCondition int
|
||||
|
||||
type PkgVersionSelector struct {
|
||||
Version string
|
||||
VersionSuffix string
|
||||
Condition PkgSelectorCondition
|
||||
// TODO: Integrate support for multiple repository
|
||||
}
|
||||
|
||||
const (
|
||||
PkgCondInvalid = 0
|
||||
// >
|
||||
PkgCondGreater = 1
|
||||
// >=
|
||||
PkgCondGreaterEqual = 2
|
||||
// <
|
||||
PkgCondLess = 3
|
||||
// <=
|
||||
PkgCondLessEqual = 4
|
||||
// =
|
||||
PkgCondEqual = 5
|
||||
// !
|
||||
PkgCondNot = 6
|
||||
// ~
|
||||
PkgCondAnyRevision = 7
|
||||
// =<pkg>*
|
||||
PkgCondMatchVersion = 8
|
||||
)
|
||||
|
||||
func PkgSelectorConditionFromInt(c int) (ans PkgSelectorCondition) {
|
||||
if c == PkgCondGreater {
|
||||
ans = PkgCondGreater
|
||||
} else if c == PkgCondGreaterEqual {
|
||||
ans = PkgCondGreaterEqual
|
||||
} else if c == PkgCondLess {
|
||||
ans = PkgCondLess
|
||||
} else if c == PkgCondLessEqual {
|
||||
ans = PkgCondLessEqual
|
||||
} else if c == PkgCondNot {
|
||||
ans = PkgCondNot
|
||||
} else if c == PkgCondAnyRevision {
|
||||
ans = PkgCondAnyRevision
|
||||
} else if c == PkgCondMatchVersion {
|
||||
ans = PkgCondMatchVersion
|
||||
} else {
|
||||
ans = PkgCondInvalid
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (p PkgSelectorCondition) String() (ans string) {
|
||||
if p == PkgCondInvalid {
|
||||
ans = ""
|
||||
} else if p == PkgCondGreater {
|
||||
ans = ">"
|
||||
} else if p == PkgCondGreaterEqual {
|
||||
ans = ">="
|
||||
} else if p == PkgCondLess {
|
||||
ans = "<"
|
||||
} else if p == PkgCondLessEqual {
|
||||
ans = "<="
|
||||
} else if p == PkgCondEqual {
|
||||
// To permit correct matching on database
|
||||
// we currently use directly package version without =
|
||||
ans = ""
|
||||
} else if p == PkgCondNot {
|
||||
ans = "!"
|
||||
} else if p == PkgCondAnyRevision {
|
||||
ans = "~"
|
||||
} else if p == PkgCondMatchVersion {
|
||||
ans = "=*"
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (p PkgSelectorCondition) Int() (ans int) {
|
||||
if p == PkgCondInvalid {
|
||||
ans = PkgCondInvalid
|
||||
} else if p == PkgCondGreater {
|
||||
ans = PkgCondGreater
|
||||
} else if p == PkgCondGreaterEqual {
|
||||
ans = PkgCondGreaterEqual
|
||||
} else if p == PkgCondLess {
|
||||
ans = PkgCondLess
|
||||
} else if p == PkgCondLessEqual {
|
||||
ans = PkgCondLessEqual
|
||||
} else if p == PkgCondEqual {
|
||||
// To permit correct matching on database
|
||||
// we currently use directly package version without =
|
||||
ans = PkgCondEqual
|
||||
} else if p == PkgCondNot {
|
||||
ans = PkgCondNot
|
||||
} else if p == PkgCondAnyRevision {
|
||||
ans = PkgCondAnyRevision
|
||||
} else if p == PkgCondMatchVersion {
|
||||
ans = PkgCondMatchVersion
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func ParseVersion(v string) (PkgVersionSelector, error) {
|
||||
var ans PkgVersionSelector = PkgVersionSelector{
|
||||
Version: "",
|
||||
VersionSuffix: "",
|
||||
Condition: PkgCondInvalid,
|
||||
}
|
||||
|
||||
if strings.HasPrefix(v, ">=") {
|
||||
v = v[2:]
|
||||
ans.Condition = PkgCondGreaterEqual
|
||||
} else if strings.HasPrefix(v, ">") {
|
||||
v = v[1:]
|
||||
ans.Condition = PkgCondGreater
|
||||
} else if strings.HasPrefix(v, "<=") {
|
||||
v = v[2:]
|
||||
ans.Condition = PkgCondLessEqual
|
||||
} else if strings.HasPrefix(v, "<") {
|
||||
v = v[1:]
|
||||
ans.Condition = PkgCondLess
|
||||
} else if strings.HasPrefix(v, "=") {
|
||||
v = v[1:]
|
||||
if strings.HasSuffix(v, "*") {
|
||||
ans.Condition = PkgCondMatchVersion
|
||||
v = v[0 : len(v)-1]
|
||||
} else {
|
||||
ans.Condition = PkgCondEqual
|
||||
}
|
||||
} else if strings.HasPrefix(v, "~") {
|
||||
v = v[1:]
|
||||
ans.Condition = PkgCondAnyRevision
|
||||
} else if strings.HasPrefix(v, "!") {
|
||||
v = v[1:]
|
||||
ans.Condition = PkgCondNot
|
||||
}
|
||||
|
||||
regexPkg := regexp.MustCompile(
|
||||
fmt.Sprintf("(%s|%s|%s|%s|%s|%s)((%s|%s|%s|%s|%s|%s|%s)+)*$",
|
||||
// Version regex
|
||||
// 1.1
|
||||
"[0-9]+[.][0-9]+[a-z]*",
|
||||
// 1
|
||||
"[0-9]+[a-z]*",
|
||||
// 1.1.1
|
||||
"[0-9]+[.][0-9]+[.][0-9]+[a-z]*",
|
||||
// 1.1.1.1
|
||||
"[0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[a-z]*",
|
||||
// 1.1.1.1.1
|
||||
"[0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[a-z]*",
|
||||
// 1.1.1.1.1.1
|
||||
"[0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[.][0-9]+[a-z]*",
|
||||
// suffix
|
||||
"-r[0-9]+",
|
||||
"_p[0-9]+",
|
||||
"_pre[0-9]*",
|
||||
"_rc[0-9]+",
|
||||
// handle also rc without number
|
||||
"_rc",
|
||||
"_alpha",
|
||||
"_beta",
|
||||
),
|
||||
)
|
||||
matches := regexPkg.FindAllString(v, -1)
|
||||
|
||||
if len(matches) > 0 {
|
||||
// Check if there is a patch suffix
|
||||
if strings.Contains(matches[0], "_p") {
|
||||
ans.Version = matches[0][0:strings.Index(matches[0], "_p")]
|
||||
ans.VersionSuffix = matches[0][strings.Index(matches[0], "_p"):]
|
||||
} else if strings.Contains(matches[0], "_rc") {
|
||||
ans.Version = matches[0][0:strings.Index(matches[0], "_rc")]
|
||||
ans.VersionSuffix = matches[0][strings.Index(matches[0], "_rc"):]
|
||||
} else if strings.Contains(matches[0], "_alpha") {
|
||||
ans.Version = matches[0][0:strings.Index(matches[0], "_alpha")]
|
||||
ans.VersionSuffix = matches[0][strings.Index(matches[0], "_alpha"):]
|
||||
} else if strings.Contains(matches[0], "_beta") {
|
||||
ans.Version = matches[0][0:strings.Index(matches[0], "_beta")]
|
||||
ans.VersionSuffix = matches[0][strings.Index(matches[0], "_beta"):]
|
||||
} else if strings.Contains(matches[0], "-r") {
|
||||
ans.Version = matches[0][0:strings.Index(matches[0], "-r")]
|
||||
ans.VersionSuffix = matches[0][strings.Index(matches[0], "-r"):]
|
||||
} else {
|
||||
ans.Version = matches[0]
|
||||
}
|
||||
}
|
||||
|
||||
// Set condition if there isn't a prefix but only a version
|
||||
if ans.Condition == PkgCondInvalid && ans.Version != "" {
|
||||
ans.Condition = PkgCondEqual
|
||||
}
|
||||
|
||||
// NOTE: Complex suffixes like _alpha_rc1 are not currently supported.
|
||||
return ans, nil
|
||||
}
|
||||
|
||||
func PackageAdmit(selector, i PkgVersionSelector) (bool, error) {
|
||||
var v1 *version.Version = nil
|
||||
var v2 *version.Version = nil
|
||||
var ans bool
|
||||
var err error
|
||||
|
||||
if selector.Version != "" {
|
||||
v1, err = version.NewVersion(selector.Version)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
}
|
||||
if i.Version != "" {
|
||||
v2, err = version.NewVersion(i.Version)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
} else {
|
||||
// If version is not defined match always package
|
||||
ans = true
|
||||
}
|
||||
|
||||
// If package doesn't define version admit all versions of the package.
|
||||
if selector.Version == "" {
|
||||
ans = true
|
||||
} else {
|
||||
if selector.Condition == PkgCondInvalid || selector.Condition == PkgCondEqual {
|
||||
// case 1: source-pkg-1.0 and dest-pkg-1.0 or dest-pkg without version
|
||||
if i.Version != "" && i.Version == selector.Version && selector.VersionSuffix == i.VersionSuffix {
|
||||
ans = true
|
||||
}
|
||||
} else if selector.Condition == PkgCondAnyRevision {
|
||||
if v1 != nil && v2 != nil {
|
||||
ans = v1.Equal(v2)
|
||||
}
|
||||
} else if selector.Condition == PkgCondMatchVersion {
|
||||
// TODO: case of 7.3* where 7.30 is accepted.
|
||||
if v1 != nil && v2 != nil {
|
||||
segments := v1.Segments()
|
||||
n := strings.Count(selector.Version, ".")
|
||||
switch n {
|
||||
case 0:
|
||||
segments[0]++
|
||||
case 1:
|
||||
segments[1]++
|
||||
case 2:
|
||||
segments[2]++
|
||||
default:
|
||||
segments[len(segments)-1]++
|
||||
}
|
||||
nextVersion := strings.Trim(strings.Replace(fmt.Sprint(segments), " ", ".", -1), "[]")
|
||||
constraints, err := version.NewConstraint(
|
||||
fmt.Sprintf(">= %s, < %s", selector.Version, nextVersion),
|
||||
)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
ans = constraints.Check(v2)
|
||||
}
|
||||
} else if v1 != nil && v2 != nil {
|
||||
|
||||
// TODO: Integrate check of version suffix
|
||||
switch selector.Condition {
|
||||
case PkgCondGreaterEqual:
|
||||
ans = v2.GreaterThanOrEqual(v1)
|
||||
case PkgCondLessEqual:
|
||||
ans = v2.LessThanOrEqual(v1)
|
||||
case PkgCondGreater:
|
||||
ans = v2.GreaterThan(v1)
|
||||
case PkgCondLess:
|
||||
ans = v2.LessThan(v1)
|
||||
case PkgCondNot:
|
||||
ans = !v2.Equal(v1)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return ans, nil
|
||||
}
|
||||
|
||||
func (p *DefaultPackage) SelectorMatchVersion(v string) (bool, error) {
|
||||
if !p.IsSelector() {
|
||||
return false, errors.New("Package is not a selector")
|
||||
}
|
||||
|
||||
vS, err := ParseVersion(p.GetVersion())
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
vSI, err := ParseVersion(v)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
return PackageAdmit(vS, vSI)
|
||||
}
|
||||
|
||||
func (p *DefaultPackage) VersionMatchSelector(selector string) (bool, error) {
|
||||
vS, err := ParseVersion(selector)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
vSI, err := ParseVersion(p.GetVersion())
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
return PackageAdmit(vS, vSI)
|
||||
}
|
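A hedged usage sketch of the selector API introduced above: parse both the selector and the candidate version, then ask PackageAdmit whether the candidate is admitted.

```go
// Sketch only: suffixes are parsed but not yet compared for ordered conditions (see the TODO above).
sel, _ := ParseVersion(">=1.2.0_rc1") // Condition: PkgCondGreaterEqual, Version: "1.2.0", VersionSuffix: "_rc1"
ver, _ := ParseVersion("1.3.0")       // a concrete version; Condition defaults to PkgCondEqual
ok, err := PackageAdmit(sel, ver)
fmt.Println(ok, err) // true <nil>: 1.3.0 satisfies >=1.2.0
```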
197
pkg/package/version_test.go
Normal file
@@ -0,0 +1,197 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package pkg_test
|
||||
|
||||
import (
|
||||
gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
|
||||
|
||||
. "github.com/mudler/luet/pkg/package"
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("Versions", func() {
|
||||
|
||||
Context("Versions Parser1", func() {
|
||||
v, err := ParseVersion(">=1.0")
|
||||
It("ParseVersion1", func() {
|
||||
var c PkgSelectorCondition = PkgCondGreaterEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser2", func() {
|
||||
v, err := ParseVersion(">1.0")
|
||||
It("ParseVersion2", func() {
|
||||
var c PkgSelectorCondition = PkgCondGreater
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser3", func() {
|
||||
v, err := ParseVersion("<=1.0")
|
||||
It("ParseVersion3", func() {
|
||||
var c PkgSelectorCondition = PkgCondLessEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser4", func() {
|
||||
v, err := ParseVersion("<1.0")
|
||||
It("ParseVersion4", func() {
|
||||
var c PkgSelectorCondition = PkgCondLess
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser5", func() {
|
||||
v, err := ParseVersion("=1.0")
|
||||
It("ParseVersion5", func() {
|
||||
var c PkgSelectorCondition = PkgCondEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser6", func() {
|
||||
v, err := ParseVersion("!1.0")
|
||||
It("ParseVersion6", func() {
|
||||
var c PkgSelectorCondition = PkgCondNot
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("1.0"))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser7", func() {
|
||||
v, err := ParseVersion("")
|
||||
It("ParseVersion7", func() {
|
||||
var c PkgSelectorCondition = PkgCondInvalid
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal(""))
|
||||
Expect(v.VersionSuffix).Should(Equal(""))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser8", func() {
|
||||
v, err := ParseVersion("=12.1.0.2_p1")
|
||||
It("ParseVersion8", func() {
|
||||
var c PkgSelectorCondition = PkgCondEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("12.1.0.2"))
|
||||
Expect(v.VersionSuffix).Should(Equal("_p1"))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser9", func() {
|
||||
v, err := ParseVersion(">=0.0.20190406.4.9.172-r1")
|
||||
It("ParseVersion9", func() {
|
||||
var c PkgSelectorCondition = PkgCondGreaterEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("0.0.20190406.4.9.172"))
|
||||
Expect(v.VersionSuffix).Should(Equal("-r1"))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Versions Parser10", func() {
|
||||
v, err := ParseVersion(">=0.0.20190406.4.9.172_alpha")
|
||||
It("ParseVersion10", func() {
|
||||
var c PkgSelectorCondition = PkgCondGreaterEqual
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(v.Version).Should(Equal("0.0.20190406.4.9.172"))
|
||||
Expect(v.VersionSuffix).Should(Equal("_alpha"))
|
||||
Expect(v.Condition).Should(Equal(c))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Selector1", func() {
|
||||
v1, err := ParseVersion(">=0.0.20190406.4.9.172-r1")
|
||||
v2, err2 := ParseVersion("1.0.111")
|
||||
match, err3 := PackageAdmit(v1, v2)
|
||||
It("Selector1", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(err2).Should(BeNil())
|
||||
Expect(err3).Should(BeNil())
|
||||
Expect(match).Should(Equal(true))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Selector2", func() {
|
||||
v1, err := ParseVersion(">=0.0.20190406.4.9.172-r1")
|
||||
v2, err2 := ParseVersion("0")
|
||||
match, err3 := PackageAdmit(v1, v2)
|
||||
It("Selector2", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(err2).Should(BeNil())
|
||||
Expect(err3).Should(BeNil())
|
||||
Expect(match).Should(Equal(false))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Selector3", func() {
|
||||
v1, err := ParseVersion(">0")
|
||||
v2, err2 := ParseVersion("0.0.40-alpha")
|
||||
match, err3 := PackageAdmit(v1, v2)
|
||||
It("Selector3", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(err2).Should(BeNil())
|
||||
Expect(err3).Should(BeNil())
|
||||
Expect(match).Should(Equal(true))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Selector4", func() {
|
||||
v1, err := ParseVersion(">0")
|
||||
v2, err2 := ParseVersion("")
|
||||
match, err3 := PackageAdmit(v1, v2)
|
||||
It("Selector4", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(err2).Should(BeNil())
|
||||
Expect(err3).Should(BeNil())
|
||||
Expect(match).Should(Equal(true))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Condition Converter 1", func() {
|
||||
gp, err := gentoo.ParsePackageStr("=layer/build-1.0")
|
||||
var cond gentoo.PackageCond = gentoo.PkgCondEqual
|
||||
It("Converter1", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect((*gp).Condition).Should(Equal(cond))
|
||||
Expect(PkgSelectorConditionFromInt((*gp).Condition.Int()).String()).Should(Equal(""))
|
||||
})
|
||||
})
|
||||
|
||||
})
|
85
pkg/repository/loader.go
Normal file
@@ -0,0 +1,85 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package repository
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"path"
|
||||
"regexp"
|
||||
|
||||
"github.com/ghodss/yaml"
|
||||
|
||||
. "github.com/mudler/luet/pkg/config"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
)
|
||||
|
||||
func LoadRepositories(c *LuetConfig) error {
|
||||
var regexRepo = regexp.MustCompile(`.yml$`)
|
||||
|
||||
for _, rdir := range c.RepositoriesConfDir {
|
||||
Debug("Parsing Repository Directory", rdir, "...")
|
||||
|
||||
files, err := ioutil.ReadDir(rdir)
|
||||
if err != nil {
|
||||
Warning("Skip dir", rdir, ":", err.Error())
|
||||
continue
|
||||
}
|
||||
|
||||
for _, file := range files {
|
||||
if file.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
if !regexRepo.MatchString(file.Name()) {
|
||||
Debug("File", file.Name(), "skipped.")
|
||||
continue
|
||||
}
|
||||
|
||||
content, err := ioutil.ReadFile(path.Join(rdir, file.Name()))
|
||||
if err != nil {
|
||||
Warning("On read file", file.Name(), ":", err.Error())
|
||||
Warning("File", file.Name(), "skipped.")
|
||||
continue
|
||||
}
|
||||
|
||||
r, err := LoadRepository(content)
|
||||
if err != nil {
|
||||
Warning("On parse file", file.Name(), ":", err.Error())
|
||||
Warning("File", file.Name(), "skipped.")
|
||||
continue
|
||||
}
|
||||
|
||||
if r.Name == "" || len(r.Urls) == 0 || r.Type == "" {
|
||||
Warning("Invalid repository ", file.Name())
|
||||
Warning("File", file.Name(), "skipped.")
|
||||
continue
|
||||
}
|
||||
|
||||
c.AddSystemRepository(*r)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func LoadRepository(data []byte) (*LuetRepository, error) {
|
||||
ans := NewEmptyLuetRepository()
|
||||
err := yaml.Unmarshal(data, &ans)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return ans, nil
|
||||
}
|
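A hedged sketch of what a single repository definition might look like when fed to LoadRepository. The YAML keys below are assumptions inferred from the fields validated by LoadRepositories (name, type, urls) and the fields asserted in the tests (priority); the authoritative schema lives in pkg/config.

```go
// Sketch only: the YAML keys are assumed, not taken from the config package.
data := []byte(`
name: "test1"
type: "disk"
priority: 999
urls:
  - "tests/repos/test1"
`)
repo, err := LoadRepository(data)
if err == nil {
	fmt.Println(repo.Name, repo.Type, repo.Urls, repo.Priority)
}
```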
33
pkg/repository/repository_suite_test.go
Normal file
@@ -0,0 +1,33 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package repository_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSolver(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Repository Suite")
|
||||
}
|
46
pkg/repository/repository_test.go
Normal file
@@ -0,0 +1,46 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
// Daniele Rondina <geaaru@sabayonlinux.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package repository_test
|
||||
|
||||
import (
|
||||
. "github.com/mudler/luet/pkg/config"
|
||||
. "github.com/mudler/luet/pkg/repository"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
viper "github.com/spf13/viper"
|
||||
)
|
||||
|
||||
var _ = Describe("Repository", func() {
|
||||
Context("Load Repository1", func() {
|
||||
cfg := NewLuetConfig(viper.New())
|
||||
cfg.RepositoriesConfDir = []string{
|
||||
"../../tests/fixtures/repos.conf.d",
|
||||
}
|
||||
err := LoadRepositories(cfg)
|
||||
|
||||
It("Chec Load Repository 1", func() {
|
||||
Expect(err).Should(BeNil())
|
||||
Expect(len(cfg.SystemRepositories)).Should(Equal(1))
|
||||
Expect(cfg.SystemRepositories[0].Name).Should(Equal("test1"))
|
||||
Expect(cfg.SystemRepositories[0].Priority).Should(Equal(999))
|
||||
Expect(cfg.SystemRepositories[0].Type).Should(Equal("disk"))
|
||||
Expect(len(cfg.SystemRepositories[0].Urls)).Should(Equal(1))
|
||||
Expect(cfg.SystemRepositories[0].Urls[0]).Should(Equal("tests/repos/test1"))
|
||||
})
|
||||
})
|
||||
})
|
336
pkg/solver/resolver.go
Normal file
@@ -0,0 +1,336 @@
|
||||
// Copyright © 2020 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package solver
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strconv"
|
||||
|
||||
"github.com/crillab/gophersat/bf"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
"gopkg.in/yaml.v2"
|
||||
|
||||
"github.com/ecooper/qlearning"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type ActionType int
|
||||
|
||||
const (
|
||||
NoAction = 0
|
||||
Solved = iota
|
||||
NoSolution = iota
|
||||
Going = iota
|
||||
ActionRemoved = iota
|
||||
ActionAdded = iota
|
||||
|
||||
DoNoop = false
|
||||
|
||||
ActionDomains = 3 // Bump it if you increase the number of actions
|
||||
|
||||
DefaultMaxAttempts = 9000
|
||||
DefaultLearningRate = 0.7
|
||||
DefaultDiscount = 1.0
|
||||
DefaultInitialObserved = 999999
|
||||
|
||||
QLearningResolverType = "qlearning"
|
||||
)
|
||||
|
||||
//. "github.com/mudler/luet/pkg/logger"
|
||||
|
||||
// PackageResolver assists PackageSolver on unsat cases
|
||||
type PackageResolver interface {
|
||||
Solve(bf.Formula, PackageSolver) (PackagesAssertions, error)
|
||||
}
|
||||
|
||||
type DummyPackageResolver struct {
|
||||
}
|
||||
|
||||
func (*DummyPackageResolver) Solve(bf.Formula, PackageSolver) (PackagesAssertions, error) {
|
||||
return nil, errors.New("Could not satisfy the constraints. Try again by removing deps ")
|
||||
}
|
||||
|
||||
type QLearningResolver struct {
|
||||
Attempts int
|
||||
|
||||
ToAttempt int
|
||||
|
||||
attempts int
|
||||
|
||||
Attempted map[string]bool
|
||||
|
||||
Solver PackageSolver
|
||||
Formula bf.Formula
|
||||
|
||||
Targets []pkg.Package
|
||||
Current []pkg.Package
|
||||
|
||||
observedDelta int
|
||||
observedDeltaChoice []pkg.Package
|
||||
|
||||
Agent *qlearning.SimpleAgent
|
||||
}
|
||||
|
||||
func SimpleQLearningSolver() PackageResolver {
|
||||
return NewQLearningResolver(DefaultLearningRate, DefaultDiscount, DefaultMaxAttempts, DefaultInitialObserved)
|
||||
}
|
||||
|
||||
// Defaults LearningRate 0.7, Discount 1.0
|
||||
func NewQLearningResolver(LearningRate, Discount float32, MaxAttempts, initialObservedDelta int) PackageResolver {
|
||||
return &QLearningResolver{
|
||||
Agent: qlearning.NewSimpleAgent(LearningRate, Discount),
|
||||
observedDelta: initialObservedDelta,
|
||||
Attempts: MaxAttempts,
|
||||
}
|
||||
}
|
||||
|
||||
func (resolver *QLearningResolver) Solve(f bf.Formula, s PackageSolver) (PackagesAssertions, error) {
|
||||
// Info("Using QLearning solver to resolve conflicts. Please be patient.")
|
||||
resolver.Solver = s
|
||||
|
||||
s.SetResolver(&DummyPackageResolver{}) // Set a dummy resolver here, otherwise each attempt would recursively spawn another QLearning instance.
|
||||
defer s.SetResolver(resolver) // Set back ourselves as resolver
|
||||
|
||||
resolver.Formula = f
|
||||
|
||||
// Our agent by default has a learning rate of 0.7 and discount of 1.0.
|
||||
if resolver.Agent == nil {
|
||||
resolver.Agent = qlearning.NewSimpleAgent(DefaultLearningRate, DefaultDiscount) // FIXME: Remove hardcoded values
|
||||
}
|
||||
|
||||
// 3 action domains, counting noop regardless of whether it is enabled
|
||||
// get the permutations to attempt
|
||||
resolver.ToAttempt = int(helpers.Factorial(uint64(len(resolver.Solver.(*Solver).Wanted)-1) * ActionDomains)) // TODO: type assertions must go away
|
||||
resolver.Targets = resolver.Solver.(*Solver).Wanted
|
||||
|
||||
resolver.attempts = resolver.Attempts
|
||||
|
||||
resolver.Attempted = make(map[string]bool, len(resolver.Targets))
|
||||
|
||||
for resolver.IsComplete() == Going {
|
||||
// Pick the next move, which is going to be a letter choice.
|
||||
action := qlearning.Next(resolver.Agent, resolver)
|
||||
|
||||
// Whatever that choice is, let's update our model for its
|
||||
// impact. If the package chosen makes the formula sat,
|
||||
// then this action will be positive. Otherwise, it will be
|
||||
// negative.
|
||||
resolver.Agent.Learn(action, resolver)
|
||||
|
||||
// Reward doesn't change state so we can check what the
|
||||
// reward would be for this action, and report how the
|
||||
// env changed.
|
||||
// score := resolver.Reward(action)
|
||||
// if score > 0.0 {
|
||||
// resolver.Log("%s was correct", action.Action.String())
|
||||
// } else {
|
||||
// resolver.Log("%s was incorrect", action.Action.String())
|
||||
// }
|
||||
}
|
||||
|
||||
// If we got a good result, take it.
// Also take the result if we reached the overall maximum number of attempts.
|
||||
if resolver.IsComplete() == Solved || resolver.IsComplete() == NoSolution {
|
||||
|
||||
if len(resolver.observedDeltaChoice) != 0 {
|
||||
// Take the minimum delta observed choice result, and consume it (Try sets the wanted list)
|
||||
resolver.Solver.(*Solver).Wanted = resolver.observedDeltaChoice
|
||||
}
|
||||
|
||||
return resolver.Solver.Solve()
|
||||
} else {
|
||||
return nil, errors.New("QLearning resolver failed ")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// Returns the current state.
|
||||
func (resolver *QLearningResolver) IsComplete() int {
|
||||
if resolver.attempts < 1 {
|
||||
return NoSolution
|
||||
}
|
||||
|
||||
if resolver.ToAttempt > 0 {
|
||||
return Going
|
||||
}
|
||||
|
||||
return Solved
|
||||
}
|
||||
|
||||
func (resolver *QLearningResolver) Try(c Choice) error {
|
||||
pack := c.Package
|
||||
packtoAdd := pkg.FromString(pack)
|
||||
resolver.Attempted[pack+strconv.Itoa(int(c.Action))] = true // increase the count
|
||||
s, _ := resolver.Solver.(*Solver)
|
||||
var filtered []pkg.Package
|
||||
|
||||
switch c.Action {
|
||||
case ActionAdded:
|
||||
found := false
|
||||
for _, p := range s.Wanted {
|
||||
if p.String() == pack {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
resolver.Solver.(*Solver).Wanted = append(resolver.Solver.(*Solver).Wanted, packtoAdd)
|
||||
}
|
||||
|
||||
case ActionRemoved:
|
||||
for _, p := range s.Wanted {
|
||||
if p.String() != pack {
|
||||
filtered = append(filtered, p)
|
||||
}
|
||||
}
|
||||
|
||||
resolver.Solver.(*Solver).Wanted = filtered
|
||||
}
|
||||
|
||||
_, err := resolver.Solver.Solve()
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// Choose applies a pack attempt, returning
|
||||
// true if the formula returns sat.
|
||||
//
|
||||
// Choose updates the resolver's state.
|
||||
func (resolver *QLearningResolver) Choose(c Choice) bool {
|
||||
//pack := pkg.FromString(c.Package)
|
||||
|
||||
err := resolver.Try(c)
|
||||
|
||||
if err == nil {
|
||||
resolver.ToAttempt--
|
||||
resolver.attempts-- // Decrease attempts - it's a barrier. We could also do not decrease it here, allowing more attempts to be made
|
||||
} else {
|
||||
resolver.attempts--
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Reward returns a score for a given qlearning.StateAction. Reward is a
|
||||
// member of the qlearning.Rewarder interface. If the choice makes the formula sat, a positive score is returned.
|
||||
// Otherwise, a static -1000 is returned.
|
||||
func (resolver *QLearningResolver) Reward(action *qlearning.StateAction) float32 {
|
||||
choice := action.Action.(*Choice)
|
||||
|
||||
//_, err := resolver.Solver.Solve()
|
||||
err := resolver.Try(*choice)
|
||||
|
||||
toBeInstalled := len(resolver.Solver.(*Solver).Wanted)
|
||||
originalTarget := len(resolver.Targets)
|
||||
noaction := choice.Action == NoAction
|
||||
delta := originalTarget - toBeInstalled
|
||||
|
||||
if err == nil {
|
||||
// if toBeInstalled == originalTarget { // Base case: all the targets matches (it shouldn't happen, but lets put a higher)
|
||||
// Debug("Target match, maximum score")
|
||||
// return 24.0 / float32(len(resolver.Attempted))
|
||||
|
||||
// }
|
||||
if DoNoop {
|
||||
if noaction && toBeInstalled == 0 { // We decided to stay in the current state, and no targets have been chosen
|
||||
return -100
|
||||
}
|
||||
}
|
||||
|
||||
if delta <= resolver.observedDelta { // Track the minimum observed delta (i.e. maximise how many targets get installed)
|
||||
resolver.observedDelta = delta
|
||||
resolver.observedDeltaChoice = resolver.Solver.(*Solver).Wanted // we store it as this is our return value at the end
|
||||
return 24.0 / float32(len(resolver.Attempted))
|
||||
} else if toBeInstalled > 0 { // If we installed something, at least give a good score
|
||||
return 24.0 / float32(len(resolver.Attempted))
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
return -1000
|
||||
}
|
||||
|
||||
// Next creates a new slice of qlearning.Action instances. A possible
|
||||
// action is created for each package that could be removed from the formula's target
|
||||
func (resolver *QLearningResolver) Next() []qlearning.Action {
|
||||
actions := make([]qlearning.Action, 0, (len(resolver.Targets)-1)*3)
|
||||
|
||||
TARGETS:
|
||||
for _, pack := range resolver.Targets {
|
||||
for _, current := range resolver.Solver.(*Solver).Wanted {
|
||||
if current.String() == pack.String() {
|
||||
actions = append(actions, &Choice{Package: pack.String(), Action: ActionRemoved})
|
||||
continue TARGETS
|
||||
}
|
||||
|
||||
}
|
||||
actions = append(actions, &Choice{Package: pack.String(), Action: ActionAdded})
|
||||
}
|
||||
|
||||
if DoNoop {
|
||||
actions = append(actions, &Choice{Package: "", Action: NoAction}) // NOOP
|
||||
}
|
||||
|
||||
return actions
|
||||
}
|
||||
|
||||
// Log is a wrapper of fmt.Printf. If Game.debug is true, Log will print
|
||||
// to stdout.
|
||||
func (resolver *QLearningResolver) Log(msg string, args ...interface{}) {
|
||||
logMsg := fmt.Sprintf("(%d moves, %d remaining attempts) %s\n", len(resolver.Attempted), resolver.attempts, msg)
|
||||
fmt.Println(fmt.Sprintf(logMsg, args...))
|
||||
}
|
||||
|
||||
// String returns a consistent hash for the current env state to be
|
||||
// used in a qlearning.Agent.
|
||||
func (resolver *QLearningResolver) String() string {
|
||||
return fmt.Sprintf("%v", resolver.Solver.(*Solver).Wanted)
|
||||
}
|
||||
|
||||
// Choice implements qlearning.Action for a package choice for removal from wanted targets
|
||||
type Choice struct {
|
||||
Package string `json:"pack"`
|
||||
Action ActionType `json:"action"`
|
||||
}
|
||||
|
||||
func ChoiceFromString(s string) (*Choice, error) {
|
||||
var p *Choice
|
||||
err := yaml.Unmarshal([]byte(s), &p)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return p, nil
|
||||
}
|
||||
|
||||
// String returns the character for the current action.
|
||||
func (choice *Choice) String() string {
|
||||
data, err := json.Marshal(choice)
|
||||
if err != nil {
|
||||
return ""
|
||||
}
|
||||
return string(data)
|
||||
}
|
||||
|
||||
// Apply updates the state of the solver for the package choice.
|
||||
func (choice *Choice) Apply(state qlearning.State) qlearning.State {
|
||||
resolver := state.(*QLearningResolver)
|
||||
resolver.Choose(*choice)
|
||||
|
||||
return resolver
|
||||
}
|
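A hedged sketch of how Choice values round-trip: String produces the JSON key the qlearning agent uses to identify an action, and ChoiceFromString parses it back (yaml.Unmarshal accepts JSON, since YAML is a superset of it). The package string below is illustrative only.

```go
// Sketch only: in the resolver the Package field actually carries a serialized DefaultPackage.
c := &Choice{Package: "A-cat-1.0", Action: ActionAdded}
key := c.String() // e.g. {"pack":"A-cat-1.0","action":5}
back, err := ChoiceFromString(key)
fmt.Println(back.Package, back.Action, err)
```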
182
pkg/solver/resolver_test.go
Normal file
@@ -0,0 +1,182 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package solver_test
|
||||
|
||||
import (
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
|
||||
. "github.com/mudler/luet/pkg/solver"
|
||||
)
|
||||
|
||||
var _ = Describe("Resolver", func() {
|
||||
|
||||
db := pkg.NewInMemoryDatabase(false)
|
||||
dbInstalled := pkg.NewInMemoryDatabase(false)
|
||||
dbDefinitions := pkg.NewInMemoryDatabase(false)
|
||||
s := NewSolver(dbInstalled, dbDefinitions, db)
|
||||
|
||||
BeforeEach(func() {
|
||||
db = pkg.NewInMemoryDatabase(false)
|
||||
dbInstalled = pkg.NewInMemoryDatabase(false)
|
||||
dbDefinitions = pkg.NewInMemoryDatabase(false)
|
||||
s = NewSolver(dbInstalled, dbDefinitions, db)
|
||||
})
|
||||
|
||||
Context("Conflict set", func() {
|
||||
Context("DummyPackageResolver", func() {
|
||||
It("is unsolvable - as we something we ask to install conflict with system stuff", func() {
|
||||
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
|
||||
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{C})
|
||||
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
|
||||
|
||||
for _, p := range []pkg.Package{A, B, C} {
|
||||
_, err := dbDefinitions.CreatePackage(p)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
for _, p := range []pkg.Package{C} {
|
||||
_, err := dbInstalled.CreatePackage(p)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
solution, err := s.Install([]pkg.Package{A})
|
||||
Expect(len(solution)).To(Equal(0))
|
||||
Expect(err).To(HaveOccurred())
|
||||
})
|
||||
It("succeeds to install D and F if explictly requested", func() {
|
||||
C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
|
||||
B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{C})
|
||||
A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
|
||||
D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
|
||||
E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
|
||||
				F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})

				for _, p := range []pkg.Package{A, B, C, D, E, F} {
					_, err := dbDefinitions.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				for _, p := range []pkg.Package{C} {
					_, err := dbInstalled.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				solution, err := s.Install([]pkg.Package{D, F}) // D and F should go as they have no deps. A/E should be filtered by QLearn
				Expect(err).ToNot(HaveOccurred())

				Expect(len(solution)).To(Equal(6))

				Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
				Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))
				Expect(solution).To(ContainElement(PackageAssert{Package: E, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: F, Value: true}))

			})

		})

		Context("QLearningResolver", func() {
			It("will find out that we can install D by ignoring A", func() {
				s.SetResolver(SimpleQLearningSolver())
				C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
				B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{C})
				A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
				D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})

				for _, p := range []pkg.Package{A, B, C, D} {
					_, err := dbDefinitions.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				for _, p := range []pkg.Package{C} {
					_, err := dbInstalled.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				solution, err := s.Install([]pkg.Package{A, D})
				Expect(err).ToNot(HaveOccurred())

				Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true}))
				Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))

				Expect(len(solution)).To(Equal(4))
			})

			It("will find out that we can install D and F by ignoring E and A", func() {
				s.SetResolver(SimpleQLearningSolver())
				C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
				B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{C})
				A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
				D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
				E := pkg.NewPackage("E", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
				F := pkg.NewPackage("F", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})

				for _, p := range []pkg.Package{A, B, C, D, E, F} {
					_, err := dbDefinitions.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				for _, p := range []pkg.Package{C} {
					_, err := dbInstalled.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				solution, err := s.Install([]pkg.Package{A, D, E, F}) // D and F should go as they have no deps. A/E should be filtered by QLearn
				Expect(err).ToNot(HaveOccurred())

				Expect(solution).To(ContainElement(PackageAssert{Package: A, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: B, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: C, Value: true})) // Was already installed
				Expect(solution).To(ContainElement(PackageAssert{Package: D, Value: true}))
				Expect(solution).To(ContainElement(PackageAssert{Package: E, Value: false}))
				Expect(solution).To(ContainElement(PackageAssert{Package: F, Value: true}))
				Expect(len(solution)).To(Equal(6))

			})
		})

		Context("DummyPackageResolver", func() {
			It("cannot find a solution", func() {
				C := pkg.NewPackage("C", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
				B := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{C})
				A := pkg.NewPackage("A", "", []*pkg.DefaultPackage{B}, []*pkg.DefaultPackage{})
				D := pkg.NewPackage("D", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})

				for _, p := range []pkg.Package{A, B, C, D} {
					_, err := dbDefinitions.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				for _, p := range []pkg.Package{C} {
					_, err := dbInstalled.CreatePackage(p)
					Expect(err).ToNot(HaveOccurred())
				}

				solution, err := s.Install([]pkg.Package{A, D})
				Expect(err).To(HaveOccurred())

				Expect(len(solution)).To(Equal(0))
			})

		})
	})

})
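The specs above drive the new resolver hook end to end. As a rough orientation, here is a minimal sketch of wiring a solver to the QLearning resolver outside of Ginkgo; the import paths are assumptions (this repository's pkg/package and pkg/solver packages), while the constructors and the Install call mirror the identifiers used in the tests.

```go
package main

import (
	"fmt"

	pkg "github.com/mudler/luet/pkg/package" // assumed import path
	"github.com/mudler/luet/pkg/solver"      // assumed import path
)

func main() {
	dbInstalled := pkg.NewInMemoryDatabase(false)
	dbDefinitions := pkg.NewInMemoryDatabase(false)

	// B has no dependencies, A requires B (mirrors the fixtures in the specs).
	b := pkg.NewPackage("B", "", []*pkg.DefaultPackage{}, []*pkg.DefaultPackage{})
	a := pkg.NewPackage("A", "", []*pkg.DefaultPackage{b}, []*pkg.DefaultPackage{})
	for _, p := range []pkg.Package{a, b} {
		if _, err := dbDefinitions.CreatePackage(p); err != nil {
			panic(err)
		}
	}

	// NewResolver takes an explicit PackageResolver; NewSolver now defaults to
	// the DummyPackageResolver, and SetResolver can swap it afterwards.
	s := solver.NewResolver(dbInstalled, dbDefinitions, pkg.NewInMemoryDatabase(false), solver.SimpleQLearningSolver())

	assertions, err := s.Install([]pkg.Package{a})
	fmt.Println(len(assertions), err)
}
```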
@@ -33,6 +33,10 @@ type PackageSolver interface {
	ConflictsWith(p pkg.Package, ls []pkg.Package) (bool, error)
	World() []pkg.Package
	Upgrade() ([]pkg.Package, PackagesAssertions, error)

	SetResolver(PackageResolver)

	Solve() (PackagesAssertions, error)
}

// Solver is the default solver for luet
@@ -41,20 +45,33 @@ type Solver struct {
	SolverDatabase    pkg.PackageDatabase
	Wanted            []pkg.Package
	InstalledDatabase pkg.PackageDatabase

	Resolver PackageResolver
}

// NewSolver accepts as argument two lists of packages, the first is the initial set,
// the second represents all the known packages.
func NewSolver(installed pkg.PackageDatabase, definitiondb pkg.PackageDatabase, solverdb pkg.PackageDatabase) PackageSolver {
	return &Solver{InstalledDatabase: installed, DefinitionDatabase: definitiondb, SolverDatabase: solverdb}
	return NewResolver(installed, definitiondb, solverdb, &DummyPackageResolver{})
}

// SetWorld is a setter for the list of all known packages to the solver
// NewResolver accepts as argument two lists of packages, the first is the initial set,
// the second represents all the known packages.
func NewResolver(installed pkg.PackageDatabase, definitiondb pkg.PackageDatabase, solverdb pkg.PackageDatabase, re PackageResolver) PackageSolver {
	return &Solver{InstalledDatabase: installed, DefinitionDatabase: definitiondb, SolverDatabase: solverdb, Resolver: re}
}

// SetDefinitionDatabase is a setter for the definition database

func (s *Solver) SetDefinitionDatabase(db pkg.PackageDatabase) {
	s.DefinitionDatabase = db
}

// SetResolver is a setter for the unsat resolver backend
func (s *Solver) SetResolver(r PackageResolver) {
	s.Resolver = r
}

func (s *Solver) World() []pkg.Package {
	return s.DefinitionDatabase.World()
}
@@ -221,6 +238,7 @@ func (s *Solver) Upgrade() ([]pkg.Package, PackagesAssertions, error) {
	}

	s2 := NewSolver(installedcopy, s.DefinitionDatabase, pkg.NewInMemoryDatabase(false))
	s2.SetResolver(s.Resolver)
	// Then try to uninstall the versions in the system, and store that tree
	for _, p := range toUninstall {
		r, err := s.Uninstall(p)
@@ -361,13 +379,20 @@ func (s *Solver) solve(f bf.Formula) (map[string]bool, bf.Formula, error) {

// Solve builds the formula given the current state and returns package assertions
func (s *Solver) Solve() (PackagesAssertions, error) {
	var model map[string]bool
	var err error

	f, err := s.BuildFormula()

	if err != nil {
		return nil, err
	}

	model, _, err := s.solve(f)
	model, _, err = s.solve(f)
	if err != nil && s.Resolver != nil {
		return s.Resolver.Solve(f, s)
	}

	if err != nil {
		return nil, err
	}

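The fallback path in Solve() hands the unsatisfiable formula to the configured PackageResolver through s.Resolver.Solve(f, s). The interface itself is not shown in this hunk, so the signature below is inferred from that call site, and the gophersat import path is an assumption; this is only a sketch of what a custom resolver plugged in through SetResolver could look like, not the project's own implementation.

```go
package solver

import (
	"errors"

	"github.com/crillab/gophersat/bf" // assumed import path for the bf.Formula type used above
)

// GiveUpResolver is a hypothetical resolver that never relaxes the formula:
// when the SAT solver finds no model, it simply reports failure, which is the
// behaviour the DummyPackageResolver specs above assert (an error and an
// empty assertion list).
type GiveUpResolver struct{}

// Solve's signature is inferred from the call s.Resolver.Solve(f, s) in (*Solver).Solve.
func (GiveUpResolver) Solve(f bf.Formula, s PackageSolver) (PackagesAssertions, error) {
	return nil, errors.New("formula is unsatisfiable and this resolver does not attempt a relaxation")
}
```

A real resolver, such as the QLearning one exercised above, instead relaxes the request (in the specs by filtering out A and E) until a satisfiable subset remains.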
@@ -18,11 +18,15 @@ package solver_test
import (
	"testing"

	. "github.com/mudler/luet/cmd"
	config "github.com/mudler/luet/pkg/config"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestSolver(t *testing.T) {
	RegisterFailHandler(Fail)
	LoadConfig(config.LuetCfg)
	RunSpecs(t, "Solver Suite")
}

@@ -18,11 +18,15 @@ package gentoo_test
import (
	"testing"

	. "github.com/mudler/luet/cmd"
	config "github.com/mudler/luet/pkg/config"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestGentooBuilder(t *testing.T) {
	RegisterFailHandler(Fail)
	LoadConfig(config.LuetCfg)
	RunSpecs(t, "Gentoo Suite")
}

@@ -18,11 +18,15 @@ package tree_test
import (
	"testing"

	. "github.com/mudler/luet/cmd"
	config "github.com/mudler/luet/pkg/config"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestTree(t *testing.T) {
	RegisterFailHandler(Fail)
	LoadConfig(config.LuetCfg)
	RunSpecs(t, "Tree Suite")
}
tests/fixtures/qlearning/a/build.yaml (new file, vendored)
@@ -0,0 +1,8 @@
prelude:
- echo a > /aprelude
steps:
- echo a > /a
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/qlearning/a/definition.yaml (new file, vendored)
@@ -0,0 +1,8 @@
category: "test"
name: "a"
version: "1.0"
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/qlearning/b/build.yaml (new file, vendored)
@@ -0,0 +1,5 @@
image: "alpine"
prelude:
- echo b > /bprelude
steps:
- echo b > /b

tests/fixtures/qlearning/b/definition.yaml (new file, vendored)
@@ -0,0 +1,7 @@
category: "test"
name: "b"
version: "1.0"
conflicts:
- category: "test"
  name: "c"
  version: "1.0"

tests/fixtures/qlearning/c/build.yaml (new file, vendored)
@@ -0,0 +1,5 @@
image: "alpine"
prelude:
- echo c > /cprelude
steps:
- echo c > /c

tests/fixtures/qlearning/c/definition.yaml (new file, vendored)
@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.0"

tests/fixtures/qlearning/d/build.yaml (new file, vendored)
@@ -0,0 +1,5 @@
image: "alpine"
prelude:
- echo d > /dprelude
steps:
- echo d > /d

tests/fixtures/qlearning/d/definition.yaml (new file, vendored)
@@ -0,0 +1,3 @@
category: "test"
name: "d"
version: "1.0"

tests/fixtures/qlearning/e/build.yaml (new file, vendored)
@@ -0,0 +1,9 @@
prelude:
- echo e > /eprelude
steps:
- echo e > /e
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/qlearning/e/definition.yaml (new file, vendored)
@@ -0,0 +1,8 @@
category: "test"
name: "e"
version: "1.0"
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/qlearning/f/build.yaml (new file, vendored)
@@ -0,0 +1,5 @@
prelude:
- echo f > /eprelude
steps:
- echo f > /f
image: "alpine"

tests/fixtures/qlearning/f/definition.yaml (new file, vendored)
@@ -0,0 +1,3 @@
category: "test"
name: "f"
version: "1.0"

tests/fixtures/repos.conf.d/repo-test.yml (new file, vendored)
@@ -0,0 +1,9 @@
name: "test1"
description: "Test1 Repository"
type: "disk"
urls:
- "tests/repos/test1"
priority: 999
enable: true
# auth:
#   token: "xxxxx"

tests/fixtures/repos.conf.d/repo.yml.skipped (new file, vendored)
@@ -0,0 +1,8 @@
name: "test1-skipped"
description: "Test1 Repository"
type: "disk"
path: "tests/repos/test1"
priority: 1
enable: true
# auth:
#   token: "xxxxx"

tests/fixtures/retrieve-integration/a/build.yaml (new file, vendored)
@@ -0,0 +1,3 @@
image: "alpine"
steps:
- echo a > /a

tests/fixtures/retrieve-integration/a/definition.yaml (new file, vendored)
@@ -0,0 +1,3 @@
name: "a"
version: "1.0"
category: "test"

tests/fixtures/retrieve-integration/b/build.yaml (new file, vendored)
@@ -0,0 +1,10 @@

steps:
- tar xvf a-test-1.0.package.* -C ./
- mv a /b
requires:
- name: "a"
  version: "1.0"
  category: "test"
retrieve:
- a-test-1.0.package.*

tests/fixtures/retrieve-integration/b/definition.yaml (new file, vendored)
@@ -0,0 +1,7 @@
name: "b"
version: "1.0"
category: "test"
requires:
- name: "a"
  version: "1.0"
  category: "test"

tests/fixtures/retrieve/a/build.yaml (new file, vendored)
@@ -0,0 +1,10 @@
image: "luet/base"
seed: "alpine"
steps:
- echo foo > /test
- echo bar > /test2
retrieve:
- test
- http://www.google.com
env:
- test=1

tests/fixtures/retrieve/a/definition.yaml (new file, vendored)
@@ -0,0 +1,3 @@
name: "a"
version: "1.0"
category: "test"
@@ -12,7 +12,7 @@ oneTimeTearDown() {

testBuild() {
    mkdir $tmpdir/testbuild
    luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
    luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c > /dev/null
    buildst=$?
    assertEquals 'builds successfully' "$buildst" "0"
    assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
@@ -25,37 +25,53 @@ testRepo() {
      --output $tmpdir/testbuild \
      --packages $tmpdir/testbuild \
      --name "test" \
      --uri $tmpdir/testrootfs \
      --type local > /dev/null
      --descr "Test Repo" \
      --urls $tmpdir/testrootfs \
      --type disk > /dev/null

    createst=$?
    assertEquals 'create repo successfully' "$createst" "0"
    assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}

testInstall() {
testConfig() {
    mkdir $tmpdir/testrootfs
    cat <<EOF > $tmpdir/luet.yaml
system-repositories:
- name: "main"
  type: "local"
  uri: "$tmpdir/testbuild"
    cat <<EOF > $tmpdir/luet.yaml
general:
  debug: true
system:
  rootfs: $tmpdir/testrootfs
  database_path: "/"
  database_engine: "boltdb"
repositories:
  - name: "main"
    type: "disk"
    enable: true
    urls:
      - "$tmpdir/testbuild"
EOF
    luet install --config $tmpdir/luet.yaml --system-dbpath $tmpdir/testrootfs --system-target $tmpdir/testrootfs test/c-1.0 > /dev/null
    luet config --config $tmpdir/luet.yaml
    res=$?
    assertEquals 'config test successfully' "$res" "0"
}

testInstall() {
    luet install --config $tmpdir/luet.yaml test/c
    #luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}

testReInstall() {
    output=$(luet install --config $tmpdir/luet.yaml --system-dbpath $tmpdir/testrootfs --system-target $tmpdir/testrootfs test/c-1.0)
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertContains 'contains warning' "$output" 'Filtering out'
}

testUnInstall() {
    luet uninstall --config $tmpdir/luet.yaml --system-dbpath $tmpdir/testrootfs --system-target $tmpdir/testrootfs test/c-1.0 > /dev/null
    luet uninstall --config $tmpdir/luet.yaml test/c
    installst=$?
    assertEquals 'uninstall test successfully' "$installst" "0"
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
@@ -63,11 +79,19 @@ testUnInstall() {

testInstallAgain() {
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
    output=$(luet install --config $tmpdir/luet.yaml --system-dbpath $tmpdir/testrootfs --system-target $tmpdir/testrootfs test/c-1.0)
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertNotContains 'contains warning' "$output" 'Filtering out'
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
    assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

testCleanup() {
    luet cleanup --config $tmpdir/luet.yaml
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

# Load shUnit2.

tests/integration/01_simple_gzip.sh (new executable file)
@@ -0,0 +1,104 @@
#!/bin/bash

export LUET_NOLOCK=true

oneTimeSetUp() {
    export tmpdir="$(mktemp -d)"
}

oneTimeTearDown() {
    rm -rf "$tmpdir"
}

testBuild() {
    mkdir $tmpdir/testbuild
    luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
    buildst=$?
    assertEquals 'builds successfully' "$buildst" "0"
    assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
    assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
}

testRepo() {
    assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
    luet create-repo --tree "$ROOT_DIR/tests/fixtures/buildableseed" \
    --output $tmpdir/testbuild \
    --packages $tmpdir/testbuild \
    --name "test" \
    --descr "Test Repo" \
    --urls $tmpdir/testrootfs \
    --tree-compression gzip \
    --tree-path foo.tar \
    --type disk > /dev/null

    createst=$?
    assertEquals 'create repo successfully' "$createst" "0"
    assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
    assertTrue 'create named tree in gzip' "[ -e '$tmpdir/testbuild/foo.tar.gz' ]"
    assertTrue 'create tree in gzip-only' "[ ! -e '$tmpdir/testbuild/foo.tar' ]"

}

testConfig() {
    mkdir $tmpdir/testrootfs
    cat <<EOF > $tmpdir/luet.yaml
general:
  debug: true
system:
  rootfs: $tmpdir/testrootfs
  database_path: "/"
  database_engine: "boltdb"
repositories:
  - name: "main"
    type: "disk"
    enable: true
    urls:
      - "$tmpdir/testbuild"
EOF
    luet config --config $tmpdir/luet.yaml
    res=$?
    assertEquals 'config test successfully' "$res" "0"
}

testInstall() {
    luet install --config $tmpdir/luet.yaml test/c-1.0
    #luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}

testReInstall() {
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertContains 'contains warning' "$output" 'Filtering out'
}

testUnInstall() {
    luet uninstall --config $tmpdir/luet.yaml test/c-1.0
    installst=$?
    assertEquals 'uninstall test successfully' "$installst" "0"
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
}

testInstallAgain() {
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertNotContains 'contains warning' "$output" 'Filtering out'
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
    assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

testCleanup() {
    luet cleanup --config $tmpdir/luet.yaml
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2
tests/integration/02_create_repo_from_config.sh (new executable file)
@@ -0,0 +1,97 @@
#!/bin/bash

export LUET_NOLOCK=true

oneTimeSetUp() {
    export tmpdir="$(mktemp -d)"
}

oneTimeTearDown() {
    rm -rf "$tmpdir"
}

testBuild() {
    mkdir $tmpdir/testbuild
    luet build --tree "$ROOT_DIR/tests/fixtures/buildableseed" --destination $tmpdir/testbuild --compression gzip test/c-1.0 > /dev/null
    buildst=$?
    assertEquals 'builds successfully' "$buildst" "0"
    assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
    assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
}

testConfig() {
    mkdir $tmpdir/testrootfs
    cat <<EOF > $tmpdir/luet.yaml
general:
  debug: true
system:
  rootfs: $tmpdir/testrootfs
  database_path: "/"
  database_engine: "boltdb"
repositories:
  - name: "main"
    type: "disk"
    enable: true
    urls:
      - "$tmpdir/testbuild"
EOF
    luet config --config $tmpdir/luet.yaml
    res=$?
    assertEquals 'config test successfully' "$res" "0"
}

testRepo() {
    assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
    luet create-repo --tree "$ROOT_DIR/tests/fixtures/buildableseed" \
    --config $tmpdir/luet.yaml \
    --output $tmpdir/testbuild \
    --packages $tmpdir/testbuild \
    --repo "main" \

    createst=$?
    assertEquals 'create repo successfully' "$createst" "0"
    assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}

testInstall() {
    luet install --config $tmpdir/luet.yaml test/c-1.0
    #luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
}

testReInstall() {
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertContains 'contains warning' "$output" 'Filtering out'
}

testUnInstall() {
    luet uninstall --config $tmpdir/luet.yaml test/c-1.0
    installst=$?
    assertEquals 'uninstall test successfully' "$installst" "0"
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
}

testInstallAgain() {
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/c' ]"
    output=$(luet install --config $tmpdir/luet.yaml test/c-1.0)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertNotContains 'contains warning' "$output" 'Filtering out'
    assertTrue 'package installed' "[ -e '$tmpdir/testrootfs/c' ]"
    assertTrue 'package in cache' "[ -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

testCleanup() {
    luet cleanup --config $tmpdir/luet.yaml
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2
tests/integration/03_qlearning.sh (new executable file)
@@ -0,0 +1,94 @@
#!/bin/bash

export LUET_NOLOCK=true

oneTimeSetUp() {
    export tmpdir="$(mktemp -d)"
}

oneTimeTearDown() {
    rm -rf "$tmpdir"
}

testBuild() {
    mkdir $tmpdir/testbuild
    luet build --all --tree "$ROOT_DIR/tests/fixtures/qlearning" --destination $tmpdir/testbuild --compression gzip
    buildst=$?
    assertEquals 'builds successfully' "$buildst" "0"
    assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
    assertTrue 'create package' "[ -e '$tmpdir/testbuild/c-test-1.0.package.tar.gz' ]"
}

testRepo() {
    assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
    luet create-repo --tree "$ROOT_DIR/tests/fixtures/qlearning" \
    --output $tmpdir/testbuild \
    --packages $tmpdir/testbuild \
    --name "test" \
    --descr "Test Repo" \
    --urls $tmpdir/testrootfs \
    --type disk > /dev/null

    createst=$?
    assertEquals 'create repo successfully' "$createst" "0"
    assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}

testConfig() {
    mkdir $tmpdir/testrootfs
    cat <<EOF > $tmpdir/luet.yaml
general:
  debug: true
system:
  rootfs: $tmpdir/testrootfs
  database_path: "/"
  database_engine: "boltdb"
repositories:
  - name: "main"
    type: "disk"
    enable: true
    urls:
      - "$tmpdir/testbuild"
EOF
    luet config --config $tmpdir/luet.yaml
    res=$?
    assertEquals 'config test successfully' "$res" "0"
}

testInstall() {
    luet install --config $tmpdir/luet.yaml test/c
    #luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package C installed' "[ -e '$tmpdir/testrootfs/c' ]"
}

testFullInstall() {
    output=$(luet install --config $tmpdir/luet.yaml test/d test/f test/e test/a)
    installst=$?
    assertEquals 'cannot install' "$installst" "1"
    assertTrue 'package D installed' "[ ! -e '$tmpdir/testrootfs/d' ]"
    assertTrue 'package F installed' "[ ! -e '$tmpdir/testrootfs/f' ]"
}

testInstallAgain() {
    output=$(luet install --solver-type qlearning --config $tmpdir/luet.yaml test/d test/f test/e test/a)
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertNotContains 'contains warning' "$output" 'Filtering out'
    assertTrue 'package D installed' "[ -e '$tmpdir/testrootfs/d' ]"
    assertTrue 'package F installed' "[ -e '$tmpdir/testrootfs/f' ]"
    assertTrue 'package E not installed' "[ ! -e '$tmpdir/testrootfs/e' ]"
    assertTrue 'package A not installed' "[ ! -e '$tmpdir/testrootfs/a' ]"
}

testCleanup() {
    luet cleanup --config $tmpdir/luet.yaml
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/c-test-1.0.package.tar.gz' ]"
}

# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2
tests/integration/04_retrieve.sh (new executable file)
@@ -0,0 +1,90 @@
#!/bin/bash

export LUET_NOLOCK=true

oneTimeSetUp() {
    export tmpdir="$(mktemp -d)"
}

oneTimeTearDown() {
    rm -rf "$tmpdir"
}

testBuild() {
    mkdir $tmpdir/testbuild
    luet build --tree "$ROOT_DIR/tests/fixtures/retrieve-integration" --destination $tmpdir/testbuild --compression gzip test/b
    buildst=$?
    assertEquals 'builds successfully' "$buildst" "0"
    assertTrue 'create package dep B' "[ -e '$tmpdir/testbuild/b-test-1.0.package.tar.gz' ]"
    assertTrue 'create package' "[ -e '$tmpdir/testbuild/a-test-1.0.package.tar.gz' ]"
}

testRepo() {
    assertTrue 'no repository' "[ ! -e '$tmpdir/testbuild/repository.yaml' ]"
    luet create-repo --tree "$ROOT_DIR/tests/fixtures/retrieve-integration" \
    --output $tmpdir/testbuild \
    --packages $tmpdir/testbuild \
    --name "test" \
    --descr "Test Repo" \
    --urls $tmpdir/testrootfs \
    --type disk > /dev/null

    createst=$?
    assertEquals 'create repo successfully' "$createst" "0"
    assertTrue 'create repository' "[ -e '$tmpdir/testbuild/repository.yaml' ]"
}

testConfig() {
    mkdir $tmpdir/testrootfs
    cat <<EOF > $tmpdir/luet.yaml
general:
  debug: true
system:
  rootfs: $tmpdir/testrootfs
  database_path: "/"
  database_engine: "boltdb"
repositories:
  - name: "main"
    type: "disk"
    enable: true
    urls:
      - "$tmpdir/testbuild"
EOF
    luet config --config $tmpdir/luet.yaml
    res=$?
    assertEquals 'config test successfully' "$res" "0"
}


testInstall() {
    luet install --config $tmpdir/luet.yaml test/b
    #luet install --config $tmpdir/luet.yaml test/c-1.0 > /dev/null
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package B installed' "[ -e '$tmpdir/testrootfs/b' ]"
    val=$(cat "$tmpdir/testrootfs/b")
    assertEquals 'package B content comes from a' "$val" "a"
    assertTrue 'package A installed' "[ -e '$tmpdir/testrootfs/a' ]"
}


testUnInstall() {
    luet uninstall --config $tmpdir/luet.yaml test/b
    installst=$?
    assertEquals 'uninstall test successfully' "$installst" "0"
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/b' ]"
    assertTrue 'package uninstalled' "[ ! -e '$tmpdir/testrootfs/a' ]"
}


testCleanup() {
    luet cleanup --config $tmpdir/luet.yaml
    installst=$?
    assertEquals 'install test successfully' "$installst" "0"
    assertTrue 'package installed' "[ ! -e '$tmpdir/testrootfs/packages/b-test-1.0.package.tar.gz' ]"
}

# Load shUnit2.
. "$ROOT_DIR/tests/integration/shunit2"/shunit2
@@ -11,5 +11,7 @@ popd

export PATH=$ROOT_DIR/tests/integration/bin/:$PATH

"$ROOT_DIR/tests/integration/01_simple.sh"

for script in $(ls "$ROOT_DIR/tests/integration/" | grep '^[0-9]*_.*.sh'); do
    echo "Executing script '$script'."
    $ROOT_DIR/tests/integration/$script
done
5
vendor/github.com/BurntSushi/toml/.gitignore
generated
vendored
Normal file
5
vendor/github.com/BurntSushi/toml/.gitignore
generated
vendored
Normal file
@@ -0,0 +1,5 @@
|
||||
TAGS
|
||||
tags
|
||||
.*.swp
|
||||
tomlcheck/tomlcheck
|
||||
toml.test
|
15
vendor/github.com/BurntSushi/toml/.travis.yml
generated
vendored
Normal file
15
vendor/github.com/BurntSushi/toml/.travis.yml
generated
vendored
Normal file
@@ -0,0 +1,15 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.1
|
||||
- 1.2
|
||||
- 1.3
|
||||
- 1.4
|
||||
- 1.5
|
||||
- 1.6
|
||||
- tip
|
||||
install:
|
||||
- go install ./...
|
||||
- go get github.com/BurntSushi/toml-test
|
||||
script:
|
||||
- export PATH="$PATH:$HOME/gopath/bin"
|
||||
- make test
|
3
vendor/github.com/BurntSushi/toml/COMPATIBLE
generated
vendored
Normal file
3
vendor/github.com/BurntSushi/toml/COMPATIBLE
generated
vendored
Normal file
@@ -0,0 +1,3 @@
|
||||
Compatible with TOML version
|
||||
[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
|
||||
|
21
vendor/github.com/BurntSushi/toml/COPYING
generated
vendored
Normal file
21
vendor/github.com/BurntSushi/toml/COPYING
generated
vendored
Normal file
@@ -0,0 +1,21 @@
|
||||
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2013 TOML authors
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
19
vendor/github.com/BurntSushi/toml/Makefile
generated
vendored
Normal file
19
vendor/github.com/BurntSushi/toml/Makefile
generated
vendored
Normal file
@@ -0,0 +1,19 @@
|
||||
install:
|
||||
go install ./...
|
||||
|
||||
test: install
|
||||
go test -v
|
||||
toml-test toml-test-decoder
|
||||
toml-test -encoder toml-test-encoder
|
||||
|
||||
fmt:
|
||||
gofmt -w *.go */*.go
|
||||
colcheck *.go */*.go
|
||||
|
||||
tags:
|
||||
find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
|
||||
|
||||
push:
|
||||
git push origin master
|
||||
git push github master
|
||||
|
218
vendor/github.com/BurntSushi/toml/README.md
generated
vendored
Normal file
218
vendor/github.com/BurntSushi/toml/README.md
generated
vendored
Normal file
@@ -0,0 +1,218 @@
|
||||
## TOML parser and encoder for Go with reflection
|
||||
|
||||
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
|
||||
reflection interface similar to Go's standard library `json` and `xml`
|
||||
packages. This package also supports the `encoding.TextUnmarshaler` and
|
||||
`encoding.TextMarshaler` interfaces so that you can define custom data
|
||||
representations. (There is an example of this below.)
|
||||
|
||||
Spec: https://github.com/toml-lang/toml
|
||||
|
||||
Compatible with TOML version
|
||||
[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
|
||||
|
||||
Documentation: https://godoc.org/github.com/BurntSushi/toml
|
||||
|
||||
Installation:
|
||||
|
||||
```bash
|
||||
go get github.com/BurntSushi/toml
|
||||
```
|
||||
|
||||
Try the toml validator:
|
||||
|
||||
```bash
|
||||
go get github.com/BurntSushi/toml/cmd/tomlv
|
||||
tomlv some-toml-file.toml
|
||||
```
|
||||
|
||||
[](https://travis-ci.org/BurntSushi/toml) [](https://godoc.org/github.com/BurntSushi/toml)
|
||||
|
||||
### Testing
|
||||
|
||||
This package passes all tests in
|
||||
[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
|
||||
and the encoder.
|
||||
|
||||
### Examples
|
||||
|
||||
This package works similarly to how the Go standard library handles `XML`
|
||||
and `JSON`. Namely, data is loaded into Go values via reflection.
|
||||
|
||||
For the simplest example, consider some TOML file as just a list of keys
|
||||
and values:
|
||||
|
||||
```toml
|
||||
Age = 25
|
||||
Cats = [ "Cauchy", "Plato" ]
|
||||
Pi = 3.14
|
||||
Perfection = [ 6, 28, 496, 8128 ]
|
||||
DOB = 1987-07-05T05:45:00Z
|
||||
```
|
||||
|
||||
Which could be defined in Go as:
|
||||
|
||||
```go
|
||||
type Config struct {
|
||||
Age int
|
||||
Cats []string
|
||||
Pi float64
|
||||
Perfection []int
|
||||
DOB time.Time // requires `import time`
|
||||
}
|
||||
```
|
||||
|
||||
And then decoded with:
|
||||
|
||||
```go
|
||||
var conf Config
|
||||
if _, err := toml.Decode(tomlData, &conf); err != nil {
|
||||
// handle error
|
||||
}
|
||||
```
|
||||
|
||||
You can also use struct tags if your struct field name doesn't map to a TOML
|
||||
key value directly:
|
||||
|
||||
```toml
|
||||
some_key_NAME = "wat"
|
||||
```
|
||||
|
||||
```go
|
||||
type TOML struct {
|
||||
ObscureKey string `toml:"some_key_NAME"`
|
||||
}
|
||||
```
|
||||
|
||||
### Using the `encoding.TextUnmarshaler` interface
|
||||
|
||||
Here's an example that automatically parses duration strings into
|
||||
`time.Duration` values:
|
||||
|
||||
```toml
|
||||
[[song]]
|
||||
name = "Thunder Road"
|
||||
duration = "4m49s"
|
||||
|
||||
[[song]]
|
||||
name = "Stairway to Heaven"
|
||||
duration = "8m03s"
|
||||
```
|
||||
|
||||
Which can be decoded with:
|
||||
|
||||
```go
|
||||
type song struct {
|
||||
Name string
|
||||
Duration duration
|
||||
}
|
||||
type songs struct {
|
||||
Song []song
|
||||
}
|
||||
var favorites songs
|
||||
if _, err := toml.Decode(blob, &favorites); err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
|
||||
for _, s := range favorites.Song {
|
||||
fmt.Printf("%s (%s)\n", s.Name, s.Duration)
|
||||
}
|
||||
```
|
||||
|
||||
And you'll also need a `duration` type that satisfies the
|
||||
`encoding.TextUnmarshaler` interface:
|
||||
|
||||
```go
|
||||
type duration struct {
|
||||
time.Duration
|
||||
}
|
||||
|
||||
func (d *duration) UnmarshalText(text []byte) error {
|
||||
var err error
|
||||
d.Duration, err = time.ParseDuration(string(text))
|
||||
return err
|
||||
}
|
||||
```
|
||||
|
||||
### More complex usage
|
||||
|
||||
Here's an example of how to load the example from the official spec page:
|
||||
|
||||
```toml
|
||||
# This is a TOML document. Boom.
|
||||
|
||||
title = "TOML Example"
|
||||
|
||||
[owner]
|
||||
name = "Tom Preston-Werner"
|
||||
organization = "GitHub"
|
||||
bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
|
||||
dob = 1979-05-27T07:32:00Z # First class dates? Why not?
|
||||
|
||||
[database]
|
||||
server = "192.168.1.1"
|
||||
ports = [ 8001, 8001, 8002 ]
|
||||
connection_max = 5000
|
||||
enabled = true
|
||||
|
||||
[servers]
|
||||
|
||||
# You can indent as you please. Tabs or spaces. TOML don't care.
|
||||
[servers.alpha]
|
||||
ip = "10.0.0.1"
|
||||
dc = "eqdc10"
|
||||
|
||||
[servers.beta]
|
||||
ip = "10.0.0.2"
|
||||
dc = "eqdc10"
|
||||
|
||||
[clients]
|
||||
data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
|
||||
|
||||
# Line breaks are OK when inside arrays
|
||||
hosts = [
|
||||
"alpha",
|
||||
"omega"
|
||||
]
|
||||
```
|
||||
|
||||
And the corresponding Go types are:
|
||||
|
||||
```go
|
||||
type tomlConfig struct {
|
||||
Title string
|
||||
Owner ownerInfo
|
||||
DB database `toml:"database"`
|
||||
Servers map[string]server
|
||||
Clients clients
|
||||
}
|
||||
|
||||
type ownerInfo struct {
|
||||
Name string
|
||||
Org string `toml:"organization"`
|
||||
Bio string
|
||||
DOB time.Time
|
||||
}
|
||||
|
||||
type database struct {
|
||||
Server string
|
||||
Ports []int
|
||||
ConnMax int `toml:"connection_max"`
|
||||
Enabled bool
|
||||
}
|
||||
|
||||
type server struct {
|
||||
IP string
|
||||
DC string
|
||||
}
|
||||
|
||||
type clients struct {
|
||||
Data [][]interface{}
|
||||
Hosts []string
|
||||
}
|
||||
```
|
||||
|
||||
Note that a case insensitive match will be tried if an exact match can't be
|
||||
found.
|
||||
|
||||
A working example of the above can be found in `_examples/example.{go,toml}`.
|
509
vendor/github.com/BurntSushi/toml/decode.go
generated
vendored
Normal file
509
vendor/github.com/BurntSushi/toml/decode.go
generated
vendored
Normal file
@@ -0,0 +1,509 @@
|
||||
package toml
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func e(format string, args ...interface{}) error {
|
||||
return fmt.Errorf("toml: "+format, args...)
|
||||
}
|
||||
|
||||
// Unmarshaler is the interface implemented by objects that can unmarshal a
|
||||
// TOML description of themselves.
|
||||
type Unmarshaler interface {
|
||||
UnmarshalTOML(interface{}) error
|
||||
}
|
||||
|
||||
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
|
||||
func Unmarshal(p []byte, v interface{}) error {
|
||||
_, err := Decode(string(p), v)
|
||||
return err
|
||||
}
|
||||
|
||||
// Primitive is a TOML value that hasn't been decoded into a Go value.
|
||||
// When using the various `Decode*` functions, the type `Primitive` may
|
||||
// be given to any value, and its decoding will be delayed.
|
||||
//
|
||||
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
|
||||
//
|
||||
// The underlying representation of a `Primitive` value is subject to change.
|
||||
// Do not rely on it.
|
||||
//
|
||||
// N.B. Primitive values are still parsed, so using them will only avoid
|
||||
// the overhead of reflection. They can be useful when you don't know the
|
||||
// exact type of TOML data until run time.
|
||||
type Primitive struct {
|
||||
undecoded interface{}
|
||||
context Key
|
||||
}
|
||||
|
||||
// DEPRECATED!
|
||||
//
|
||||
// Use MetaData.PrimitiveDecode instead.
|
||||
func PrimitiveDecode(primValue Primitive, v interface{}) error {
|
||||
md := MetaData{decoded: make(map[string]bool)}
|
||||
return md.unify(primValue.undecoded, rvalue(v))
|
||||
}
|
||||
|
||||
// PrimitiveDecode is just like the other `Decode*` functions, except it
|
||||
// decodes a TOML value that has already been parsed. Valid primitive values
|
||||
// can *only* be obtained from values filled by the decoder functions,
|
||||
// including this method. (i.e., `v` may contain more `Primitive`
|
||||
// values.)
|
||||
//
|
||||
// Meta data for primitive values is included in the meta data returned by
|
||||
// the `Decode*` functions with one exception: keys returned by the Undecoded
|
||||
// method will only reflect keys that were decoded. Namely, any keys hidden
|
||||
// behind a Primitive will be considered undecoded. Executing this method will
|
||||
// update the undecoded keys in the meta data. (See the example.)
|
||||
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
|
||||
md.context = primValue.context
|
||||
defer func() { md.context = nil }()
|
||||
return md.unify(primValue.undecoded, rvalue(v))
|
||||
}
|
||||
|
||||
// Decode will decode the contents of `data` in TOML format into a pointer
|
||||
// `v`.
|
||||
//
|
||||
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
|
||||
// used interchangeably.)
|
||||
//
|
||||
// TOML arrays of tables correspond to either a slice of structs or a slice
|
||||
// of maps.
|
||||
//
|
||||
// TOML datetimes correspond to Go `time.Time` values.
|
||||
//
|
||||
// All other TOML types (float, string, int, bool and array) correspond
|
||||
// to the obvious Go types.
|
||||
//
|
||||
// An exception to the above rules is if a type implements the
|
||||
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
|
||||
// (floats, strings, integers, booleans and datetimes) will be converted to
|
||||
// a byte string and given to the value's UnmarshalText method. See the
|
||||
// Unmarshaler example for a demonstration with time duration strings.
|
||||
//
|
||||
// Key mapping
|
||||
//
|
||||
// TOML keys can map to either keys in a Go map or field names in a Go
|
||||
// struct. The special `toml` struct tag may be used to map TOML keys to
|
||||
// struct fields that don't match the key name exactly. (See the example.)
|
||||
// A case insensitive match to struct names will be tried if an exact match
|
||||
// can't be found.
|
||||
//
|
||||
// The mapping between TOML values and Go values is loose. That is, there
|
||||
// may exist TOML values that cannot be placed into your representation, and
|
||||
// there may be parts of your representation that do not correspond to
|
||||
// TOML values. This loose mapping can be made stricter by using the IsDefined
|
||||
// and/or Undecoded methods on the MetaData returned.
|
||||
//
|
||||
// This decoder will not handle cyclic types. If a cyclic type is passed,
|
||||
// `Decode` will not terminate.
|
||||
func Decode(data string, v interface{}) (MetaData, error) {
|
||||
rv := reflect.ValueOf(v)
|
||||
if rv.Kind() != reflect.Ptr {
|
||||
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
|
||||
}
|
||||
if rv.IsNil() {
|
||||
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
|
||||
}
|
||||
p, err := parse(data)
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
md := MetaData{
|
||||
p.mapping, p.types, p.ordered,
|
||||
make(map[string]bool, len(p.ordered)), nil,
|
||||
}
|
||||
return md, md.unify(p.mapping, indirect(rv))
|
||||
}
|
||||
|
||||
// DecodeFile is just like Decode, except it will automatically read the
|
||||
// contents of the file at `fpath` and decode it for you.
|
||||
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
|
||||
bs, err := ioutil.ReadFile(fpath)
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
return Decode(string(bs), v)
|
||||
}
|
||||
|
||||
// DecodeReader is just like Decode, except it will consume all bytes
|
||||
// from the reader and decode it for you.
|
||||
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
|
||||
bs, err := ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
return Decode(string(bs), v)
|
||||
}
|
||||
|
||||
// unify performs a sort of type unification based on the structure of `rv`,
|
||||
// which is the client representation.
|
||||
//
|
||||
// Any type mismatch produces an error. Finding a type that we don't know
|
||||
// how to handle produces an unsupported type error.
|
||||
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
|
||||
|
||||
// Special case. Look for a `Primitive` value.
|
||||
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
|
||||
// Save the undecoded data and the key context into the primitive
|
||||
// value.
|
||||
context := make(Key, len(md.context))
|
||||
copy(context, md.context)
|
||||
rv.Set(reflect.ValueOf(Primitive{
|
||||
undecoded: data,
|
||||
context: context,
|
||||
}))
|
||||
return nil
|
||||
}
|
||||
|
||||
// Special case. Unmarshaler Interface support.
|
||||
if rv.CanAddr() {
|
||||
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
|
||||
return v.UnmarshalTOML(data)
|
||||
}
|
||||
}
|
||||
|
||||
// Special case. Handle time.Time values specifically.
|
||||
// TODO: Remove this code when we decide to drop support for Go 1.1.
|
||||
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
|
||||
// interfaces.
|
||||
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
|
||||
return md.unifyDatetime(data, rv)
|
||||
}
|
||||
|
||||
// Special case. Look for a value satisfying the TextUnmarshaler interface.
|
||||
if v, ok := rv.Interface().(TextUnmarshaler); ok {
|
||||
return md.unifyText(data, v)
|
||||
}
|
||||
// BUG(burntsushi)
|
||||
// The behavior here is incorrect whenever a Go type satisfies the
|
||||
// encoding.TextUnmarshaler interface but also corresponds to a TOML
|
||||
// hash or array. In particular, the unmarshaler should only be applied
|
||||
// to primitive TOML values. But at this point, it will be applied to
|
||||
// all kinds of values and produce an incorrect error whenever those values
|
||||
// are hashes or arrays (including arrays of tables).
|
||||
|
||||
k := rv.Kind()
|
||||
|
||||
// laziness
|
||||
if k >= reflect.Int && k <= reflect.Uint64 {
|
||||
return md.unifyInt(data, rv)
|
||||
}
|
||||
switch k {
|
||||
case reflect.Ptr:
|
||||
elem := reflect.New(rv.Type().Elem())
|
||||
err := md.unify(data, reflect.Indirect(elem))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
rv.Set(elem)
|
||||
return nil
|
||||
case reflect.Struct:
|
||||
return md.unifyStruct(data, rv)
|
||||
case reflect.Map:
|
||||
return md.unifyMap(data, rv)
|
||||
case reflect.Array:
|
||||
return md.unifyArray(data, rv)
|
||||
case reflect.Slice:
|
||||
return md.unifySlice(data, rv)
|
||||
case reflect.String:
|
||||
return md.unifyString(data, rv)
|
||||
case reflect.Bool:
|
||||
return md.unifyBool(data, rv)
|
||||
case reflect.Interface:
|
||||
// we only support empty interfaces.
|
||||
if rv.NumMethod() > 0 {
|
||||
return e("unsupported type %s", rv.Type())
|
||||
}
|
||||
return md.unifyAnything(data, rv)
|
||||
case reflect.Float32:
|
||||
fallthrough
|
||||
case reflect.Float64:
|
||||
return md.unifyFloat64(data, rv)
|
||||
}
|
||||
return e("unsupported type %s", rv.Kind())
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
|
||||
tmap, ok := mapping.(map[string]interface{})
|
||||
if !ok {
|
||||
if mapping == nil {
|
||||
return nil
|
||||
}
|
||||
return e("type mismatch for %s: expected table but found %T",
|
||||
rv.Type().String(), mapping)
|
||||
}
|
||||
|
||||
for key, datum := range tmap {
|
||||
var f *field
|
||||
fields := cachedTypeFields(rv.Type())
|
||||
for i := range fields {
|
||||
ff := &fields[i]
|
||||
if ff.name == key {
|
||||
f = ff
|
||||
break
|
||||
}
|
||||
if f == nil && strings.EqualFold(ff.name, key) {
|
||||
f = ff
|
||||
}
|
||||
}
|
||||
if f != nil {
|
||||
subv := rv
|
||||
for _, i := range f.index {
|
||||
subv = indirect(subv.Field(i))
|
||||
}
|
||||
if isUnifiable(subv) {
|
||||
md.decoded[md.context.add(key).String()] = true
|
||||
md.context = append(md.context, key)
|
||||
if err := md.unify(datum, subv); err != nil {
|
||||
return err
|
||||
}
|
||||
md.context = md.context[0 : len(md.context)-1]
|
||||
} else if f.name != "" {
|
||||
// Bad user! No soup for you!
|
||||
return e("cannot write unexported field %s.%s",
|
||||
rv.Type().String(), f.name)
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
|
||||
tmap, ok := mapping.(map[string]interface{})
|
||||
if !ok {
|
||||
if tmap == nil {
|
||||
return nil
|
||||
}
|
||||
return badtype("map", mapping)
|
||||
}
|
||||
if rv.IsNil() {
|
||||
rv.Set(reflect.MakeMap(rv.Type()))
|
||||
}
|
||||
for k, v := range tmap {
|
||||
md.decoded[md.context.add(k).String()] = true
|
||||
md.context = append(md.context, k)
|
||||
|
||||
rvkey := indirect(reflect.New(rv.Type().Key()))
|
||||
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
|
||||
if err := md.unify(v, rvval); err != nil {
|
||||
return err
|
||||
}
|
||||
md.context = md.context[0 : len(md.context)-1]
|
||||
|
||||
rvkey.SetString(k)
|
||||
rv.SetMapIndex(rvkey, rvval)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
|
||||
datav := reflect.ValueOf(data)
|
||||
if datav.Kind() != reflect.Slice {
|
||||
if !datav.IsValid() {
|
||||
return nil
|
||||
}
|
||||
return badtype("slice", data)
|
||||
}
|
||||
sliceLen := datav.Len()
|
||||
if sliceLen != rv.Len() {
|
||||
return e("expected array length %d; got TOML array of length %d",
|
||||
rv.Len(), sliceLen)
|
||||
}
|
||||
return md.unifySliceArray(datav, rv)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
|
||||
datav := reflect.ValueOf(data)
|
||||
if datav.Kind() != reflect.Slice {
|
||||
if !datav.IsValid() {
|
||||
return nil
|
||||
}
|
||||
return badtype("slice", data)
|
||||
}
|
||||
n := datav.Len()
|
||||
if rv.IsNil() || rv.Cap() < n {
|
||||
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
|
||||
}
|
||||
rv.SetLen(n)
|
||||
return md.unifySliceArray(datav, rv)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
|
||||
sliceLen := data.Len()
|
||||
for i := 0; i < sliceLen; i++ {
|
||||
v := data.Index(i).Interface()
|
||||
sliceval := indirect(rv.Index(i))
|
||||
if err := md.unify(v, sliceval); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
|
||||
if _, ok := data.(time.Time); ok {
|
||||
rv.Set(reflect.ValueOf(data))
|
||||
return nil
|
||||
}
|
||||
return badtype("time.Time", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
|
||||
if s, ok := data.(string); ok {
|
||||
rv.SetString(s)
|
||||
return nil
|
||||
}
|
||||
return badtype("string", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
|
||||
if num, ok := data.(float64); ok {
|
||||
switch rv.Kind() {
|
||||
case reflect.Float32:
|
||||
fallthrough
|
||||
case reflect.Float64:
|
||||
rv.SetFloat(num)
|
||||
default:
|
||||
panic("bug")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
return badtype("float", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
|
||||
if num, ok := data.(int64); ok {
|
||||
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
|
||||
switch rv.Kind() {
|
||||
case reflect.Int, reflect.Int64:
|
||||
// No bounds checking necessary.
|
||||
case reflect.Int8:
|
||||
if num < math.MinInt8 || num > math.MaxInt8 {
|
||||
return e("value %d is out of range for int8", num)
|
||||
}
|
||||
case reflect.Int16:
|
||||
if num < math.MinInt16 || num > math.MaxInt16 {
|
||||
return e("value %d is out of range for int16", num)
|
||||
}
|
||||
case reflect.Int32:
|
||||
if num < math.MinInt32 || num > math.MaxInt32 {
|
||||
return e("value %d is out of range for int32", num)
|
||||
}
|
||||
}
|
||||
rv.SetInt(num)
|
||||
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
|
||||
unum := uint64(num)
|
||||
switch rv.Kind() {
|
||||
case reflect.Uint, reflect.Uint64:
|
||||
// No bounds checking necessary.
|
||||
case reflect.Uint8:
|
||||
if num < 0 || unum > math.MaxUint8 {
|
||||
return e("value %d is out of range for uint8", num)
|
||||
}
|
||||
case reflect.Uint16:
|
||||
if num < 0 || unum > math.MaxUint16 {
|
||||
return e("value %d is out of range for uint16", num)
|
||||
}
|
||||
case reflect.Uint32:
|
||||
if num < 0 || unum > math.MaxUint32 {
|
||||
return e("value %d is out of range for uint32", num)
|
||||
}
|
||||
}
|
||||
rv.SetUint(unum)
|
||||
} else {
|
||||
panic("unreachable")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
return badtype("integer", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
|
||||
if b, ok := data.(bool); ok {
|
||||
rv.SetBool(b)
|
||||
return nil
|
||||
}
|
||||
return badtype("boolean", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
|
||||
rv.Set(reflect.ValueOf(data))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
|
||||
var s string
|
||||
switch sdata := data.(type) {
|
||||
case TextMarshaler:
|
||||
text, err := sdata.MarshalText()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s = string(text)
|
||||
case fmt.Stringer:
|
||||
s = sdata.String()
|
||||
case string:
|
||||
s = sdata
|
||||
case bool:
|
||||
s = fmt.Sprintf("%v", sdata)
|
||||
case int64:
|
||||
s = fmt.Sprintf("%d", sdata)
|
||||
case float64:
|
||||
s = fmt.Sprintf("%f", sdata)
|
||||
default:
|
||||
return badtype("primitive (string-like)", data)
|
||||
}
|
||||
if err := v.UnmarshalText([]byte(s)); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
|
||||
func rvalue(v interface{}) reflect.Value {
|
||||
return indirect(reflect.ValueOf(v))
|
||||
}
|
||||
|
||||
// indirect returns the value pointed to by a pointer.
|
||||
// Pointers are followed until the value is not a pointer.
|
||||
// New values are allocated for each nil pointer.
|
||||
//
|
||||
// An exception to this rule is if the value satisfies an interface of
|
||||
// interest to us (like encoding.TextUnmarshaler).
|
||||
func indirect(v reflect.Value) reflect.Value {
|
||||
if v.Kind() != reflect.Ptr {
|
||||
if v.CanSet() {
|
||||
pv := v.Addr()
|
||||
if _, ok := pv.Interface().(TextUnmarshaler); ok {
|
||||
return pv
|
||||
}
|
||||
}
|
||||
return v
|
||||
}
|
||||
if v.IsNil() {
|
||||
v.Set(reflect.New(v.Type().Elem()))
|
||||
}
|
||||
return indirect(reflect.Indirect(v))
|
||||
}
|
||||
|
||||
func isUnifiable(rv reflect.Value) bool {
|
||||
if rv.CanSet() {
|
||||
return true
|
||||
}
|
||||
if _, ok := rv.Interface().(TextUnmarshaler); ok {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func badtype(expected string, data interface{}) error {
|
||||
return e("cannot load TOML value of type %T into a Go %s", data, expected)
|
||||
}
|
121
vendor/github.com/BurntSushi/toml/decode_meta.go
generated
vendored
Normal file
121
vendor/github.com/BurntSushi/toml/decode_meta.go
generated
vendored
Normal file
@@ -0,0 +1,121 @@
|
||||
package toml
|
||||
|
||||
import "strings"
|
||||
|
||||
// MetaData allows access to meta information about TOML data that may not
|
||||
// be inferrable via reflection. In particular, whether a key has been defined
|
||||
// and the TOML type of a key.
|
||||
type MetaData struct {
|
||||
mapping map[string]interface{}
|
||||
types map[string]tomlType
|
||||
keys []Key
|
||||
decoded map[string]bool
|
||||
context Key // Used only during decoding.
|
||||
}
|
||||
|
||||
// IsDefined returns true if the key given exists in the TOML data. The key
|
||||
// should be specified hierarchially. e.g.,
|
||||
//
|
||||
// // access the TOML key 'a.b.c'
|
||||
// IsDefined("a", "b", "c")
|
||||
//
|
||||
// IsDefined will return false if an empty key given. Keys are case sensitive.
|
||||
func (md *MetaData) IsDefined(key ...string) bool {
|
||||
if len(key) == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
var hash map[string]interface{}
|
||||
var ok bool
|
||||
var hashOrVal interface{} = md.mapping
|
||||
for _, k := range key {
|
||||
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
|
||||
return false
|
||||
}
|
||||
if hashOrVal, ok = hash[k]; !ok {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
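
As documented above, IsDefined takes one string per level of the key hierarchy and returns false for an empty key. A small usage sketch with an illustrative TOML snippet and field names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var conf struct {
		Server struct {
			Host string `toml:"host"`
		} `toml:"server"`
	}
	md, err := toml.Decode("[server]\nhost = \"127.0.0.1\"\n", &conf)
	if err != nil {
		log.Fatal(err)
	}
	// One string argument per level of the key hierarchy.
	fmt.Println(md.IsDefined("server", "host")) // true
	fmt.Println(md.IsDefined("server", "port")) // false
	fmt.Println(md.IsDefined())                 // false: empty keys are never defined
}
```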
|
||||
|
||||
// Type returns a string representation of the type of the key specified.
|
||||
//
|
||||
// Type will return the empty string if given an empty key or a key that
|
||||
// does not exist. Keys are case sensitive.
|
||||
func (md *MetaData) Type(key ...string) string {
|
||||
fullkey := strings.Join(key, ".")
|
||||
if typ, ok := md.types[fullkey]; ok {
|
||||
return typ.typeString()
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
|
||||
// to get values of this type.
|
||||
type Key []string
|
||||
|
||||
func (k Key) String() string {
|
||||
return strings.Join(k, ".")
|
||||
}
|
||||
|
||||
func (k Key) maybeQuotedAll() string {
|
||||
var ss []string
|
||||
for i := range k {
|
||||
ss = append(ss, k.maybeQuoted(i))
|
||||
}
|
||||
return strings.Join(ss, ".")
|
||||
}
|
||||
|
||||
func (k Key) maybeQuoted(i int) string {
|
||||
quote := false
|
||||
for _, c := range k[i] {
|
||||
if !isBareKeyChar(c) {
|
||||
quote = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if quote {
|
||||
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
|
||||
}
|
||||
return k[i]
|
||||
}
|
||||
|
||||
func (k Key) add(piece string) Key {
|
||||
newKey := make(Key, len(k)+1)
|
||||
copy(newKey, k)
|
||||
newKey[len(k)] = piece
|
||||
return newKey
|
||||
}
|
||||
|
||||
// Keys returns a slice of every key in the TOML data, including key groups.
|
||||
// Each key is itself a slice, where the first element is the top of the
|
||||
// hierarchy and the last is the most specific.
|
||||
//
|
||||
// The list will have the same order as the keys appeared in the TOML data.
|
||||
//
|
||||
// All keys returned are non-empty.
|
||||
func (md *MetaData) Keys() []Key {
|
||||
return md.keys
|
||||
}
|
||||
|
||||
// Undecoded returns all keys that have not been decoded in the order in which
|
||||
// they appear in the original TOML document.
|
||||
//
|
||||
// This includes keys that haven't been decoded because of a Primitive value.
|
||||
// Once the Primitive value is decoded, the keys will be considered decoded.
|
||||
//
|
||||
// Also note that decoding into an empty interface will result in no decoding,
|
||||
// and so no keys will be considered decoded.
|
||||
//
|
||||
// In this sense, the Undecoded keys correspond to keys in the TOML document
|
||||
// that do not have a concrete type in your representation.
|
||||
func (md *MetaData) Undecoded() []Key {
|
||||
undecoded := make([]Key, 0, len(md.keys))
|
||||
for _, key := range md.keys {
|
||||
if !md.decoded[key.String()] {
|
||||
undecoded = append(undecoded, key)
|
||||
}
|
||||
}
|
||||
return undecoded
|
||||
}
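
Undecoded is useful for warning about configuration keys that the destination struct does not model. A minimal sketch, with illustrative names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	var conf struct {
		Name string `toml:"name"`
	}
	md, err := toml.Decode("name = \"luet\"\ncolor = true\n", &conf)
	if err != nil {
		log.Fatal(err)
	}
	// "color" has no matching field, so it is reported as undecoded.
	for _, key := range md.Undecoded() {
		fmt.Printf("warning: unknown configuration key %q\n", key)
	}
}
```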
|
27 vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file
@@ -0,0 +1,27 @@
|
||||
/*
|
||||
Package toml provides facilities for decoding and encoding TOML configuration
|
||||
files via reflection. There is also support for delaying decoding with
|
||||
the Primitive type, and querying the set of keys in a TOML document with the
|
||||
MetaData type.
|
||||
|
||||
The specification implemented: https://github.com/toml-lang/toml
|
||||
|
||||
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
|
||||
whether a file is a valid TOML document. It can also be used to print the
|
||||
type of each key in a TOML document.
|
||||
|
||||
Testing
|
||||
|
||||
There are two important types of tests used for this package. The first is
|
||||
contained inside '*_test.go' files and uses the standard Go unit testing
|
||||
framework. These tests are primarily devoted to holistically testing the
|
||||
decoder and encoder.
|
||||
|
||||
The second type of testing is used to verify the implementation's adherence
|
||||
to the TOML specification. These tests have been factored into their own
|
||||
project: https://github.com/BurntSushi/toml-test
|
||||
|
||||
The reason the tests are in a separate project is so that they can be used by
|
||||
any implementation of TOML. Namely, it is language agnostic.
|
||||
*/
|
||||
package toml
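
The package documentation above also mentions delaying decoding with the Primitive type. A minimal sketch of that workflow, assuming the Primitive type and the MetaData.PrimitiveDecode method provided by this package (neither is shown in this part of the diff), with illustrative field names:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

func main() {
	// Decode most of the document into a struct, but delay the "plugin"
	// table with a Primitive so it can be decoded once its shape is known.
	var conf struct {
		Name   string         `toml:"name"`
		Plugin toml.Primitive `toml:"plugin"`
	}
	doc := "name = \"luet\"\n\n[plugin]\npath = \"/usr/bin/true\"\n"
	md, err := toml.Decode(doc, &conf)
	if err != nil {
		log.Fatal(err)
	}

	var plugin struct {
		Path string `toml:"path"`
	}
	// PrimitiveDecode finishes the delayed decoding using the same MetaData,
	// so the delayed keys are then also counted as decoded.
	if err := md.PrimitiveDecode(conf.Plugin, &plugin); err != nil {
		log.Fatal(err)
	}
	fmt.Println(conf.Name, plugin.Path)
}
```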
|
568 vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file
@@ -0,0 +1,568 @@
|
||||
package toml
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
type tomlEncodeError struct{ error }
|
||||
|
||||
var (
|
||||
errArrayMixedElementTypes = errors.New(
|
||||
"toml: cannot encode array with mixed element types")
|
||||
errArrayNilElement = errors.New(
|
||||
"toml: cannot encode array with nil element")
|
||||
errNonString = errors.New(
|
||||
"toml: cannot encode a map with non-string key type")
|
||||
errAnonNonStruct = errors.New(
|
||||
"toml: cannot encode an anonymous field that is not a struct")
|
||||
errArrayNoTable = errors.New(
|
||||
"toml: TOML array element cannot contain a table")
|
||||
errNoKey = errors.New(
|
||||
"toml: top-level values must be Go maps or structs")
|
||||
errAnything = errors.New("") // used in testing
|
||||
)
|
||||
|
||||
var quotedReplacer = strings.NewReplacer(
|
||||
"\t", "\\t",
|
||||
"\n", "\\n",
|
||||
"\r", "\\r",
|
||||
"\"", "\\\"",
|
||||
"\\", "\\\\",
|
||||
)
|
||||
|
||||
// Encoder controls the encoding of Go values to a TOML document to some
|
||||
// io.Writer.
|
||||
//
|
||||
// The indentation level can be controlled with the Indent field.
|
||||
type Encoder struct {
|
||||
// A single indentation level. By default it is two spaces.
|
||||
Indent string
|
||||
|
||||
// hasWritten is whether we have written any output to w yet.
|
||||
hasWritten bool
|
||||
w *bufio.Writer
|
||||
}
|
||||
|
||||
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
|
||||
// given. By default, a single indentation level is 2 spaces.
|
||||
func NewEncoder(w io.Writer) *Encoder {
|
||||
return &Encoder{
|
||||
w: bufio.NewWriter(w),
|
||||
Indent: " ",
|
||||
}
|
||||
}
|
||||
|
||||
// Encode writes a TOML representation of the Go value to the underlying
|
||||
// io.Writer. If the value given cannot be encoded to a valid TOML document,
|
||||
// then an error is returned.
|
||||
//
|
||||
// The mapping between Go values and TOML values should be precisely the same
|
||||
// as for the Decode* functions. Similarly, the TextMarshaler interface is
|
||||
// supported by encoding the resulting bytes as strings. (If you want to write
|
||||
// arbitrary binary data then you will need to use something like base64 since
|
||||
// TOML does not have any binary types.)
|
||||
//
|
||||
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
|
||||
// sub-hashes are encoded first.
|
||||
//
|
||||
// If a Go map is encoded, then its keys are sorted alphabetically for
|
||||
// deterministic output. More control over this behavior may be provided if
|
||||
// there is demand for it.
|
||||
//
|
||||
// Encoding Go values without a corresponding TOML representation---like map
|
||||
// types with non-string keys---will cause an error to be returned. Similarly
|
||||
// for mixed arrays/slices, arrays/slices with nil elements, embedded
|
||||
// non-struct types and nested slices containing maps or structs.
|
||||
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
|
||||
// and so is []map[string][]string.)
|
||||
func (enc *Encoder) Encode(v interface{}) error {
|
||||
rv := eindirect(reflect.ValueOf(v))
|
||||
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
|
||||
return err
|
||||
}
|
||||
return enc.w.Flush()
|
||||
}
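
Encode mirrors the Decode* mapping in the opposite direction; map keys are sorted for deterministic output and simple values are written before sub-tables. A minimal sketch that writes a nested structure to stdout (type and field names are illustrative):

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

func main() {
	type repository struct {
		Name string   `toml:"name"`
		URLs []string `toml:"urls"`
	}
	conf := struct {
		Repos map[string]repository `toml:"repositories"`
	}{
		Repos: map[string]repository{
			// Map keys are emitted in sorted order for deterministic output.
			"main": {Name: "main", URLs: []string{"https://example.org/repo"}},
		},
	}
	enc := toml.NewEncoder(os.Stdout)
	enc.Indent = "    " // override the default two-space indentation level
	if err := enc.Encode(conf); err != nil {
		log.Fatal(err)
	}
}
```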
|
||||
|
||||
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
if terr, ok := r.(tomlEncodeError); ok {
|
||||
err = terr.error
|
||||
return
|
||||
}
|
||||
panic(r)
|
||||
}
|
||||
}()
|
||||
enc.encode(key, rv)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (enc *Encoder) encode(key Key, rv reflect.Value) {
|
||||
// Special case. Time needs to be in ISO8601 format.
|
||||
// Special case. If we can marshal the type to text, then we use that.
|
||||
// Basically, this prevents the encoder from handling these types as
|
||||
// generic structs (or whatever the underlying type of a TextMarshaler is).
|
||||
switch rv.Interface().(type) {
|
||||
case time.Time, TextMarshaler:
|
||||
enc.keyEqElement(key, rv)
|
||||
return
|
||||
}
|
||||
|
||||
k := rv.Kind()
|
||||
switch k {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
|
||||
reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
|
||||
reflect.Uint64,
|
||||
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
|
||||
enc.keyEqElement(key, rv)
|
||||
case reflect.Array, reflect.Slice:
|
||||
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
|
||||
enc.eArrayOfTables(key, rv)
|
||||
} else {
|
||||
enc.keyEqElement(key, rv)
|
||||
}
|
||||
case reflect.Interface:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.encode(key, rv.Elem())
|
||||
case reflect.Map:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.eTable(key, rv)
|
||||
case reflect.Ptr:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.encode(key, rv.Elem())
|
||||
case reflect.Struct:
|
||||
enc.eTable(key, rv)
|
||||
default:
|
||||
panic(e("unsupported type for key '%s': %s", key, k))
|
||||
}
|
||||
}
|
||||
|
||||
// eElement encodes any value that can be an array element (primitives and
|
||||
// arrays).
|
||||
func (enc *Encoder) eElement(rv reflect.Value) {
|
||||
switch v := rv.Interface().(type) {
|
||||
case time.Time:
|
||||
// Special case time.Time as a primitive. Has to come before
|
||||
// TextMarshaler below because time.Time implements
|
||||
// encoding.TextMarshaler, but we need to always use UTC.
|
||||
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
|
||||
return
|
||||
case TextMarshaler:
|
||||
// Special case. Use text marshaler if it's available for this value.
|
||||
if s, err := v.MarshalText(); err != nil {
|
||||
encPanic(err)
|
||||
} else {
|
||||
enc.writeQuoted(string(s))
|
||||
}
|
||||
return
|
||||
}
|
||||
switch rv.Kind() {
|
||||
case reflect.Bool:
|
||||
enc.wf(strconv.FormatBool(rv.Bool()))
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
|
||||
reflect.Int64:
|
||||
enc.wf(strconv.FormatInt(rv.Int(), 10))
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16,
|
||||
reflect.Uint32, reflect.Uint64:
|
||||
enc.wf(strconv.FormatUint(rv.Uint(), 10))
|
||||
case reflect.Float32:
|
||||
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
|
||||
case reflect.Float64:
|
||||
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
|
||||
case reflect.Array, reflect.Slice:
|
||||
enc.eArrayOrSliceElement(rv)
|
||||
case reflect.Interface:
|
||||
enc.eElement(rv.Elem())
|
||||
case reflect.String:
|
||||
enc.writeQuoted(rv.String())
|
||||
default:
|
||||
panic(e("unexpected primitive type: %s", rv.Kind()))
|
||||
}
|
||||
}
|
||||
|
||||
// By the TOML spec, all floats must have a decimal with at least one
|
||||
// number on either side.
|
||||
func floatAddDecimal(fstr string) string {
|
||||
if !strings.Contains(fstr, ".") {
|
||||
return fstr + ".0"
|
||||
}
|
||||
return fstr
|
||||
}
|
||||
|
||||
func (enc *Encoder) writeQuoted(s string) {
|
||||
enc.wf("\"%s\"", quotedReplacer.Replace(s))
|
||||
}
|
||||
|
||||
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
|
||||
length := rv.Len()
|
||||
enc.wf("[")
|
||||
for i := 0; i < length; i++ {
|
||||
elem := rv.Index(i)
|
||||
enc.eElement(elem)
|
||||
if i != length-1 {
|
||||
enc.wf(", ")
|
||||
}
|
||||
}
|
||||
enc.wf("]")
|
||||
}
|
||||
|
||||
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
|
||||
if len(key) == 0 {
|
||||
encPanic(errNoKey)
|
||||
}
|
||||
for i := 0; i < rv.Len(); i++ {
|
||||
trv := rv.Index(i)
|
||||
if isNil(trv) {
|
||||
continue
|
||||
}
|
||||
panicIfInvalidKey(key)
|
||||
enc.newline()
|
||||
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
|
||||
enc.newline()
|
||||
enc.eMapOrStruct(key, trv)
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
|
||||
panicIfInvalidKey(key)
|
||||
if len(key) == 1 {
|
||||
// Output an extra newline between top-level tables.
|
||||
// (The newline isn't written if nothing else has been written though.)
|
||||
enc.newline()
|
||||
}
|
||||
if len(key) > 0 {
|
||||
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
|
||||
enc.newline()
|
||||
}
|
||||
enc.eMapOrStruct(key, rv)
|
||||
}
|
||||
|
||||
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
|
||||
switch rv := eindirect(rv); rv.Kind() {
|
||||
case reflect.Map:
|
||||
enc.eMap(key, rv)
|
||||
case reflect.Struct:
|
||||
enc.eStruct(key, rv)
|
||||
default:
|
||||
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
|
||||
rt := rv.Type()
|
||||
if rt.Key().Kind() != reflect.String {
|
||||
encPanic(errNonString)
|
||||
}
|
||||
|
||||
// Sort keys so that we have deterministic output. And write keys directly
|
||||
// underneath this key first, before writing sub-structs or sub-maps.
|
||||
var mapKeysDirect, mapKeysSub []string
|
||||
for _, mapKey := range rv.MapKeys() {
|
||||
k := mapKey.String()
|
||||
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
|
||||
mapKeysSub = append(mapKeysSub, k)
|
||||
} else {
|
||||
mapKeysDirect = append(mapKeysDirect, k)
|
||||
}
|
||||
}
|
||||
|
||||
var writeMapKeys = func(mapKeys []string) {
|
||||
sort.Strings(mapKeys)
|
||||
for _, mapKey := range mapKeys {
|
||||
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
|
||||
if isNil(mrv) {
|
||||
// Don't write anything for nil fields.
|
||||
continue
|
||||
}
|
||||
enc.encode(key.add(mapKey), mrv)
|
||||
}
|
||||
}
|
||||
writeMapKeys(mapKeysDirect)
|
||||
writeMapKeys(mapKeysSub)
|
||||
}
|
||||
|
||||
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
|
||||
// Write keys for fields directly under this key first, because if we write
|
||||
// a field that creates a new table, then all keys under it will be in that
|
||||
// table (not the one we're writing here).
|
||||
rt := rv.Type()
|
||||
var fieldsDirect, fieldsSub [][]int
|
||||
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
|
||||
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
|
||||
for i := 0; i < rt.NumField(); i++ {
|
||||
f := rt.Field(i)
|
||||
// skip unexported fields
|
||||
if f.PkgPath != "" && !f.Anonymous {
|
||||
continue
|
||||
}
|
||||
frv := rv.Field(i)
|
||||
if f.Anonymous {
|
||||
t := f.Type
|
||||
switch t.Kind() {
|
||||
case reflect.Struct:
|
||||
// Treat anonymous struct fields with
|
||||
// tag names as though they are not
|
||||
// anonymous, like encoding/json does.
|
||||
if getOptions(f.Tag).name == "" {
|
||||
addFields(t, frv, f.Index)
|
||||
continue
|
||||
}
|
||||
case reflect.Ptr:
|
||||
if t.Elem().Kind() == reflect.Struct &&
|
||||
getOptions(f.Tag).name == "" {
|
||||
if !frv.IsNil() {
|
||||
addFields(t.Elem(), frv.Elem(), f.Index)
|
||||
}
|
||||
continue
|
||||
}
|
||||
// Fall through to the normal field encoding logic below
|
||||
// for non-struct anonymous fields.
|
||||
}
|
||||
}
|
||||
|
||||
if typeIsHash(tomlTypeOfGo(frv)) {
|
||||
fieldsSub = append(fieldsSub, append(start, f.Index...))
|
||||
} else {
|
||||
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
|
||||
}
|
||||
}
|
||||
}
|
||||
addFields(rt, rv, nil)
|
||||
|
||||
var writeFields = func(fields [][]int) {
|
||||
for _, fieldIndex := range fields {
|
||||
sft := rt.FieldByIndex(fieldIndex)
|
||||
sf := rv.FieldByIndex(fieldIndex)
|
||||
if isNil(sf) {
|
||||
// Don't write anything for nil fields.
|
||||
continue
|
||||
}
|
||||
|
||||
opts := getOptions(sft.Tag)
|
||||
if opts.skip {
|
||||
continue
|
||||
}
|
||||
keyName := sft.Name
|
||||
if opts.name != "" {
|
||||
keyName = opts.name
|
||||
}
|
||||
if opts.omitempty && isEmpty(sf) {
|
||||
continue
|
||||
}
|
||||
if opts.omitzero && isZero(sf) {
|
||||
continue
|
||||
}
|
||||
|
||||
enc.encode(key.add(keyName), sf)
|
||||
}
|
||||
}
|
||||
writeFields(fieldsDirect)
|
||||
writeFields(fieldsSub)
|
||||
}
|
||||
|
||||
// tomlTypeName returns the TOML type name of the Go value's type. It is
|
||||
// used to determine whether the types of array elements are mixed (which is
|
||||
// forbidden). If the Go value is nil, then it is illegal for it to be an array
|
||||
// element, and valueIsNil is returned as true.
|
||||
|
||||
// Returns the TOML type of a Go value. The type may be `nil`, which means
|
||||
// no concrete TOML type could be found.
|
||||
func tomlTypeOfGo(rv reflect.Value) tomlType {
|
||||
if isNil(rv) || !rv.IsValid() {
|
||||
return nil
|
||||
}
|
||||
switch rv.Kind() {
|
||||
case reflect.Bool:
|
||||
return tomlBool
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
|
||||
reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
|
||||
reflect.Uint64:
|
||||
return tomlInteger
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return tomlFloat
|
||||
case reflect.Array, reflect.Slice:
|
||||
if typeEqual(tomlHash, tomlArrayType(rv)) {
|
||||
return tomlArrayHash
|
||||
}
|
||||
return tomlArray
|
||||
case reflect.Ptr, reflect.Interface:
|
||||
return tomlTypeOfGo(rv.Elem())
|
||||
case reflect.String:
|
||||
return tomlString
|
||||
case reflect.Map:
|
||||
return tomlHash
|
||||
case reflect.Struct:
|
||||
switch rv.Interface().(type) {
|
||||
case time.Time:
|
||||
return tomlDatetime
|
||||
case TextMarshaler:
|
||||
return tomlString
|
||||
default:
|
||||
return tomlHash
|
||||
}
|
||||
default:
|
||||
panic("unexpected reflect.Kind: " + rv.Kind().String())
|
||||
}
|
||||
}
|
||||
|
||||
// tomlArrayType returns the element type of a TOML array. The type returned
|
||||
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
|
||||
// slice). This function may also panic if it finds a type that cannot be
|
||||
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
|
||||
// nested arrays of tables).
|
||||
func tomlArrayType(rv reflect.Value) tomlType {
|
||||
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
|
||||
return nil
|
||||
}
|
||||
firstType := tomlTypeOfGo(rv.Index(0))
|
||||
if firstType == nil {
|
||||
encPanic(errArrayNilElement)
|
||||
}
|
||||
|
||||
rvlen := rv.Len()
|
||||
for i := 1; i < rvlen; i++ {
|
||||
elem := rv.Index(i)
|
||||
switch elemType := tomlTypeOfGo(elem); {
|
||||
case elemType == nil:
|
||||
encPanic(errArrayNilElement)
|
||||
case !typeEqual(firstType, elemType):
|
||||
encPanic(errArrayMixedElementTypes)
|
||||
}
|
||||
}
|
||||
// If we have a nested array, then we must make sure that the nested
|
||||
// array contains ONLY primitives.
|
||||
// This checks arbitrarily nested arrays.
|
||||
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
|
||||
nest := tomlArrayType(eindirect(rv.Index(0)))
|
||||
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
|
||||
encPanic(errArrayNoTable)
|
||||
}
|
||||
}
|
||||
return firstType
|
||||
}
|
||||
|
||||
type tagOptions struct {
|
||||
skip bool // "-"
|
||||
name string
|
||||
omitempty bool
|
||||
omitzero bool
|
||||
}
|
||||
|
||||
func getOptions(tag reflect.StructTag) tagOptions {
|
||||
t := tag.Get("toml")
|
||||
if t == "-" {
|
||||
return tagOptions{skip: true}
|
||||
}
|
||||
var opts tagOptions
|
||||
parts := strings.Split(t, ",")
|
||||
opts.name = parts[0]
|
||||
for _, s := range parts[1:] {
|
||||
switch s {
|
||||
case "omitempty":
|
||||
opts.omitempty = true
|
||||
case "omitzero":
|
||||
opts.omitzero = true
|
||||
}
|
||||
}
|
||||
return opts
|
||||
}
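
getOptions is what reads the struct-tag vocabulary used by eStruct above: a rename in the first position, "-" to skip a field entirely, and the omitempty/omitzero flags. An illustrative (hypothetical) type showing how those tags combine when encoding:

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

// buildOptions is an illustrative type exercising the tag options
// parsed by getOptions.
type buildOptions struct {
	Image   string   `toml:"image"`            // renamed key
	Tags    []string `toml:"tags,omitempty"`   // skipped when the slice is empty
	Retries int      `toml:"retries,omitzero"` // skipped when zero
	Scratch string   `toml:"-"`                // never encoded
}

func main() {
	opts := buildOptions{Image: "alpine", Scratch: "/tmp/work"}
	// Only "image" is written: tags is empty, retries is zero,
	// and scratch is explicitly skipped.
	if err := toml.NewEncoder(os.Stdout).Encode(opts); err != nil {
		log.Fatal(err)
	}
}
```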
|
||||
|
||||
func isZero(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return rv.Int() == 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
return rv.Uint() == 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return rv.Float() == 0.0
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func isEmpty(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
|
||||
return rv.Len() == 0
|
||||
case reflect.Bool:
|
||||
return !rv.Bool()
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (enc *Encoder) newline() {
|
||||
if enc.hasWritten {
|
||||
enc.wf("\n")
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
|
||||
if len(key) == 0 {
|
||||
encPanic(errNoKey)
|
||||
}
|
||||
panicIfInvalidKey(key)
|
||||
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
|
||||
enc.eElement(val)
|
||||
enc.newline()
|
||||
}
|
||||
|
||||
func (enc *Encoder) wf(format string, v ...interface{}) {
|
||||
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
|
||||
encPanic(err)
|
||||
}
|
||||
enc.hasWritten = true
|
||||
}
|
||||
|
||||
func (enc *Encoder) indentStr(key Key) string {
|
||||
return strings.Repeat(enc.Indent, len(key)-1)
|
||||
}
|
||||
|
||||
func encPanic(err error) {
|
||||
panic(tomlEncodeError{err})
|
||||
}
|
||||
|
||||
func eindirect(v reflect.Value) reflect.Value {
|
||||
switch v.Kind() {
|
||||
case reflect.Ptr, reflect.Interface:
|
||||
return eindirect(v.Elem())
|
||||
default:
|
||||
return v
|
||||
}
|
||||
}
|
||||
|
||||
func isNil(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
|
||||
return rv.IsNil()
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func panicIfInvalidKey(key Key) {
|
||||
for _, k := range key {
|
||||
if len(k) == 0 {
|
||||
encPanic(e("Key '%s' is not a valid table name. Key names "+
|
||||
"cannot be empty.", key.maybeQuotedAll()))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func isValidKeyName(s string) bool {
|
||||
return len(s) != 0
|
||||
}
|
19 vendor/github.com/BurntSushi/toml/encoding_types.go generated vendored Normal file
@@ -0,0 +1,19 @@
|
||||
// +build go1.2
|
||||
|
||||
package toml
|
||||
|
||||
// In order to support Go 1.1, we define our own TextMarshaler and
|
||||
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
|
||||
// standard library interfaces.
|
||||
|
||||
import (
|
||||
"encoding"
|
||||
)
|
||||
|
||||
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
|
||||
// so that Go 1.1 can be supported.
|
||||
type TextMarshaler encoding.TextMarshaler
|
||||
|
||||
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
|
||||
// here so that Go 1.1 can be supported.
|
||||
type TextUnmarshaler encoding.TextUnmarshaler
|
18 vendor/github.com/BurntSushi/toml/encoding_types_1.1.go generated vendored Normal file
@@ -0,0 +1,18 @@
|
||||
// +build !go1.2
|
||||
|
||||
package toml
|
||||
|
||||
// These interfaces were introduced in Go 1.2, so we add them manually when
|
||||
// compiling for Go 1.1.
|
||||
|
||||
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
|
||||
// so that Go 1.1 can be supported.
|
||||
type TextMarshaler interface {
|
||||
MarshalText() (text []byte, err error)
|
||||
}
|
||||
|
||||
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
|
||||
// here so that Go 1.1 can be supported.
|
||||
type TextUnmarshaler interface {
|
||||
UnmarshalText(text []byte) error
|
||||
}
|
953 vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
@@ -0,0 +1,953 @@
|
||||
package toml
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
type itemType int
|
||||
|
||||
const (
|
||||
itemError itemType = iota
|
||||
itemNIL // used in the parser to indicate no type
|
||||
itemEOF
|
||||
itemText
|
||||
itemString
|
||||
itemRawString
|
||||
itemMultilineString
|
||||
itemRawMultilineString
|
||||
itemBool
|
||||
itemInteger
|
||||
itemFloat
|
||||
itemDatetime
|
||||
itemArray // the start of an array
|
||||
itemArrayEnd
|
||||
itemTableStart
|
||||
itemTableEnd
|
||||
itemArrayTableStart
|
||||
itemArrayTableEnd
|
||||
itemKeyStart
|
||||
itemCommentStart
|
||||
itemInlineTableStart
|
||||
itemInlineTableEnd
|
||||
)
|
||||
|
||||
const (
|
||||
eof = 0
|
||||
comma = ','
|
||||
tableStart = '['
|
||||
tableEnd = ']'
|
||||
arrayTableStart = '['
|
||||
arrayTableEnd = ']'
|
||||
tableSep = '.'
|
||||
keySep = '='
|
||||
arrayStart = '['
|
||||
arrayEnd = ']'
|
||||
commentStart = '#'
|
||||
stringStart = '"'
|
||||
stringEnd = '"'
|
||||
rawStringStart = '\''
|
||||
rawStringEnd = '\''
|
||||
inlineTableStart = '{'
|
||||
inlineTableEnd = '}'
|
||||
)
|
||||
|
||||
type stateFn func(lx *lexer) stateFn
|
||||
|
||||
type lexer struct {
|
||||
input string
|
||||
start int
|
||||
pos int
|
||||
line int
|
||||
state stateFn
|
||||
items chan item
|
||||
|
||||
// Allow for backing up up to three runes.
|
||||
// This is necessary because TOML contains 3-rune tokens (""" and ''').
|
||||
prevWidths [3]int
|
||||
nprev int // how many of prevWidths are in use
|
||||
// If we emit an eof, we can still back up, but it is not OK to call
|
||||
// next again.
|
||||
atEOF bool
|
||||
|
||||
// A stack of state functions used to maintain context.
|
||||
// The idea is to reuse parts of the state machine in various places.
|
||||
// For example, values can appear at the top level or within arbitrarily
|
||||
// nested arrays. The last state on the stack is used after a value has
|
||||
// been lexed. Similarly for comments.
|
||||
stack []stateFn
|
||||
}
|
||||
|
||||
type item struct {
|
||||
typ itemType
|
||||
val string
|
||||
line int
|
||||
}
|
||||
|
||||
func (lx *lexer) nextItem() item {
|
||||
for {
|
||||
select {
|
||||
case item := <-lx.items:
|
||||
return item
|
||||
default:
|
||||
lx.state = lx.state(lx)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func lex(input string) *lexer {
|
||||
lx := &lexer{
|
||||
input: input,
|
||||
state: lexTop,
|
||||
line: 1,
|
||||
items: make(chan item, 10),
|
||||
stack: make([]stateFn, 0, 10),
|
||||
}
|
||||
return lx
|
||||
}
|
||||
|
||||
func (lx *lexer) push(state stateFn) {
|
||||
lx.stack = append(lx.stack, state)
|
||||
}
|
||||
|
||||
func (lx *lexer) pop() stateFn {
|
||||
if len(lx.stack) == 0 {
|
||||
return lx.errorf("BUG in lexer: no states to pop")
|
||||
}
|
||||
last := lx.stack[len(lx.stack)-1]
|
||||
lx.stack = lx.stack[0 : len(lx.stack)-1]
|
||||
return last
|
||||
}
|
||||
|
||||
func (lx *lexer) current() string {
|
||||
return lx.input[lx.start:lx.pos]
|
||||
}
|
||||
|
||||
func (lx *lexer) emit(typ itemType) {
|
||||
lx.items <- item{typ, lx.current(), lx.line}
|
||||
lx.start = lx.pos
|
||||
}
|
||||
|
||||
func (lx *lexer) emitTrim(typ itemType) {
|
||||
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
|
||||
lx.start = lx.pos
|
||||
}
|
||||
|
||||
func (lx *lexer) next() (r rune) {
|
||||
if lx.atEOF {
|
||||
panic("next called after EOF")
|
||||
}
|
||||
if lx.pos >= len(lx.input) {
|
||||
lx.atEOF = true
|
||||
return eof
|
||||
}
|
||||
|
||||
if lx.input[lx.pos] == '\n' {
|
||||
lx.line++
|
||||
}
|
||||
lx.prevWidths[2] = lx.prevWidths[1]
|
||||
lx.prevWidths[1] = lx.prevWidths[0]
|
||||
if lx.nprev < 3 {
|
||||
lx.nprev++
|
||||
}
|
||||
r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
|
||||
lx.prevWidths[0] = w
|
||||
lx.pos += w
|
||||
return r
|
||||
}
|
||||
|
||||
// ignore skips over the pending input before this point.
|
||||
func (lx *lexer) ignore() {
|
||||
lx.start = lx.pos
|
||||
}
|
||||
|
||||
// backup steps back one rune. Can be called only twice between calls to next.
|
||||
func (lx *lexer) backup() {
|
||||
if lx.atEOF {
|
||||
lx.atEOF = false
|
||||
return
|
||||
}
|
||||
if lx.nprev < 1 {
|
||||
panic("backed up too far")
|
||||
}
|
||||
w := lx.prevWidths[0]
|
||||
lx.prevWidths[0] = lx.prevWidths[1]
|
||||
lx.prevWidths[1] = lx.prevWidths[2]
|
||||
lx.nprev--
|
||||
lx.pos -= w
|
||||
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
|
||||
lx.line--
|
||||
}
|
||||
}
|
||||
|
||||
// accept consumes the next rune if it's equal to `valid`.
|
||||
func (lx *lexer) accept(valid rune) bool {
|
||||
if lx.next() == valid {
|
||||
return true
|
||||
}
|
||||
lx.backup()
|
||||
return false
|
||||
}
|
||||
|
||||
// peek returns but does not consume the next rune in the input.
|
||||
func (lx *lexer) peek() rune {
|
||||
r := lx.next()
|
||||
lx.backup()
|
||||
return r
|
||||
}
|
||||
|
||||
// skip ignores all input that matches the given predicate.
|
||||
func (lx *lexer) skip(pred func(rune) bool) {
|
||||
for {
|
||||
r := lx.next()
|
||||
if pred(r) {
|
||||
continue
|
||||
}
|
||||
lx.backup()
|
||||
lx.ignore()
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
// errorf stops all lexing by emitting an error and returning `nil`.
|
||||
// Note that any value that is a character is escaped if it's a special
|
||||
// character (newlines, tabs, etc.).
|
||||
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
|
||||
lx.items <- item{
|
||||
itemError,
|
||||
fmt.Sprintf(format, values...),
|
||||
lx.line,
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// lexTop consumes elements at the top level of TOML data.
|
||||
func lexTop(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isWhitespace(r) || isNL(r) {
|
||||
return lexSkip(lx, lexTop)
|
||||
}
|
||||
switch r {
|
||||
case commentStart:
|
||||
lx.push(lexTop)
|
||||
return lexCommentStart
|
||||
case tableStart:
|
||||
return lexTableStart
|
||||
case eof:
|
||||
if lx.pos > lx.start {
|
||||
return lx.errorf("unexpected EOF")
|
||||
}
|
||||
lx.emit(itemEOF)
|
||||
return nil
|
||||
}
|
||||
|
||||
// At this point, the only valid item can be a key, so we back up
|
||||
// and let the key lexer do the rest.
|
||||
lx.backup()
|
||||
lx.push(lexTopEnd)
|
||||
return lexKeyStart
|
||||
}
|
||||
|
||||
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
|
||||
// or a table.) It must see only whitespace, and will turn back to lexTop
|
||||
// upon a newline. If it sees EOF, it will quit the lexer successfully.
|
||||
func lexTopEnd(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case r == commentStart:
|
||||
// a comment will read to a newline for us.
|
||||
lx.push(lexTop)
|
||||
return lexCommentStart
|
||||
case isWhitespace(r):
|
||||
return lexTopEnd
|
||||
case isNL(r):
|
||||
lx.ignore()
|
||||
return lexTop
|
||||
case r == eof:
|
||||
lx.emit(itemEOF)
|
||||
return nil
|
||||
}
|
||||
return lx.errorf("expected a top-level item to end with a newline, "+
|
||||
"comment, or EOF, but got %q instead", r)
|
||||
}
|
||||
|
||||
// lexTableStart lexes the beginning of a table. Namely, it makes sure that
|
||||
// it starts with a character other than '.' and ']'.
|
||||
// It assumes that '[' has already been consumed.
|
||||
// It also handles the case that this is an item in an array of tables.
|
||||
// e.g., '[[name]]'.
|
||||
func lexTableStart(lx *lexer) stateFn {
|
||||
if lx.peek() == arrayTableStart {
|
||||
lx.next()
|
||||
lx.emit(itemArrayTableStart)
|
||||
lx.push(lexArrayTableEnd)
|
||||
} else {
|
||||
lx.emit(itemTableStart)
|
||||
lx.push(lexTableEnd)
|
||||
}
|
||||
return lexTableNameStart
|
||||
}
|
||||
|
||||
func lexTableEnd(lx *lexer) stateFn {
|
||||
lx.emit(itemTableEnd)
|
||||
return lexTopEnd
|
||||
}
|
||||
|
||||
func lexArrayTableEnd(lx *lexer) stateFn {
|
||||
if r := lx.next(); r != arrayTableEnd {
|
||||
return lx.errorf("expected end of table array name delimiter %q, "+
|
||||
"but got %q instead", arrayTableEnd, r)
|
||||
}
|
||||
lx.emit(itemArrayTableEnd)
|
||||
return lexTopEnd
|
||||
}
|
||||
|
||||
func lexTableNameStart(lx *lexer) stateFn {
|
||||
lx.skip(isWhitespace)
|
||||
switch r := lx.peek(); {
|
||||
case r == tableEnd || r == eof:
|
||||
return lx.errorf("unexpected end of table name " +
|
||||
"(table names cannot be empty)")
|
||||
case r == tableSep:
|
||||
return lx.errorf("unexpected table separator " +
|
||||
"(table names cannot be empty)")
|
||||
case r == stringStart || r == rawStringStart:
|
||||
lx.ignore()
|
||||
lx.push(lexTableNameEnd)
|
||||
return lexValue // reuse string lexing
|
||||
default:
|
||||
return lexBareTableName
|
||||
}
|
||||
}
|
||||
|
||||
// lexBareTableName lexes the name of a table. It assumes that at least one
|
||||
// valid character for the table has already been read.
|
||||
func lexBareTableName(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isBareKeyChar(r) {
|
||||
return lexBareTableName
|
||||
}
|
||||
lx.backup()
|
||||
lx.emit(itemText)
|
||||
return lexTableNameEnd
|
||||
}
|
||||
|
||||
// lexTableNameEnd reads the end of a piece of a table name, optionally
|
||||
// consuming whitespace.
|
||||
func lexTableNameEnd(lx *lexer) stateFn {
|
||||
lx.skip(isWhitespace)
|
||||
switch r := lx.next(); {
|
||||
case isWhitespace(r):
|
||||
return lexTableNameEnd
|
||||
case r == tableSep:
|
||||
lx.ignore()
|
||||
return lexTableNameStart
|
||||
case r == tableEnd:
|
||||
return lx.pop()
|
||||
default:
|
||||
return lx.errorf("expected '.' or ']' to end table name, "+
|
||||
"but got %q instead", r)
|
||||
}
|
||||
}
|
||||
|
||||
// lexKeyStart consumes a key name up until the first non-whitespace character.
|
||||
// lexKeyStart will ignore whitespace.
|
||||
func lexKeyStart(lx *lexer) stateFn {
|
||||
r := lx.peek()
|
||||
switch {
|
||||
case r == keySep:
|
||||
return lx.errorf("unexpected key separator %q", keySep)
|
||||
case isWhitespace(r) || isNL(r):
|
||||
lx.next()
|
||||
return lexSkip(lx, lexKeyStart)
|
||||
case r == stringStart || r == rawStringStart:
|
||||
lx.ignore()
|
||||
lx.emit(itemKeyStart)
|
||||
lx.push(lexKeyEnd)
|
||||
return lexValue // reuse string lexing
|
||||
default:
|
||||
lx.ignore()
|
||||
lx.emit(itemKeyStart)
|
||||
return lexBareKey
|
||||
}
|
||||
}
|
||||
|
||||
// lexBareKey consumes the text of a bare key. Assumes that the first character
|
||||
// (which is not whitespace) has not yet been consumed.
|
||||
func lexBareKey(lx *lexer) stateFn {
|
||||
switch r := lx.next(); {
|
||||
case isBareKeyChar(r):
|
||||
return lexBareKey
|
||||
case isWhitespace(r):
|
||||
lx.backup()
|
||||
lx.emit(itemText)
|
||||
return lexKeyEnd
|
||||
case r == keySep:
|
||||
lx.backup()
|
||||
lx.emit(itemText)
|
||||
return lexKeyEnd
|
||||
default:
|
||||
return lx.errorf("bare keys cannot contain %q", r)
|
||||
}
|
||||
}
|
||||
|
||||
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
|
||||
// separator).
|
||||
func lexKeyEnd(lx *lexer) stateFn {
|
||||
switch r := lx.next(); {
|
||||
case r == keySep:
|
||||
return lexSkip(lx, lexValue)
|
||||
case isWhitespace(r):
|
||||
return lexSkip(lx, lexKeyEnd)
|
||||
default:
|
||||
return lx.errorf("expected key separator %q, but got %q instead",
|
||||
keySep, r)
|
||||
}
|
||||
}
|
||||
|
||||
// lexValue starts the consumption of a value anywhere a value is expected.
|
||||
// lexValue will ignore whitespace.
|
||||
// After a value is lexed, the last state on the stack is popped and returned.
|
||||
func lexValue(lx *lexer) stateFn {
|
||||
// We allow whitespace to precede a value, but NOT newlines.
|
||||
// In array syntax, the array states are responsible for ignoring newlines.
|
||||
r := lx.next()
|
||||
switch {
|
||||
case isWhitespace(r):
|
||||
return lexSkip(lx, lexValue)
|
||||
case isDigit(r):
|
||||
lx.backup() // avoid an extra state and use the same as above
|
||||
return lexNumberOrDateStart
|
||||
}
|
||||
switch r {
|
||||
case arrayStart:
|
||||
lx.ignore()
|
||||
lx.emit(itemArray)
|
||||
return lexArrayValue
|
||||
case inlineTableStart:
|
||||
lx.ignore()
|
||||
lx.emit(itemInlineTableStart)
|
||||
return lexInlineTableValue
|
||||
case stringStart:
|
||||
if lx.accept(stringStart) {
|
||||
if lx.accept(stringStart) {
|
||||
lx.ignore() // Ignore """
|
||||
return lexMultilineString
|
||||
}
|
||||
lx.backup()
|
||||
}
|
||||
lx.ignore() // ignore the '"'
|
||||
return lexString
|
||||
case rawStringStart:
|
||||
if lx.accept(rawStringStart) {
|
||||
if lx.accept(rawStringStart) {
|
||||
lx.ignore() // Ignore '''
|
||||
return lexMultilineRawString
|
||||
}
|
||||
lx.backup()
|
||||
}
|
||||
lx.ignore() // ignore the "'"
|
||||
return lexRawString
|
||||
case '+', '-':
|
||||
return lexNumberStart
|
||||
case '.': // special error case, be kind to users
|
||||
return lx.errorf("floats must start with a digit, not '.'")
|
||||
}
|
||||
if unicode.IsLetter(r) {
|
||||
// Be permissive here; lexBool will give a nice error if the
|
||||
// user wrote something like
|
||||
// x = foo
|
||||
// (i.e. not 'true' or 'false' but is something else word-like.)
|
||||
lx.backup()
|
||||
return lexBool
|
||||
}
|
||||
return lx.errorf("expected value but found %q instead", r)
|
||||
}
|
||||
|
||||
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
|
||||
// have already been consumed. All whitespace and newlines are ignored.
|
||||
func lexArrayValue(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case isWhitespace(r) || isNL(r):
|
||||
return lexSkip(lx, lexArrayValue)
|
||||
case r == commentStart:
|
||||
lx.push(lexArrayValue)
|
||||
return lexCommentStart
|
||||
case r == comma:
|
||||
return lx.errorf("unexpected comma")
|
||||
case r == arrayEnd:
|
||||
// NOTE(caleb): The spec isn't clear about whether you can have
|
||||
// a trailing comma or not, so we'll allow it.
|
||||
return lexArrayEnd
|
||||
}
|
||||
|
||||
lx.backup()
|
||||
lx.push(lexArrayValueEnd)
|
||||
return lexValue
|
||||
}
|
||||
|
||||
// lexArrayValueEnd consumes everything between the end of an array value and
|
||||
// the next value (or the end of the array): it ignores whitespace and newlines
|
||||
// and expects either a ',' or a ']'.
|
||||
func lexArrayValueEnd(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case isWhitespace(r) || isNL(r):
|
||||
return lexSkip(lx, lexArrayValueEnd)
|
||||
case r == commentStart:
|
||||
lx.push(lexArrayValueEnd)
|
||||
return lexCommentStart
|
||||
case r == comma:
|
||||
lx.ignore()
|
||||
return lexArrayValue // move on to the next value
|
||||
case r == arrayEnd:
|
||||
return lexArrayEnd
|
||||
}
|
||||
return lx.errorf(
|
||||
"expected a comma or array terminator %q, but got %q instead",
|
||||
arrayEnd, r,
|
||||
)
|
||||
}
|
||||
|
||||
// lexArrayEnd finishes the lexing of an array.
|
||||
// It assumes that a ']' has just been consumed.
|
||||
func lexArrayEnd(lx *lexer) stateFn {
|
||||
lx.ignore()
|
||||
lx.emit(itemArrayEnd)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexInlineTableValue consumes one key/value pair in an inline table.
|
||||
// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
|
||||
func lexInlineTableValue(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case isWhitespace(r):
|
||||
return lexSkip(lx, lexInlineTableValue)
|
||||
case isNL(r):
|
||||
return lx.errorf("newlines not allowed within inline tables")
|
||||
case r == commentStart:
|
||||
lx.push(lexInlineTableValue)
|
||||
return lexCommentStart
|
||||
case r == comma:
|
||||
return lx.errorf("unexpected comma")
|
||||
case r == inlineTableEnd:
|
||||
return lexInlineTableEnd
|
||||
}
|
||||
lx.backup()
|
||||
lx.push(lexInlineTableValueEnd)
|
||||
return lexKeyStart
|
||||
}
|
||||
|
||||
// lexInlineTableValueEnd consumes everything between the end of an inline table
|
||||
// key/value pair and the next pair (or the end of the table):
|
||||
// it ignores whitespace and expects either a ',' or a '}'.
|
||||
func lexInlineTableValueEnd(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case isWhitespace(r):
|
||||
return lexSkip(lx, lexInlineTableValueEnd)
|
||||
case isNL(r):
|
||||
return lx.errorf("newlines not allowed within inline tables")
|
||||
case r == commentStart:
|
||||
lx.push(lexInlineTableValueEnd)
|
||||
return lexCommentStart
|
||||
case r == comma:
|
||||
lx.ignore()
|
||||
return lexInlineTableValue
|
||||
case r == inlineTableEnd:
|
||||
return lexInlineTableEnd
|
||||
}
|
||||
return lx.errorf("expected a comma or an inline table terminator %q, "+
|
||||
"but got %q instead", inlineTableEnd, r)
|
||||
}
|
||||
|
||||
// lexInlineTableEnd finishes the lexing of an inline table.
|
||||
// It assumes that a '}' has just been consumed.
|
||||
func lexInlineTableEnd(lx *lexer) stateFn {
|
||||
lx.ignore()
|
||||
lx.emit(itemInlineTableEnd)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexString consumes the inner contents of a string. It assumes that the
|
||||
// beginning '"' has already been consumed and ignored.
|
||||
func lexString(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case r == eof:
|
||||
return lx.errorf("unexpected EOF")
|
||||
case isNL(r):
|
||||
return lx.errorf("strings cannot contain newlines")
|
||||
case r == '\\':
|
||||
lx.push(lexString)
|
||||
return lexStringEscape
|
||||
case r == stringEnd:
|
||||
lx.backup()
|
||||
lx.emit(itemString)
|
||||
lx.next()
|
||||
lx.ignore()
|
||||
return lx.pop()
|
||||
}
|
||||
return lexString
|
||||
}
|
||||
|
||||
// lexMultilineString consumes the inner contents of a string. It assumes that
|
||||
// the beginning '"""' has already been consumed and ignored.
|
||||
func lexMultilineString(lx *lexer) stateFn {
|
||||
switch lx.next() {
|
||||
case eof:
|
||||
return lx.errorf("unexpected EOF")
|
||||
case '\\':
|
||||
return lexMultilineStringEscape
|
||||
case stringEnd:
|
||||
if lx.accept(stringEnd) {
|
||||
if lx.accept(stringEnd) {
|
||||
lx.backup()
|
||||
lx.backup()
|
||||
lx.backup()
|
||||
lx.emit(itemMultilineString)
|
||||
lx.next()
|
||||
lx.next()
|
||||
lx.next()
|
||||
lx.ignore()
|
||||
return lx.pop()
|
||||
}
|
||||
lx.backup()
|
||||
}
|
||||
}
|
||||
return lexMultilineString
|
||||
}
|
||||
|
||||
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
|
||||
// It assumes that the beginning "'" has already been consumed and ignored.
|
||||
func lexRawString(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch {
|
||||
case r == eof:
|
||||
return lx.errorf("unexpected EOF")
|
||||
case isNL(r):
|
||||
return lx.errorf("strings cannot contain newlines")
|
||||
case r == rawStringEnd:
|
||||
lx.backup()
|
||||
lx.emit(itemRawString)
|
||||
lx.next()
|
||||
lx.ignore()
|
||||
return lx.pop()
|
||||
}
|
||||
return lexRawString
|
||||
}
|
||||
|
||||
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
|
||||
// a string. It assumes that the beginning "'''" has already been consumed and
|
||||
// ignored.
|
||||
func lexMultilineRawString(lx *lexer) stateFn {
|
||||
switch lx.next() {
|
||||
case eof:
|
||||
return lx.errorf("unexpected EOF")
|
||||
case rawStringEnd:
|
||||
if lx.accept(rawStringEnd) {
|
||||
if lx.accept(rawStringEnd) {
|
||||
lx.backup()
|
||||
lx.backup()
|
||||
lx.backup()
|
||||
lx.emit(itemRawMultilineString)
|
||||
lx.next()
|
||||
lx.next()
|
||||
lx.next()
|
||||
lx.ignore()
|
||||
return lx.pop()
|
||||
}
|
||||
lx.backup()
|
||||
}
|
||||
}
|
||||
return lexMultilineRawString
|
||||
}
|
||||
|
||||
// lexMultilineStringEscape consumes an escaped character. It assumes that the
|
||||
// preceding '\\' has already been consumed.
|
||||
func lexMultilineStringEscape(lx *lexer) stateFn {
|
||||
// Handle the special case first:
|
||||
if isNL(lx.next()) {
|
||||
return lexMultilineString
|
||||
}
|
||||
lx.backup()
|
||||
lx.push(lexMultilineString)
|
||||
return lexStringEscape(lx)
|
||||
}
|
||||
|
||||
func lexStringEscape(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
switch r {
|
||||
case 'b':
|
||||
fallthrough
|
||||
case 't':
|
||||
fallthrough
|
||||
case 'n':
|
||||
fallthrough
|
||||
case 'f':
|
||||
fallthrough
|
||||
case 'r':
|
||||
fallthrough
|
||||
case '"':
|
||||
fallthrough
|
||||
case '\\':
|
||||
return lx.pop()
|
||||
case 'u':
|
||||
return lexShortUnicodeEscape
|
||||
case 'U':
|
||||
return lexLongUnicodeEscape
|
||||
}
|
||||
return lx.errorf("invalid escape character %q; only the following "+
|
||||
"escape characters are allowed: "+
|
||||
`\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
|
||||
}
|
||||
|
||||
func lexShortUnicodeEscape(lx *lexer) stateFn {
|
||||
var r rune
|
||||
for i := 0; i < 4; i++ {
|
||||
r = lx.next()
|
||||
if !isHexadecimal(r) {
|
||||
return lx.errorf(`expected four hexadecimal digits after '\u', `+
|
||||
"but got %q instead", lx.current())
|
||||
}
|
||||
}
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
func lexLongUnicodeEscape(lx *lexer) stateFn {
|
||||
var r rune
|
||||
for i := 0; i < 8; i++ {
|
||||
r = lx.next()
|
||||
if !isHexadecimal(r) {
|
||||
return lx.errorf(`expected eight hexadecimal digits after '\U', `+
|
||||
"but got %q instead", lx.current())
|
||||
}
|
||||
}
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
|
||||
func lexNumberOrDateStart(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isDigit(r) {
|
||||
return lexNumberOrDate
|
||||
}
|
||||
switch r {
|
||||
case '_':
|
||||
return lexNumber
|
||||
case 'e', 'E':
|
||||
return lexFloat
|
||||
case '.':
|
||||
return lx.errorf("floats must start with a digit, not '.'")
|
||||
}
|
||||
return lx.errorf("expected a digit but got %q", r)
|
||||
}
|
||||
|
||||
// lexNumberOrDate consumes either an integer, float or datetime.
|
||||
func lexNumberOrDate(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isDigit(r) {
|
||||
return lexNumberOrDate
|
||||
}
|
||||
switch r {
|
||||
case '-':
|
||||
return lexDatetime
|
||||
case '_':
|
||||
return lexNumber
|
||||
case '.', 'e', 'E':
|
||||
return lexFloat
|
||||
}
|
||||
|
||||
lx.backup()
|
||||
lx.emit(itemInteger)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexDatetime consumes a Datetime, to a first approximation.
|
||||
// The parser validates that it matches one of the accepted formats.
|
||||
func lexDatetime(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isDigit(r) {
|
||||
return lexDatetime
|
||||
}
|
||||
switch r {
|
||||
case '-', 'T', ':', '.', 'Z', '+':
|
||||
return lexDatetime
|
||||
}
|
||||
|
||||
lx.backup()
|
||||
lx.emit(itemDatetime)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexNumberStart consumes either an integer or a float. It assumes that a sign
|
||||
// has already been read, but that *no* digits have been consumed.
|
||||
// lexNumberStart will move to the appropriate integer or float states.
|
||||
func lexNumberStart(lx *lexer) stateFn {
|
||||
// We MUST see a digit. Even floats have to start with a digit.
|
||||
r := lx.next()
|
||||
if !isDigit(r) {
|
||||
if r == '.' {
|
||||
return lx.errorf("floats must start with a digit, not '.'")
|
||||
}
|
||||
return lx.errorf("expected a digit but got %q", r)
|
||||
}
|
||||
return lexNumber
|
||||
}
|
||||
|
||||
// lexNumber consumes an integer or a float after seeing the first digit.
|
||||
func lexNumber(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isDigit(r) {
|
||||
return lexNumber
|
||||
}
|
||||
switch r {
|
||||
case '_':
|
||||
return lexNumber
|
||||
case '.', 'e', 'E':
|
||||
return lexFloat
|
||||
}
|
||||
|
||||
lx.backup()
|
||||
lx.emit(itemInteger)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexFloat consumes the elements of a float. It allows any sequence of
|
||||
// float-like characters, so floats emitted by the lexer are only a first
|
||||
// approximation and must be validated by the parser.
|
||||
func lexFloat(lx *lexer) stateFn {
|
||||
r := lx.next()
|
||||
if isDigit(r) {
|
||||
return lexFloat
|
||||
}
|
||||
switch r {
|
||||
case '_', '.', '-', '+', 'e', 'E':
|
||||
return lexFloat
|
||||
}
|
||||
|
||||
lx.backup()
|
||||
lx.emit(itemFloat)
|
||||
return lx.pop()
|
||||
}
|
||||
|
||||
// lexBool consumes a bool string: 'true' or 'false'.
|
||||
func lexBool(lx *lexer) stateFn {
|
||||
var rs []rune
|
||||
for {
|
||||
r := lx.next()
|
||||
if !unicode.IsLetter(r) {
|
||||
lx.backup()
|
||||
break
|
||||
}
|
||||
rs = append(rs, r)
|
||||
}
|
||||
s := string(rs)
|
||||
switch s {
|
||||
case "true", "false":
|
||||
lx.emit(itemBool)
|
||||
return lx.pop()
|
||||
}
|
||||
return lx.errorf("expected value but found %q instead", s)
|
||||
}
|
||||
|
||||
// lexCommentStart begins the lexing of a comment. It will emit
|
||||
// itemCommentStart and consume no characters, passing control to lexComment.
|
||||
func lexCommentStart(lx *lexer) stateFn {
|
||||
lx.ignore()
|
||||
lx.emit(itemCommentStart)
|
||||
return lexComment
|
||||
}
|
||||
|
||||
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
|
||||
// It will consume *up to* the first newline character, and pass control
|
||||
// back to the last state on the stack.
|
||||
func lexComment(lx *lexer) stateFn {
|
||||
r := lx.peek()
|
||||
if isNL(r) || r == eof {
|
||||
lx.emit(itemText)
|
||||
return lx.pop()
|
||||
}
|
||||
lx.next()
|
||||
return lexComment
|
||||
}
|
||||
|
||||
// lexSkip ignores all slurped input and moves on to the next state.
|
||||
func lexSkip(lx *lexer, nextState stateFn) stateFn {
|
||||
return func(lx *lexer) stateFn {
|
||||
lx.ignore()
|
||||
return nextState
|
||||
}
|
||||
}
|
||||
|
||||
// isWhitespace returns true if `r` is a whitespace character according
|
||||
// to the spec.
|
||||
func isWhitespace(r rune) bool {
|
||||
return r == '\t' || r == ' '
|
||||
}
|
||||
|
||||
func isNL(r rune) bool {
|
||||
return r == '\n' || r == '\r'
|
||||
}
|
||||
|
||||
func isDigit(r rune) bool {
|
||||
return r >= '0' && r <= '9'
|
||||
}
|
||||
|
||||
func isHexadecimal(r rune) bool {
|
||||
return (r >= '0' && r <= '9') ||
|
||||
(r >= 'a' && r <= 'f') ||
|
||||
(r >= 'A' && r <= 'F')
|
||||
}
|
||||
|
||||
func isBareKeyChar(r rune) bool {
|
||||
return (r >= 'A' && r <= 'Z') ||
|
||||
(r >= 'a' && r <= 'z') ||
|
||||
(r >= '0' && r <= '9') ||
|
||||
r == '_' ||
|
||||
r == '-'
|
||||
}
|
||||
|
||||
func (itype itemType) String() string {
|
||||
switch itype {
|
||||
case itemError:
|
||||
return "Error"
|
||||
case itemNIL:
|
||||
return "NIL"
|
||||
case itemEOF:
|
||||
return "EOF"
|
||||
case itemText:
|
||||
return "Text"
|
||||
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
|
||||
return "String"
|
||||
case itemBool:
|
||||
return "Bool"
|
||||
case itemInteger:
|
||||
return "Integer"
|
||||
case itemFloat:
|
||||
return "Float"
|
||||
case itemDatetime:
|
||||
return "DateTime"
|
||||
case itemTableStart:
|
||||
return "TableStart"
|
||||
case itemTableEnd:
|
||||
return "TableEnd"
|
||||
case itemKeyStart:
|
||||
return "KeyStart"
|
||||
case itemArray:
|
||||
return "Array"
|
||||
case itemArrayEnd:
|
||||
return "ArrayEnd"
|
||||
case itemCommentStart:
|
||||
return "CommentStart"
|
||||
}
|
||||
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
|
||||
}
|
||||
|
||||
func (item item) String() string {
|
||||
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
|
||||
}
|
Some files were not shown because too many files have changed in this diff.