Mirror of https://github.com/mudler/luet.git (synced 2025-09-02 15:54:39 +00:00)
Compare commits (67 commits; abbreviated SHA1s):

a7b4ae67c9, 68edfd58e7, 0658020c60, c3b552103f, 796967cc9d, 5cccc34f32, 5ef1d04055, 1bd4d520a4,
b12c7678d4, 32a99a4a49, 56e9c6f82e, 92ea69a2b9, 838899aa83, 76695b2fc8, 5c84e5b0a7, 06fa8b1c87,
ff153f367f, 459676397c, 93057fbf6d, 5e1a7c50df, 0ceaf09615, 0dc78ebe41, 27c2e3c51f, e83f600ed3,
6344e47eb3, 8b1c5558b2, c277ac0f94, d8c8c2194f, 4494385f5b, 85a7968ecc, 1ba987b0f1, c72b5be364,
1ef18ed2c5, 4b1b711a5c, 7f047e4fc2, 356350f724, 9d2ee1b760, fd12227d53, 1e617b0c67, 77b49d9c4a,
4c3532e3c6, f2ec065a89, 7193ea03f9, beeb0dcaaa, 0de3177ddd, 45c8dfa19f, 186ac33156, bdc24b84a4,
e5d6d21178, 0379855592, 958b8c32e1, b0b95d1721, f85891e362, 946524f90d, 2cbd97ff3a, a4d77f8f99,
adcb459fd2, 55ae67be0f, 848215eef0, 7bfff97f57, a73f5f9b65, 0288eedbc3, b27237b7ff, c9aed37fa7,
788b889d14, ef92f23221, 562fcc2421
CONTRIBUTING.md (new file, 53 lines)
@@ -0,0 +1,53 @@
We love your input! We want to make contributing to this project as easy and transparent as possible, whether it's:

- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing new features
- Becoming a maintainer

## We Develop with GitHub

We use GitHub to host code, to track issues and feature requests, as well as to accept pull requests.

## Stay in touch

Join us in [slack](https://luet.slack.com/join/shared_invite/enQtOTQxMjcyNDQ0MDUxLWQ5ODVlNTI1MTYzNDRkYzkyYmM1YWE5YjM0NTliNDEzNmQwMTkxNDRhNDIzM2Y5NDBlOTZjZTYxYWQyNDE4YzY#/) and hang out with the community! It will make it much easier to get started and take your first steps in contributing to the project.

## All Code Changes Happen Through Pull Requests

Pull requests are the best way to propose changes to the codebase. We actively welcome your pull requests:

1. Fork the repo you want to contribute to and create your branch from `develop`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the [documentation](https://github.com/Luet-lab/docs).
4. Ensure the test suite passes.
5. Make sure your code lints.
6. Issue that pull request!

## Any contributions you make will be under the Software License of the repository

In short, when you submit code changes, your submissions are understood to be under the same License that covers the project. Feel free to contact the maintainers if that's a concern.

## Report bugs using GitHub's [issues](https://github.com/mudler/luet/issues)

We use GitHub issues to track public bugs. Report a bug by [opening a new issue](https://github.com/mudler/luet/issues/new); it's that easy!

## Write bug reports with detail, background, and sample code

Try to be as descriptive as possible. When opening a new issue you will be prompted to choose between a bug report and a feature request, each with a small template to fill in. Be specific!

**Great Bug Reports** tend to have:

- A quick summary and/or background
- Steps to reproduce
- Be specific!
- Give sample code if you can.
- What you expected would happen
- What actually happens
- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work)

People *love* thorough bug reports.

## License

By contributing, you agree that your contributions will be licensed under the project Licenses.

## References

This document was adapted from the open-source contribution guidelines from https://gist.github.com/briandk/3d2e8b3ec8daf5a27a62
README.md (13 lines changed)
@@ -1,19 +1,24 @@
<p align="center">
  <img width=150 height=150 src="https://user-images.githubusercontent.com/2420543/119691600-0293d700-be4b-11eb-827f-49ff1174a07a.png">
</p>

# luet - Container-based Package manager

[](https://quay.io/repository/luet/base)
[](https://goreportcard.com/report/github.com/mudler/luet)
[](https://travis-ci.org/mudler/luet)
[](https://github.com/mudler/luet/actions/workflows/release.yml)
[](https://godoc.org/github.com/mudler/luet)
[](https://codecov.io/gh/mudler/luet)

[](https://asciinema.org/a/388348)

Luet is a multi-platform package manager built around containers: it uses Docker (and others) to build packages. It has zero dependencies and is well suited for "from scratch" environments. It can also version an entire rootfs and enables delivery of OTA-like updates, making it a perfect fit for the Edge computing era and IoT embedded devices.

It offers a simple [specfile format](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/) in YAML notation to define both [packages](https://luet-lab.github.io/docs/docs/concepts/packages/) and [rootfs](https://luet-lab.github.io/docs/docs/concepts/packages/#package-layers). As it is based on containers, it can also be used to build stages for Linux From Scratch installations, and it can build and track updates for those systems.

It is written entirely in Golang and, when used as a package manager, it can run in a from-scratch environment with zero dependencies.

[](https://asciinema.org/a/388348)

## At a glance

- Luet can reuse Gentoo's portage tree hierarchy, and it is heavily inspired by it.
cmd/build.go (10 lines changed)
@@ -111,6 +111,8 @@ Build packages specifying multiple definition trees:
	onlydeps := viper.GetBool("onlydeps")
	onlyTarget, _ := cmd.Flags().GetBool("only-target-package")
	full, _ := cmd.Flags().GetBool("full")
	rebuild, _ := cmd.Flags().GetBool("rebuild")

	concurrent, _ := cmd.Flags().GetBool("solver-concurrent")
	var results Results
	backendArgs := viper.GetStringSlice("backend-args")

@@ -176,6 +178,7 @@ Build packages specifying multiple definition trees:
	options.WithBuildValues(values),
	options.WithPullRepositories(pullRepo),
	options.WithPushRepository(imageRepository),
	options.Rebuild(rebuild),
	options.WithSolverOptions(*opts),
	options.Wait(wait),
	options.OnlyTarget(onlyTarget),

@@ -244,11 +247,12 @@ Build packages specifying multiple definition trees:
	}

	for _, sp := range toCalculate {
		packs, err := luetCompiler.ComputeDepTree(sp)
		ht := compiler.NewHashTree(generalRecipe.GetDatabase())
		hashTree, err := ht.Query(luetCompiler, sp)
		if err != nil {
			errs = append(errs, err)
		}
		for _, p := range packs {
		for _, p := range hashTree.Dependencies {
			results.Packages = append(results.Packages,
				PackageResult{
					Name: p.Package.GetName(),

@@ -329,7 +333,7 @@ func init() {
	buildCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
	buildCmd.Flags().Bool("live-output", LuetCfg.GetGeneral().ShowBuildOutput, "Enable live output of the build phase.")
	buildCmd.Flags().Bool("from-repositories", false, "Consume the user-defined repositories to pull specfiles from")

	buildCmd.Flags().Bool("rebuild", false, "To combine with --pull. Allows to rebuild the target package even if an image is available, against a local values file")
	buildCmd.Flags().Bool("pretend", false, "Just print what packages will be compiled")
	buildCmd.Flags().StringArrayP("pull-repository", "p", []string{}, "A list of repositories to pull the cache from")
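The hunk above swaps the old `luetCompiler.ComputeDepTree(sp)` call for the new hash-tree API. Below is a commented sketch of that flow; it reuses the identifiers already in scope in `cmd/build.go` (so it is not a standalone program) and only restates the calls visible in this diff:

```go
// Build a hash tree over the definition database loaded by generalRecipe.
ht := compiler.NewHashTree(generalRecipe.GetDatabase())

// Query resolves the spec and returns it with per-package hashes attached,
// taking over the role of the old luetCompiler.ComputeDepTree(sp).
hashTree, err := ht.Query(luetCompiler, sp)
if err != nil {
	errs = append(errs, err)
}

// hashTree.Dependencies replaces the old `packs` slice.
for _, p := range hashTree.Dependencies {
	results.Packages = append(results.Packages,
		PackageResult{Name: p.Package.GetName()})
}
```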
@@ -22,7 +22,7 @@ import (
	. "github.com/mudler/luet/pkg/config"
	config "github.com/mudler/luet/pkg/config"
	"github.com/mudler/luet/pkg/helpers"
	fileHelper "github.com/mudler/luet/pkg/helpers/file"
	. "github.com/mudler/luet/pkg/logger"

	"github.com/spf13/cobra"

@@ -47,7 +47,7 @@ var cleanupCmd = &cobra.Command{
	LuetCfg.System.DatabasePath = dbpath
	LuetCfg.System.Rootfs = rootfs
	// Check if cache dir exists
	if helpers.Exists(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) {
	if fileHelper.Exists(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath()) {

		files, err := ioutil.ReadDir(LuetCfg.GetSystem().GetSystemPkgsCacheDirPath())
		if err != nil {
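The cleanup command now checks the cache directory through the relocated `fileHelper` package instead of the old `helpers` package. For readers unfamiliar with the helper, a minimal, self-contained sketch of what such an existence check typically looks like (an illustration only, not luet's actual implementation):

```go
package main

import (
	"fmt"
	"os"
)

// exists reports whether the given path exists; a stand-in for fileHelper.Exists.
func exists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	fmt.Println(exists("/var/cache/luet/packages")) // hypothetical cache path
}
```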
@@ -25,6 +25,7 @@ import (
	"github.com/marcsauter/single"
	bus "github.com/mudler/luet/pkg/bus"
	fileHelper "github.com/mudler/luet/pkg/helpers/file"

	extensions "github.com/mudler/cobra-extensions"
	config "github.com/mudler/luet/pkg/config"

@@ -40,7 +41,7 @@ var Verbose bool
var LockedCommands = []string{"install", "uninstall", "upgrade"}

const (
	LuetCLIVersion = "0.13.1"
	LuetCLIVersion = "0.16.6"
	LuetEnvPrefix  = "LUET"
)

@@ -97,7 +98,7 @@ To build a package, from a tree definition:
	plugin := viper.GetStringSlice("plugin")

	bus.Manager.Load(plugin...).Register()
	bus.Manager.Initialize(plugin...)
	if len(bus.Manager.Plugins) != 0 {
		Info(":lollipop:Enabled plugins:")
		for _, p := range bus.Manager.Plugins {

@@ -252,7 +253,7 @@ func initConfig() {
	}
	homeDir := helpers.GetHomeDir()

	if helpers.Exists(filepath.Join(pwdDir, ".luet.yaml")) || (homeDir != "" && helpers.Exists(filepath.Join(homeDir, ".luet.yaml"))) {
	if fileHelper.Exists(filepath.Join(pwdDir, ".luet.yaml")) || (homeDir != "" && fileHelper.Exists(filepath.Join(homeDir, ".luet.yaml"))) {
		viper.AddConfigPath(".")
		if homeDir != "" {
			viper.AddConfigPath(homeDir)
@@ -96,9 +96,14 @@ func NewTreeImageCommand() *cobra.Command {
	if err != nil {
		Fatal("Error: " + err.Error())
	}
	asserts, err := luetCompiler.ComputeDepTree(spec)

	for _, assertion := range asserts { //highly dependent on the order
	ht := compiler.NewHashTree(reciper.GetDatabase())
	hashtree, err := ht.Query(luetCompiler, spec)
	if err != nil {
		Fatal("Error: " + err.Error())
	}

	for _, assertion := range hashtree.Solution { //highly dependent on the order

		//buildImageHash := imageRepository + ":" + assertion.Hash.BuildHash
		currentPackageImageHash := imageRepository + ":" + assertion.Hash.PackageHash
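Each assertion in `hashtree.Solution` carries a `PackageHash`, and the image reference is simply the repository plus that hash as the tag, as the last line of the hunk shows. A self-contained illustration of the tagging scheme (the hash computation below is a stand-in for illustration, not luet's actual algorithm):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// imageRef mirrors the expression in the hunk above:
// imageRepository + ":" + assertion.Hash.PackageHash.
func imageRef(repository, packageHash string) string {
	return repository + ":" + packageHash
}

func main() {
	// Stand-in for a package hash: a digest over the spec plus its dependencies' hashes.
	spec := "name: curl\nversion: 7.76.0"
	depHashes := []string{"abc123", "def456"}

	h := sha256.New()
	h.Write([]byte(spec))
	for _, d := range depHashes {
		h.Write([]byte(d))
	}
	packageHash := fmt.Sprintf("%x", h.Sum(nil))[:12]

	fmt.Println(imageRef("quay.io/example/cache", packageHash)) // e.g. quay.io/example/cache:3f2a...
}
```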
@@ -23,7 +23,7 @@ import (
	"github.com/docker/docker/api/types"
	"github.com/docker/go-units"
	"github.com/mudler/luet/pkg/config"
	"github.com/mudler/luet/pkg/helpers"
	"github.com/mudler/luet/pkg/helpers/docker"
	. "github.com/mudler/luet/pkg/logger"

	"github.com/spf13/cobra"

@@ -77,7 +77,7 @@ func NewUnpackCommand() *cobra.Command {
		RegistryToken: registryToken,
	}

	info, err := helpers.DownloadAndExtractDockerImage(temp, image, destination, auth, verify)
	info, err := docker.DownloadAndExtractDockerImage(temp, image, destination, auth, verify)
	if err != nil {
		Error(err.Error())
		os.Exit(1)
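For context, this is roughly how the relocated helper is invoked after the change. Argument names follow the hunk above; the exact parameter types, and whether `auth` is passed by pointer, are assumptions rather than verified signatures:

```go
// Sketch: pull `image` and extract its filesystem into `destination`,
// authenticating with a registry token as in the hunk above.
auth := &types.AuthConfig{
	RegistryToken: registryToken,
}
info, err := docker.DownloadAndExtractDockerImage(temp, image, destination, auth, verify)
if err != nil {
	Error(err.Error())
	os.Exit(1)
}
_ = info // metadata about the unpacked image, as returned in the hunk above
```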
@@ -7,7 +7,7 @@ fi
set -ex
export LUET_NOLOCK=true

LUET_VERSION=$(curl -s https://api.github.com/repos/mudler/luet/releases/latest | ( grep -oP '"tag_name": "\K(.*)(?=")' || echo "0.9.24" ))
LUET_VERSION=$(curl -s https://api.github.com/repos/mudler/luet/releases/latest | grep tag_name | awk '{ print $2 }' | sed -e 's/\"//g' -e 's/,//g' || echo "0.9.24" )
LUET_ROOTFS=${LUET_ROOTFS:-/}
LUET_DATABASE_PATH=${LUET_DATABASE_PATH:-/var/luet/db}
LUET_DATABASE_ENGINE=${LUET_DATABASE_ENGINE:-boltdb}
go.mod (19 lines changed)
@@ -9,15 +9,17 @@ require (
	github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
	github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31
	github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec
	github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d // indirect
	github.com/containerd/containerd v1.4.1-0.20201117152358-0edc412565dc
	github.com/containerd/ttrpc v0.0.0-20200121165050-0be804eadb15 // indirect
	github.com/containerd/typeurl v0.0.0-20200205145503-b45ef1f1f737 // indirect
	github.com/crillab/gophersat v1.3.2-0.20201023142334-3fc2ac466765
	github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24
	github.com/docker/distribution v2.7.1+incompatible
	github.com/docker/docker v20.10.0-beta1.0.20201110211921-af34b94a78a1+incompatible
	github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
	github.com/docker/go-units v0.4.0
	github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
	github.com/fsouza/go-dockerclient v1.6.4
	github.com/genuinetools/img v0.5.11
	github.com/ghodss/yaml v1.0.0
	github.com/google/go-containerregistry v0.2.1
	github.com/google/renameio v1.0.0

@@ -33,16 +35,17 @@ require (
	github.com/kyokomi/emoji v2.1.0+incompatible
	github.com/logrusorgru/aurora v0.0.0-20190417123914-21d75270181e
	github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
	github.com/moby/buildkit v0.7.2
	github.com/mitchellh/hashstructure/v2 v2.0.1
	github.com/moby/sys/mount v0.2.0 // indirect
	github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
	github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
	github.com/mudler/go-pluggable v0.0.0-20201113184918-d36448fc8f82
	github.com/mudler/go-pluggable v0.0.0-20210513155700-54c6443073af
	github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290
	github.com/onsi/ginkgo v1.14.2
	github.com/onsi/gomega v1.10.3
	github.com/opencontainers/go-digest v1.0.0
	github.com/opencontainers/image-spec v1.0.1
	github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4 // indirect
	github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
	github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
	github.com/pkg/errors v0.9.1

@@ -55,8 +58,8 @@ require (
	go.uber.org/atomic v1.5.1 // indirect
	go.uber.org/multierr v1.4.0
	go.uber.org/zap v1.13.0
	google.golang.org/grpc v1.29.1
	gopkg.in/yaml.v2 v2.3.0
	gotest.tools/v3 v3.0.2 // indirect
	helm.sh/helm/v3 v3.3.4

)

@@ -64,9 +67,3 @@ require (
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200605210607-749178b8f80d+incompatible

replace github.com/containerd/containerd => github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55

replace github.com/hashicorp/go-immutable-radix => github.com/tonistiigi/go-immutable-radix v0.0.0-20170803185627-826af9ccf0fe

replace github.com/jaguilar/vt100 => github.com/tonistiigi/vt100 v0.0.0-20190402012908-ad4c4a574305

replace github.com/opencontainers/runc => github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4
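go.mod now pulls in `github.com/mitchellh/hashstructure/v2`, presumably in support of the new hash-tree code. A self-contained sketch of that library's v2 API, independent of how luet itself uses it:

```go
package main

import (
	"fmt"

	hashstructure "github.com/mitchellh/hashstructure/v2"
)

// A toy stand-in for a package spec; any struct can be hashed.
type spec struct {
	Name    string
	Version string
	Deps    []string
}

func main() {
	s := spec{Name: "curl", Version: "7.76.0", Deps: []string{"openssl", "zlib"}}

	// FormatV2 is the stable hashing format introduced by the /v2 module.
	h, err := hashstructure.Hash(s, hashstructure.FormatV2, nil)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d\n", h) // deterministic for identical struct contents
}
```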
go.sum (142 lines changed)
@@ -23,8 +23,6 @@ cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiy
|
||||
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
|
||||
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
|
||||
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
|
||||
github.com/AkihiroSuda/containerd-fuse-overlayfs v0.0.0-20200220082720-bb896865146c h1:2pWkaq3X2yFR5o5OI7QP0CYNNKtfE2ZCK3hMRaTkhmc=
|
||||
github.com/AkihiroSuda/containerd-fuse-overlayfs v0.0.0-20200220082720-bb896865146c/go.mod h1:K4kx7xAA5JimeQCnN+dbeLlfaBxzZLaLiDD8lusFI8w=
|
||||
github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||
github.com/Azure/azure-sdk-for-go v35.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||
github.com/Azure/azure-sdk-for-go v38.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
|
||||
@@ -73,31 +71,27 @@ github.com/Masterminds/sprig/v3 v3.1.0 h1:j7GpgZ7PdFqNsmncycTHsLmVPf5/3wJtlgW9TN
|
||||
github.com/Masterminds/sprig/v3 v3.1.0/go.mod h1:ONGMf7UfYGAbMXCZmQLy8x3lCDIPrEZE/rU8pmrbihA=
|
||||
github.com/Masterminds/squirrel v1.4.0/go.mod h1:yaPeOnPG5ZRwL9oKdTsO/prlkPbXWZlRVMQ/gGlzIuA=
|
||||
github.com/Masterminds/vcs v1.13.1/go.mod h1:N09YCmOQr6RLxC6UNHzuVwAdodYbbnycGHSmwVJjcKA=
|
||||
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
|
||||
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873 h1:93nQ7k53GjoMQ07HVP8g6Zj1fQZDDj7Xy2VkNNtvX8o=
|
||||
github.com/Microsoft/go-winio v0.4.15-0.20200113171025-3fe6c5262873/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
|
||||
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
|
||||
github.com/Microsoft/hcsshim v0.8.7 h1:ptnOoufxGSzauVTsdE+wMYnCWA301PdoN4xg5oRdZpg=
|
||||
github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
|
||||
github.com/MottainaiCI/simplestreams-builder v0.1.0 h1:A8KJN22Xkx7NUKC9/zWmd1UhIqRn3bdHo0wv/HsAHx8=
|
||||
github.com/MottainaiCI/simplestreams-builder v0.1.0/go.mod h1:+Gbv6dg6TPHWq4oDjZY1vn978PLCEZ2hOu8kvn+S7t4=
|
||||
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
|
||||
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
|
||||
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
|
||||
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
|
||||
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
|
||||
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
|
||||
github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
|
||||
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
|
||||
github.com/Sabayon/pkgs-checker v0.7.2 h1:mh53u5D7FTCeBJevYQA9cCxAWGTSuKqw7m/x7GsQVb0=
|
||||
github.com/Sabayon/pkgs-checker v0.7.2/go.mod h1:GFGM6ZzSE5owdGgjLnulj0+Vt9UTd5LFGmB2AOVPYrE=
|
||||
github.com/Sabayon/pkgs-checker v0.8.1 h1:pVen975z9WIecq7luntUn+0XzGdiyz2CsDay8w+ZmOw=
|
||||
github.com/Sabayon/pkgs-checker v0.8.1/go.mod h1:GC9PBUzcq0QVEBGRA1IIMXf6wHxo34KH5BeqoyJsLpo=
|
||||
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3 h1:Xu7z47ZiE/J+sKXHZMGxEor/oY2q6dq51fkO0JqdSwY=
|
||||
github.com/Sereal/Sereal v0.0.0-20181211220259-509a78ddbda3/go.mod h1:D0JMgToj/WdxCgd30Kc1UcA9E+WdZoJqeVOuYW7iTBM=
|
||||
github.com/Shopify/logrus-bugsnag v0.0.0-20170309145241-6dbc35f2c30d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
|
||||
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/OJnIp5u0s1SbQ8dVfLCZJsnvazdBP5hS4iRs=
|
||||
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
|
||||
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
|
||||
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
|
||||
@@ -109,7 +103,6 @@ github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuy
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/andreyvit/diff v0.0.0-20170406064948-c7f18ee00883/go.mod h1:rCTlJbsFo29Kk6CurOXKm700vrz8f0KW0JNfpkRJY/8=
|
||||
github.com/apache/thrift v0.0.0-20161221203622-b2a4d4ae21c7/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
|
||||
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
|
||||
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
|
||||
github.com/apex/log v1.1.1 h1:BwhRZ0qbjYtTob0I+2M+smavV0kOC8XgcnGZcyL9liA=
|
||||
@@ -146,18 +139,24 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
|
||||
github.com/bitly/go-hostpool v0.1.0 h1:XKmsF6k5el6xHG3WPJ8U0Ku/ye7njX7W81Ng7O2ioR0=
|
||||
github.com/bitly/go-hostpool v0.1.0/go.mod h1:4gOCgp6+NZnVqlKyZ/iBZFTAJKembaVENUpMkpg42fw=
|
||||
github.com/bitly/go-simplejson v0.5.0 h1:6IH+V8/tVMab511d5bn4M7EwGXZf9Hj6i2xSwkNEM+Y=
|
||||
github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
|
||||
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
|
||||
github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
|
||||
github.com/blang/semver v3.5.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
|
||||
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
|
||||
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
|
||||
github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31 h1:yInAg9pE5qGec5eQ7XdfOTTaGwGxD3bKFVjmD6VKkwc=
|
||||
github.com/briandowns/spinner v1.12.1-0.20201220203425-e201aaea0a31/go.mod h1:QOuQk7x+EaDASo80FEXwlwiA+j/PPIcX3FScO+3/ZPQ=
|
||||
github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
|
||||
github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
|
||||
github.com/bugsnag/bugsnag-go v1.0.5-0.20150529004307-13fd6b8acda0 h1:s7+5BfS4WFJoVF9pnB8kBk03S7pZXRdKamnV0FOl5Sc=
|
||||
github.com/bugsnag/bugsnag-go v1.0.5-0.20150529004307-13fd6b8acda0/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
|
||||
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b h1:otBG+dV+YK+Soembjv71DPz3uX/V/6MMlSyD9JBQ6kQ=
|
||||
github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
|
||||
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0 h1:nvj0OLI3YqYXer/kZD8Ri1aaunCxIEsOst1BVJswV0o=
|
||||
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
|
||||
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
|
||||
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec h1:4XvMn0XuV7qxCH22gbnR79r+xTUaLOSA0GW/egpO3SQ=
|
||||
@@ -177,51 +176,33 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
|
||||
github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
|
||||
github.com/clbanning/x2j v0.0.0-20191024224557-825249438eec/go.mod h1:jMjuTZXRI4dUb/I5gc9Hdhagfvm9+RyrPryS/auMzxE=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004 h1:lkAMpLVBDaj17e85keuznYcH5rqI438v41pKcBl4ZxQ=
|
||||
github.com/cloudflare/cfssl v0.0.0-20180223231731-4e2dcbde5004/go.mod h1:yMWuSON2oQp+43nFtAV/uvKQIFpSPerB57DCt9t8sSA=
|
||||
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
|
||||
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
|
||||
github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
|
||||
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
|
||||
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0 h1:sDMmm+q/3+BukdIpxwO365v/Rbspp2Nt5XntgQRXq8Q=
|
||||
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f h1:tSNMc+rJDfmYntojat8lljbt1mgKNpTxUZJsSzJ9Y1s=
|
||||
github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
|
||||
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d h1:UKAt78F1OvM4ceTn1VvXuYuatXohsFU1eSI2IBtTw9g=
|
||||
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d/go.mod h1:CStdkl05lBnJej94BPFoJ7vB8cELKXwViS+dgfW0/M8=
|
||||
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
|
||||
github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
|
||||
github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d h1:VuiIRfgJ2M3vYEU0F6E5lg3+V0l9YpbGQr3jpZor5fo=
|
||||
github.com/containerd/console v0.0.0-20191219165238-8375c3424e4d/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
|
||||
github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55 h1:FGO0nwSBESgoGCakj+w3OQXyrMLsz2omdo9b2UfG/BQ=
|
||||
github.com/containerd/containerd v1.3.1-0.20200227195959-4d242818bf55/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/continuity v0.0.0-20180921161001-7f53d412b9eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
|
||||
github.com/containerd/continuity v0.0.0-20181001140422-bd77b46c8352/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
|
||||
github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
|
||||
github.com/containerd/continuity v0.0.0-20200107194136-26c1120b8d41/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY=
|
||||
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c h1:8ahmSVELW1wghbjerVAyuEYD5+Dio66RYvSS0iGfL1M=
|
||||
github.com/containerd/continuity v0.0.0-20200228182428-0f16d7a0959c/go.mod h1:Dq467ZllaHgAtVp4p1xUQWBrFXR9s/wyoTpG8zOJGkY=
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
|
||||
github.com/containerd/fifo v0.0.0-20191213151349-ff969a566b00 h1:lsjC5ENBl+Zgf38+B0ymougXFp0BaubeIVETltYZTQw=
|
||||
github.com/containerd/fifo v0.0.0-20191213151349-ff969a566b00/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
|
||||
github.com/containerd/go-cni v0.0.0-20200107172653-c154a49e2c75 h1:5Q5C6jDObSVpjeX8CuZ5yac8d/KIYuPzUHbUzdL+NFw=
|
||||
github.com/containerd/go-cni v0.0.0-20200107172653-c154a49e2c75/go.mod h1:0mg8r6FCdbxvLDqCXwAx2rO+KA37QICjKL8+wHOG5OE=
|
||||
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
|
||||
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328 h1:PRTagVMbJcCezLcHXe8UJvR1oBzp2lG3CEumeFOLOds=
|
||||
github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
|
||||
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de h1:dlfGmNcE3jDAecLqwKPMNX6nk2qh1c1Vg1/YTzpOOF4=
|
||||
github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
|
||||
github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
|
||||
github.com/containerd/ttrpc v0.0.0-20200121165050-0be804eadb15 h1:+jgiLE5QylzgADj0Yldb4id1NQNRrDOROj7KDvY9PEc=
|
||||
github.com/containerd/ttrpc v0.0.0-20200121165050-0be804eadb15/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
|
||||
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd h1:JNn81o/xG+8NEo3bC/vx9pbi/g2WI8mtP2/nXzu297Y=
|
||||
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
|
||||
github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
|
||||
github.com/containerd/typeurl v0.0.0-20200205145503-b45ef1f1f737 h1:HovfQDS/K3Mr7eyS0QJLxE1CbVUhjZCl6g3OhFJgP1o=
|
||||
github.com/containerd/typeurl v0.0.0-20200205145503-b45ef1f1f737/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
|
||||
github.com/containernetworking/cni v0.7.1 h1:fE3r16wpSEyaqY4Z4oFrLMmIGfBYIKpPrHK31EJ9FzE=
|
||||
github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
|
||||
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||
github.com/coreos/clair v0.0.0-20180919182544-44ae4bc9590a/go.mod h1:uXhHPWAoRqw0jJc2f8RrPCwRhIo9otQ8OEWUFtpCiwA=
|
||||
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
|
||||
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
|
||||
@@ -255,45 +236,34 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
|
||||
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
|
||||
github.com/deislabs/oras v0.8.1/go.mod h1:Mx0rMSbBNaNfY9hjpccEnxkOqJL6KGjtxNHPLC4G4As=
|
||||
github.com/denisenkom/go-mssqldb v0.0.0-20191001013358-cfbb681360f0/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
|
||||
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73 h1:OGNva6WhsKst5OZf7eZOklDztV3hwtTHovdrLHV+MsA=
|
||||
github.com/denisenkom/go-mssqldb v0.0.0-20191128021309-1d7a30a10f73/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
|
||||
github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
|
||||
github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
|
||||
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
|
||||
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
|
||||
github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
|
||||
github.com/docker/cli v0.0.0-20180920165730-54c19e67f69c/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
|
||||
github.com/docker/cli v0.0.0-20191017083524-a8ff7f821017/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
|
||||
github.com/docker/cli v0.0.0-20200130152716-5d0cf8839492 h1:FwssHbCDJD025h+BchanCwE1Q8fyMgqDr2mOQAWOLGw=
|
||||
github.com/docker/cli v0.0.0-20200130152716-5d0cf8839492/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
|
||||
github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24 h1:bjsfAvm8BVtvQFxV7TYznmKa35J8+fmgrRJWvcS3yJo=
|
||||
github.com/docker/cli v0.0.0-20200227165822-2298e6a3fe24/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
|
||||
github.com/docker/distribution v0.0.0-20180920194744-16128bbac47f/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
|
||||
github.com/docker/distribution v0.0.0-20191216044856-a8371794149d/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
|
||||
github.com/docker/distribution v0.0.0-20200223014041-6b972e50feee/go.mod h1:xgJxuOjyp98AvnpRTR1+lGOqQ493ylRnRPmewD5GWtc=
|
||||
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
|
||||
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
|
||||
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
|
||||
github.com/docker/docker-ce v0.0.0-20180924210327-f53bd8bb8e43/go.mod h1:l1FUGRYBvbjnZ8MS6A2xOji4aZFlY/Qmgz7p4oXH7ac=
|
||||
github.com/docker/docker-credential-helpers v0.6.0/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
|
||||
github.com/docker/docker-credential-helpers v0.6.1/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
|
||||
github.com/docker/docker-credential-helpers v0.6.3 h1:zI2p9+1NQYdnG6sMU26EX4aVGlqbInSQxQXLvzJ4RPQ=
|
||||
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
|
||||
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c h1:lzqkGL9b3znc+ZUgi7FlLnqjQhcXxkNM/quxIjBVMD0=
|
||||
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c/go.mod h1:CADgU4DSXK5QUlFslkQu2yW2TKzFZcXq/leZfM0UH5Q=
|
||||
github.com/docker/go-connections v0.0.0-20180821093606-97c2040d34df/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
|
||||
github.com/docker/go-connections v0.3.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
|
||||
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
|
||||
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
|
||||
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
|
||||
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
|
||||
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916 h1:yWHOI+vFjEsAakUTSrtqc/SAHrhSkmn48pqjidZX3QA=
|
||||
github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
|
||||
github.com/docker/go-units v0.3.1/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
|
||||
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/docker/libnetwork v0.8.0-dev.2.0.20200226230617-d8334ccdb9be h1:GJzljYRqZapOwyfeRyExF2/5qfv8f1feNDMiz0hmRPY=
|
||||
github.com/docker/libnetwork v0.8.0-dev.2.0.20200226230617-d8334ccdb9be/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
|
||||
github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
|
||||
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7 h1:UhxFibDNY/bfvqU5CAUmr9zpesgbU6SWc8/B4mflAE4=
|
||||
github.com/docker/libtrust v0.0.0-20160708172513-aabc10ec26b7/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
|
||||
@@ -315,13 +285,13 @@ github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymF
|
||||
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5 h1:Yzb9+7DPaBjB8zlTR87/ElzFsnQfuHnVUVqpZZIcV5Y=
|
||||
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a2zkGnVExMxdzMo3M0Hi/3sEU+cWnZpSni0O6/Yb/P0=
|
||||
github.com/evanphx/json-patch v0.0.0-20200808040245-162e5629780b/go.mod h1:NAJj0yf/KaRKURN6nyi7A9IZydMivZEm9oQLWNjfKDc=
|
||||
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
|
||||
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
|
||||
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
|
||||
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||
github.com/fernet/fernet-go v0.0.0-20180830025343-9eac43b88a5e/go.mod h1:2H9hjfbpSMHwY503FclkV/lZTBh2YlOmLLSda12uL8c=
|
||||
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
|
||||
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
|
||||
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
|
||||
@@ -332,10 +302,6 @@ github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4
|
||||
github.com/fsouza/go-dockerclient v1.6.4 h1:B+L+1lz1LUrNgEUUh8PSG76s70EYC49ssv2xvTefTMM=
|
||||
github.com/fsouza/go-dockerclient v1.6.4/go.mod h1:GOdftxWLWIbIWKbIMDroKFJzPdg6Iw7r+jX1DDZdVsA=
|
||||
github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
|
||||
github.com/genuinetools/img v0.5.11 h1:qIP3nVgRYbXmtSNMjmcdx+TBzWjZS03AdqTNBf6QtGY=
|
||||
github.com/genuinetools/img v0.5.11/go.mod h1:m58lsi0AC97XSTXBTKpqZE1zCZpMCg9nLjikvgLCo0A=
|
||||
github.com/genuinetools/pkg v0.0.0-20180910213200-1c141f661797/go.mod h1:XTcrCYlXPxnxL2UpnwuRn7tcaTn9HAhxFoFJucootk8=
|
||||
github.com/genuinetools/reg v0.16.0/go.mod h1:12Fe9EIvK3dG/qWhNk5e9O96I8SGmCKLsJ8GsXUbk+Y=
|
||||
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
|
||||
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
|
||||
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
|
||||
@@ -400,6 +366,7 @@ github.com/go-openapi/validate v0.19.5/go.mod h1:8DJv2CVJQ6kGNpFW6eV9N3JviE1C85n
|
||||
github.com/go-sql-driver/mysql v1.3.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
|
||||
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
|
||||
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
|
||||
github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs=
|
||||
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
|
||||
github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
@@ -417,21 +384,18 @@ github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6
|
||||
github.com/godbus/dbus/v5 v5.0.3 h1:ZqHaoEF7TBzh4jzPmqVhE/5A1z9of6orkAe5uHoAeME=
|
||||
github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
|
||||
github.com/godror/godror v0.13.3/go.mod h1:2ouUT4kdhUBk7TAkHWD4SN0CdI0pgEQbo8FVHhbSKWg=
|
||||
github.com/gofrs/flock v0.7.0/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
|
||||
github.com/gofrs/flock v0.7.1 h1:DP+LD/t0njgoPBvT5MJLeliUIVQR03hiKR6vezdwHlc=
|
||||
github.com/gofrs/flock v0.7.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
|
||||
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
|
||||
github.com/gogo/googleapis v1.3.2 h1:kX1es4djPJrsDhY7aZKJy7aZasdcB5oSOEphMjSB53c=
|
||||
github.com/gogo/googleapis v1.3.2/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
|
||||
github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.2.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
|
||||
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
|
||||
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe h1:lXe2qZdvpiX5WZkZR4hgp4KJVfY3nMkvmwbVkpv1rVY=
|
||||
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
@@ -442,7 +406,6 @@ github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4er
|
||||
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
|
||||
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
|
||||
@@ -470,6 +433,7 @@ github.com/golangplus/fmt v0.0.0-20150411045040-2a5d6d7d2995/go.mod h1:lJgMEyOkY
|
||||
github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/certificate-transparency-go v1.0.10-0.20180222191210-5ab67e519c93 h1:jc2UWq7CbdszqeH6qu1ougXMIUBfSy8Pbh/anURYbGI=
|
||||
github.com/google/certificate-transparency-go v1.0.10-0.20180222191210-5ab67e519c93/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=
|
||||
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
@@ -495,8 +459,6 @@ github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hf
|
||||
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
|
||||
github.com/google/renameio v1.0.0 h1:xhp2CnJmgQmpJU4RY8chagahUq5mbPPAbiSQstKpVMA=
|
||||
github.com/google/renameio v1.0.0/go.mod h1:t/HQoYBZSsWSNK35C6CO/TpPLDVWvxOHboWUAweKUpk=
|
||||
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9 h1:JM174NTeGNJ2m/oLH3UOWOvWQQKd+BoL3hcSCUWFLt0=
|
||||
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE=
|
||||
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
@@ -522,16 +484,13 @@ github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY
|
||||
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/gosuri/uitable v0.0.4/go.mod h1:tKR86bXuXPZazfOTG1FIzvjIdXzd0mo4Vtn16vt0PJo=
|
||||
github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY=
|
||||
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
|
||||
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
|
||||
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
|
||||
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
|
||||
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
|
||||
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
|
||||
github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
|
||||
@@ -541,6 +500,7 @@ github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FK
|
||||
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
|
||||
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
|
||||
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
|
||||
github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
|
||||
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
|
||||
@@ -548,6 +508,7 @@ github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHh
|
||||
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
|
||||
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
|
||||
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
|
||||
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-version v1.2.0 h1:3vNe/fWF5CBgRIguda1meWhsZHy3m8gCJ5wx+dIzX/E=
|
||||
github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
|
||||
@@ -565,8 +526,6 @@ github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO
|
||||
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
|
||||
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
|
||||
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
|
||||
github.com/hashicorp/uuid v0.0.0-20160311170451-ebb0a03e909c h1:nQcv325vxv2fFHJsOt53eSRf1eINt6vOdYUFfXs4rgk=
|
||||
github.com/hashicorp/uuid v0.0.0-20160311170451-ebb0a03e909c/go.mod h1:fHzc09UnyJyqyW+bFuq864eh+wC7dj65aXmXLRe5to0=
|
||||
github.com/heroku/docker-registry-client v0.0.0-20181004091502-47ecf50fd8d4 h1:44WMsEqwiYnpHA3E4Rg1K379MH5iZllp2sO5nzXARI0=
|
||||
github.com/heroku/docker-registry-client v0.0.0-20181004091502-47ecf50fd8d4/go.mod h1:ceV82AfTGFCOL/b0cdpP54uKVSL1Gef0TBSTGFDuqyY=
|
||||
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
|
||||
@@ -576,14 +535,11 @@ github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq
|
||||
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
|
||||
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
|
||||
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/imdario/mergo v0.3.7/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/imdario/mergo v0.3.8 h1:CGgOkSJeqMRmt0D9XLWExdT4m4F1vd3FV3VPt+0VxkQ=
|
||||
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
|
||||
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
|
||||
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
|
||||
github.com/ishidawataru/sctp v0.0.0-20191218070446-00ab2ac2db07 h1:rw3IAne6CDuVFlZbPOkA7bhxlqawFh7RJJ+CejfMaxE=
|
||||
github.com/ishidawataru/sctp v0.0.0-20191218070446-00ab2ac2db07/go.mod h1:co9pwDoBCm1kGxawmb4sPq0cSIOOWNPT4KnHotMP1Zg=
|
||||
github.com/jedib0t/go-pretty v4.3.0+incompatible h1:CGs8AVhEKg/n9YbUenWmNStRW2PHJzaeDodcfvRAbIo=
|
||||
github.com/jedib0t/go-pretty v4.3.0+incompatible/go.mod h1:XemHduiw8R651AF9Pt4FwCTKeG3oo7hrHJAoznj9nag=
|
||||
github.com/jedib0t/go-pretty/v6 v6.0.5 h1:oOo0/jSb3NEYKT6l1hhFXoX2UZnkanMuCE2DVT1mqnE=
|
||||
@@ -591,8 +547,11 @@ github.com/jedib0t/go-pretty/v6 v6.0.5/go.mod h1:MTr6FgcfNdnN5wPVBzJ6mhJeDyiF0yB
|
||||
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
|
||||
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3 h1:sHsPfNMAG70QAvKbddQ0uScZCHQoZsT5NykGRCeeeIs=
|
||||
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3/go.mod h1:yL958EeXv8Ylng6IfnvG4oflryUi3vgA3xPs9hmII1s=
|
||||
github.com/jinzhu/gorm v0.0.0-20170222002820-5409931a1bb8 h1:CZkYfurY6KGhVtlalI4QwQ6T0Cu6iuY3e0x5RLu96WE=
|
||||
github.com/jinzhu/gorm v0.0.0-20170222002820-5409931a1bb8/go.mod h1:Vla75njaFJ8clLU1W44h34PjIkijhjHIYnZxMqCdxqo=
|
||||
github.com/jinzhu/inflection v0.0.0-20170102125226-1c35d901db3d h1:jRQLvyVGL+iVtDElaEIDdKwpPqUIZJfzkNLV34htpEc=
|
||||
github.com/jinzhu/inflection v0.0.0-20170102125226-1c35d901db3d/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
|
||||
github.com/jinzhu/now v1.1.1 h1:g39TucaRWyV3dwDO++eEc6qf8TVIQ/Da48WmqjZ3i7E=
|
||||
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
|
||||
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
|
||||
github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
|
||||
@@ -613,6 +572,7 @@ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1
|
||||
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
|
||||
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
|
||||
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
|
||||
github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8 h1:UUHMLvzt/31azWTN/ifGWef4WUqvXk0iRqdhdy/2uzI=
|
||||
github.com/juju/loggo v0.0.0-20190526231331-6e530bcce5d8/go.mod h1:vgyd7OREkbtVEN/8IXZe5Ooef3LQePvuBm9UWj6ZL8U=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
github.com/k0kubun/go-ansi v0.0.0-20180517002512-3bf9e2903213/go.mod h1:vNUNkEQ1e29fT/6vq2aBdFsgNPmy8qMdSay1npru+Sw=
|
||||
(remainder of the go.sum diff — dependency checksum updates for github.com/lib/pq, github.com/liggitt/tabwriter, github.com/lightstep/lightstep-tracer-common, github.com/mitchellh/*, github.com/moby/{buildkit,sys}, github.com/modern-go/reflect2, github.com/mohae/deepcopy, github.com/morikuni/aec, github.com/mudler/{cobra-extensions,docker-companion,go-pluggable,topsort}, github.com/munnerz/goautoneg, github.com/onsi/{ginkgo,gomega}, github.com/opencontainers/*, github.com/opentracing/*, github.com/pelletier/go-toml, github.com/philopon/go-toposort, github.com/prometheus/*, github.com/sirupsen/logrus, github.com/spf13/viper, github.com/stretchr/{objx,testify}, github.com/subosito/gotenv, github.com/syndtr/gocapability, github.com/theupdateframework/notary, github.com/tidwall/pretty, github.com/tonistiigi/*, github.com/vbatts/go-mtree, github.com/vishvananda/{netlink,netns}, github.com/vmihailenco/msgpack, go.uber.org/zap, golang.org/x/{crypto,exp,image,lint,net,sys}, google.golang.org/{appengine,genproto,grpc,protobuf}, gopkg.in/{airbrake,cenkalti/backoff.v2,check.v1,ini.v1,natefinch/lumberjack.v2,resty.v1,rethinkdb-go.v6,square/go-jose.v2,tomb.v1,yaml.v2,yaml.v3}, and gotest.tools)
@@ -23,9 +23,9 @@ import (
"strings"
"syscall"

"github.com/pkg/errors"
fileHelper "github.com/mudler/luet/pkg/helpers/file"

helpers "github.com/mudler/luet/pkg/helpers"
"github.com/pkg/errors"
)

type Box interface {
@@ -107,7 +107,7 @@ func (b *DefaultBox) Exec() error {

func (b *DefaultBox) Run() error {

if !helpers.Exists(b.Root) {
if !fileHelper.Exists(b.Root) {
return errors.New(b.Root + " does not exist")
}

@@ -2,6 +2,7 @@ package bus

import (
"github.com/mudler/go-pluggable"
"github.com/sirupsen/logrus"
)

var (
@@ -44,24 +45,59 @@ var (
EventRepositoryPreBuild pluggable.EventType = "repository.pre.build"
// EventRepositoryPostBuild is the event fired after a repository was built
EventRepositoryPostBuild pluggable.EventType = "repository.post.build"

// Image unpack

// EventImagePreUnPack is the event fired before unpacking an image to a local dir
EventImagePreUnPack pluggable.EventType = "image.pre.unpack"
// EventImagePostUnPack is the event fired after unpacking an image to a local dir
EventImagePostUnPack pluggable.EventType = "image.post.unpack"
)

// Manager is the bus instance manager, which subscribes plugins to events emitted by Luet
var Manager *pluggable.Manager = pluggable.NewManager(
[]pluggable.EventType{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
EventRepositoryPreBuild,
EventRepositoryPostBuild,
EventImagePreBuild,
EventImagePrePull,
EventImagePrePush,
EventImagePostBuild,
EventImagePostPull,
EventImagePostPush,
},
)
var Manager *Bus = &Bus{
Manager: pluggable.NewManager(
[]pluggable.EventType{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
EventRepositoryPreBuild,
EventRepositoryPostBuild,
EventImagePreBuild,
EventImagePrePull,
EventImagePrePush,
EventImagePostBuild,
EventImagePostPull,
EventImagePostPush,
EventImagePreUnPack,
EventImagePostUnPack,
},
),
}

type Bus struct {
*pluggable.Manager
}

func (b *Bus) Initialize(plugin ...string) {
b.Manager.Load(plugin...).Register()

for _, e := range b.Manager.Events {
b.Manager.Response(e, func(p *pluggable.Plugin, r *pluggable.EventResponse) {
if r.Errored() {
logrus.Fatal("Plugin", p.Name, "at", p.Executable, "Error", r.Error)
}
logrus.Debug(
"plugin_event",
"received from",
p.Name,
"at",
p.Executable,
r,
)
})
}
}
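The two new unpack events above are consumed like the existing image events: plugins registered through the Manager receive them when a caller publishes. A minimal, hypothetical sketch (not taken from this changeset) of emitting them around an image extraction, assuming the same Publish call the Docker backend already uses for EventImagePostBuild below:

package main

import (
	bus "github.com/mudler/luet/pkg/bus"
)

func main() {
	// Hypothetical payload; Publish forwards any serializable value to registered plugins.
	payload := map[string]string{"image": "alpine:latest", "destination": "/tmp/rootfs"}

	bus.Manager.Publish(bus.EventImagePreUnPack, payload)
	// ... unpack the image into the destination directory ...
	bus.Manager.Publish(bus.EventImagePostUnPack, payload)
}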
@@ -24,8 +24,8 @@ import (
"strings"

bus "github.com/mudler/luet/pkg/bus"
fileHelper "github.com/mudler/luet/pkg/helpers/file"

docker "github.com/fsouza/go-dockerclient"
capi "github.com/mudler/docker-companion/api"

"github.com/mudler/luet/pkg/helpers"
@@ -56,24 +56,6 @@ func (*SimpleDocker) BuildImage(opts Options) error {

Info(":whale: Building image " + name + " done")

if os.Getenv("DOCKER_SQUASH") == "true" {
Info(":whale: Squashing image " + name)
var client *docker.Client

Spinner(22)
defer SpinnerStop()
client, err = docker.NewClientFromEnv()
if err != nil {
return errors.Wrap(err, "could not connect to the Docker daemon")
}
err = capi.Squash(client, name, name)
if err != nil {
return errors.Wrap(err, "Failed squashing image")
}

Info(":whale: Squashing image " + name + " done")
}

bus.Manager.Publish(bus.EventImagePostBuild, opts)

return nil
@@ -200,6 +182,12 @@ func (b *SimpleDocker) ExtractRootfs(opts Options, keepPerms bool) error {
name := opts.ImageName
dst := opts.Destination

if !b.ImageExists(name) {
if err := b.DownloadImage(opts); err != nil {
return errors.Wrap(err, "failed pulling image "+name+" during extraction")
}
}

tempexport, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
@@ -250,7 +238,7 @@ func (b *SimpleDocker) ExtractRootfs(opts Options, keepPerms bool) error {
return errors.Wrap(err, "Error met while unpacking rootfs")
}

manifest, err := helpers.Read(filepath.Join(rootfs, "manifest.json"))
manifest, err := fileHelper.Read(filepath.Join(rootfs, "manifest.json"))
if err != nil {
return errors.Wrap(err, "Error met while reading image manifest")
}
@@ -21,12 +21,12 @@ import (
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/artifact"
fileHelper "github.com/mudler/luet/pkg/helpers/file"

"io/ioutil"
"os"
"path/filepath"

helpers "github.com/mudler/luet/pkg/helpers"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
@@ -60,7 +60,7 @@ var _ = Describe("Docker backend", func() {

err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM alpine
@@ -79,11 +79,11 @@ ENV PACKAGE_CATEGORY=app-admin`))

Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())

err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "LuetDockerfile"))
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "LuetDockerfile"))
Expect(err).ToNot(HaveOccurred())
Expect(dockerfile).To(Equal(`
FROM luet/base
@@ -103,7 +103,7 @@ RUN echo bar > /test2`))

Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())

artifacts := []artifact.ArtifactNode{{
Name: "/luetbuild/LuetDockerfile",
@@ -132,7 +132,7 @@ RUN echo bar > /test2`))
}

Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(helpers.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(b.ImageExists(opts2.ImageName)).To(BeFalse())
})

@@ -16,7 +16,9 @@
package compiler

import (
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"os"
"path"
@@ -34,6 +36,7 @@ import (
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
@@ -89,7 +92,7 @@ func (cs *LuetCompiler) compilerWorker(i int, wg *sync.WaitGroup, cspecs chan *c
defer wg.Done()

for s := range cspecs {
ar, err := cs.compile(concurrency, keepPermissions, s)
ar, err := cs.compile(concurrency, keepPermissions, nil, nil, s)
if err != nil {
errors <- err
}
@@ -146,11 +149,6 @@ func (cs *LuetCompiler) CompileParallel(keepPermissions bool, ps *compilerspec.L
}

for _, p := range ps.All() {
asserts, err := cs.ComputeDepTree(p)
if err != nil {
panic(err)
}
p.SetSourceAssertion(asserts)
all <- p
}

@@ -303,20 +301,6 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag

pkgTag := ":package: " + p.GetPackage().HumanReadableString()

// Use packageImage as salt into the fp being used
// so the hash is unique also in cases where
// some package deps does have completely different
// depgraphs
// TODO: We should use the image tag, or pass by the package assertion hash which is unique
// and identifies the deptree of the package.

fp := p.GetPackage().HashFingerprint(helpers.StripRegistryFromImage(packageImage))

if buildertaggedImage == "" {
buildertaggedImage = cs.Options.PushImageRepository + ":builder-" + fp
Debug(pkgTag, "Creating intermediary image", buildertaggedImage, "from", image)
}

// TODO: Cleanup, not actually hit
if packageImage == "" {
return runnerOpts, builderOpts, errors.New("no package image given")
@@ -336,7 +320,7 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
defer os.RemoveAll(buildDir) // clean up

// First we copy the source definitions into the output - we create a copy which the builds will need (we need to cache this phase somehow)
err = helpers.CopyDir(p.GetPackage().GetPath(), buildDir)
err = fileHelper.CopyDir(p.GetPackage().GetPath(), buildDir)
if err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not copy package sources")
}
@@ -354,9 +338,15 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
return builderOpts, runnerOpts, errors.Wrap(err, "Could not generate image definition")
}

if len(p.GetPreBuildSteps()) == 0 {
buildertaggedImage = image
}
// Even if we don't have prelude steps, we want to push
// An intermediate image to tag images which are outside of the tree.
// Those don't have an hash otherwise, and thus makes build unreproducible
// see SKIPBUILD for the other logic
// if len(p.GetPreBuildSteps()) == 0 {
// buildertaggedImage = image
// }
// We might want to skip this phase but replacing with a tag that we push. But in case
// steps in prelude are == 0 those are equivalent.

// Then we write the step image, which uses the builder one
if err := p.WriteStepImageDefinition(buildertaggedImage, filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+".dockerfile")); err != nil {
@@ -401,12 +391,13 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
}
return nil
}
if len(p.GetPreBuildSteps()) != 0 {
Info(pkgTag, ":whale: Generating 'builder' image from", image, "as", buildertaggedImage, "with prelude steps")
if err := buildAndPush(builderOpts); err != nil {
return builderOpts, runnerOpts, errors.Wrapf(err, "Could not push image: %s %s", image, builderOpts.DockerFileName)
}
// SKIPBUILD
// if len(p.GetPreBuildSteps()) != 0 {
Info(pkgTag, ":whale: Generating 'builder' image from", image, "as", buildertaggedImage, "with prelude steps")
if err := buildAndPush(builderOpts); err != nil {
return builderOpts, runnerOpts, errors.Wrapf(err, "Could not push image: %s %s", image, builderOpts.DockerFileName)
}
//}

// Even if we might not have any steps to build, we do that so we can tag the image used in this moment and use that to cache it in a registry, or in the system.
// acting as a docker tag.
@@ -425,7 +416,7 @@ func (cs *LuetCompiler) genArtifact(p *compilerspec.LuetCompilationSpec, builder
var rootfs string
var err error
pkgTag := ":package: " + p.GetPackage().HumanReadableString()

Debug(pkgTag, "Generating artifact")
// We can't generate delta in this case. It implies the package is a virtual, and nothing has to be done really
if p.EmptyPackage() {
fakePackage := p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar")
@@ -470,7 +461,7 @@ func (cs *LuetCompiler) genArtifact(p *compilerspec.LuetCompilationSpec, builder

filelist, err := a.FileList()
if err != nil {
return a, errors.Wrap(err, "Failed getting package list")
return a, errors.Wrapf(err, "Failed getting package list for '%s' '%s'", a.Path, a.CompileSpec.Package.HumanReadableString())
}

a.Files = filelist
@@ -518,10 +509,11 @@ func oneOfImagesAvailable(images []string, b CompilerBackend) (bool, string) {
return false, ""
}

func (cs *LuetCompiler) resolveExistingImageHash(imageHash string) string {
func (cs *LuetCompiler) findImageHash(imageHash string, p *compilerspec.LuetCompilationSpec) string {
var resolvedImage string
Debug("Resolving image hash for", p.Package.HumanReadableString(), "hash", imageHash, "Pull repositories", p.BuildOptions.PullImageRepository)
toChecklist := append([]string{fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, imageHash)},
genImageList(cs.Options.PullImageRepository, imageHash)...)
genImageList(p.BuildOptions.PullImageRepository, imageHash)...)
if exists, which := oneOfImagesExists(toChecklist, cs.Backend); exists {
resolvedImage = which
}
@@ -530,6 +522,11 @@ func (cs *LuetCompiler) resolveExistingImageHash(imageHash string) string {
resolvedImage = which
}
}
return resolvedImage
}

func (cs *LuetCompiler) resolveExistingImageHash(imageHash string, p *compilerspec.LuetCompilationSpec) string {
resolvedImage := cs.findImageHash(imageHash, p)

if resolvedImage == "" {
resolvedImage = fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, imageHash)
@@ -555,9 +552,10 @@ func LoadArtifactFromYaml(spec *compilerspec.LuetCompilationSpec) (*artifact.Pac
func (cs *LuetCompiler) getImageArtifact(hash string, p *compilerspec.LuetCompilationSpec) (*artifact.PackageArtifact, error) {
// we check if there is an available image with the given hash and
// we return a full artifact if can be loaded locally.
Debug("Get image artifact for", p.Package.HumanReadableString(), "hash", hash, "Pull repositories", p.BuildOptions.PullImageRepository)

toChecklist := append([]string{fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, hash)},
genImageList(cs.Options.PullImageRepository, hash)...)
genImageList(p.BuildOptions.PullImageRepository, hash)...)

exists, _ := oneOfImagesExists(toChecklist, cs.Backend)
if art, err := LoadArtifactFromYaml(p); err == nil && exists { // If YAML is correctly loaded, and both images exists, no reason to rebuild.
@@ -578,7 +576,7 @@ func (cs *LuetCompiler) getImageArtifact(hash string, p *compilerspec.LuetCompil
// image buildertaggedImage.
// Images that can be resolved from repositories are prefered over the local ones if PullFirst is set to true
// avoiding to rebuild images as much as possible
func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage string, packageTagHash string,
func (cs *LuetCompiler) compileWithImage(image, builderHash string, packageTagHash string,
concurrency int,
keepPermissions, keepImg bool,
p *compilerspec.LuetCompilationSpec, generateArtifact bool) (*artifact.PackageArtifact, error) {
@@ -591,17 +589,52 @@ func (cs *LuetCompiler) compileWithImage(image, buildertaggedImage string, packa
}

if !generateArtifact {
// try to avoid regenerating the image if possible by checking the hash in the
// given repositories
// It is best effort. If we fail resolving, we will generate the images and keep going
if art, err := cs.getImageArtifact(packageTagHash, p); err == nil {
// try to avoid regenerating the image if possible by checking the hash in the
// given repositories
// It is best effort. If we fail resolving, we will generate the images and keep going
return art, nil
}
}

// always going to point at the destination from the repo defined
packageImage := fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, packageTagHash)
builderOpts, runnerOpts, err := cs.buildPackageImage(image, buildertaggedImage, packageImage, concurrency, keepPermissions, p)
remoteBuildertaggedImage := fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, builderHash)
builderResolved := cs.resolveExistingImageHash(builderHash, p)
//generated := false
// if buildertaggedImage == "" {
// buildertaggedImage = fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, buildertaggedImage)
// generated = true
// // Debug(pkgTag, "Creating intermediary image", buildertaggedImage, "from", image)
// }

if cs.Options.PullFirst && !cs.Options.Rebuild {
Debug("Checking if an image is already available")
// FIXUP here. If packageimage hash exists and pull is true, generate package
resolved := cs.resolveExistingImageHash(packageTagHash, p)
Debug("Resolved: " + resolved)
Debug("Expected remote: " + resolved)
Debug("Package image: " + packageImage)
Debug("Resolved builder image: " + builderResolved)

// a remote image is there already
remoteImageAvailable := resolved != packageImage && remoteBuildertaggedImage != builderResolved
// or a local one is available
localImageAvailable := cs.Backend.ImageExists(remoteBuildertaggedImage) && cs.Backend.ImageExists(packageImage)

switch {
case remoteImageAvailable:
Debug("Images available remotely for", p.Package.HumanReadableString(), "generating artifact from remote images:", resolved)
return cs.genArtifact(p, backend.Options{ImageName: builderResolved}, backend.Options{ImageName: resolved}, concurrency, keepPermissions)
case localImageAvailable:
Debug("Images locally available for", p.Package.HumanReadableString(), "generating artifact from image:", resolved)
return cs.genArtifact(p, backend.Options{ImageName: remoteBuildertaggedImage}, backend.Options{ImageName: packageImage}, concurrency, keepPermissions)
default:
Debug("Images not available for", p.Package.HumanReadableString())
}
}

// always going to point at the destination from the repo defined
builderOpts, runnerOpts, err := cs.buildPackageImage(image, builderResolved, packageImage, concurrency, keepPermissions, p)
if err != nil {
return nil, errors.Wrap(err, "failed building package image")
}
@@ -651,33 +684,7 @@ func (cs *LuetCompiler) FromDatabase(db pkg.PackageDatabase, minimum bool, dst s
}
}

// ComputeMinimumCompilableSet strips specs that are eventually compiled by leafs
func (cs *LuetCompiler) ComputeMinimumCompilableSet(p ...*compilerspec.LuetCompilationSpec) ([]*compilerspec.LuetCompilationSpec, error) {
// Generate a set with all the deps of the provided specs
// we will use that set to remove the deps from the list of provided compilation specs
allDependencies := solver.PackagesAssertions{} // Get all packages that will be in deps
result := []*compilerspec.LuetCompilationSpec{}
for _, spec := range p {
ass, err := cs.ComputeDepTree(spec)
if err != nil {
return result, errors.Wrap(err, "computin specs deptree")
}

allDependencies = append(allDependencies, ass.Drop(spec.GetPackage())...)
}

for _, spec := range p {
if found := allDependencies.Search(spec.GetPackage().GetFingerPrint()); found == nil {
result = append(result, spec)
}
}
return result, nil
}

// ComputeDepTree computes the dependency tree of a compilation spec and returns solver assertions
// in order to be able to compile the spec.
func (cs *LuetCompiler) ComputeDepTree(p *compilerspec.LuetCompilationSpec) (solver.PackagesAssertions, error) {

s := solver.NewResolver(cs.Options.SolverOptions.Options, pkg.NewInMemoryDatabase(false), cs.Database, pkg.NewInMemoryDatabase(false), cs.Options.SolverOptions.Resolver())

solution, err := s.Install(pkg.Packages{p.GetPackage()})
@@ -689,31 +696,35 @@ func (cs *LuetCompiler) ComputeDepTree(p *compilerspec.LuetCompilationSpec) (sol
if err != nil {
return nil, errors.Wrap(err, "While order a solution for "+p.GetPackage().HumanReadableString())
}
return dependencies, nil
}

assertions := solver.PackagesAssertions{}
for _, assertion := range dependencies { //highly dependent on the order
if assertion.Value {
nthsolution := dependencies.Cut(assertion.Package)
assertion.Hash = solver.PackageHash{
BuildHash: nthsolution.HashFrom(assertion.Package),
PackageHash: nthsolution.AssertionHash(),
}
assertions = append(assertions, assertion)
// ComputeMinimumCompilableSet strips specs that are eventually compiled by leafs
func (cs *LuetCompiler) ComputeMinimumCompilableSet(p ...*compilerspec.LuetCompilationSpec) ([]*compilerspec.LuetCompilationSpec, error) {
// Generate a set with all the deps of the provided specs
// we will use that set to remove the deps from the list of provided compilation specs
allDependencies := solver.PackagesAssertions{} // Get all packages that will be in deps
result := []*compilerspec.LuetCompilationSpec{}
for _, spec := range p {
sol, err := cs.ComputeDepTree(spec)
if err != nil {
return nil, errors.Wrap(err, "failed querying hashtree")
}
allDependencies = append(allDependencies, sol.Drop(spec.GetPackage())...)
}

for _, spec := range p {
if found := allDependencies.Search(spec.GetPackage().GetFingerPrint()); found == nil {
result = append(result, spec)
}
}
p.SetSourceAssertion(assertions)
return assertions, nil
return result, nil
}

// Compile is a non-parallel version of CompileParallel. It builds the compilation specs and generates
// an artifact
func (cs *LuetCompiler) Compile(keepPermissions bool, p *compilerspec.LuetCompilationSpec) (*artifact.PackageArtifact, error) {
asserts, err := cs.ComputeDepTree(p)
if err != nil {
panic(err)
}
p.SetSourceAssertion(asserts)
return cs.compile(cs.Options.Concurrency, keepPermissions, p)
return cs.compile(cs.Options.Concurrency, keepPermissions, nil, nil, p)
}

func genImageList(refs []string, hash string) []string {
@@ -724,15 +735,203 @@ func genImageList(refs []string, hash string) []string {
return res
}

func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compilerspec.LuetCompilationSpec) (*artifact.PackageArtifact, error) {
if len(p.BuildOptions.PullImageRepository) != 0 {
orig := cs.Options.PullImageRepository
cs.Options.PullImageRepository = append(orig, p.BuildOptions.PullImageRepository...)
defer func() { cs.Options.PullImageRepository = orig }()
func (cs *LuetCompiler) inheritSpecBuildOptions(p *compilerspec.LuetCompilationSpec) {
Debug(p.GetPackage().HumanReadableString(), "Build options before inherit", p.BuildOptions)

// Append push repositories from buildpsec buildoptions as pull if found.
// This allows to resolve the hash automatically if we pulled the metadata from
// repositories that are advertizing their cache.
if len(p.BuildOptions.PushImageRepository) != 0 {
p.BuildOptions.PullImageRepository = append(p.BuildOptions.PullImageRepository, p.BuildOptions.PushImageRepository)
Debug("Inheriting pull repository from PushImageRepository buildoptions", p.BuildOptions.PullImageRepository)
}

if len(cs.Options.PullImageRepository) != 0 {
p.BuildOptions.PullImageRepository = append(p.BuildOptions.PullImageRepository, cs.Options.PullImageRepository...)
Debug("Inheriting pull repository from PullImageRepository buildoptions", p.BuildOptions.PullImageRepository)
}
Debug(p.GetPackage().HumanReadableString(), "Build options after inherit", p.BuildOptions)
}

func (cs *LuetCompiler) getSpecHash(pkgs pkg.DefaultPackages, salt string) (string, error) {
ht := NewHashTree(cs.Database)
overallFp := ""

for _, p := range pkgs {
compileSpec, err := cs.FromPackage(p)
if err != nil {
return "", errors.Wrap(err, "Error while generating compilespec for "+p.GetName())
}
packageHashTree, err := ht.Query(cs, compileSpec)
if err != nil {
return "nil", errors.Wrap(err, "failed querying hashtree")
}
overallFp = overallFp + packageHashTree.Target.Hash.PackageHash + p.GetFingerPrint()
}

h := md5.New()
io.WriteString(h, fmt.Sprintf("%s-%s", overallFp, salt))
return fmt.Sprintf("%x", h.Sum(nil)), nil
}

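For reference, a small self-contained sketch (with hypothetical fingerprint values, not taken from this changeset) of how the salted fingerprint built by getSpecHash above collapses into the stable hex digest that is later used as the joint image tag:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
)

func main() {
	// Hypothetical concatenation of per-package hashes and fingerprints, as built in the loop above.
	overallFp := "d41d8cd98f00b204e9800998ecf8427e" + "foo/bar-1.0"
	salt := "join"

	h := md5.New()
	io.WriteString(h, fmt.Sprintf("%s-%s", overallFp, salt))
	fmt.Printf("%x\n", h.Sum(nil)) // stable digest, e.g. tagged as <PushImageRepository>:<hash>
}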
func (cs *LuetCompiler) resolveJoinImages(concurrency int, keepPermissions bool, p *compilerspec.LuetCompilationSpec) error {
|
||||
|
||||
joinTag := ">:loop: join<"
|
||||
if len(p.Join) != 0 {
|
||||
Info(joinTag, "Generating a joint parent image from final packages")
|
||||
} else {
|
||||
return nil
|
||||
}
|
||||
|
||||
// First compute a hash and check if image is available. if it is, then directly consume that
|
||||
overallFp, err := cs.getSpecHash(p.Join, "join")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not generate image hash")
|
||||
}
|
||||
|
||||
Info(joinTag, "Searching existing image with hash ", overallFp)
|
||||
|
||||
image := cs.findImageHash(overallFp, p)
|
||||
if image != "" {
|
||||
Info("Image already found", image)
|
||||
p.SetImage(image)
|
||||
return nil
|
||||
}
|
||||
Info(joinTag, "Image not found. Generating image join with hash ", overallFp)
|
||||
|
||||
// Make sure there is an output path
|
||||
if err := os.MkdirAll(p.GetOutputPath(), os.ModePerm); err != nil {
|
||||
return errors.Wrap(err, "while creating output path")
|
||||
}
|
||||
|
||||
// otherwise, generate it and push it aside
|
||||
joinDir, err := ioutil.TempDir(p.GetOutputPath(), "join")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not create tempdir for joining images")
|
||||
}
|
||||
defer os.RemoveAll(joinDir) // clean up
|
||||
|
||||
for _, p := range p.Join { //highly dependent on the order
|
||||
Info(joinTag, ":arrow_right_hook:", p.HumanReadableString(), ":leaves:")
|
||||
}
|
||||
|
||||
current := 0
|
||||
for _, c := range p.Join {
|
||||
current++
|
||||
if c != nil && c.Name != "" && c.Version != "" {
|
||||
joinTag2 := fmt.Sprintf("%s %d/%d ⤑ :hammer: build %s", joinTag, current, len(p.Join), c.HumanReadableString())
|
||||
|
||||
Info(joinTag2, "compilation starts")
|
||||
spec, err := cs.FromPackage(c)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "while generating images to join from")
|
||||
}
|
||||
wantsArtifact := true
|
||||
genDepsArtifact := !cs.Options.PackageTargetOnly
|
||||
|
||||
spec.SetOutputPath(p.GetOutputPath())
|
||||
|
||||
artifact, err := cs.compile(concurrency, keepPermissions, &wantsArtifact, &genDepsArtifact, spec)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed building join image")
|
||||
}
|
||||
|
||||
err = artifact.Unpack(joinDir, keepPermissions)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed building join image")
|
||||
}
|
||||
Info(joinTag2, ":white_check_mark: Done")
|
||||
}
|
||||
}
|
||||
|
||||
artifactDir, err := ioutil.TempDir(p.GetOutputPath(), "artifact")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not create tempdir for final artifact")
|
||||
}
|
||||
defer os.RemoveAll(artifactDir) // clean up
|
||||
|
||||
Info(joinTag, ":droplet: generating artifact for source image of", p.GetPackage().HumanReadableString())
|
||||
|
||||
// After unpack, create a new artifact and a new final image from it.
|
||||
// no need to compress, as we are going to toss it away.
|
||||
a := artifact.NewPackageArtifact(filepath.Join(artifactDir, p.GetPackage().GetFingerPrint()+".join.tar"))
|
||||
if err := a.Compress(joinDir, concurrency); err != nil {
|
||||
return errors.Wrap(err, "error met while creating package archive")
|
||||
}
|
||||
|
||||
joinImageName := fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, overallFp)
|
||||
Info(joinTag, ":droplet: generating image from artifact", joinImageName)
|
||||
opts, err := a.GenerateFinalImage(joinImageName, cs.Backend, keepPermissions)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not create final image")
|
||||
}
|
||||
if cs.Options.Push {
|
||||
Info(joinTag, ":droplet: pushing image from artifact", joinImageName)
|
||||
if err = cs.Backend.Push(opts); err != nil {
|
||||
return errors.Wrapf(err, "Could not push image: %s %s", image, opts.DockerFileName)
|
||||
}
|
||||
}
|
||||
Info(joinTag, ":droplet: Consuming image", joinImageName)
|
||||
p.SetImage(joinImageName)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cs *LuetCompiler) resolveMultiStageImages(concurrency int, keepPermissions bool, p *compilerspec.LuetCompilationSpec) error {
|
||||
resolvedCopyFields := []compilerspec.CopyField{}
|
||||
copyTag := ">:droplet: copy<"
|
||||
|
||||
if len(p.Copy) != 0 {
|
||||
Info(copyTag, "Package has multi-stage copy, generating required images")
|
||||
}
|
||||
|
||||
current := 0
|
||||
// TODO: we should run this only if we are going to build the image
|
||||
for _, c := range p.Copy {
|
||||
current++
|
||||
if c.Package != nil && c.Package.Name != "" && c.Package.Version != "" {
|
||||
copyTag2 := fmt.Sprintf("%s %d/%d ⤑ :hammer: build %s", copyTag, current, len(p.Copy), c.Package.HumanReadableString())
|
||||
|
||||
Info(copyTag2, "generating multi-stage images for", c.Package.HumanReadableString())
|
||||
spec, err := cs.FromPackage(c.Package)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "while generating images to copy from")
|
||||
}
|
||||
|
||||
// If we specify --only-target package, we don't want any artifact, otherwise we do
|
||||
genArtifact := !cs.Options.PackageTargetOnly
|
||||
spec.SetOutputPath(p.GetOutputPath())
|
||||
artifact, err := cs.compile(concurrency, keepPermissions, &genArtifact, &genArtifact, spec)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed building multi-stage image")
|
||||
}
|
||||
|
||||
resolvedCopyFields = append(resolvedCopyFields, compilerspec.CopyField{
|
||||
Image: cs.resolveExistingImageHash(artifact.PackageCacheImage, spec),
|
||||
Source: c.Source,
|
||||
Destination: c.Destination,
|
||||
})
|
||||
Info(copyTag2, ":white_check_mark: Done")
|
||||
} else {
|
||||
resolvedCopyFields = append(resolvedCopyFields, c)
|
||||
}
|
||||
}
|
||||
p.Copy = resolvedCopyFields
|
||||
return nil
|
||||
}
|
||||
|
||||
func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, generateFinalArtifact *bool, generateDependenciesFinalArtifact *bool, p *compilerspec.LuetCompilationSpec) (*artifact.PackageArtifact, error) {
|
||||
Info(":package: Compiling", p.GetPackage().HumanReadableString(), ".... :coffee:")
|
||||
|
||||
// Before multi-stage: join - same as multi-stage, but keep the artifacts, join them, create a new one and generate a final image.
|
||||
// When the image is there, use it as a source here, in place of GetImage().
|
||||
if err := cs.resolveJoinImages(concurrency, keepPermissions, p); err != nil {
|
||||
return nil, errors.Wrap(err, "while resolving join images")
|
||||
}
|
||||
|
||||
if err := cs.resolveMultiStageImages(concurrency, keepPermissions, p); err != nil {
|
||||
return nil, errors.Wrap(err, "while resolving multi-stage images")
|
||||
}
|
||||
|
||||
Debug(fmt.Sprintf("%s: has images %t, empty package: %t", p.GetPackage().HumanReadableString(), p.HasImageSource(), p.EmptyPackage()))
|
||||
if !p.HasImageSource() && !p.EmptyPackage() {
|
||||
return nil,
|
||||
@@ -742,38 +941,66 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compil
|
||||
)
|
||||
}
|
||||
|
||||
targetAssertion := p.GetSourceAssertion().Search(p.GetPackage().GetFingerPrint())
|
||||
ht := NewHashTree(cs.Database)
|
||||
|
||||
packageHashTree, err := ht.Query(cs, p)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed querying hashtree")
|
||||
}
|
||||
|
||||
// This is in order to have the metadata in the yaml
|
||||
p.SetSourceAssertion(packageHashTree.Solution)
|
||||
targetAssertion := packageHashTree.Target
|
||||
|
||||
bus.Manager.Publish(bus.EventPackagePreBuild, struct {
|
||||
CompileSpec *compilerspec.LuetCompilationSpec
|
||||
Assert solver.PackageAssert
|
||||
CompileSpec *compilerspec.LuetCompilationSpec
|
||||
Assert solver.PackageAssert
|
||||
PackageHashTree *PackageImageHashTree
|
||||
}{
|
||||
CompileSpec: p,
|
||||
Assert: *targetAssertion,
|
||||
CompileSpec: p,
|
||||
Assert: *targetAssertion,
|
||||
PackageHashTree: packageHashTree,
|
||||
})
|
||||
|
||||
// Update compilespec build options - it will then be serialized into the compilation metadata file
|
||||
p.SetBuildOptions(cs.Options)
|
||||
p.BuildOptions.PushImageRepository = cs.Options.PushImageRepository
|
||||
|
||||
// - If image is set we just generate a plain dockerfile
|
||||
// Treat last case (easier) first. The image is provided and we just compute a plain dockerfile with the images listed as above
|
||||
if p.GetImage() != "" {
|
||||
return cs.compileWithImage(p.GetImage(), "", targetAssertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, p, true)
|
||||
localGenerateArtifact := true
|
||||
if generateFinalArtifact != nil {
|
||||
localGenerateArtifact = *generateFinalArtifact
|
||||
}
|
||||
|
||||
a, err := cs.compileWithImage(p.GetImage(), packageHashTree.BuilderImageHash, targetAssertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, p, localGenerateArtifact)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "building direct image")
|
||||
}
|
||||
a.SourceAssertion = p.GetSourceAssertion()
|
||||
|
||||
a.PackageCacheImage = targetAssertion.Hash.PackageHash
|
||||
return a, nil
|
||||
}
|
||||
|
||||
// - If image is not set, we read a base_image. Then we will build one image from it to kick off our build based
|
||||
// on how we compute the resolvable tree.
|
||||
// This means recursively building all the build images needed to reach that part of the tree.
|
||||
// - We later on compute a hash used to identify the image, so each similar deptree keeps the same build image.
|
||||
|
||||
dependencies := p.GetSourceAssertion().Drop(p.GetPackage()) // at this point we should have a flattened list of deps to build, including all of them (with all constraints propagated already)
|
||||
departifacts := []*artifact.PackageArtifact{} // TODO: Return this somehow
|
||||
var lastHash string
|
||||
dependencies := packageHashTree.Dependencies // at this point we should have a flattened list of deps to build, including all of them (with all constraints propagated already)
|
||||
departifacts := []*artifact.PackageArtifact{} // TODO: Return this somehow
|
||||
depsN := 0
|
||||
currentN := 0
|
||||
|
||||
packageDeps := !cs.Options.PackageTargetOnly
|
||||
if !cs.Options.NoDeps {
|
||||
if generateDependenciesFinalArtifact != nil {
|
||||
packageDeps = *generateDependenciesFinalArtifact
|
||||
}
|
||||
|
||||
buildDeps := !cs.Options.NoDeps
|
||||
buildTarget := !cs.Options.OnlyDeps
|
||||
|
||||
if buildDeps {
|
||||
Info(":deciduous_tree: Build dependencies for " + p.GetPackage().HumanReadableString())
|
||||
for _, assertion := range dependencies { //highly dependent on the order
|
||||
depsN++
|
||||
@@ -788,9 +1015,10 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compil
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
|
||||
}
|
||||
compileSpec.BuildOptions.PullImageRepository = append(compileSpec.BuildOptions.PullImageRepository, p.BuildOptions.PullImageRepository...)
|
||||
Debug("PullImage repos:", compileSpec.BuildOptions.PullImageRepository)
|
||||
|
||||
compileSpec.SetOutputPath(p.GetOutputPath())
|
||||
Debug(pkgTag, " :arrow_right_hook: :whale: Builder image from hash", assertion.Hash.BuildHash)
|
||||
Debug(pkgTag, " :arrow_right_hook: :whale: Package image from hash", assertion.Hash.PackageHash)
|
||||
|
||||
bus.Manager.Publish(bus.EventPackagePreBuild, struct {
|
||||
CompileSpec *compilerspec.LuetCompilationSpec
|
||||
@@ -800,29 +1028,53 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compil
|
||||
Assert: assertion,
|
||||
})
|
||||
|
||||
lastHash = assertion.Hash.PackageHash
|
||||
// for the source instead, pick an image and a buildertaggedImage from the hashes if they exist.
|
||||
// otherwise fall back to the pushed repo
|
||||
// Resolve images from the hashtree
|
||||
resolvedBuildImage := cs.resolveExistingImageHash(assertion.Hash.BuildHash)
|
||||
if compileSpec.GetImage() != "" {
|
||||
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from image")
|
||||
|
||||
a, err := cs.compileWithImage(compileSpec.GetImage(), resolvedBuildImage, assertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, compileSpec, packageDeps)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
|
||||
}
|
||||
departifacts = append(departifacts, a)
|
||||
Info(pkgTag, ":white_check_mark: Done")
|
||||
continue
|
||||
if err := cs.resolveJoinImages(concurrency, keepPermissions, compileSpec); err != nil {
|
||||
return nil, errors.Wrap(err, "while resolving join images")
|
||||
}
|
||||
|
||||
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from tree")
|
||||
a, err := cs.compileWithImage(resolvedBuildImage, "", assertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, compileSpec, packageDeps)
|
||||
if err := cs.resolveMultiStageImages(concurrency, keepPermissions, compileSpec); err != nil {
|
||||
return nil, errors.Wrap(err, "while resolving multi-stage images")
|
||||
}
|
||||
|
||||
buildHash, err := packageHashTree.DependencyBuildImage(assertion.Package)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed looking for dependency in hashtree")
|
||||
}
|
||||
|
||||
Debug(pkgTag, " :arrow_right_hook: :whale: Builder image from hash", assertion.Hash.BuildHash)
|
||||
Debug(pkgTag, " :arrow_right_hook: :whale: Package image from hash", assertion.Hash.PackageHash)
|
||||
|
||||
var sourceImage string
|
||||
|
||||
if compileSpec.GetImage() != "" {
|
||||
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from image")
|
||||
sourceImage = compileSpec.GetImage()
|
||||
} else {
|
||||
// for the source instead, pick an image and a buildertaggedImage from the hashes if they exist.
|
||||
// otherwise fall back to the pushed repo
|
||||
// Resolve images from the hashtree
|
||||
sourceImage = cs.resolveExistingImageHash(assertion.Hash.BuildHash, compileSpec)
|
||||
Debug(pkgTag, " :wrench: Compiling "+compileSpec.GetPackage().HumanReadableString()+" from tree")
|
||||
}
|
||||
|
||||
a, err := cs.compileWithImage(
|
||||
sourceImage,
|
||||
buildHash,
|
||||
assertion.Hash.PackageHash,
|
||||
concurrency,
|
||||
keepPermissions,
|
||||
cs.Options.KeepImg,
|
||||
compileSpec,
|
||||
packageDeps,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Failed compiling "+compileSpec.GetPackage().HumanReadableString())
|
||||
}
|
||||
|
||||
a.PackageCacheImage = assertion.Hash.PackageHash
|
||||
|
||||
Info(pkgTag, ":white_check_mark: Done")
|
||||
|
||||
bus.Manager.Publish(bus.EventPackagePostBuild, struct {
|
||||
CompileSpec *compilerspec.LuetCompilationSpec
|
||||
Artifact *artifact.PackageArtifact
|
||||
@@ -832,24 +1084,24 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compil
|
||||
})
|
||||
|
||||
departifacts = append(departifacts, a)
|
||||
Info(pkgTag, ":white_check_mark: Done")
|
||||
}
|
||||
|
||||
} else if len(dependencies) > 0 {
|
||||
lastHash = dependencies[len(dependencies)-1].Hash.PackageHash
|
||||
}
|
||||
|
||||
if !cs.Options.OnlyDeps {
|
||||
resolvedBuildImage := cs.resolveExistingImageHash(lastHash)
|
||||
if buildTarget {
|
||||
localGenerateArtifact := true
|
||||
if generateFinalArtifact != nil {
|
||||
localGenerateArtifact = *generateFinalArtifact
|
||||
}
|
||||
resolvedSourceImage := cs.resolveExistingImageHash(packageHashTree.SourceHash, p)
|
||||
Info(":rocket: All dependencies are satisfied, building package requested by the user", p.GetPackage().HumanReadableString())
|
||||
Info(":package:", p.GetPackage().HumanReadableString(), " Using image: ", resolvedBuildImage)
|
||||
a, err := cs.compileWithImage(resolvedBuildImage, "", targetAssertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, p, true)
|
||||
Info(":package:", p.GetPackage().HumanReadableString(), " Using image: ", resolvedSourceImage)
|
||||
a, err := cs.compileWithImage(resolvedSourceImage, packageHashTree.BuilderImageHash, targetAssertion.Hash.PackageHash, concurrency, keepPermissions, cs.Options.KeepImg, p, localGenerateArtifact)
|
||||
if err != nil {
|
||||
return a, err
|
||||
}
|
||||
a.Dependencies = departifacts
|
||||
a.SourceAssertion = p.GetSourceAssertion()
|
||||
|
||||
a.PackageCacheImage = targetAssertion.Hash.PackageHash
|
||||
bus.Manager.Publish(bus.EventPackagePostBuild, struct {
|
||||
CompileSpec *compilerspec.LuetCompilationSpec
|
||||
Artifact *artifact.PackageArtifact
|
||||
@@ -866,18 +1118,11 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, p *compil
|
||||
|
||||
type templatedata map[string]interface{}
|
||||
|
||||
func (cs *LuetCompiler) templatePackage(vals []map[string]interface{}, pack pkg.Package) ([]byte, error) {
|
||||
func (cs *LuetCompiler) templatePackage(vals []map[string]interface{}, pack pkg.Package, dst templatedata) ([]byte, error) {
|
||||
|
||||
var dataresult []byte
|
||||
val := pack.Rel(DefinitionFile)
|
||||
|
||||
// Update processed build values
|
||||
dst, err := helpers.UnMarshalValues(cs.Options.BuildValuesFile)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshalling values")
|
||||
}
|
||||
cs.Options.BuildValues = append(vals, (map[string]interface{})(dst))
|
||||
|
||||
if _, err := os.Stat(pack.Rel(CollectionFile)); err == nil {
|
||||
val = pack.Rel(CollectionFile)
|
||||
|
||||
@@ -929,7 +1174,7 @@ func (cs *LuetCompiler) templatePackage(vals []map[string]interface{}, pack pkg.
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "while marshalling values file")
|
||||
}
|
||||
f := filepath.Join(valuesdir, helpers.RandStringRunes(20))
|
||||
f := filepath.Join(valuesdir, fileHelper.RandStringRunes(20))
|
||||
if err := ioutil.WriteFile(f, out, os.ModePerm); err != nil {
|
||||
return nil, errors.Wrap(err, "while writing temporary values file")
|
||||
}
|
||||
@@ -956,7 +1201,7 @@ func (cs *LuetCompiler) FromPackage(p pkg.Package) (*compilerspec.LuetCompilatio
|
||||
|
||||
opts := options.Compiler{}
|
||||
|
||||
artifactMetadataFile := filepath.Join(p.GetTreeDir(), "..", p.GetMetadataFilePath())
|
||||
artifactMetadataFile := filepath.Join(pack.GetTreeDir(), "..", pack.GetMetadataFilePath())
|
||||
Debug("Checking if metadata file is present", artifactMetadataFile)
|
||||
if _, err := os.Stat(artifactMetadataFile); err == nil {
|
||||
f, err := os.Open(artifactMetadataFile)
|
||||
@@ -972,17 +1217,24 @@ func (cs *LuetCompiler) FromPackage(p pkg.Package) (*compilerspec.LuetCompilatio
|
||||
return nil, errors.Wrap(err, "could not decode package from yaml")
|
||||
}
|
||||
|
||||
Debug("Read build options:", art.CompileSpec.BuildOptions)
|
||||
opts = art.CompileSpec.BuildOptions
|
||||
opts.PushImageRepository = ""
|
||||
|
||||
Debug("Read build options:", art.CompileSpec.BuildOptions, "from", artifactMetadataFile)
|
||||
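// Reuse the build options recorded in the artifact metadata (if any), so that rebuilds
// inherit the same values the original artifact was produced with.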
if art.CompileSpec.BuildOptions != nil {
|
||||
opts = *art.CompileSpec.BuildOptions
|
||||
}
|
||||
} else if !os.IsNotExist(err) {
|
||||
Debug("error reading already existing artifact metadata file: ", err.Error())
|
||||
Debug("error reading artifact metadata file: ", err.Error())
|
||||
} else if os.IsNotExist(err) {
|
||||
Debug("metadata file not present, skipping", artifactMetadataFile)
|
||||
}
|
||||
|
||||
bytes, err := cs.templatePackage(opts.BuildValues, pack)
|
||||
// Update processed build values
|
||||
dst, err := helpers.UnMarshalValues(cs.Options.BuildValuesFile)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "unmarshalling values")
|
||||
}
|
||||
opts.BuildValues = append(opts.BuildValues, (map[string]interface{})(dst))
|
||||
|
||||
bytes, err := cs.templatePackage(opts.BuildValues, pack, templatedata(dst))
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "while rendering package template")
|
||||
}
|
||||
@@ -991,7 +1243,9 @@ func (cs *LuetCompiler) FromPackage(p pkg.Package) (*compilerspec.LuetCompilatio
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
newSpec.BuildOptions = opts
|
||||
newSpec.BuildOptions = &opts
|
||||
|
||||
cs.inheritSpecBuildOptions(newSpec)
|
||||
|
||||
return newSpec, err
|
||||
}
|
||||
|
@@ -25,6 +25,7 @@ import (
|
||||
"github.com/mudler/luet/pkg/compiler/types/options"
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/tree"
|
||||
. "github.com/onsi/ginkgo"
|
||||
@@ -59,15 +60,15 @@ var _ = Describe("Compiler", func() {
|
||||
|
||||
artifact, err := compiler.Compile(false, spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
|
||||
content1, err := helpers.Read(spec.Rel("test5"))
|
||||
content1, err := fileHelper.Read(spec.Rel("test5"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err := helpers.Read(spec.Rel("test6"))
|
||||
content2, err := fileHelper.Read(spec.Rel("test6"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("artifact5\n"))
|
||||
Expect(content2).To(Equal("artifact6\n"))
|
||||
@@ -75,6 +76,68 @@ var _ = Describe("Compiler", func() {
|
||||
})
|
||||
})
|
||||
|
||||
Context("Copy and Join", func() {
|
||||
It("Compiles it correctly with Copy", func() {
|
||||
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/copy")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
|
||||
|
||||
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
|
||||
|
||||
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(tmpdir) // clean up
|
||||
|
||||
spec.SetOutputPath(tmpdir)
|
||||
|
||||
artifact, err := compiler.Compile(false, spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(fileHelper.Exists(spec.Rel("result"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("bina/busybox"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles it correctly with Join", func() {
|
||||
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/join")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
|
||||
|
||||
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
|
||||
|
||||
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
|
||||
|
||||
tmpdir, err := ioutil.TempDir("", "tree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(tmpdir) // clean up
|
||||
|
||||
spec.SetOutputPath(tmpdir)
|
||||
|
||||
artifact, err := compiler.Compile(false, spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(fileHelper.Exists(spec.Rel("newc"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
})
|
||||
})
|
||||
|
||||
Context("Simple package build definition", func() {
|
||||
It("Compiles it in parallel", func() {
|
||||
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
|
||||
@@ -102,7 +165,7 @@ var _ = Describe("Compiler", func() {
|
||||
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2))
|
||||
Expect(errs).To(BeNil())
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
@@ -165,23 +228,23 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(3))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
|
||||
content1, err := helpers.Read(spec.Rel("c"))
|
||||
content1, err := fileHelper.Read(spec.Rel("c"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err := helpers.Read(spec.Rel("cd"))
|
||||
content2, err := fileHelper.Read(spec.Rel("cd"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("c\n"))
|
||||
Expect(content2).To(Equal("c\n"))
|
||||
|
||||
content1, err = helpers.Read(spec.Rel("d"))
|
||||
content1, err = fileHelper.Read(spec.Rel("d"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err = helpers.Read(spec.Rel("dd"))
|
||||
content2, err = fileHelper.Read(spec.Rel("dd"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("s\n"))
|
||||
Expect(content2).To(Equal("dd\n"))
|
||||
@@ -215,17 +278,17 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts2)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
for _, artifact := range artifacts2 {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("etc/hosts"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test1"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("etc/hosts"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test1"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles and includes ony wanted files", func() {
|
||||
@@ -254,12 +317,12 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles and excludes files", func() {
|
||||
@@ -288,13 +351,13 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles includes and excludes files", func() {
|
||||
@@ -323,13 +386,13 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvot"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles and excludes ony wanted files also from unpacked packages", func() {
|
||||
@@ -357,12 +420,12 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles includes and excludes ony wanted files also from unpacked packages", func() {
|
||||
@@ -390,12 +453,12 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles and includes ony wanted files also from unpacked packages", func() {
|
||||
@@ -423,16 +486,16 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Exists(spec.Rel("var/lib/udhcpd"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test2"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("lib/firmware"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("var/lib/udhcpd"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test2"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("lib/firmware"))).ToNot(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles a more complex tree", func() {
|
||||
@@ -461,18 +524,18 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("extra-layer"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("base-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles with provides support", func() {
|
||||
@@ -502,19 +565,19 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("d"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("dd"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("c"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("cd"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("c"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("cd"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles with provides and selectors support", func() {
|
||||
@@ -545,19 +608,19 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("d"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("dd"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("c"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("cd"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("c"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("cd"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("d-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("c-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
})
|
||||
It("Compiles revdeps", func() {
|
||||
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
|
||||
@@ -585,16 +648,16 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(2))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("extra-layer"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("usr/bin/pkgs-checker"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("base-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("extra-layer-0.1.package.tar"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles complex dependencies trees with best matches", func() {
|
||||
@@ -623,17 +686,17 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(6))
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
Expect(helpers.Untar(spec.Rel("vhba-sys-fs-5.4.2-20190410.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec.Rel("sabayon-build-portage-layer-0.20191126.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("build-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("build-sabayon-overlay-layer-0.20191212.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("build-sabayon-overlays-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("linux-sabayon-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("sabayon-sources-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("vhba"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("sabayon-build-portage-layer-0.20191126.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("build-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("build-sabayon-overlay-layer-0.20191212.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("build-sabayon-overlays-layer-0.1.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("linux-sabayon-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("sabayon-sources-sys-kernel-5.4.2.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("vhba"))).To(BeTrue())
|
||||
})
|
||||
|
||||
It("Compiles revdeps with seeds", func() {
|
||||
@@ -658,31 +721,31 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(4))
|
||||
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
|
||||
// A deps on B, so A artifacts are here:
|
||||
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
|
||||
// B
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("artifact42"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("artifact42"))).To(BeTrue())
|
||||
|
||||
// C depends on B, so B is here
|
||||
content1, err := helpers.Read(spec.Rel("c"))
|
||||
content1, err := fileHelper.Read(spec.Rel("c"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err := helpers.Read(spec.Rel("cd"))
|
||||
content2, err := fileHelper.Read(spec.Rel("cd"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("c\n"))
|
||||
Expect(content2).To(Equal("c\n"))
|
||||
|
||||
// D is here as it requires C, and C was recompiled
|
||||
content1, err = helpers.Read(spec.Rel("d"))
|
||||
content1, err = fileHelper.Read(spec.Rel("d"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err = helpers.Read(spec.Rel("dd"))
|
||||
content2, err = fileHelper.Read(spec.Rel("dd"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("s\n"))
|
||||
Expect(content2).To(Equal("dd\n"))
|
||||
@@ -715,19 +778,19 @@ var _ = Describe("Compiler", func() {
|
||||
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
|
||||
Expect(errs).To(BeNil())
|
||||
for _, artifact := range artifacts {
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
for _, d := range artifact.Dependencies {
|
||||
Expect(helpers.Exists(d.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(d.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(d.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
}
|
||||
}
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
|
||||
})
|
||||
})
|
||||
@@ -759,8 +822,8 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
|
||||
Expect(helpers.Untar(spec.Rel("runtime-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("var"))).ToNot(BeTrue())
|
||||
})
|
||||
})
|
||||
|
||||
@@ -807,13 +870,13 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(0))
|
||||
|
||||
Expect(helpers.Untar(spec.Rel("dironly-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec.Rel("test1"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test2"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test1"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test2"))).To(BeTrue())
|
||||
|
||||
Expect(helpers.Untar(spec2.Rel("dironly_filter-test-1.0.package.tar"), tmpdir2, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec2.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec2.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
|
||||
})
|
||||
})
|
||||
|
||||
@@ -843,12 +906,12 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(errs).To(BeNil())
|
||||
Expect(len(artifacts)).To(Equal(1))
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
|
||||
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.package.tar.gz"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.package.tar"))).To(BeFalse())
|
||||
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.package.tar.gz"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.package.tar"))).To(BeFalse())
|
||||
Expect(artifacts[0].Unpack(tmpdir, false)).ToNot(HaveOccurred())
|
||||
// Expect(helpers.Untar(spec.Rel("runtime-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("var"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("var"))).ToNot(BeTrue())
|
||||
})
|
||||
})
|
||||
|
||||
@@ -899,7 +962,7 @@ var _ = Describe("Compiler", func() {
|
||||
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
|
||||
Expect(artifacts[0].Files).To(ContainElement("bin/busybox"))
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("runtime-layer-0.1.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
art, err := LoadArtifactFromYaml(spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
161
pkg/compiler/imagehashtree.go
Normal file
@@ -0,0 +1,161 @@
|
||||
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package compiler
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/solver"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// ImageHashTree holds the Database
|
||||
// and the options to resolve PackageImageHashTrees
|
||||
// for a given specfile
|
||||
// It is responsible for returning a concrete result
|
||||
// which identifies a Package in a HashTree
|
||||
type ImageHashTree struct {
|
||||
Database pkg.PackageDatabase
|
||||
SolverOptions config.LuetSolverOptions
|
||||
}
|
||||
|
||||
// PackageImageHashTree represents the Package within a given image hash tree
|
||||
// The hash tree is constructed from a set of images representing
|
||||
// the package during its build stage. A Hash is assigned to each image
|
||||
// from the package fingerprint, plus the SAT solver assertion result (which is hashed as well)
|
||||
// and the specfile signatures. This guarantees that each image of the build stage
|
||||
// is unique and can be identified later on.
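// As a consequence, two builds that share the same specs and the same resolved
// dependency assertions map to the same image hashes, and their build images can be
// reused instead of being rebuilt.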
|
||||
type PackageImageHashTree struct {
|
||||
Target *solver.PackageAssert
|
||||
Dependencies solver.PackagesAssertions
|
||||
Solution solver.PackagesAssertions
|
||||
dependencyBuilderImageHashes map[string]string
|
||||
SourceHash string
|
||||
BuilderImageHash string
|
||||
}
|
||||
|
||||
func NewHashTree(db pkg.PackageDatabase) *ImageHashTree {
|
||||
return &ImageHashTree{
|
||||
Database: db,
|
||||
}
|
||||
}
|
||||
|
||||
func (ht *PackageImageHashTree) DependencyBuildImage(p pkg.Package) (string, error) {
|
||||
found, ok := ht.dependencyBuilderImageHashes[p.GetFingerPrint()]
|
||||
if !ok {
|
||||
return "", errors.New("package hash not found")
|
||||
}
|
||||
return found, nil
|
||||
}
|
||||
|
||||
func (ht *PackageImageHashTree) String() string {
|
||||
return fmt.Sprintf(
|
||||
"Target buildhash: %s\nTarget packagehash: %s\nBuilder Imagehash: %s\nSource Imagehash: %s\n",
|
||||
ht.Target.Hash.BuildHash,
|
||||
ht.Target.Hash.PackageHash,
|
||||
ht.BuilderImageHash,
|
||||
ht.SourceHash,
|
||||
)
|
||||
}
|
||||
|
||||
// Query takes a compiler and a compilation spec and returns a PackageImageHashTree tied to it.
|
||||
// PackageImageHashTree contains all the information needed to resolve the spec build images in order to
|
||||
// reproducibly re-build images from packages
|
||||
func (ht *ImageHashTree) Query(cs *LuetCompiler, p *compilerspec.LuetCompilationSpec) (*PackageImageHashTree, error) {
|
||||
assertions, err := ht.resolve(cs, p)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
targetAssertion := assertions.Search(p.GetPackage().GetFingerPrint())
|
||||
|
||||
dependencies := assertions.Drop(p.GetPackage())
|
||||
var sourceHash string
|
||||
imageHashes := map[string]string{}
|
||||
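// For each dependency pick the builder image tag: packages that provide their own image
// keep the solver's build hash, the others get a salted builder tag derived from the
// target package hash. The package hash of the last dependency becomes the source hash.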
for _, assertion := range dependencies {
|
||||
var depbuildImageTag string
|
||||
compileSpec, err := cs.FromPackage(assertion.Package)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
|
||||
}
|
||||
if compileSpec.GetImage() != "" {
|
||||
depbuildImageTag = assertion.Hash.BuildHash
|
||||
} else {
|
||||
depbuildImageTag = ht.genBuilderImageTag(compileSpec, targetAssertion.Hash.PackageHash)
|
||||
}
|
||||
imageHashes[assertion.Package.GetFingerPrint()] = depbuildImageTag
|
||||
sourceHash = assertion.Hash.PackageHash
|
||||
}
|
||||
|
||||
return &PackageImageHashTree{
|
||||
Dependencies: dependencies,
|
||||
Target: targetAssertion,
|
||||
SourceHash: sourceHash,
|
||||
BuilderImageHash: ht.genBuilderImageTag(p, targetAssertion.Hash.PackageHash),
|
||||
dependencyBuilderImageHashes: imageHashes,
|
||||
Solution: assertions,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (ht *ImageHashTree) genBuilderImageTag(p *compilerspec.LuetCompilationSpec, packageImage string) string {
|
||||
// Use packageImage as a salt for the fingerprint being used
|
||||
// so the hash is unique also in cases where
|
||||
// some package deps have completely different
|
||||
// depgraphs
|
||||
return fmt.Sprintf("builder-%s", p.GetPackage().HashFingerprint(packageImage))
|
||||
}
|
||||
|
||||
// resolve computes the dependency tree of a compilation spec and returns solver assertions
|
||||
// in order to be able to compile the spec.
|
||||
func (ht *ImageHashTree) resolve(cs *LuetCompiler, p *compilerspec.LuetCompilationSpec) (solver.PackagesAssertions, error) {
|
||||
dependencies, err := cs.ComputeDepTree(p)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "While computing a solution for "+p.GetPackage().HumanReadableString())
|
||||
}
|
||||
|
||||
// Get hashes from the build specs
|
||||
salts := map[string]string{}
|
||||
for _, assertion := range dependencies { //highly dependent on the order
|
||||
if assertion.Value {
|
||||
spec, err := cs.FromPackage(assertion.Package)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "while computing hash buildspecs")
|
||||
}
|
||||
hash, err := spec.Hash()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed computing hash")
|
||||
}
|
||||
salts[assertion.Package.GetFingerPrint()] = hash
|
||||
}
|
||||
}
|
||||
|
||||
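// Walk the solution again and attach salted hashes to each assertion: BuildHash
// identifies the image the package is built from, PackageHash identifies the resulting
// package image.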
assertions := solver.PackagesAssertions{}
|
||||
for _, assertion := range dependencies { //highly dependent on the order
|
||||
if assertion.Value {
|
||||
nthsolution := dependencies.Cut(assertion.Package)
|
||||
assertion.Hash = solver.PackageHash{
|
||||
BuildHash: nthsolution.SaltedHashFrom(assertion.Package, salts),
|
||||
PackageHash: nthsolution.SaltedAssertionHash(salts),
|
||||
}
|
||||
assertion.Package.SetTreeDir(p.Package.GetTreeDir())
|
||||
assertions = append(assertions, assertion)
|
||||
}
|
||||
}
|
||||
|
||||
return assertions, nil
|
||||
}
|
145
pkg/compiler/imagehashtree_test.go
Normal file
@@ -0,0 +1,145 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package compiler_test

import (
	. "github.com/mudler/luet/pkg/compiler"
	sd "github.com/mudler/luet/pkg/compiler/backend"
	"github.com/mudler/luet/pkg/compiler/types/options"
	pkg "github.com/mudler/luet/pkg/package"
	"github.com/mudler/luet/pkg/tree"
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

var _ = Describe("ImageHashTree", func() {
	generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
	compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
	hashtree := NewHashTree(generalRecipe.GetDatabase())

	Context("Simple package definition", func() {
		BeforeEach(func() {
			generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
			err := generalRecipe.Load("../../tests/fixtures/buildable")
			Expect(err).ToNot(HaveOccurred())
			compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
			hashtree = NewHashTree(generalRecipe.GetDatabase())
		})

		It("Calculates the hash correctly", func() {
			spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
			Expect(err).ToNot(HaveOccurred())

			packageHash, err := hashtree.Query(compiler, spec)
			Expect(err).ToNot(HaveOccurred())
			Expect(packageHash.Target.Hash.BuildHash).To(Equal("4db24406e8db30a3310a1cf8c4d4e19597745e6d41b189dc51a73ac4a50cc9e6"))
			Expect(packageHash.Target.Hash.PackageHash).To(Equal("4c867c9bab6c71d9420df75806e7a2f171dbc08487852ab4e2487bab04066cf2"))
			Expect(packageHash.BuilderImageHash).To(Equal("builder-e6f9c5552a67c463215b0a9e4f7c7fc8"))
		})
	})

	expectedPackageHash := "15811d83d0f8360318c54d91dcae3714f8efb39bf872572294834880f00ee7a8"

	Context("complex package definition", func() {
		BeforeEach(func() {
			generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))

			err := generalRecipe.Load("../../tests/fixtures/upgrade_old_repo_revision")
			Expect(err).ToNot(HaveOccurred())
			compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
			hashtree = NewHashTree(generalRecipe.GetDatabase())
		})
		It("Calculates the hash correctly", func() {
			spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
			Expect(err).ToNot(HaveOccurred())

			packageHash, err := hashtree.Query(compiler, spec)
			Expect(err).ToNot(HaveOccurred())

			Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).To(Equal(expectedPackageHash))
			Expect(packageHash.SourceHash).To(Equal(expectedPackageHash))
			Expect(packageHash.BuilderImageHash).To(Equal("builder-3d739cab442aec15a6da238481df73c5"))

			//Expect(packageHash.Target.Hash.BuildHash).To(Equal("79d7107d13d578b362e6a7bf10ec850efce26316405b8d732ce8f9e004d64281"))
			Expect(packageHash.Target.Hash.PackageHash).To(Equal("99c4ebb4bc4754985fcc28677badf90f525aa231b1db0fe75659f11b86dc20e8"))
			a := &pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.1"}
			hash, err := packageHash.DependencyBuildImage(a)
			Expect(err).ToNot(HaveOccurred())
			Expect(hash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))

			assertionA := packageHash.Dependencies.Search(a.GetFingerPrint())
			Expect(assertionA.Hash.PackageHash).To(Equal(expectedPackageHash))
			b := &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}
			assertionB := packageHash.Dependencies.Search(b.GetFingerPrint())
			Expect(assertionB.Hash.PackageHash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
			hashB, err := packageHash.DependencyBuildImage(b)
			Expect(err).ToNot(HaveOccurred())
			Expect(hashB).To(Equal("2668e418eab6861404834ad617713e39b8e58f68016a1fbfcc9384efdd037376"))
		})
	})

	Context("complex package definition, with small change in build.yaml", func() {
		BeforeEach(func() {
			generalRecipe = tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))

			//Definition of A here is slightly changed in the steps build.yaml file (1 character only)
			err := generalRecipe.Load("../../tests/fixtures/upgrade_old_repo_revision_content_changed")
			Expect(err).ToNot(HaveOccurred())
			compiler = NewLuetCompiler(sd.NewSimpleDockerBackend(), generalRecipe.GetDatabase(), options.Concurrency(2))
			hashtree = NewHashTree(generalRecipe.GetDatabase())
		})
		It("Calculates the hash correctly", func() {
			spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
			Expect(err).ToNot(HaveOccurred())

			packageHash, err := hashtree.Query(compiler, spec)
			Expect(err).ToNot(HaveOccurred())

			Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).ToNot(Equal(expectedPackageHash))
			sourceHash := "1d91b13d0246fa000085a1071c63397d21546300b17f69493f22315a64b717d4"
			Expect(packageHash.Dependencies[len(packageHash.Dependencies)-1].Hash.PackageHash).To(Equal(sourceHash))
			Expect(packageHash.SourceHash).To(Equal(sourceHash))
			Expect(packageHash.SourceHash).ToNot(Equal(expectedPackageHash))

			Expect(packageHash.BuilderImageHash).To(Equal("builder-03ee108a7c56b17ee568ace0800dd16d"))

			//Expect(packageHash.Target.Hash.BuildHash).To(Equal("79d7107d13d578b362e6a7bf10ec850efce26316405b8d732ce8f9e004d64281"))
			Expect(packageHash.Target.Hash.PackageHash).To(Equal("7677da23b2cc866c2d07aa4a58fbf703340f2f78c0efbb1ba9faf8979f250c87"))
			a := &pkg.DefaultPackage{Name: "a", Category: "test", Version: "1.1"}
			hash, err := packageHash.DependencyBuildImage(a)
			Expect(err).ToNot(HaveOccurred())
			Expect(hash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))

			assertionA := packageHash.Dependencies.Search(a.GetFingerPrint())

			Expect(assertionA.Hash.PackageHash).To(Equal("1d91b13d0246fa000085a1071c63397d21546300b17f69493f22315a64b717d4"))
			Expect(assertionA.Hash.PackageHash).ToNot(Equal(expectedPackageHash))

			b := &pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}
			assertionB := packageHash.Dependencies.Search(b.GetFingerPrint())

			Expect(assertionB.Hash.PackageHash).To(Equal("f48f28ab62f1379a4247ec763681ccede68ea1e5c25aae8fb72459c0b2f8742e"))
			hashB, err := packageHash.DependencyBuildImage(b)
			Expect(err).ToNot(HaveOccurred())

			Expect(hashB).To(Equal("2668e418eab6861404834ad617713e39b8e58f68016a1fbfcc9384efdd037376"))
		})
	})

})
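Outside of Ginkgo, the flow the test exercises can be sketched roughly as follows: load a compiler recipe, build a spec for one package, and query the hash tree for its image hashes. This is a minimal sketch under assumptions drawn only from the test above (same constructors, same fixture layout); error handling is abbreviated and the printed fields mirror what the assertions check.

```go
package main

import (
	"fmt"
	"log"

	compiler "github.com/mudler/luet/pkg/compiler"
	sd "github.com/mudler/luet/pkg/compiler/backend"
	"github.com/mudler/luet/pkg/compiler/types/options"
	pkg "github.com/mudler/luet/pkg/package"
	"github.com/mudler/luet/pkg/tree"
)

func main() {
	recipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
	if err := recipe.Load("tests/fixtures/buildable"); err != nil { // fixture path as used by the test
		log.Fatal(err)
	}

	c := compiler.NewLuetCompiler(sd.NewSimpleDockerBackend(), recipe.GetDatabase(), options.Concurrency(2))
	spec, err := c.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
	if err != nil {
		log.Fatal(err)
	}

	// Query the hash tree: it yields per-package build/package hashes plus
	// the builder image tag derived from them.
	ht := compiler.NewHashTree(recipe.GetDatabase())
	res, err := ht.Query(c, spec)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res.BuilderImageHash, res.Target.Hash.BuildHash, res.Target.Hash.PackageHash)
}
```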
@@ -19,6 +19,8 @@ import (
|
||||
"archive/tar"
|
||||
"bufio"
|
||||
"bytes"
|
||||
"crypto/sha1"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
@@ -27,7 +29,6 @@ import (
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
|
||||
system "github.com/docker/docker/pkg/system"
|
||||
zstd "github.com/klauspost/compress/zstd"
|
||||
gzip "github.com/klauspost/pgzip"
|
||||
|
||||
@@ -41,6 +42,7 @@ import (
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
. "github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/solver"
|
||||
@@ -54,12 +56,13 @@ import (
 type PackageArtifact struct {
 	Path string `json:"path"`
 
 	Dependencies []*PackageArtifact `json:"dependencies"`
 	CompileSpec *compilerspec.LuetCompilationSpec `json:"compilationspec"`
 	Checksums Checksums `json:"checksums"`
 	SourceAssertion solver.PackagesAssertions `json:"-"`
 	CompressionType compression.Implementation `json:"compressiontype"`
 	Files []string `json:"files"`
+	PackageCacheImage string `json:"package_cacheimage"`
 }
 
 func (p *PackageArtifact) ShallowCopy() *PackageArtifact {
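The struct tags above drive how artifact metadata is serialized; a tiny, self-contained sketch (using a local mirror of just a few fields, not the real type) shows how the new `package_cacheimage` key would appear on the wire.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// artifactMeta mirrors only a few fields of PackageArtifact for illustration;
// it is not the real type from pkg/compiler/types/artifact.
type artifactMeta struct {
	Path              string   `json:"path"`
	Files             []string `json:"files"`
	PackageCacheImage string   `json:"package_cacheimage"`
}

func main() {
	b, _ := json.Marshal(artifactMeta{
		Path:              "foo-1.0.package.tar.gz",
		Files:             []string{"usr/bin/foo"},
		PackageCacheImage: "quay.io/example/cache:builder-e6f9c555", // hypothetical cache image reference
	})
	fmt.Println(string(b))
	// {"path":"foo-1.0.package.tar.gz","files":["usr/bin/foo"],"package_cacheimage":"quay.io/example/cache:builder-e6f9c555"}
}
```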
@@ -166,7 +169,7 @@ func CreateArtifactForFile(s string, opts ...func(*PackageArtifact)) (*PackageAr
|
||||
}
|
||||
defer os.RemoveAll(archive) // clean up
|
||||
dst := filepath.Join(archive, fileName)
|
||||
if err := helpers.CopyFile(s, dst); err != nil {
|
||||
if err := fileHelper.CopyFile(s, dst); err != nil {
|
||||
return nil, errors.Wrapf(err, "error while copying %s to %s", s, dst)
|
||||
}
|
||||
|
||||
@@ -207,7 +210,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b ImageBuilder, k
|
||||
return builderOpts, errors.Wrap(err, "error met while uncompressing artifact "+a.Path)
|
||||
}
|
||||
|
||||
empty, err := helpers.DirectoryIsEmpty(uncompressedFiles)
|
||||
empty, err := fileHelper.DirectoryIsEmpty(uncompressedFiles)
|
||||
if err != nil {
|
||||
return builderOpts, errors.Wrap(err, "error met while checking if directory is empty "+uncompressedFiles)
|
||||
}
|
||||
@@ -216,7 +219,7 @@ func (a *PackageArtifact) GenerateFinalImage(imageName string, b ImageBuilder, k
|
||||
// We can't generate FROM scratch empty images. Docker will refuse to export them
|
||||
// workaround: Inject a .virtual empty file
|
||||
if empty {
|
||||
helpers.Touch(filepath.Join(uncompressedFiles, ".virtual"))
|
||||
fileHelper.Touch(filepath.Join(uncompressedFiles, ".virtual"))
|
||||
}
|
||||
|
||||
data := a.genDockerfile()
|
||||
@@ -317,7 +320,6 @@ func (a *PackageArtifact) Compress(src string, concurrency int) error {
|
||||
default:
|
||||
return helpers.Tar(src, a.getCompressedName())
|
||||
}
|
||||
return errors.New("Compression type must be supplied")
|
||||
}
|
||||
|
||||
func (a *PackageArtifact) getCompressedName() string {
|
||||
@@ -340,6 +342,28 @@ func (a *PackageArtifact) GetUncompressedName() string {
 	return a.Path
 }
 
+func hashContent(bv []byte) string {
+	hasher := sha1.New()
+	hasher.Write(bv)
+	sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
+	return sha
+}
+
+func hashFileContent(path string) (string, error) {
+	f, err := os.Open(path)
+	if err != nil {
+		return "", err
+	}
+	defer f.Close()
+
+	h := sha1.New()
+	if _, err := io.Copy(h, f); err != nil {
+		return "", err
+	}
+
+	return base64.URLEncoding.EncodeToString(h.Sum(nil)), nil
+}
+
 func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
 	// If the destination path already exists I rename target file name with postfix.
 	var destPath string
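Both helpers produce the same URL-safe base64 encoding of a SHA-1 digest, one over an in-memory byte slice and one streamed from disk. A small sketch (standalone copies of the two functions, assuming nothing beyond the standard library) demonstrates that they agree for identical content, which is what lets the tar modifier compare an archive entry against the file already on disk.

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
)

// Standalone copies of the two helpers above, for illustration only.
func hashContent(bv []byte) string {
	h := sha1.New()
	h.Write(bv)
	return base64.URLEncoding.EncodeToString(h.Sum(nil))
}

func hashFileContent(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha1.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(h.Sum(nil)), nil
}

func main() {
	tmp, err := ioutil.TempFile("", "hash")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(tmp.Name())

	data := []byte("some config file contents\n")
	tmp.Write(data)
	tmp.Close()

	fromFile, _ := hashFileContent(tmp.Name())
	// The streamed and in-memory digests match for identical content.
	fmt.Println(hashContent(data) == fromFile) // true
}
```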
@@ -351,6 +375,7 @@ func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Rea
 			return nil, nil, err
 		}
 	}
+	tarHash := hashContent(buffer.Bytes())
 
 	// If file is not present on archive but is defined on mods
 	// I receive the callback. Prevent nil exception.
@@ -363,13 +388,26 @@ func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Rea
 		return header, buffer.Bytes(), nil
 	}
 
+	existingHash := ""
+	f, err := os.Lstat(destPath)
+	if err == nil {
+		Debug("File exists already, computing hash for", destPath)
+		hash, herr := hashFileContent(destPath)
+		if herr == nil {
+			existingHash = hash
+		}
+	}
+
+	Debug("Existing file hash: ", existingHash, "Tar file hashsum: ", tarHash)
+	// We want to protect file only if the hash of the files are differing OR the file size are
+	differs := (existingHash != "" && existingHash != tarHash) || (err != nil && f != nil && header.Size != f.Size())
 	// Check if exists
-	if helpers.Exists(destPath) {
+	if fileHelper.Exists(destPath) && differs {
 		for i := 1; i < 1000; i++ {
 			name := filepath.Join(filepath.Join(filepath.Dir(path),
 				fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path))))
 
-			if helpers.Exists(name) {
+			if fileHelper.Exists(name) {
 				continue
 			}
 			Info(fmt.Sprintf("Found protected file %s. Creating %s.", destPath,
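When an extracted, protected file would overwrite an on-disk file whose hash or size differs, the modifier writes the payload next to it under a `._cfgNNNN_` sibling name instead of clobbering it. A rough sketch of that name-picking loop, assuming only the standard library:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// nextProtectedName returns the first free "._cfgNNNN_<base>" sibling for path,
// mirroring the loop in tarModifierWrapperFunc above (illustrative only).
func nextProtectedName(path string) (string, bool) {
	for i := 1; i < 1000; i++ {
		name := filepath.Join(filepath.Dir(path),
			fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path)))
		if _, err := os.Lstat(name); os.IsNotExist(err) {
			return name, true
		}
	}
	return "", false
}

func main() {
	// For /etc/foo.conf this would typically yield /etc/._cfg0001_foo.conf,
	// unless a previous protected copy already occupies that slot.
	if name, ok := nextProtectedName("/etc/foo.conf"); ok {
		fmt.Println("would write protected copy to", name)
	}
}
```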
@@ -584,47 +622,16 @@ type CopyJob struct {
|
||||
Artifact string
|
||||
}
|
||||
|
||||
func copyXattr(srcPath, dstPath, attr string) error {
|
||||
data, err := system.Lgetxattr(srcPath, attr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if data != nil {
|
||||
if err := system.Lsetxattr(dstPath, attr, data, 0); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func doCopyXattrs(srcPath, dstPath string) error {
|
||||
if err := copyXattr(srcPath, dstPath, "security.capability"); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return copyXattr(srcPath, dstPath, "trusted.overlay.opaque")
|
||||
}
|
||||
|
||||
func worker(i int, wg *sync.WaitGroup, s <-chan CopyJob) {
|
||||
defer wg.Done()
|
||||
|
||||
for job := range s {
|
||||
//Info("#"+strconv.Itoa(i), "copying", job.Src, "to", job.Dst)
|
||||
// if dir, err := helpers.IsDirectory(job.Src); err == nil && dir {
|
||||
// err = helpers.CopyDir(job.Src, job.Dst)
|
||||
// if err != nil {
|
||||
// Warning("Error copying dir", job, err)
|
||||
// }
|
||||
// continue
|
||||
// }
|
||||
|
||||
_, err := os.Lstat(job.Dst)
|
||||
if err != nil {
|
||||
Debug("Copying ", job.Src)
|
||||
if err := helpers.CopyFile(job.Src, job.Dst); err != nil {
|
||||
if err := fileHelper.DeepCopyFile(job.Src, job.Dst); err != nil {
|
||||
Warning("Error copying", job, err)
|
||||
}
|
||||
doCopyXattrs(job.Src, job.Dst)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
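The `worker`/`CopyJob` pair above implements a small copy pool: jobs are fanned out over a channel and each worker deep-copies files that are not yet present at the destination. A minimal sketch of that pattern, with a plain content copy standing in for `fileHelper.DeepCopyFile` and hypothetical paths:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"sync"
)

// copyJob mirrors the CopyJob shape used above: a source and a destination path.
type copyJob struct {
	Src, Dst string
}

func worker(id int, wg *sync.WaitGroup, jobs <-chan copyJob) {
	defer wg.Done()
	for job := range jobs {
		// Only copy when the destination does not exist yet, as the worker above does.
		if _, err := os.Lstat(job.Dst); err == nil {
			continue
		}
		if err := copyFile(job.Src, job.Dst); err != nil {
			fmt.Println("warning: error copying", job, err)
		}
	}
}

// copyFile is a simplified stand-in for fileHelper.DeepCopyFile.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	jobs := make(chan copyJob)
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // concurrency level, analogous to the compiler's concurrency option
		wg.Add(1)
		go worker(i, &wg, jobs)
	}
	jobs <- copyJob{Src: "/tmp/rootfs/etc/foo.conf", Dst: "/tmp/dest/etc/foo.conf"} // hypothetical paths
	close(jobs)
	wg.Wait()
}
```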
pkg/compiler/types/artifact/artifact_suite_test.go (new file, 32 lines)
@@ -0,0 +1,32 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package artifact_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestArtifact(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Artifact Suite")
|
||||
}
|
@@ -29,6 +29,7 @@ import (
|
||||
|
||||
. "github.com/mudler/luet/pkg/compiler"
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/tree"
|
||||
. "github.com/onsi/ginkgo"
|
||||
@@ -41,7 +42,7 @@ var _ = Describe("Artifact", func() {
|
||||
|
||||
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/buildtree")
|
||||
err := generalRecipe.Load("../../../../tests/fixtures/buildtree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
|
||||
@@ -71,7 +72,7 @@ var _ = Describe("Artifact", func() {
|
||||
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
@@ -89,12 +90,12 @@ ENV PACKAGE_CATEGORY=app-admin`))
|
||||
}
|
||||
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
|
||||
Expect(b.ExportImage(opts)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(tmpdir2, "output1.tar"))).To(BeTrue())
|
||||
Expect(b.BuildImage(opts)).ToNot(HaveOccurred())
|
||||
|
||||
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "LuetDockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "LuetDockerfile"))
|
||||
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "LuetDockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM luet/base
|
||||
@@ -113,7 +114,7 @@ RUN echo bar > /test2`))
|
||||
}
|
||||
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
|
||||
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
|
||||
diffs, err := compiler.GenerateChanges(b, opts, opts2)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
@@ -140,15 +141,15 @@ RUN echo bar > /test2`))
|
||||
|
||||
a, err := ExtractArtifactFromDelta(rootfs, filepath.Join(tmpdir, "package.tar"), diffs, 2, false, []string{}, []string{}, compression.None)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(filepath.Join(tmpdir, "package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(tmpdir, "package.tar"))).To(BeTrue())
|
||||
err = helpers.Untar(a.Path, unpacked, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(filepath.Join(unpacked, "test"))).To(BeTrue())
|
||||
Expect(helpers.Exists(filepath.Join(unpacked, "test2"))).To(BeTrue())
|
||||
content1, err := helpers.Read(filepath.Join(unpacked, "test"))
|
||||
Expect(fileHelper.Exists(filepath.Join(unpacked, "test"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(unpacked, "test2"))).To(BeTrue())
|
||||
content1, err := fileHelper.Read(filepath.Join(unpacked, "test"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("foo\n"))
|
||||
content2, err := helpers.Read(filepath.Join(unpacked, "test2"))
|
||||
content2, err := fileHelper.Read(filepath.Join(unpacked, "test2"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content2).To(Equal("bar\n"))
|
||||
|
||||
@@ -156,7 +157,7 @@ RUN echo bar > /test2`))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
err = a.Verify()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.CopyFile(filepath.Join(tmpdir, "output2.tar"), filepath.Join(tmpdir, "package.tar"))).ToNot(HaveOccurred())
|
||||
Expect(fileHelper.CopyFile(filepath.Join(tmpdir, "output2.tar"), filepath.Join(tmpdir, "package.tar"))).ToNot(HaveOccurred())
|
||||
|
||||
err = a.Verify()
|
||||
Expect(err).To(HaveOccurred())
|
||||
@@ -244,7 +245,7 @@ RUN echo bar > /test2`))
|
||||
err = b.ExtractRootfs(backend.Options{ImageName: resultingImage, Destination: result}, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.DirectoryIsEmpty(result)).To(BeFalse())
|
||||
Expect(fileHelper.DirectoryIsEmpty(result)).To(BeFalse())
|
||||
content, err := ioutil.ReadFile(filepath.Join(result, ".virtual"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
|
@@ -40,13 +40,13 @@ var _ = Describe("Checksum", func() {
|
||||
Expect(len(definitionsum)).To(Equal(0))
|
||||
Expect(len(definitionsum2)).To(Equal(0))
|
||||
|
||||
err = buildsum.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/build.yaml"))
|
||||
err = buildsum.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/build.yaml"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
err = definitionsum.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/definition.yaml"))
|
||||
err = definitionsum.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/definition.yaml"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
err = definitionsum2.Generate(NewPackageArtifact("../../tests/fixtures/layers/alpine/definition.yaml"))
|
||||
err = definitionsum2.Generate(NewPackageArtifact("../../../../tests/fixtures/layers/alpine/definition.yaml"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(buildsum)).To(Equal(1))
|
||||
|
@@ -38,6 +38,7 @@ type Compiler struct {
 	BuildValues []map[string]interface{}
 
 	PackageTargetOnly bool
+	Rebuild           bool
 
 	BackendArgs []string
 
@@ -131,6 +132,13 @@ func KeepImg(b bool) func(cfg *Compiler) error {
 	}
 }
 
+func Rebuild(b bool) func(cfg *Compiler) error {
+	return func(cfg *Compiler) error {
+		cfg.Rebuild = b
+		return nil
+	}
+}
+
 func PushImages(b bool) func(cfg *Compiler) error {
 	return func(cfg *Compiler) error {
 		cfg.Push = b
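`Rebuild` follows the same functional-option shape as the other compiler options visible here (`KeepImg`, `PushImages`, `Concurrency`): each returns a closure that mutates the `Compiler` config. A generic sketch of how such options compose, using a toy config rather than the real `options.Compiler`:

```go
package main

import "fmt"

// toyCompiler stands in for the real options.Compiler struct.
type toyCompiler struct {
	Rebuild     bool
	Concurrency int
}

type option func(*toyCompiler) error

func Rebuild(b bool) option {
	return func(c *toyCompiler) error {
		c.Rebuild = b
		return nil
	}
}

func Concurrency(i int) option {
	return func(c *toyCompiler) error {
		c.Concurrency = i
		return nil
	}
}

// newCompiler applies each option in order over a default config.
func newCompiler(opts ...option) (*toyCompiler, error) {
	c := &toyCompiler{Concurrency: 1}
	for _, o := range opts {
		if err := o(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, _ := newCompiler(Rebuild(true), Concurrency(2))
	fmt.Printf("%+v\n", c) // &{Rebuild:true Concurrency:2}
}
```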
@@ -16,9 +16,11 @@
|
||||
package compilerspec
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/mitchellh/hashstructure/v2"
|
||||
options "github.com/mudler/luet/pkg/compiler/types/options"
|
||||
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
@@ -85,6 +87,13 @@ func (specs *LuetCompilationspecs) Unique() *LuetCompilationspecs {
 	return &newSpecs
 }
 
+type CopyField struct {
+	Package *pkg.DefaultPackage `json:"package"`
+	Image string `json:"image"`
+	Source string `json:"source"`
+	Destination string `json:"destination"`
+}
+
 type LuetCompilationSpec struct {
 	Steps []string `json:"steps"` // Are run inside a container and the result layer diff is saved
 	Env []string `json:"env"`
@@ -102,7 +111,44 @@ type LuetCompilationSpec struct {
 	Includes []string `json:"includes"`
 	Excludes []string `json:"excludes"`
 
-	BuildOptions options.Compiler `json:"build_options"`
+	BuildOptions *options.Compiler `json:"build_options"`
+
+	Copy []CopyField `json:"copy"`
+
+	Join pkg.DefaultPackages `json:"join"`
 }
 
+// Signature is a portion of the spec that yields a signature for the hash
+type Signature struct {
+	Image string
+	Steps []string
+	PackageDir string
+	Prelude []string
+	Seed string
+	Env []string
+	Retrieve []string
+	Unpack bool
+	Includes []string
+	Excludes []string
+	Copy []CopyField
+	Join pkg.DefaultPackages
+}
+
+func (cs *LuetCompilationSpec) signature() Signature {
+	return Signature{
+		Image: cs.Image,
+		Steps: cs.Steps,
+		PackageDir: cs.PackageDir,
+		Prelude: cs.Prelude,
+		Seed: cs.Seed,
+		Env: cs.Env,
+		Retrieve: cs.Retrieve,
+		Unpack: cs.Unpack,
+		Includes: cs.Includes,
+		Excludes: cs.Excludes,
+		Copy: cs.Copy,
+		Join: cs.Join,
+	}
+}
+
 func NewLuetCompilationSpec(b []byte, p pkg.Package) (*LuetCompilationSpec, error) {
@@ -119,7 +165,7 @@ func (cs *LuetCompilationSpec) GetSourceAssertion() solver.PackagesAssertions {
 }
 
 func (cs *LuetCompilationSpec) SetBuildOptions(b options.Compiler) {
-	cs.BuildOptions = b
+	cs.BuildOptions = &b
 }
 
 func (cs *LuetCompilationSpec) SetSourceAssertion(as solver.PackagesAssertions) {
@@ -213,7 +259,14 @@ func (cs *LuetCompilationSpec) UnpackedPackage() bool {
 // a compilation spec has an image source when it depends on other packages or have a source image
 // explictly supplied
 func (cs *LuetCompilationSpec) HasImageSource() bool {
-	return (cs.Package != nil && len(cs.GetPackage().GetRequires()) != 0) || cs.GetImage() != ""
+	return (cs.Package != nil && len(cs.GetPackage().GetRequires()) != 0) || cs.GetImage() != "" || len(cs.Join) != 0
 }
 
+func (cs *LuetCompilationSpec) Hash() (string, error) {
+	// build a signature, we want to be part of the hash only the fields that are relevant for build purposes
+	signature := cs.signature()
+	h, err := hashstructure.Hash(signature, hashstructure.FormatV2, nil)
+	return fmt.Sprint(h), err
+}
+
 func (cs *LuetCompilationSpec) CopyRetrieves(dest string) error {
@@ -256,6 +309,13 @@ ADD ` + s + ` /luetbuild/`
 		}
 	}
 
+	for _, c := range cs.Copy {
+		if c.Image != "" {
+			copyLine := fmt.Sprintf("\nCOPY --from=%s %s %s\n", c.Image, c.Source, c.Destination)
+			spec = spec + copyLine
+		}
+	}
+
 	for _, s := range cs.Env {
 		spec = spec + `
ENV ` + s
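The build hash is computed over the `Signature` sub-struct with hashstructure v2, so it only moves when a build-relevant field changes. A small sketch showing that two structurally equal signatures hash identically while a one-character change does not; it assumes only the github.com/mitchellh/hashstructure/v2 module already used above and a toy struct in place of the real `Signature`.

```go
package main

import (
	"fmt"
	"log"

	"github.com/mitchellh/hashstructure/v2"
)

// toySignature mimics the role of Signature above: only build-relevant
// fields participate in the hash.
type toySignature struct {
	Image string
	Steps []string
	Env   []string
}

func main() {
	a := toySignature{Image: "alpine", Steps: []string{"make install"}, Env: []string{"test=1"}}
	b := toySignature{Image: "alpine", Steps: []string{"make install"}, Env: []string{"test=1"}}
	c := toySignature{Image: "alpine", Steps: []string{"make install DESTDIR=/x"}, Env: []string{"test=1"}}

	ha, err := hashstructure.Hash(a, hashstructure.FormatV2, nil)
	if err != nil {
		log.Fatal(err)
	}
	hb, _ := hashstructure.Hash(b, hashstructure.FormatV2, nil)
	hc, _ := hashstructure.Hash(c, hashstructure.FormatV2, nil)

	fmt.Println(ha == hb) // true: equal signatures give a stable hash
	fmt.Println(ha == hc) // false: a one-character change in a step moves the hash
}
```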
|
pkg/compiler/types/spec/spec_suite_test.go (new file, 32 lines)
@@ -0,0 +1,32 @@
|
||||
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package compilerspec_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
. "github.com/mudler/luet/cmd"
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
func TestSpec(t *testing.T) {
|
||||
RegisterFailHandler(Fail)
|
||||
LoadConfig(config.LuetCfg)
|
||||
RunSpecs(t, "Spec Suite")
|
||||
}
|
@@ -20,10 +20,11 @@ import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
options "github.com/mudler/luet/pkg/compiler/types/options"
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
. "github.com/mudler/luet/pkg/compiler"
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/tree"
|
||||
. "github.com/onsi/ginkgo"
|
||||
@@ -74,11 +75,67 @@ var _ = Describe("Spec", func() {
|
||||
})
|
||||
})
|
||||
|
||||
Context("Image hashing", func() {
|
||||
It("is stable", func() {
|
||||
spec1 := &compilerspec.LuetCompilationSpec{
|
||||
Image: "foo",
|
||||
BuildOptions: &options.Compiler{BuildValues: []map[string]interface{}{{"foo": "bar", "baz": true}}},
|
||||
|
||||
Package: &pkg.DefaultPackage{
|
||||
Name: "foo",
|
||||
Category: "Bar",
|
||||
Labels: map[string]string{
|
||||
"foo": "bar",
|
||||
"baz": "foo",
|
||||
},
|
||||
},
|
||||
}
|
||||
spec2 := &compilerspec.LuetCompilationSpec{
|
||||
Image: "foo",
|
||||
BuildOptions: &options.Compiler{BuildValues: []map[string]interface{}{{"foo": "bar", "baz": true}}},
|
||||
Package: &pkg.DefaultPackage{
|
||||
Name: "foo",
|
||||
Category: "Bar",
|
||||
Labels: map[string]string{
|
||||
"foo": "bar",
|
||||
"baz": "foo",
|
||||
},
|
||||
},
|
||||
}
|
||||
spec3 := &compilerspec.LuetCompilationSpec{
|
||||
Image: "foo",
|
||||
Steps: []string{"foo"},
|
||||
Package: &pkg.DefaultPackage{
|
||||
Name: "foo",
|
||||
Category: "Bar",
|
||||
Labels: map[string]string{
|
||||
"foo": "bar",
|
||||
"baz": "foo",
|
||||
},
|
||||
},
|
||||
}
|
||||
hash, err := spec1.Hash()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
hash2, err := spec2.Hash()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
hash3, err := spec3.Hash()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(hash).To(Equal(hash2))
|
||||
hashagain, err := spec2.Hash()
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(hash).ToNot(Equal(hash3))
|
||||
Expect(hash).To(Equal(hashagain))
|
||||
})
|
||||
})
|
||||
|
||||
Context("Simple package build definition", func() {
|
||||
It("Loads it correctly", func() {
|
||||
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/buildtree")
|
||||
err := generalRecipe.Load("../../../../tests/fixtures/buildtree")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
|
||||
@@ -97,7 +154,7 @@ var _ = Describe("Spec", func() {
|
||||
lspec.Env = []string{"test=1"}
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
@@ -110,7 +167,7 @@ ENV test=1`))
|
||||
|
||||
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM luet/base
|
||||
@@ -130,7 +187,7 @@ RUN echo bar > /test2`))
|
||||
It("Renders retrieve and env fields", func() {
|
||||
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
|
||||
|
||||
err := generalRecipe.Load("../../tests/fixtures/retrieve")
|
||||
err := generalRecipe.Load("../../../../tests/fixtures/retrieve")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
|
||||
@@ -148,7 +205,7 @@ RUN echo bar > /test2`))
|
||||
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err := helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err := fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
@@ -165,7 +222,7 @@ ENV test=1`))
|
||||
|
||||
err = lspec.WriteBuildImageDefinition(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(dockerfile).To(Equal(`
|
||||
FROM alpine
|
||||
@@ -180,7 +237,7 @@ ENV test=1`))
|
||||
|
||||
err = lspec.WriteStepImageDefinition(lspec.Image, filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
dockerfile, err = helpers.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
dockerfile, err = fileHelper.Read(filepath.Join(tmpdir, "Dockerfile"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(dockerfile).To(Equal(`
|
||||
|
@@ -27,7 +27,6 @@ import (
 	"strings"
 	"time"
 
-	"github.com/mudler/luet/pkg/helpers"
 	pkg "github.com/mudler/luet/pkg/package"
 	solver "github.com/mudler/luet/pkg/solver"
 
@@ -406,8 +405,13 @@ system:
 }
 
 func (c *LuetSystemConfig) InitTmpDir() error {
-	if !helpers.Exists(c.TmpDirBase) {
-		return os.MkdirAll(c.TmpDirBase, os.ModePerm)
+	if _, err := os.Stat(c.TmpDirBase); err != nil {
+		if os.IsNotExist(err) {
+			err = os.MkdirAll(c.TmpDirBase, os.ModePerm)
+			if err != nil {
+				return err
+			}
+		}
 	}
 	return nil
 }
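The rewritten `InitTmpDir` only creates the directory when the stat error is specifically "not exist". The same check in isolation, as a slightly stricter sketch that also surfaces other stat errors instead of ignoring them (an assumption on my part, not what the function above does):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureDir creates dir only when it is genuinely missing, mirroring the
// os.Stat/os.IsNotExist pattern used by InitTmpDir above.
func ensureDir(dir string) error {
	if _, err := os.Stat(dir); err != nil {
		if os.IsNotExist(err) {
			return os.MkdirAll(dir, os.ModePerm)
		}
		// Some other error (e.g. permission denied): report it rather than masking it.
		return err
	}
	return nil
}

func main() {
	if err := ensureDir(filepath.Join(os.TempDir(), "tmpluet")); err != nil {
		fmt.Println("cannot prepare tmp dir:", err)
	}
}
```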
|
@@ -22,7 +22,7 @@ import (
|
||||
"strings"
|
||||
|
||||
config "github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
@@ -38,7 +38,7 @@ var _ = Describe("Config", func() {
|
||||
tmpDir, err := config.LuetCfg.GetSystem().TempDir("test1")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(strings.HasPrefix(tmpDir, filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
|
||||
Expect(helpers.Exists(tmpDir)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(tmpDir)).To(BeTrue())
|
||||
|
||||
defer os.RemoveAll(tmpDir)
|
||||
})
|
||||
@@ -49,7 +49,7 @@ var _ = Describe("Config", func() {
|
||||
tmpFile, err := config.LuetCfg.GetSystem().TempFile("testfile1")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(strings.HasPrefix(tmpFile.Name(), filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
|
||||
Expect(helpers.Exists(tmpFile.Name())).To(BeTrue())
|
||||
Expect(fileHelper.Exists(tmpFile.Name())).To(BeTrue())
|
||||
|
||||
defer os.Remove(tmpFile.Name())
|
||||
})
|
||||
|
@@ -90,8 +90,7 @@ func UntarProtect(src, dst string, sameOwner bool, protectedFiles []string, modi
|
||||
}
|
||||
|
||||
if sameOwner {
|
||||
// PRE: i have root privileged.
|
||||
|
||||
// we do have root permissions, so we can extract keeping the same permissions.
|
||||
replacerArchive := archive.ReplaceFileTarWrapper(in, mods)
|
||||
|
||||
opts := &archive.TarOptions{
|
||||
|
@@ -25,6 +25,8 @@ import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
. "github.com/mudler/luet/pkg/helpers"
|
||||
. "github.com/onsi/ginkgo"
|
||||
@@ -128,7 +130,7 @@ var _ = Describe("Helpers Archive", func() {
|
||||
err = archive.Untar(replacerArchive, targetDir, opts)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
|
||||
Expect(fileHelper.Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
|
||||
})
|
||||
})
|
||||
})
|
||||
|
@@ -13,19 +13,27 @@
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package helpers
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"os"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/containerd/containerd/archive"
|
||||
"github.com/docker/cli/cli/trust"
|
||||
"github.com/docker/distribution/reference"
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/registry"
|
||||
"github.com/mudler/luet/pkg/helpers/imgworker"
|
||||
"github.com/google/go-containerregistry/pkg/authn"
|
||||
"github.com/google/go-containerregistry/pkg/name"
|
||||
"github.com/google/go-containerregistry/pkg/v1/mutate"
|
||||
"github.com/google/go-containerregistry/pkg/v1/remote"
|
||||
"github.com/mudler/luet/pkg/bus"
|
||||
"github.com/opencontainers/go-digest"
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/theupdateframework/notary/tuf/data"
|
||||
)
|
||||
@@ -93,10 +101,56 @@ func trustedResolveDigest(ctx context.Context, ref reference.NamedTagged, authCo
|
||||
return reference.WithDigest(ref, dgst)
|
||||
}
|
||||
|
||||
// DownloadAndExtractDockerImage is a re-adaption
|
||||
// from genuinetools/img https://github.com/genuinetools/img/blob/54d0ca981c1260546d43961a538550eef55c87cf/pull.go
|
||||
func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthConfig, verify bool) (*imgworker.ListedImage, error) {
|
||||
type staticAuth struct {
|
||||
auth *types.AuthConfig
|
||||
}
|
||||
|
||||
func (s staticAuth) Authorization() (*authn.AuthConfig, error) {
|
||||
if s.auth == nil {
|
||||
return nil, nil
|
||||
}
|
||||
return &authn.AuthConfig{
|
||||
Username: s.auth.Username,
|
||||
Password: s.auth.Password,
|
||||
Auth: s.auth.Auth,
|
||||
IdentityToken: s.auth.IdentityToken,
|
||||
RegistryToken: s.auth.RegistryToken,
|
||||
}, nil
|
||||
}
|
||||
|
||||
type Image struct {
|
||||
// Name of the image.
|
||||
//
|
||||
// To be pulled, it must be a reference compatible with resolvers.
|
||||
//
|
||||
// This field is required.
|
||||
Name string
|
||||
|
||||
// Labels provide runtime decoration for the image record.
|
||||
//
|
||||
// There is no default behavior for how these labels are propagated. They
|
||||
// only decorate the static metadata object.
|
||||
//
|
||||
// This field is optional.
|
||||
Labels map[string]string
|
||||
|
||||
// Target describes the root content for this image. Typically, this is
|
||||
// a manifest, index or manifest list.
|
||||
Target specs.Descriptor
|
||||
|
||||
CreatedAt, UpdatedAt time.Time
|
||||
|
||||
ContentSize int64
|
||||
}
|
||||
|
||||
// UnpackEventData is the data structure to pass for the bus events
|
||||
type UnpackEventData struct {
|
||||
Image string
|
||||
Dest string
|
||||
}
|
||||
|
||||
// DownloadAndExtractDockerImage is a re-adaption
|
||||
func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthConfig, verify bool) (*Image, error) {
|
||||
if verify {
|
||||
img, err := verifyImage(image, auth)
|
||||
if err != nil {
|
||||
@@ -105,20 +159,63 @@ func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthCon
 		image = img
 	}
 
-	defer os.RemoveAll(temp)
-	c, err := imgworker.New(temp, auth)
+	ref, err := name.ParseReference(image)
 	if err != nil {
-		return nil, errors.Wrapf(err, "failed creating client")
-	}
-	defer c.Close()
-
-	listedImage, err := c.Pull(image)
-	if err != nil {
-		return nil, errors.Wrapf(err, "failed listing images")
+		return nil, err
 	}
 
+	img, err := remote.Image(ref, remote.WithAuth(staticAuth{auth}))
+	if err != nil {
+		return nil, err
+	}
+
+	m, err := img.Manifest()
+	if err != nil {
+		return nil, err
+	}
+
+	mt, err := img.MediaType()
+	if err != nil {
+		return nil, err
+	}
+
+	d, err := img.Digest()
+	if err != nil {
+		return nil, err
+	}
+
+	reader := mutate.Extract(img)
+	defer reader.Close()
+
+	os.RemoveAll(temp)
 	os.RemoveAll(dest)
-	err = c.Unpack(image, dest)
-	return listedImage, err
+	if err := os.MkdirAll(dest, 0700); err != nil {
+		return nil, err
+	}
+
+	bus.Manager.Publish(bus.EventImagePreUnPack, UnpackEventData{Image: image, Dest: dest})
+
+	c, err := archive.Apply(context.TODO(), dest, reader)
+	if err != nil {
+		return nil, err
+	}
+
+	bus.Manager.Publish(bus.EventImagePostUnPack, UnpackEventData{Image: image, Dest: dest})
+
+	return &Image{
+		Name:   image,
+		Labels: m.Annotations,
+		Target: specs.Descriptor{
+			MediaType: string(mt),
+			Digest:    digest.Digest(d.String()),
+			Size:      c,
+		},
+		ContentSize: c,
+	}, nil
 }
 
 func StripInvalidStringsFromImage(s string) string {
 	return strings.ReplaceAll(s, "+", "-")
 }
|
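The new implementation pulls and flattens the image with go-containerregistry instead of the embedded buildkit worker: parse the reference, fetch the image, extract the squashed filesystem stream, and apply it with containerd's archive package. A trimmed-down sketch of just that pull-and-unpack core, with no verification, bus events or digest bookkeeping, and using the default keychain in place of the explicit `staticAuth` wrapper above:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/containerd/containerd/archive"
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// pullAndUnpack fetches image and extracts its flattened rootfs into dest.
// It mirrors the core of DownloadAndExtractDockerImage above, minus
// verification, bus events and metadata collection.
func pullAndUnpack(image, dest string) error {
	ref, err := name.ParseReference(image)
	if err != nil {
		return err
	}
	img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		return err
	}
	reader := mutate.Extract(img) // single squashed-layer tar stream
	defer reader.Close()

	if err := os.MkdirAll(dest, 0700); err != nil {
		return err
	}
	_, err = archive.Apply(context.TODO(), dest, reader)
	return err
}

func main() {
	if err := pullAndUnpack("alpine:latest", "/tmp/alpine-rootfs"); err != nil {
		log.Fatal(err)
	}
}
```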
pkg/helpers/docker/docker_test.go (new file, 30 lines)
@@ -0,0 +1,30 @@
|
||||
// Copyright © 2021 Ettore Di Giacinto <mudler@sabayon.org>
|
||||
//
|
||||
// This program is free software; you can redistribute it and/or modify
|
||||
// it under the terms of the GNU General Public License as published by
|
||||
// the Free Software Foundation; either version 2 of the License, or
|
||||
// (at your option) any later version.
|
||||
//
|
||||
// This program is distributed in the hope that it will be useful,
|
||||
// but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
// GNU General Public License for more details.
|
||||
//
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package docker_test
|
||||
|
||||
import (
|
||||
"github.com/mudler/luet/pkg/helpers/docker"
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
|
||||
var _ = Describe("StripInvalidStringsFromImage", func() {
|
||||
Context("Image names", func() {
|
||||
It("strips invalid chars", func() {
|
||||
Expect(docker.StripInvalidStringsFromImage("foo+bar")).To(Equal("foo-bar"))
|
||||
})
|
||||
})
|
||||
})
|
@@ -13,7 +13,7 @@
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package helpers
|
||||
package file
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
@@ -27,6 +27,7 @@ import (
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/docker/docker/pkg/system"
|
||||
"github.com/google/renameio"
|
||||
copy "github.com/otiai10/copy"
|
||||
"github.com/pkg/errors"
|
||||
@@ -167,9 +168,27 @@ func Read(file string) (string, error) {
|
||||
return string(dat), nil
|
||||
}
|
||||
|
||||
func EnsureDirPerm(src, dst string) {
|
||||
if info, err := os.Lstat(filepath.Dir(src)); err == nil {
|
||||
if _, err := os.Lstat(filepath.Dir(dst)); os.IsNotExist(err) {
|
||||
err := os.MkdirAll(filepath.Dir(dst), info.Mode().Perm())
|
||||
if err != nil {
|
||||
fmt.Println("warning: failed creating", filepath.Dir(dst), err.Error())
|
||||
}
|
||||
if stat, ok := info.Sys().(*syscall.Stat_t); ok {
|
||||
if err := os.Lchown(filepath.Dir(dst), int(stat.Uid), int(stat.Gid)); err != nil {
|
||||
fmt.Println("warning: failed chowning", filepath.Dir(dst), err.Error())
|
||||
}
|
||||
}
|
||||
}
|
||||
} else {
|
||||
EnsureDir(dst)
|
||||
}
|
||||
}
|
||||
|
||||
func EnsureDir(fileName string) error {
|
||||
dirName := filepath.Dir(fileName)
|
||||
if _, serr := os.Stat(dirName); serr != nil {
|
||||
if _, serr := os.Stat(dirName); os.IsNotExist(serr) {
|
||||
merr := os.MkdirAll(dirName, os.ModePerm) // FIXME: It should preserve permissions from src to dst instead
|
||||
if merr != nil {
|
||||
return merr
|
||||
@@ -178,12 +197,39 @@ func EnsureDir(fileName string) error {
 	return nil
 }
 
+// CopyFile copies the contents of the file named src to the file named
+func CopyFile(src, dst string) (err error) {
+	return copy.Copy(src, dst, copy.Options{
+		Sync: true,
+		OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
+}
+
+func copyXattr(srcPath, dstPath, attr string) error {
+	data, err := system.Lgetxattr(srcPath, attr)
+	if err != nil {
+		return err
+	}
+	if data != nil {
+		if err := system.Lsetxattr(dstPath, attr, data, 0); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func doCopyXattrs(srcPath, dstPath string) error {
+	if err := copyXattr(srcPath, dstPath, "security.capability"); err != nil {
+		return err
+	}
+
+	return copyXattr(srcPath, dstPath, "trusted.overlay.opaque")
+}
+
+// DeepCopyFile copies the contents of the file named src to the file named
 // by dst. The file will be created if it does not already exist. If the
 // destination file exists, all it's contents will be replaced by the contents
 // of the source file. The file mode will be copied from the source and
 // the copied data is synced/flushed to stable storage.
-func CopyFile(src, dst string) (err error) {
+func DeepCopyFile(src, dst string) (err error) {
 	// Workaround for https://github.com/otiai10/copy/issues/47
 	fi, err := os.Lstat(src)
 	if err != nil {
@@ -193,7 +239,7 @@ func CopyFile(src, dst string) (err error) {
 	fm := fi.Mode()
 	switch {
 	case fm&os.ModeNamedPipe != 0:
-		EnsureDir(dst)
+		EnsureDirPerm(src, dst)
 		if err := syscall.Mkfifo(dst, uint32(fi.Mode())); err != nil {
 			return errors.Wrap(err, "failed creating pipe")
 		}
@@ -205,6 +251,9 @@ func CopyFile(src, dst string) (err error) {
 		return nil
 	}
 
+	//filepath.Dir(src)
+	EnsureDirPerm(src, dst)
+
 	err = copy.Copy(src, dst, copy.Options{
 		Sync: true,
 		OnSymlink: func(string) copy.SymlinkAction { return copy.Shallow }})
@@ -216,7 +265,8 @@ func CopyFile(src, dst string) (err error) {
 			fmt.Println("warning: failed chowning", dst, err.Error())
 		}
 	}
-	return err
+
+	return doCopyXattrs(src, dst)
 }
 
 func IsDirectory(path string) (bool, error) {
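After this change the package exposes two copies: `CopyFile` is a plain content copy (symlinks copied shallowly), while `DeepCopyFile` also recreates parent-directory permissions and ownership, FIFOs, and the `security.capability`/`trusted.overlay.opaque` xattrs. Usage is simply:

```go
package main

import (
	"log"

	fileHelper "github.com/mudler/luet/pkg/helpers/file"
)

func main() {
	// Cheap copy: contents only, good enough for staging a single file.
	if err := fileHelper.CopyFile("/etc/os-release", "/tmp/os-release"); err != nil {
		log.Fatal(err)
	}

	// Full-fidelity copy: directory perms/ownership and xattrs travel along,
	// which is what the artifact copy workers rely on when rebuilding a rootfs.
	if err := fileHelper.DeepCopyFile("/etc/os-release", "/tmp/rootfs/etc/os-release"); err != nil {
		log.Fatal(err)
	}
}
```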
@@ -13,14 +13,15 @@
|
||||
// You should have received a copy of the GNU General Public License along
|
||||
// with this program; if not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
package helpers_test
|
||||
package file_test
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
. "github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
)
|
||||
@@ -28,8 +29,8 @@ import (
|
||||
var _ = Describe("Helpers", func() {
|
||||
Context("Exists", func() {
|
||||
It("Detect existing and not-existing files", func() {
|
||||
Expect(Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml")).To(BeTrue())
|
||||
Expect(Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml.not.exists")).To(BeFalse())
|
||||
Expect(fileHelper.Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml")).To(BeTrue())
|
||||
Expect(fileHelper.Exists("../../tests/fixtures/buildtree/app-admin/enman/1.4.0/build.yaml.not.exists")).To(BeFalse())
|
||||
})
|
||||
})
|
||||
|
||||
@@ -38,15 +39,15 @@ var _ = Describe("Helpers", func() {
|
||||
testDir, err := ioutil.TempDir(os.TempDir(), "test")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(testDir)
|
||||
Expect(DirectoryIsEmpty(testDir)).To(BeTrue())
|
||||
Expect(fileHelper.DirectoryIsEmpty(testDir)).To(BeTrue())
|
||||
})
|
||||
It("Detects directory with files", func() {
|
||||
testDir, err := ioutil.TempDir(os.TempDir(), "test")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(testDir)
|
||||
err = Touch(filepath.Join(testDir, "foo"))
|
||||
err = fileHelper.Touch(filepath.Join(testDir, "foo"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(DirectoryIsEmpty(testDir)).To(BeFalse())
|
||||
Expect(fileHelper.DirectoryIsEmpty(testDir)).To(BeFalse())
|
||||
})
|
||||
})
|
||||
|
||||
@@ -72,7 +73,7 @@ var _ = Describe("Helpers", func() {
|
||||
err = ioutil.WriteFile(filepath.Join(testDir, "baz2", "foo"), []byte("test\n"), 0644)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
ordered, notExisting := OrderFiles(testDir, []string{"bar", "baz", "bar/foo", "baz2", "foo", "baz2/foo", "notexisting"})
|
||||
ordered, notExisting := fileHelper.OrderFiles(testDir, []string{"bar", "baz", "bar/foo", "baz2", "foo", "baz2/foo", "notexisting"})
|
||||
|
||||
Expect(ordered).To(Equal([]string{"baz", "bar/foo", "foo", "baz2/foo", "bar", "baz2"}))
|
||||
Expect(notExisting).To(Equal([]string{"notexisting"}))
|
||||
@@ -96,7 +97,7 @@ var _ = Describe("Helpers", func() {
|
||||
err = os.MkdirAll(filepath.Join(testDir, "foo", "baz", "fa"), os.ModePerm)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
ordered, _ := OrderFiles(testDir, []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"})
|
||||
ordered, _ := fileHelper.OrderFiles(testDir, []string{"foo", "foo/bar", "bar", "foo/baz/fa", "foo/baz"})
|
||||
Expect(ordered).To(Equal([]string{"foo/baz/fa", "foo/bar", "foo/baz", "foo", "bar"}))
|
||||
})
|
||||
})
|
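The tests above pin the `OrderFiles` contract: entries that exist under the root come back with files ordered ahead of the directories that contain them, and unknown entries are returned separately. In use it looks roughly like this (paths are illustrative):

```go
package main

import (
	"fmt"

	fileHelper "github.com/mudler/luet/pkg/helpers/file"
)

func main() {
	// Given a rootfs at /tmp/rootfs, order the package's file list so that
	// children are handled before their parent directories.
	ordered, missing := fileHelper.OrderFiles("/tmp/rootfs", []string{
		"etc", "etc/foo.conf", "usr/bin/foo", "does-not-exist",
	})
	fmt.Println(ordered) // e.g. [etc/foo.conf usr/bin/foo etc] for entries that exist on disk
	fmt.Println(missing) // [does-not-exist]
}
```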
@@ -3,6 +3,8 @@ package helpers
|
||||
import (
|
||||
"io/ioutil"
|
||||
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
"github.com/imdario/mergo"
|
||||
"github.com/pkg/errors"
|
||||
"gopkg.in/yaml.v2"
|
||||
@@ -75,7 +77,7 @@ func RenderFiles(toTemplate, valuesFile string, defaultFile ...string) (string,
|
||||
return "", errors.Wrap(err, "reading file "+toTemplate)
|
||||
}
|
||||
|
||||
if !Exists(valuesFile) {
|
||||
if !fileHelper.Exists(valuesFile) {
|
||||
return "", errors.Wrap(err, "file not existing "+valuesFile)
|
||||
}
|
||||
val, err := ioutil.ReadFile(valuesFile)
|
||||
|
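`RenderFiles` takes a template file, a values file, and optional default-values files and returns the rendered text; the hunk above only swaps the existence check to the file helper. A hedged usage sketch, assuming the function lives in `pkg/helpers` as the `package helpers` header suggests and that the paths below are purely illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/mudler/luet/pkg/helpers"
)

func main() {
	// Render a build template against a package's definition values.
	// RenderFiles errors out if the values file does not exist.
	out, err := helpers.RenderFiles("packages/foo/build.yaml", "packages/foo/definition.yaml")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```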
@@ -1,36 +0,0 @@
|
||||
package imgworker
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/docker/docker/api/types"
|
||||
|
||||
"github.com/moby/buildkit/session"
|
||||
"github.com/moby/buildkit/session/auth"
|
||||
"google.golang.org/grpc"
|
||||
)
|
||||
|
||||
func NewDockerAuthProvider(auth *types.AuthConfig) session.Attachable {
|
||||
return &authProvider{
|
||||
config: auth,
|
||||
}
|
||||
}
|
||||
|
||||
type authProvider struct {
|
||||
config *types.AuthConfig
|
||||
}
|
||||
|
||||
func (ap *authProvider) Register(server *grpc.Server) {
|
||||
// no-op
|
||||
}
|
||||
|
||||
func (ap *authProvider) Credentials(ctx context.Context, req *auth.CredentialsRequest) (*auth.CredentialsResponse, error) {
|
||||
res := &auth.CredentialsResponse{}
|
||||
if ap.config.IdentityToken != "" {
|
||||
res.Secret = ap.config.IdentityToken
|
||||
} else {
|
||||
res.Username = ap.config.Username
|
||||
res.Secret = ap.config.Password
|
||||
}
|
||||
return res, nil
|
||||
}
|
@@ -1,81 +0,0 @@
|
||||
package imgworker
|
||||
|
||||
// FROM Slightly adapted from genuinetools/img worker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/containerd/containerd/namespaces"
|
||||
dockertypes "github.com/docker/docker/api/types"
|
||||
"github.com/genuinetools/img/types"
|
||||
"github.com/moby/buildkit/control"
|
||||
"github.com/moby/buildkit/session"
|
||||
"github.com/moby/buildkit/util/appcontext"
|
||||
"github.com/moby/buildkit/worker/base"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// Client holds the information for the client we will use for communicating
|
||||
// with the buildkit controller.
|
||||
type Client struct {
|
||||
backend string
|
||||
localDirs map[string]string
|
||||
root string
|
||||
|
||||
sessionManager *session.Manager
|
||||
controller *control.Controller
|
||||
opts *base.WorkerOpt
|
||||
|
||||
sess *session.Session
|
||||
ctx context.Context
|
||||
auth *dockertypes.AuthConfig
|
||||
}
|
||||
|
||||
// New returns a new client for communicating with the buildkit controller.
|
||||
func New(root string, auth *dockertypes.AuthConfig) (*Client, error) {
|
||||
// Native backend is fine, our images have just one layer. No need to depend on anything
|
||||
backend := types.NativeBackend
|
||||
|
||||
// Create the root/
|
||||
root = filepath.Join(root, "runc", backend)
|
||||
if err := os.MkdirAll(root, 0700); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
c := &Client{
|
||||
backend: types.NativeBackend,
|
||||
root: root,
|
||||
localDirs: nil,
|
||||
auth: auth,
|
||||
}
|
||||
|
||||
if err := c.prepare(); err != nil {
|
||||
return nil, errors.Wrapf(err, "failed preparing client")
|
||||
}
|
||||
|
||||
// Create the start of the client.
|
||||
return c, nil
|
||||
}
|
||||
|
||||
func (c *Client) Close() {
|
||||
c.sess.Close()
|
||||
}
|
||||
|
||||
func (c *Client) prepare() error {
|
||||
ctx := appcontext.Context()
|
||||
sess, sessDialer, err := c.Session(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "failed creating Session")
|
||||
}
|
||||
ctx = session.NewContext(ctx, sess.ID())
|
||||
ctx = namespaces.WithNamespace(ctx, "buildkit")
|
||||
|
||||
c.ctx = ctx
|
||||
c.sess = sess
|
||||
|
||||
go func() {
|
||||
sess.Run(ctx, sessDialer)
|
||||
}()
|
||||
return nil
|
||||
}
|
@@ -1,129 +0,0 @@
|
||||
package imgworker
|
||||
|
||||
// FROM Slightly adapted from genuinetools/img worker
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/containerd/containerd/images"
|
||||
"github.com/containerd/containerd/platforms"
|
||||
"github.com/docker/distribution/reference"
|
||||
"github.com/moby/buildkit/cache"
|
||||
"github.com/moby/buildkit/exporter"
|
||||
imageexporter "github.com/moby/buildkit/exporter/containerimage"
|
||||
"github.com/moby/buildkit/source"
|
||||
"github.com/moby/buildkit/source/containerimage"
|
||||
)
|
||||
|
||||
// ListedImage represents an image structure returuned from ListImages.
|
||||
// It extends containerd/images.Image with extra fields.
|
||||
type ListedImage struct {
|
||||
images.Image
|
||||
ContentSize int64
|
||||
}
|
||||
|
||||
// Pull retrieves an image from a remote registry.
|
||||
func (c *Client) Pull(image string) (*ListedImage, error) {
|
||||
|
||||
ctx := c.ctx
|
||||
|
||||
sm, err := c.getSessionManager()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Parse the image name and tag.
|
||||
named, err := reference.ParseNormalizedNamed(image)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("parsing image name %q failed: %v", image, err)
|
||||
}
|
||||
// Add the latest lag if they did not provide one.
|
||||
named = reference.TagNameOnly(named)
|
||||
image = named.String()
|
||||
|
||||
// Get the identifier for the image.
|
||||
identifier, err := source.NewImageIdentifier(image)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Create the worker opts.
|
||||
opt, err := c.createWorkerOpt()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("creating worker opt failed: %v", err)
|
||||
}
|
||||
|
||||
cm, err := cache.NewManager(cache.ManagerOpt{
|
||||
Snapshotter: opt.Snapshotter,
|
||||
MetadataStore: opt.MetadataStore,
|
||||
ContentStore: opt.ContentStore,
|
||||
LeaseManager: opt.LeaseManager,
|
||||
GarbageCollect: opt.GarbageCollect,
|
||||
Applier: opt.Applier,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Create the source for the pull.
|
||||
srcOpt := containerimage.SourceOpt{
|
||||
Snapshotter: opt.Snapshotter,
|
||||
ContentStore: opt.ContentStore,
|
||||
Applier: opt.Applier,
|
||||
CacheAccessor: cm,
|
||||
ImageStore: opt.ImageStore,
|
||||
RegistryHosts: opt.RegistryHosts,
|
||||
LeaseManager: opt.LeaseManager,
|
||||
}
|
||||
src, err := containerimage.NewSource(srcOpt)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s, err := src.Resolve(ctx, identifier, sm)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
ref, err := s.Snapshot(ctx)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Create the exporter for the pull.
|
||||
iw, err := imageexporter.NewImageWriter(imageexporter.WriterOpt{
|
||||
Snapshotter: opt.Snapshotter,
|
||||
ContentStore: opt.ContentStore,
|
||||
Differ: opt.Differ,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
expOpt := imageexporter.Opt{
|
||||
SessionManager: sm,
|
||||
ImageWriter: iw,
|
||||
Images: opt.ImageStore,
|
||||
RegistryHosts: opt.RegistryHosts,
|
||||
LeaseManager: opt.LeaseManager,
|
||||
}
|
||||
exp, err := imageexporter.New(expOpt)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
e, err := exp.Resolve(ctx, map[string]string{"name": image})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, err := e.Export(ctx, exporter.Source{Ref: ref}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Get the image.
|
||||
img, err := opt.ImageStore.Get(ctx, image)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("getting image %s from image store failed: %v", image, err)
|
||||
}
|
||||
size, err := img.Size(ctx, opt.ContentStore, platforms.Default())
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("calculating size of image %s failed: %v", img.Name, err)
|
||||
}
|
||||
|
||||
return &ListedImage{Image: img, ContentSize: size}, nil
|
||||
}
|
@@ -1,49 +0,0 @@
|
||||
package imgworker
|
||||
|
||||
// FROM Slightly adapted from genuinetools/img worker
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/moby/buildkit/session"
|
||||
"github.com/moby/buildkit/session/filesync"
|
||||
"github.com/moby/buildkit/session/testutil"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
func (c *Client) getSessionManager() (*session.Manager, error) {
|
||||
if c.sessionManager == nil {
|
||||
var err error
|
||||
c.sessionManager, err = session.NewManager()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return c.sessionManager, nil
|
||||
}
|
||||
|
||||
// Session creates the session manager and returns the session and it's
|
||||
// dialer.
|
||||
func (c *Client) Session(ctx context.Context) (*session.Session, session.Dialer, error) {
|
||||
m, err := c.getSessionManager()
|
||||
if err != nil {
|
||||
return nil, nil, errors.Wrap(err, "failed to create session manager")
|
||||
}
|
||||
sessionName := "luet"
|
||||
s, err := session.NewSession(ctx, sessionName, "")
|
||||
if err != nil {
|
||||
return nil, nil, errors.Wrap(err, "failed to create session")
|
||||
}
|
||||
syncedDirs := make([]filesync.SyncedDir, 0, len(c.localDirs))
|
||||
for name, d := range c.localDirs {
|
||||
syncedDirs = append(syncedDirs, filesync.SyncedDir{Name: name, Dir: d})
|
||||
}
|
||||
s.Allow(filesync.NewFSSyncProvider(syncedDirs))
|
||||
s.Allow(NewDockerAuthProvider(c.auth))
|
||||
return s, sessionDialer(s, m), err
|
||||
}
|
||||
|
||||
func sessionDialer(s *session.Session, m *session.Manager) session.Dialer {
|
||||
// FIXME: rename testutil
|
||||
return session.Dialer(testutil.TestStream(testutil.Handler(m.HandleConn)))
|
||||
}
|
@@ -1,82 +0,0 @@
package imgworker

// FROM: slightly adapted from the genuinetools/img worker

import (
	"errors"
	"fmt"
	"os"

	"github.com/containerd/containerd/content"
	"github.com/containerd/containerd/images"
	"github.com/containerd/containerd/platforms"
	"github.com/docker/distribution/reference"
	"github.com/docker/docker/pkg/archive"
	"github.com/sirupsen/logrus"
)

// TODO: this requires root permissions to mount/unmount layers, although it shouldn't be required.
// See how backends are unpacking images without asking for root permissions.

// Unpack exports an image to a rootfs destination directory.
func (c *Client) Unpack(image, dest string) error {

	ctx := c.ctx
	if len(dest) < 1 {
		return errors.New("destination directory for rootfs cannot be empty")
	}

	if _, err := os.Stat(dest); err == nil {
		return fmt.Errorf("destination directory already exists: %s", dest)
	}

	// Parse the image name and tag.
	named, err := reference.ParseNormalizedNamed(image)
	if err != nil {
		return fmt.Errorf("parsing image name %q failed: %v", image, err)
	}
	// Add the latest tag if one was not provided.
	named = reference.TagNameOnly(named)
	image = named.String()

	// Create the worker opts.
	opt, err := c.createWorkerOpt()
	if err != nil {
		return fmt.Errorf("creating worker opt failed: %v", err)
	}

	if opt.ImageStore == nil {
		return errors.New("image store is nil")
	}

	img, err := opt.ImageStore.Get(ctx, image)
	if err != nil {
		return fmt.Errorf("getting image %s from image store failed: %v", image, err)
	}

	manifest, err := images.Manifest(ctx, opt.ContentStore, img.Target, platforms.Default())
	if err != nil {
		return fmt.Errorf("getting image manifest failed: %v", err)
	}

	for _, desc := range manifest.Layers {
		logrus.Debugf("Unpacking layer %s", desc.Digest.String())

		// Read the blob from the content store.
		layer, err := opt.ContentStore.ReaderAt(ctx, desc)
		if err != nil {
			return fmt.Errorf("getting reader for digest %s failed: %v", desc.Digest.String(), err)
		}

		// Unpack the tarfile to the rootfs path.
		// FROM: https://godoc.org/github.com/moby/moby/pkg/archive#TarOptions
		if err := archive.Untar(content.NewReader(layer), dest, &archive.TarOptions{
			NoLchown:        false,
			ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
		}); err != nil {
			return fmt.Errorf("extracting tar for %s to directory %s failed: %v", desc.Digest.String(), dest, err)
		}
	}

	return nil
}
@@ -1,106 +0,0 @@
package imgworker

// FROM: slightly adapted from the genuinetools/img worker

import (
	"context"
	"fmt"
	"path/filepath"

	"github.com/containerd/containerd/content/local"
	"github.com/containerd/containerd/diff/apply"
	"github.com/containerd/containerd/diff/walking"
	ctdmetadata "github.com/containerd/containerd/metadata"
	"github.com/containerd/containerd/platforms"
	"github.com/containerd/containerd/remotes/docker"
	ctdsnapshot "github.com/containerd/containerd/snapshots"
	"github.com/containerd/containerd/snapshots/native"
	"github.com/moby/buildkit/cache/metadata"
	containerdsnapshot "github.com/moby/buildkit/snapshot/containerd"
	"github.com/moby/buildkit/util/binfmt_misc"
	"github.com/moby/buildkit/util/leaseutil"
	"github.com/moby/buildkit/worker/base"
	specs "github.com/opencontainers/image-spec/specs-go/v1"
	bolt "go.etcd.io/bbolt"
)

// createWorkerOpt creates a base.WorkerOpt to be used for a new worker.
func (c *Client) createWorkerOpt() (opt base.WorkerOpt, err error) {

	if c.opts != nil {
		return *c.opts, nil
	}

	// Create the metadata store.
	md, err := metadata.NewStore(filepath.Join(c.root, "metadata.db"))
	if err != nil {
		return opt, err
	}

	snapshotRoot := filepath.Join(c.root, "snapshots")

	s, err := native.NewSnapshotter(snapshotRoot)
	if err != nil {
		return opt, fmt.Errorf("creating %s snapshotter failed: %v", c.backend, err)
	}

	// Create the content store locally.
	contentStore, err := local.NewStore(filepath.Join(c.root, "content"))
	if err != nil {
		return opt, err
	}

	// Open the bolt database for metadata.
	db, err := bolt.Open(filepath.Join(c.root, "containerdmeta.db"), 0644, nil)
	if err != nil {
		return opt, err
	}

	// Create the new database for metadata.
	mdb := ctdmetadata.NewDB(db, contentStore, map[string]ctdsnapshot.Snapshotter{
		c.backend: s,
	})
	if err := mdb.Init(context.TODO()); err != nil {
		return opt, err
	}

	// Create the image store.
	imageStore := ctdmetadata.NewImageStore(mdb)

	contentStore = containerdsnapshot.NewContentStore(mdb.ContentStore(), "buildkit")

	id, err := base.ID(c.root)
	if err != nil {
		return opt, err
	}

	xlabels := base.Labels("oci", c.backend)

	var supportedPlatforms []specs.Platform
	for _, p := range binfmt_misc.SupportedPlatforms(false) {
		parsed, err := platforms.Parse(p)
		if err != nil {
			return opt, err
		}
		supportedPlatforms = append(supportedPlatforms, platforms.Normalize(parsed))
	}

	opt = base.WorkerOpt{
		ID:             id,
		Labels:         xlabels,
		MetadataStore:  md,
		Snapshotter:    containerdsnapshot.NewSnapshotter(c.backend, mdb.Snapshotter(c.backend), "buildkit", nil),
		ContentStore:   contentStore,
		Applier:        apply.NewFileSystemApplier(contentStore),
		Differ:         walking.NewWalkingDiff(contentStore),
		ImageStore:     imageStore,
		Platforms:      supportedPlatforms,
		RegistryHosts:  docker.ConfigureDefaultRegistries(),
		LeaseManager:   leaseutil.WithNamespace(ctdmetadata.NewLeaseManager(mdb), "buildkit"),
		GarbageCollect: mdb.GarbageCollect,
	}

	c.opts = &opt

	return opt, err
}
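
For context, `createWorkerOpt` above follows a common containerd pattern: a local content store and a native snapshotter are bound together through one bolt-backed metadata database, from which the image store and lease manager are derived. Below is a minimal, self-contained sketch of just that wiring, using only calls that appear in the removed file; the root path and the `"native"` snapshotter key are illustrative placeholders, not values from this repository.

```go
package main

import (
	"context"
	"log"
	"path/filepath"

	"github.com/containerd/containerd/content/local"
	ctdmetadata "github.com/containerd/containerd/metadata"
	ctdsnapshot "github.com/containerd/containerd/snapshots"
	"github.com/containerd/containerd/snapshots/native"
	bolt "go.etcd.io/bbolt"
)

func main() {
	root := "/tmp/imgworker-example" // placeholder root directory

	// Native (plain directory) snapshotter for unpacked layers.
	snap, err := native.NewSnapshotter(filepath.Join(root, "snapshots"))
	if err != nil {
		log.Fatal(err)
	}
	// Local content store for blobs (layers, manifests, configs).
	store, err := local.NewStore(filepath.Join(root, "content"))
	if err != nil {
		log.Fatal(err)
	}
	// Single bolt database that backs all containerd metadata.
	db, err := bolt.Open(filepath.Join(root, "containerdmeta.db"), 0644, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Bind content store and snapshotter through the metadata DB.
	mdb := ctdmetadata.NewDB(db, store, map[string]ctdsnapshot.Snapshotter{"native": snap})
	if err := mdb.Init(context.TODO()); err != nil {
		log.Fatal(err)
	}
	_ = ctdmetadata.NewImageStore(mdb) // image metadata lives in the same bolt DB
}
```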
@@ -14,12 +14,21 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.

package helpers
package match

import (
	"reflect"
	"regexp"
)

func ReverseAny(s interface{}) {
	n := reflect.ValueOf(s).Len()
	swap := reflect.Swapper(s)
	for i, j := 0, n-1; i < j; i, j = i+1, j-1 {
		swap(i, j)
	}
}

func MapMatchRegex(m *map[string]string, r *regexp.Regexp) bool {
	ans := false
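
`ReverseAny` above reverses any slice in place through `reflect.Swapper`. A small, runnable illustration of the same pattern (it carries a local copy of the function, so the snippet does not depend on the package path introduced in this diff):

```go
package main

import (
	"fmt"
	"reflect"
)

// ReverseAny reverses any slice in place, regardless of element type.
func ReverseAny(s interface{}) {
	n := reflect.ValueOf(s).Len()
	swap := reflect.Swapper(s)
	for i, j := 0, n-1; i < j; i, j = i+1, j-1 {
		swap(i, j)
	}
}

func main() {
	paths := []string{"/usr", "/usr/bin", "/usr/bin/bar"}
	ReverseAny(paths)
	fmt.Println(paths) // [/usr/bin/bar /usr/bin /usr]
}
```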
@@ -28,7 +28,8 @@ import (
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
"github.com/mudler/luet/pkg/helpers/docker"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
)
|
||||
|
||||
@@ -63,7 +64,7 @@ func (c *DockerClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.
|
||||
artifactName := path.Base(a.Path)
|
||||
cacheFile := filepath.Join(config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), artifactName)
|
||||
Debug("Cache file", cacheFile)
|
||||
if err := helpers.EnsureDir(cacheFile); err != nil {
|
||||
if err := fileHelper.EnsureDir(cacheFile); err != nil {
|
||||
return nil, errors.Wrapf(err, "could not create cache folder %s for %s", config.LuetCfg.GetSystem().GetSystemPkgsCacheDirPath(), cacheFile)
|
||||
}
|
||||
ok := false
|
||||
@@ -76,7 +77,7 @@ func (c *DockerClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.
|
||||
// is done in such cases (see repository.go)
|
||||
|
||||
// Check if file is already in cache
|
||||
if helpers.Exists(cacheFile) {
|
||||
if fileHelper.Exists(cacheFile) {
|
||||
Debug("Cache hit for artifact", artifactName)
|
||||
resultingArtifact = a
|
||||
resultingArtifact.Path = cacheFile
|
||||
@@ -101,7 +102,7 @@ func (c *DockerClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.
|
||||
}
|
||||
|
||||
// imageName := fmt.Sprintf("%s/%s", uri, artifact.GetCompileSpec().GetPackage().GetPackageImageName())
|
||||
info, err := helpers.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
|
||||
info, err := docker.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
|
||||
if err != nil {
|
||||
Warning(fmt.Sprintf(errImageDownloadMsg, imageName, err.Error()))
|
||||
continue
|
||||
@@ -138,8 +139,8 @@ func (c *DockerClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.
|
||||
func (c *DockerClient) DownloadFile(name string) (string, error) {
|
||||
var file *os.File = nil
|
||||
var err error
|
||||
var temp string
|
||||
|
||||
var temp, contentstore string
|
||||
var info *docker.Image
|
||||
// Files should be in URI/repository:<file>
|
||||
ok := false
|
||||
|
||||
@@ -149,22 +150,21 @@ func (c *DockerClient) DownloadFile(name string) (string, error) {
|
||||
}
|
||||
|
||||
for _, uri := range c.RepoData.Urls {
|
||||
|
||||
file, err = config.LuetCfg.GetSystem().TempFile("DockerClient")
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
contentstore, err := config.LuetCfg.GetSystem().TempDir("contentstore")
|
||||
contentstore, err = config.LuetCfg.GetSystem().TempDir("contentstore")
|
||||
if err != nil {
|
||||
Warning("Cannot create contentstore", err.Error())
|
||||
continue
|
||||
}
|
||||
|
||||
imageName := fmt.Sprintf("%s:%s", uri, name)
|
||||
imageName := fmt.Sprintf("%s:%s", uri, docker.StripInvalidStringsFromImage(name))
|
||||
Info("Downloading", imageName)
|
||||
|
||||
info, err := helpers.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
|
||||
info, err = docker.DownloadAndExtractDockerImage(contentstore, imageName, temp, c.auth, c.RepoData.Verify)
|
||||
if err != nil {
|
||||
Warning(fmt.Sprintf(errImageDownloadMsg, imageName, err.Error()))
|
||||
continue
|
||||
@@ -174,8 +174,7 @@ func (c *DockerClient) DownloadFile(name string) (string, error) {
|
||||
Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.ContentSize))))
|
||||
|
||||
Debug("\nCopying file ", filepath.Join(temp, name), "to", file.Name())
|
||||
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
|
||||
err = fileHelper.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
@@ -22,8 +22,8 @@ import (
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
|
||||
. "github.com/mudler/luet/pkg/installer/client"
|
||||
@@ -32,7 +32,7 @@ import (
|
||||
)
|
||||
|
||||
// This test expects that the repository defined in UNIT_TEST_DOCKER_IMAGE is in zstd format.
|
||||
// the repository is built by the 01_simple_docker.sh integration test file.
|
||||
// the repository is built by the 01_simple_docker.sh integration test fileHelper.
|
||||
// This test also requires root. At the moment, unpacking docker images with 'img' requires root permissions to
|
||||
// mount/unmount layers.
|
||||
var _ = Describe("Docker client", func() {
|
||||
@@ -51,7 +51,7 @@ var _ = Describe("Docker client", func() {
|
||||
It("Downloads single files", func() {
|
||||
f, err := c.DownloadFile("repository.yaml")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(f)).To(ContainSubstring("Test Repo"))
|
||||
Expect(fileHelper.Read(f)).To(ContainSubstring("Test Repo"))
|
||||
os.RemoveAll(f)
|
||||
})
|
||||
|
||||
@@ -71,8 +71,8 @@ var _ = Describe("Docker client", func() {
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
defer os.RemoveAll(tmpdir) // clean up
|
||||
Expect(f.Unpack(tmpdir, false)).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(filepath.Join(tmpdir, "c"))).To(Equal("c\n"))
|
||||
Expect(helpers.Read(filepath.Join(tmpdir, "cd"))).To(Equal("c\n"))
|
||||
Expect(fileHelper.Read(filepath.Join(tmpdir, "c"))).To(Equal("c\n"))
|
||||
Expect(fileHelper.Read(filepath.Join(tmpdir, "cd"))).To(Equal("c\n"))
|
||||
os.RemoveAll(f.Path)
|
||||
})
|
||||
})
|
||||
|
@@ -25,12 +25,11 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
|
||||
"github.com/cavaliercoder/grab"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
|
||||
"github.com/schollz/progressbar/v3"
|
||||
)
|
||||
@@ -77,7 +76,7 @@ func (c *HttpClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.Pa
|
||||
ok := false
|
||||
|
||||
// Check if file is already in cache
|
||||
if helpers.Exists(cacheFile) {
|
||||
if fileHelper.Exists(cacheFile) {
|
||||
Debug("Use artifact", artifactName, "from cache.")
|
||||
} else {
|
||||
|
||||
@@ -156,7 +155,7 @@ func (c *HttpClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.Pa
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
|
||||
|
||||
Debug("\nCopying file ", filepath.Join(temp, artifactName), "to", cacheFile)
|
||||
err = helpers.CopyFile(filepath.Join(temp, artifactName), cacheFile)
|
||||
err = fileHelper.CopyFile(filepath.Join(temp, artifactName), cacheFile)
|
||||
|
||||
bar.Finish()
|
||||
ok = true
|
||||
@@ -218,7 +217,7 @@ func (c *HttpClient) DownloadFile(name string) (string, error) {
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesComplete())/1000)/1000), "MB (",
|
||||
fmt.Sprintf("%.2f", (float64(resp.BytesPerSecond())/1024)/1024), "MiB/s )")
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
err = fileHelper.CopyFile(filepath.Join(temp, name), file.Name())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
@@ -23,8 +23,7 @@ import (
|
||||
"path/filepath"
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/installer/client"
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
@@ -47,7 +46,7 @@ var _ = Describe("Http client", func() {
|
||||
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
|
||||
path, err := c.DownloadFile("test.txt")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path)).To(Equal("test"))
|
||||
Expect(fileHelper.Read(path)).To(Equal("test"))
|
||||
os.RemoveAll(path)
|
||||
})
|
||||
|
||||
@@ -65,7 +64,7 @@ var _ = Describe("Http client", func() {
|
||||
c := NewHttpClient(RepoData{Urls: []string{ts.URL}})
|
||||
path, err := c.DownloadArtifact(&artifact.PackageArtifact{Path: "test.txt"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path.Path)).To(Equal("test"))
|
||||
Expect(fileHelper.Read(path.Path)).To(Equal("test"))
|
||||
os.RemoveAll(path.Path)
|
||||
})
|
||||
|
||||
|
@@ -22,9 +22,8 @@ import (
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
)
|
||||
|
||||
type LocalClient struct {
|
||||
@@ -50,7 +49,7 @@ func (c *LocalClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.P
|
||||
}
|
||||
|
||||
// Check if file is already in cache
|
||||
if helpers.Exists(cacheFile) {
|
||||
if fileHelper.Exists(cacheFile) {
|
||||
Debug("Use artifact", artifactName, "from cache.")
|
||||
} else {
|
||||
ok := false
|
||||
@@ -61,7 +60,7 @@ func (c *LocalClient) DownloadArtifact(a *artifact.PackageArtifact) (*artifact.P
|
||||
Info("Downloading artifact", artifactName, "from", uri)
|
||||
|
||||
//defer os.Remove(file.Name())
|
||||
err = helpers.CopyFile(filepath.Join(uri, artifactName), cacheFile)
|
||||
err = fileHelper.CopyFile(filepath.Join(uri, artifactName), cacheFile)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
@@ -104,7 +103,7 @@ func (c *LocalClient) DownloadFile(name string) (string, error) {
|
||||
}
|
||||
//defer os.Remove(file.Name())
|
||||
|
||||
err = helpers.CopyFile(filepath.Join(uri, name), file.Name())
|
||||
err = fileHelper.CopyFile(filepath.Join(uri, name), file.Name())
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
@@ -21,8 +21,7 @@ import (
|
||||
"path/filepath"
|
||||
|
||||
"github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
helpers "github.com/mudler/luet/pkg/helpers"
|
||||
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
. "github.com/mudler/luet/pkg/installer/client"
|
||||
. "github.com/onsi/ginkgo"
|
||||
. "github.com/onsi/gomega"
|
||||
@@ -42,7 +41,7 @@ var _ = Describe("Local client", func() {
|
||||
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
|
||||
path, err := c.DownloadFile("test.txt")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path)).To(Equal("test"))
|
||||
Expect(fileHelper.Read(path)).To(Equal("test"))
|
||||
os.RemoveAll(path)
|
||||
})
|
||||
|
||||
@@ -58,7 +57,7 @@ var _ = Describe("Local client", func() {
|
||||
c := NewLocalClient(RepoData{Urls: []string{tmpdir}})
|
||||
path, err := c.DownloadArtifact(&artifact.PackageArtifact{Path: "test.txt"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Read(path.Path)).To(Equal("test"))
|
||||
Expect(fileHelper.Read(path.Path)).To(Equal("test"))
|
||||
os.RemoveAll(path.Path)
|
||||
})
|
||||
|
||||
|
@@ -24,16 +24,16 @@ import (
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
|
||||
. "github.com/logrusorgru/aurora"
|
||||
"github.com/mudler/luet/pkg/bus"
|
||||
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
|
||||
"github.com/mudler/luet/pkg/config"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
"github.com/mudler/luet/pkg/helpers/match"
|
||||
. "github.com/mudler/luet/pkg/logger"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
"github.com/mudler/luet/pkg/solver"
|
||||
|
||||
. "github.com/logrusorgru/aurora"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
@@ -128,6 +128,8 @@ func packsToList(p pkg.Packages) string {
|
||||
for _, pp := range p {
|
||||
packs = append(packs, pp.HumanReadableString())
|
||||
}
|
||||
|
||||
sort.Strings(packs)
|
||||
return strings.Join(packs, " ")
|
||||
}
|
||||
|
||||
@@ -137,6 +139,7 @@ func matchesToList(artefacts map[string]ArtifactMatch) string {
|
||||
for fingerprint, match := range artefacts {
|
||||
packs = append(packs, fmt.Sprintf("%s (%s)", fingerprint, match.Repository.GetName()))
|
||||
}
|
||||
sort.Strings(packs)
|
||||
return strings.Join(packs, " ")
|
||||
}
|
||||
|
||||
@@ -194,11 +197,19 @@ func (l *LuetInstaller) Swap(toRemove pkg.Packages, toInstall pkg.Packages, s *S
|
||||
toRemoveFinal = append(toRemoveFinal, pp)
|
||||
}
|
||||
}
|
||||
o := Option{
|
||||
FullUninstall: false,
|
||||
Force: true,
|
||||
CheckConflicts: false,
|
||||
FullCleanUninstall: false,
|
||||
NoDeps: l.Options.NoDeps,
|
||||
OnlyDeps: false,
|
||||
}
|
||||
|
||||
return l.swap(syncedRepos, toRemoveFinal, toInstall, s, false)
|
||||
return l.swap(o, syncedRepos, toRemoveFinal, toInstall, s)
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
|
||||
func (l *LuetInstaller) computeSwap(o Option, syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
|
||||
|
||||
allRepos := pkg.NewInMemoryDatabase(false)
|
||||
syncedRepos.SyncDatabase(allRepos)
|
||||
@@ -213,8 +224,8 @@ func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packa
|
||||
|
||||
systemAfterChanges := &System{Database: installedtmp}
|
||||
|
||||
packs, err := l.computeUninstall(systemAfterChanges, toRemove...)
|
||||
if err != nil && !l.Options.Force {
|
||||
packs, err := l.computeUninstall(o, systemAfterChanges, toRemove...)
|
||||
if err != nil && !o.Force {
|
||||
Error("Failed computing uninstall for ", packsToList(toRemove))
|
||||
return nil, nil, nil, nil, errors.Wrap(err, "computing uninstall "+packsToList(toRemove))
|
||||
}
|
||||
@@ -225,30 +236,16 @@ func (l *LuetInstaller) computeSwap(syncedRepos Repositories, toRemove pkg.Packa
|
||||
}
|
||||
}
|
||||
|
||||
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, toInstall, systemAfterChanges)
|
||||
match, packages, assertions, allRepos, err := l.computeInstall(o, syncedRepos, toInstall, systemAfterChanges)
|
||||
for _, p := range toInstall {
|
||||
assertions = append(assertions, solver.PackageAssert{Package: p.(*pkg.DefaultPackage), Value: true})
|
||||
}
|
||||
return match, packages, assertions, allRepos, err
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System, forceNodeps bool) error {
|
||||
forced := l.Options.Force
|
||||
nodeps := l.Options.NoDeps
|
||||
func (l *LuetInstaller) swap(o Option, syncedRepos Repositories, toRemove pkg.Packages, toInstall pkg.Packages, s *System) error {
|
||||
|
||||
// We don't want any conflicts with the installed packages to be raised during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion.
// E.g. you can't check for conflicts for an upgrade to a new version of A
// if the old A is still installed in the system. This is because the solver
// now enforces the constraints and explicitly denies having two packages
// of the same version installed.
|
||||
l.Options.Force = true
|
||||
if forceNodeps {
|
||||
l.Options.NoDeps = true
|
||||
}
|
||||
|
||||
match, packages, assertions, allRepos, err := l.computeSwap(syncedRepos, toRemove, toInstall, s)
|
||||
match, packages, assertions, allRepos, err := l.computeSwap(o, syncedRepos, toRemove, toInstall, s)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed computing package replacement")
|
||||
}
|
||||
@@ -278,15 +275,173 @@ func (l *LuetInstaller) swap(syncedRepos Repositories, toRemove pkg.Packages, to
|
||||
return nil
|
||||
}
|
||||
|
||||
err = l.Uninstall(s, toRemove...)
|
||||
if err != nil && !l.Options.Force {
|
||||
Error("Failed uninstall for ", packsToList(toRemove))
|
||||
return errors.Wrap(err, "uninstalling "+packsToList(toRemove))
|
||||
ops := l.getOpsWithOptions(toRemove, match, Option{
|
||||
Force: o.Force,
|
||||
NoDeps: false,
|
||||
OnlyDeps: o.OnlyDeps,
|
||||
RunFinalizers: false,
|
||||
}, o, syncedRepos, packages, assertions, allRepos)
|
||||
|
||||
err = l.runOps(ops, s)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed running installer options")
|
||||
}
|
||||
|
||||
l.Options.Force = forced
|
||||
l.Options.NoDeps = nodeps
|
||||
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
|
||||
toFinalize, err := l.getFinalizers(allRepos, assertions, match, o.NoDeps)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed getting package to finalize")
|
||||
}
|
||||
|
||||
return s.ExecuteFinalizers(toFinalize)
|
||||
}
|
||||
|
||||
type Option struct {
|
||||
Force bool
|
||||
NoDeps bool
|
||||
CheckConflicts bool
|
||||
FullUninstall bool
|
||||
FullCleanUninstall bool
|
||||
OnlyDeps bool
|
||||
RunFinalizers bool
|
||||
}
|
||||
|
||||
type operation struct {
|
||||
Option Option
|
||||
Package pkg.Package
|
||||
}
|
||||
|
||||
type installOperation struct {
|
||||
operation
|
||||
Reposiories Repositories
|
||||
Packages pkg.Packages
|
||||
Assertions solver.PackagesAssertions
|
||||
Database pkg.PackageDatabase
|
||||
Matches map[string]ArtifactMatch
|
||||
}
|
||||
|
||||
// installerOp is the operation that is sent to the
|
||||
// upgradeWorker's channel (todo)
|
||||
type installerOp struct {
|
||||
Uninstall operation
|
||||
Install installOperation
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) runOps(ops []installerOp, s *System) error {
|
||||
all := make(chan installerOp)
|
||||
|
||||
wg := new(sync.WaitGroup)
|
||||
|
||||
// Do the real install
|
||||
for i := 0; i < l.Options.Concurrency; i++ {
|
||||
wg.Add(1)
|
||||
go l.installerOpWorker(i, wg, all, s)
|
||||
}
|
||||
|
||||
for _, c := range ops {
|
||||
all <- c
|
||||
}
|
||||
close(all)
|
||||
wg.Wait()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// TODO: use installerOpWorker in place of all the other workers.
|
||||
// This one is general enough to read a list of operations and execute them.
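// Note that failures inside a single operation are logged and skipped (the
// worker moves on to the next op) rather than aborting the whole run.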
|
||||
func (l *LuetInstaller) installerOpWorker(i int, wg *sync.WaitGroup, c <-chan installerOp, s *System) error {
|
||||
defer wg.Done()
|
||||
|
||||
for p := range c {
|
||||
if p.Uninstall.Package != nil {
|
||||
Debug("Replacing package inplace")
|
||||
toUninstall, uninstall, err := l.generateUninstallFn(p.Uninstall.Option, s, p.Uninstall.Package)
|
||||
if err != nil {
|
||||
Error("Failed to generate Uninstall function for" + err.Error())
|
||||
continue
|
||||
//return errors.Wrap(err, "while computing uninstall")
|
||||
}
|
||||
|
||||
err = uninstall()
|
||||
if err != nil {
|
||||
Error("Failed uninstall for ", packsToList(toUninstall))
|
||||
continue
|
||||
//return errors.Wrap(err, "uninstalling "+packsToList(toUninstall))
|
||||
}
|
||||
}
|
||||
if p.Install.Package != nil {
|
||||
artMatch := p.Install.Matches[p.Install.Package.GetFingerPrint()]
|
||||
ass := p.Install.Assertions.Search(p.Install.Package.GetFingerPrint())
|
||||
packageToInstall, _ := p.Install.Packages.Find(p.Install.Package.GetPackageName())
|
||||
|
||||
err := l.install(
|
||||
p.Install.Option,
|
||||
p.Install.Reposiories,
|
||||
map[string]ArtifactMatch{p.Install.Package.GetFingerPrint(): artMatch},
|
||||
pkg.Packages{packageToInstall},
|
||||
solver.PackagesAssertions{*ass},
|
||||
p.Install.Database,
|
||||
s,
|
||||
)
|
||||
if err != nil {
|
||||
Error(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// checks whether we can uninstall and install in place, and composes the installer worker ops
|
||||
func (l *LuetInstaller) getOpsWithOptions(
|
||||
toUninstall pkg.Packages, installMatch map[string]ArtifactMatch, installOpt, uninstallOpt Option,
|
||||
syncedRepos Repositories, toInstall pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase) []installerOp {
|
||||
resOps := []installerOp{}
|
||||
for _, match := range installMatch {
|
||||
if pack, err := toUninstall.Find(match.Package.GetPackageName()); err == nil {
|
||||
resOps = append(resOps, installerOp{
|
||||
Uninstall: operation{Package: pack, Option: uninstallOpt},
|
||||
Install: installOperation{
|
||||
operation: operation{
|
||||
Package: match.Package,
|
||||
Option: installOpt,
|
||||
},
|
||||
Matches: installMatch,
|
||||
Packages: toInstall,
|
||||
Reposiories: syncedRepos,
|
||||
Assertions: solution,
|
||||
Database: allRepos,
|
||||
},
|
||||
})
|
||||
} else {
|
||||
resOps = append(resOps, installerOp{
|
||||
Install: installOperation{
|
||||
operation: operation{Package: match.Package, Option: installOpt},
|
||||
Matches: installMatch,
|
||||
Reposiories: syncedRepos,
|
||||
Packages: toInstall,
|
||||
Assertions: solution,
|
||||
Database: allRepos,
|
||||
},
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
for _, p := range toUninstall {
|
||||
found := false
|
||||
|
||||
for _, match := range installMatch {
|
||||
if match.Package.GetPackageName() == p.GetPackageName() {
|
||||
found = true
|
||||
}
|
||||
|
||||
}
|
||||
if !found {
|
||||
resOps = append(resOps, installerOp{
|
||||
Uninstall: operation{Package: p, Option: uninstallOpt},
|
||||
})
|
||||
}
|
||||
}
|
||||
return resOps
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) checkAndUpgrade(r Repositories, s *System) error {
|
||||
@@ -310,19 +465,33 @@ func (l *LuetInstaller) checkAndUpgrade(r Repositories, s *System) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// We don't want any conflicts with the installed packages to be raised during the upgrade.
// In this way we both force uninstalls and avoid checking for conflicts
// against the current system state, which is pending deletion.
// E.g. you can't check for conflicts for an upgrade to a new version of A
// if the old A is still installed in the system. This is because the solver
// now enforces the constraints and explicitly denies having two packages
// of the same version installed.
|
||||
o := Option{
|
||||
FullUninstall: false,
|
||||
Force: true,
|
||||
CheckConflicts: false,
|
||||
FullCleanUninstall: false,
|
||||
NoDeps: true,
|
||||
OnlyDeps: false,
|
||||
}
|
||||
|
||||
if l.Options.Ask {
|
||||
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
|
||||
if Ask() {
|
||||
l.Options.Ask = false // Don't prompt anymore
|
||||
return l.swap(r, uninstall, toInstall, s, true)
|
||||
return l.swap(o, r, uninstall, toInstall, s)
|
||||
} else {
|
||||
return errors.New("Aborted by user")
|
||||
}
|
||||
}
|
||||
|
||||
Spinner(32)
|
||||
defer SpinnerStop()
|
||||
return l.swap(r, uninstall, toInstall, s, true)
|
||||
return l.swap(o, r, uninstall, toInstall, s)
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
|
||||
@@ -338,7 +507,13 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
|
||||
}
|
||||
}
|
||||
|
||||
match, packages, assertions, allRepos, err := l.computeInstall(syncedRepos, cp, s)
|
||||
o := Option{
|
||||
NoDeps: l.Options.NoDeps,
|
||||
Force: l.Options.Force,
|
||||
OnlyDeps: l.Options.OnlyDeps,
|
||||
RunFinalizers: true,
|
||||
}
|
||||
match, packages, assertions, allRepos, err := l.computeInstall(o, syncedRepos, cp, s)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -375,12 +550,12 @@ func (l *LuetInstaller) Install(cp pkg.Packages, s *System) error {
|
||||
Info("By going forward, you are also accepting the licenses of the packages that you are going to install in your system.")
|
||||
if Ask() {
|
||||
l.Options.Ask = false // Don't prompt anymore
|
||||
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
|
||||
return l.install(o, syncedRepos, match, packages, assertions, allRepos, s)
|
||||
} else {
|
||||
return errors.New("Aborted by user")
|
||||
}
|
||||
}
|
||||
return l.install(syncedRepos, match, packages, assertions, allRepos, s)
|
||||
return l.install(o, syncedRepos, match, packages, assertions, allRepos, s)
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) download(syncedRepos Repositories, toDownload map[string]ArtifactMatch) error {
|
||||
@@ -422,7 +597,7 @@ func (l *LuetInstaller) Reclaim(s *System) error {
|
||||
"from", repo.GetName(), "is installed")
|
||||
FILES:
|
||||
for _, f := range artefact.Files {
|
||||
if helpers.Exists(filepath.Join(s.Target, f)) {
|
||||
if fileHelper.Exists(filepath.Join(s.Target, f)) {
|
||||
p, err := repo.GetTree().GetDatabase().FindPackage(artefact.CompileSpec.GetPackage())
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -455,7 +630,7 @@ func (l *LuetInstaller) Reclaim(s *System) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
|
||||
func (l *LuetInstaller) computeInstall(o Option, syncedRepos Repositories, cp pkg.Packages, s *System) (map[string]ArtifactMatch, pkg.Packages, solver.PackagesAssertions, pkg.PackageDatabase, error) {
|
||||
var p pkg.Packages
|
||||
toInstall := map[string]ArtifactMatch{}
|
||||
allRepos := pkg.NewInMemoryDatabase(false)
|
||||
@@ -488,11 +663,11 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
|
||||
var packagesToInstall pkg.Packages
|
||||
var err error
|
||||
|
||||
if !l.Options.NoDeps {
|
||||
if !o.NoDeps {
|
||||
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, s.Database, allRepos, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
|
||||
solution, err = solv.Install(p)
|
||||
/// TODO: PackageAssertions needs to be a map[fingerprint]pack so lookup is in O(1)
|
||||
if err != nil && !l.Options.Force {
|
||||
if err != nil && !o.Force {
|
||||
return toInstall, p, solution, allRepos, errors.Wrap(err, "Failed solving solution for package")
|
||||
}
|
||||
// Gathers things to install
|
||||
@@ -505,7 +680,7 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
|
||||
packagesToInstall = append(packagesToInstall, assertion.Package)
|
||||
}
|
||||
}
|
||||
} else if !l.Options.OnlyDeps {
|
||||
} else if !o.OnlyDeps {
|
||||
for _, currentPack := range p {
|
||||
if _, err := s.Database.FindPackage(currentPack); err == nil {
|
||||
// skip matching if it is installed already
|
||||
@@ -540,7 +715,47 @@ func (l *LuetInstaller) computeInstall(syncedRepos Repositories, cp pkg.Packages
|
||||
return toInstall, p, solution, allRepos, nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]ArtifactMatch, p pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase, s *System) error {
|
||||
func (l *LuetInstaller) getFinalizers(allRepos pkg.PackageDatabase, solution solver.PackagesAssertions, toInstall map[string]ArtifactMatch, nodeps bool) ([]pkg.Package, error) {
|
||||
var toFinalize []pkg.Package
|
||||
if !nodeps {
|
||||
// TODO: Lower those errors as warning
|
||||
for _, w := range toInstall {
|
||||
// Finalizers needs to run in order and in sequence.
|
||||
ordered, err := solution.Order(allRepos, w.Package.GetFingerPrint())
|
||||
if err != nil {
|
||||
return toFinalize, errors.Wrap(err, "While order a solution for "+w.Package.HumanReadableString())
|
||||
}
|
||||
ORDER:
|
||||
for _, ass := range ordered {
|
||||
if ass.Value {
|
||||
installed, ok := toInstall[ass.Package.GetFingerPrint()]
|
||||
if !ok {
|
||||
// It was a dep already installed in the system, so we can skip it safely
|
||||
continue ORDER
|
||||
}
|
||||
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
|
||||
if err != nil {
|
||||
return toFinalize, errors.Wrap(err, "Error getting package "+ass.Package.HumanReadableString())
|
||||
}
|
||||
|
||||
toFinalize = append(toFinalize, treePackage)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
} else {
|
||||
for _, c := range toInstall {
|
||||
treePackage, err := c.Repository.GetTree().GetDatabase().FindPackage(c.Package)
|
||||
if err != nil {
|
||||
return toFinalize, errors.Wrap(err, "Error getting package "+c.Package.HumanReadableString())
|
||||
}
|
||||
toFinalize = append(toFinalize, treePackage)
|
||||
}
|
||||
}
|
||||
return toFinalize, nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) install(o Option, syncedRepos Repositories, toInstall map[string]ArtifactMatch, p pkg.Packages, solution solver.PackagesAssertions, allRepos pkg.PackageDatabase, s *System) error {
|
||||
// Install packages into rootfs in parallel.
|
||||
if err := l.download(syncedRepos, toInstall); err != nil {
|
||||
return errors.Wrap(err, "Downloading packages")
|
||||
@@ -569,46 +784,19 @@ func (l *LuetInstaller) install(syncedRepos Repositories, toInstall map[string]A
|
||||
for _, c := range toInstall {
|
||||
// Annotate to the system that the package was installed
|
||||
_, err := s.Database.CreatePackage(c.Package)
|
||||
if err != nil && !l.Options.Force {
|
||||
if err != nil && !o.Force {
|
||||
return errors.Wrap(err, "Failed creating package")
|
||||
}
|
||||
bus.Manager.Publish(bus.EventPackageInstall, c)
|
||||
}
|
||||
var toFinalize []pkg.Package
|
||||
if !l.Options.NoDeps {
|
||||
// TODO: Lower those errors as warning
|
||||
for _, w := range p {
|
||||
// Finalizers needs to run in order and in sequence.
|
||||
ordered, err := solution.Order(allRepos, w.GetFingerPrint())
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "While order a solution for "+w.HumanReadableString())
|
||||
}
|
||||
ORDER:
|
||||
for _, ass := range ordered {
|
||||
if ass.Value {
|
||||
installed, ok := toInstall[ass.Package.GetFingerPrint()]
|
||||
if !ok {
|
||||
// It was a dep already installed in the system, so we can skip it safely
|
||||
continue ORDER
|
||||
}
|
||||
treePackage, err := installed.Repository.GetTree().GetDatabase().FindPackage(ass.Package)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error getting package "+ass.Package.HumanReadableString())
|
||||
}
|
||||
|
||||
toFinalize = append(toFinalize, treePackage)
|
||||
}
|
||||
}
|
||||
if !o.RunFinalizers {
|
||||
return nil
|
||||
}
|
||||
|
||||
}
|
||||
} else {
|
||||
for _, c := range toInstall {
|
||||
treePackage, err := c.Repository.GetTree().GetDatabase().FindPackage(c.Package)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "Error getting package "+c.Package.HumanReadableString())
|
||||
}
|
||||
toFinalize = append(toFinalize, treePackage)
|
||||
}
|
||||
toFinalize, err := l.getFinalizers(allRepos, solution, toInstall, o.NoDeps)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed getting package to finalize")
|
||||
}
|
||||
|
||||
return s.ExecuteFinalizers(toFinalize)
|
||||
@@ -688,6 +876,51 @@ func (l *LuetInstaller) installerWorker(i int, wg *sync.WaitGroup, c <-chan Arti
|
||||
return nil
|
||||
}
|
||||
|
||||
func checkAndPrunePath(path string) {
|
||||
// check if the target path is now empty
|
||||
targetPath := filepath.Dir(path)
|
||||
|
||||
fi, err := os.Lstat(targetPath)
|
||||
if err != nil {
|
||||
// Warning("Dir not found (it was before?) ", err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
switch mode := fi.Mode(); {
|
||||
case mode.IsDir():
|
||||
files, err := ioutil.ReadDir(targetPath)
|
||||
if err != nil {
|
||||
Warning("Failed reading folder", targetPath, err.Error())
|
||||
}
|
||||
if len(files) != 0 {
|
||||
Debug("Preserving not-empty folder", targetPath)
|
||||
return
|
||||
}
|
||||
}
|
||||
if err = os.Remove(targetPath); err != nil {
|
||||
Warning("Failed removing file (maybe not present in the system target anymore ?)", targetPath, err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
// For every path of the file, we try to prune the parent folders left behind if they are now empty
|
||||
func pruneEmptyFilePath(path string) {
|
||||
checkAndPrunePath(path)
|
||||
|
||||
// A path is, for example, /usr/bin/bar
|
||||
// we want to create an array as "/usr", "/usr/bin", "/usr/bin/bar"
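// NOTE: because splitting an absolute path yields a leading empty element,
// allPaths also starts with "/" itself; the list is reversed below so the
// deepest directories are considered first.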
|
||||
paths := strings.Split(path, string(os.PathSeparator))
|
||||
currentPath := filepath.Join(string(os.PathSeparator), paths[0])
|
||||
allPaths := []string{currentPath}
|
||||
for _, p := range paths[1:] {
|
||||
currentPath = filepath.Join(currentPath, p)
|
||||
allPaths = append(allPaths, currentPath)
|
||||
}
|
||||
match.ReverseAny(allPaths)
|
||||
for _, p := range allPaths {
|
||||
checkAndPrunePath(p)
|
||||
}
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
var cp *config.ConfigProtect
|
||||
annotationDir := ""
|
||||
@@ -710,7 +943,7 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
cp.Map(files)
|
||||
}
|
||||
|
||||
toRemove, notPresent := helpers.OrderFiles(s.Target, files)
|
||||
toRemove, notPresent := fileHelper.OrderFiles(s.Target, files)
|
||||
|
||||
// Remove from target
|
||||
for _, f := range toRemove {
|
||||
@@ -749,6 +982,8 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
if err = os.Remove(target); err != nil {
|
||||
Warning("Failed removing file (maybe not present in the system target anymore ?)", target, err.Error())
|
||||
}
|
||||
|
||||
pruneEmptyFilePath(target)
|
||||
}
|
||||
|
||||
for _, f := range notPresent {
|
||||
@@ -762,6 +997,8 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
if err = os.Remove(target); err != nil {
|
||||
Debug("Failed removing file (not present in the system target)", target, err.Error())
|
||||
}
|
||||
|
||||
pruneEmptyFilePath(target)
|
||||
}
|
||||
|
||||
err = s.Database.RemovePackageFiles(p)
|
||||
@@ -779,17 +1016,17 @@ func (l *LuetInstaller) uninstall(p pkg.Package, s *System) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.Packages, error) {
|
||||
func (l *LuetInstaller) computeUninstall(o Option, s *System, packs ...pkg.Package) (pkg.Packages, error) {
|
||||
|
||||
var toUninstall pkg.Packages
|
||||
// compute uninstall from all world - remove packages in parallel - run uninstall finalizer (in order) TODO - mark the uninstallation in db
|
||||
// Get installed definition
|
||||
checkConflicts := l.Options.CheckConflicts
|
||||
full := l.Options.FullUninstall
|
||||
if l.Options.Force == true { // IF forced, we want to remove the package and all its requires
|
||||
checkConflicts = false
|
||||
full = false
|
||||
}
|
||||
checkConflicts := o.CheckConflicts
|
||||
full := o.FullUninstall
|
||||
// if o.Force == true { // IF forced, we want to remove the package and all its requires
|
||||
// checkConflicts = false
|
||||
// full = false
|
||||
// }
|
||||
|
||||
// Create a temporary DB with the installed packages
|
||||
// so the solver is much faster finding the deptree
|
||||
@@ -799,11 +1036,11 @@ func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.P
|
||||
return toUninstall, errors.Wrap(err, "Failed create temporary in-memory db")
|
||||
}
|
||||
|
||||
if !l.Options.NoDeps {
|
||||
if !o.NoDeps {
|
||||
solv := solver.NewResolver(solver.Options{Type: l.Options.SolverOptions.Implementation, Concurrency: l.Options.Concurrency}, installedtmp, installedtmp, pkg.NewInMemoryDatabase(false), l.Options.SolverOptions.Resolver())
|
||||
var solution pkg.Packages
|
||||
var err error
|
||||
if l.Options.FullCleanUninstall {
|
||||
if o.FullCleanUninstall {
|
||||
solution, err = solv.UninstallUniverse(packs)
|
||||
if err != nil {
|
||||
return toUninstall, errors.Wrap(err, "Could not solve the uninstall constraints. Tip: try with --solver-type qlearning or with --force, or by removing packages excluding their dependencies with --nodeps")
|
||||
@@ -824,32 +1061,47 @@ func (l *LuetInstaller) computeUninstall(s *System, packs ...pkg.Package) (pkg.P
|
||||
|
||||
return toUninstall, nil
|
||||
}
|
||||
func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
|
||||
|
||||
func (l *LuetInstaller) generateUninstallFn(o Option, s *System, packs ...pkg.Package) (pkg.Packages, func() error, error) {
|
||||
for _, p := range packs {
|
||||
if packs, _ := s.Database.FindPackages(p); len(packs) == 0 {
|
||||
return errors.New("Package not found in the system")
|
||||
return nil, nil, errors.New("Package not found in the system")
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
Spinner(32)
|
||||
toUninstall, err := l.computeUninstall(s, packs...)
|
||||
toUninstall, err := l.computeUninstall(o, s, packs...)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "while computing uninstall")
|
||||
return nil, nil, errors.Wrap(err, "while computing uninstall")
|
||||
}
|
||||
SpinnerStop()
|
||||
|
||||
uninstall := func() error {
|
||||
for _, p := range toUninstall {
|
||||
err := l.uninstall(p, s)
|
||||
if err != nil && !l.Options.Force {
|
||||
if err != nil && !o.Force {
|
||||
return errors.Wrap(err, "Uninstall failed")
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
return toUninstall, uninstall, nil
|
||||
}
|
||||
|
||||
func (l *LuetInstaller) Uninstall(s *System, packs ...pkg.Package) error {
|
||||
|
||||
Spinner(32)
|
||||
o := Option{
|
||||
FullUninstall: l.Options.FullUninstall,
|
||||
Force: l.Options.Force,
|
||||
CheckConflicts: l.Options.CheckConflicts,
|
||||
FullCleanUninstall: l.Options.FullCleanUninstall,
|
||||
}
|
||||
toUninstall, uninstall, err := l.generateUninstallFn(o, s, packs...)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "while computing uninstall")
|
||||
}
|
||||
SpinnerStop()
|
||||
|
||||
if len(toUninstall) == 0 {
|
||||
Info("Nothing to do")
|
||||
return nil
|
||||
|
@@ -27,6 +27,7 @@ import (
|
||||
"github.com/mudler/luet/pkg/compiler/types/options"
|
||||
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
|
||||
"github.com/mudler/luet/pkg/helpers"
|
||||
fileHelper "github.com/mudler/luet/pkg/helpers/file"
|
||||
|
||||
. "github.com/mudler/luet/pkg/installer"
|
||||
pkg "github.com/mudler/luet/pkg/package"
|
||||
@@ -84,34 +85,34 @@ var _ = Describe("Installer", func() {
|
||||
|
||||
a, err := c.Compile(false, spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(a.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
|
||||
content1, err := helpers.Read(spec.Rel("test5"))
|
||||
content1, err := fileHelper.Read(spec.Rel("test5"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err := helpers.Read(spec.Rel("test6"))
|
||||
content2, err := fileHelper.Read(spec.Rel("test6"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("artifact5\n"))
|
||||
Expect(content2).To(Equal("artifact6\n"))
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
@@ -136,8 +137,8 @@ urls:
|
||||
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
|
||||
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
@@ -154,8 +155,8 @@ urls:
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
// Nothing should be there anymore (files, packagedb entry)
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
|
||||
|
||||
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
|
||||
Expect(err).To(HaveOccurred())
|
||||
@@ -199,21 +200,21 @@ urls:
|
||||
|
||||
artifact, err := c.Compile(false, spec)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
|
||||
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
|
||||
|
||||
content1, err := helpers.Read(spec.Rel("test5"))
|
||||
content1, err := fileHelper.Read(spec.Rel("test5"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
content2, err := helpers.Read(spec.Rel("test6"))
|
||||
content2, err := fileHelper.Read(spec.Rel("test6"))
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(content1).To(Equal("artifact5\n"))
|
||||
Expect(content2).To(Equal("artifact6\n"))
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
|
||||
|
||||
repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
@@ -223,15 +224,15 @@ urls:
|
||||
repo.SetRepositoryFile(REPOFILE_TREE_KEY, treeFile)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
Expect(repo.GetName()).To(Equal("test"))
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
|
||||
err = repo.Write(tmpdir, false, false)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
|
||||
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
|
||||
Expect(repo.GetType()).To(Equal("disk"))
|
||||
|
||||
@@ -256,8 +257,8 @@ urls:
|
||||
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
|
||||
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
@@ -274,8 +275,8 @@ urls:
|
||||
Expect(err).ToNot(HaveOccurred())
|
||||
|
||||
// Nothing should be there anymore (files, packagedb entry)
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
|
||||
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
|
||||
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
|
||||
|
||||
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
|
||||
Expect(err).To(HaveOccurred())
|
||||
@@ -319,21 +320,21 @@ urls:

artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())

content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))

Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())

repo, err := GenerateRepository(
"test",
@@ -345,15 +346,15 @@ urls:
pkg.NewInMemoryDatabase(false), nil, "", false, false, false, nil)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -383,8 +384,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())

@@ -401,8 +402,8 @@ urls:
Expect(err).ToNot(HaveOccurred())

// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())

_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
@@ -444,21 +445,21 @@ urls:

artifact, err := c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())

content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))

Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())

repo, err := GenerateRepository(
"test",
@@ -471,15 +472,15 @@ urls:
pkg.NewInMemoryDatabase(false), nil, "", false, false, false, nil)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -531,7 +532,7 @@ urls:

artifact, err = c.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())

repo, err = stubRepo(tmpdir2, "../../tests/fixtures/alpine")
Expect(err).ToNot(HaveOccurred())
@@ -604,15 +605,15 @@ urls:
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -642,8 +643,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())

@@ -660,11 +661,11 @@ urls:
Expect(err).ToNot(HaveOccurred())

// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())

// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -729,9 +730,9 @@ urls:
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())

@@ -774,8 +775,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())

@@ -793,11 +794,11 @@ urls:
Expect(err).ToNot(HaveOccurred())

// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())

// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -856,17 +857,17 @@ urls:
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -896,8 +897,8 @@ urls:
err = inst.Install([]pkg.Package{&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"}}, system)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeTrue())
_, err = systemDB.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())

@@ -914,11 +915,11 @@ urls:
Expect(err).ToNot(HaveOccurred())

// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())

// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -1014,17 +1015,17 @@ urls:
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar.gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.1.package.tar"))).ToNot(BeTrue())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -1059,13 +1060,13 @@ urls:
Expect(err).To(HaveOccurred())

Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())

Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())

err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
@@ -1115,15 +1116,15 @@ urls:
repo, err := stubRepo(tmpdir, "../../tests/fixtures/upgrade_old_repo")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, false)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("repository.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(repo.GetUrls()[0]).To(Equal(tmpdir))
Expect(repo.GetType()).To(Equal("disk"))

@@ -1158,13 +1159,13 @@ urls:
Expect(err).To(HaveOccurred())

Expect(len(system.Database.World())).To(Equal(0))
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(helpers.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).To(BeFalse())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "c"))).To(BeFalse())

Expect(helpers.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(helpers.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test5"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "test6"))).ToNot(HaveOccurred())
Expect(fileHelper.Touch(filepath.Join(fakeroot, "c"))).ToNot(HaveOccurred())

err = inst.Reclaim(system)
Expect(err).ToNot(HaveOccurred())
@@ -1218,11 +1219,11 @@ urls:
Expect(err).ToNot(HaveOccurred())

// Nothing should be there anymore (files, packagedb entry)
Expect(helpers.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(helpers.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test5"))).ToNot(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "test6"))).ToNot(BeTrue())

// New version - new files
Expect(helpers.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(fakeroot, "newc"))).To(BeTrue())
_, err = system.Database.GetPackageFiles(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).To(HaveOccurred())
_, err = system.Database.FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -28,11 +28,11 @@ import (

artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"go.uber.org/multierr"

"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/installer/client"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
@@ -723,7 +723,7 @@ func (r *LuetSystemRepository) SyncBuildMetadata(path string) error {
if err != nil {
return errors.Wrapf(err, "while downloading metadata for %s", ai.HumanReadableString())
}
if err := helpers.Move(file, filepath.Join(path, ai.GetMetadataFilePath())); err != nil {
if err := fileHelper.Move(file, filepath.Join(path, ai.GetMetadataFilePath())); err != nil {
return err
}
}
@@ -810,7 +810,7 @@ func (r *LuetSystemRepository) Sync(force bool) (*LuetSystemRepository, error) {

if r.Cached {
// Copy updated repository.yaml file to repo dir now that the tree is synced.
err = helpers.CopyFile(file, filepath.Join(repobasedir, REPOSITORY_SPECFILE))
err = fileHelper.CopyFile(file, filepath.Join(repobasedir, REPOSITORY_SPECFILE))
if err != nil {
return nil, errors.Wrap(err, "Error on update "+REPOSITORY_SPECFILE)
}
@@ -24,16 +24,16 @@ import (
"strings"
"time"

artifact "github.com/mudler/luet/pkg/compiler/types/artifact"

"github.com/mudler/luet/pkg/bus"
compiler "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/helpers/docker"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"

"github.com/mudler/luet/pkg/bus"
compiler "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
"github.com/pkg/errors"
)

@@ -61,7 +61,7 @@ func (l *dockerRepositoryGenerator) Initialize(path string, db pkg.PackageDataba
return nil
}

if err := l.pushImageFromArtifact(artifact.NewPackageArtifact(currentpath), l.b); err != nil {
if err := l.pushImageFromArtifact(artifact.NewPackageArtifact(currentpath), l.b, true); err != nil {
return errors.Wrap(err, "while pushing metadata file associated to the artifact")
}

@@ -159,16 +159,20 @@ func (d *dockerRepositoryGenerator) pushRepoMetadata(repospec string, r *LuetSys
return nil
}

func (d *dockerRepositoryGenerator) pushImageFromArtifact(a *artifact.PackageArtifact, b compiler.CompilerBackend) error {
func (d *dockerRepositoryGenerator) pushImageFromArtifact(a *artifact.PackageArtifact, b compiler.CompilerBackend, checkIfExists bool) error {
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
treeArchive, err := artifact.CreateArtifactForFile(a.Path)
if err != nil {
return errors.Wrap(err, "failed generating checksums for tree")
}
imageTree := fmt.Sprintf("%s:%s", d.imagePrefix, a.GetFileName())

return d.pushFileFromArtifact(treeArchive, imageTree)
imageTree := fmt.Sprintf("%s:%s", d.imagePrefix, docker.StripInvalidStringsFromImage(a.GetFileName()))
if checkIfExists && d.imagePush && d.b.ImageAvailable(imageTree) && !d.force {
Info("Image", imageTree, "already present, skipping. use --force-push to override")
return nil
} else {
return d.pushFileFromArtifact(treeArchive, imageTree)
}
}

// Generate creates a Docker luet repository
@@ -225,7 +229,7 @@ func (d *dockerRepositoryGenerator) Generate(r *LuetSystemRepository, imagePrefi

// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
if err := d.pushImageFromArtifact(a, d.b); err != nil {
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing runtime tree")
}

@@ -235,7 +239,7 @@ func (d *dockerRepositoryGenerator) Generate(r *LuetSystemRepository, imagePrefi
}
// we generate a new archive containing the required compressed file.
// TODO: Bundle all the extra files in 1 docker image only, instead of an image for each file
if err := d.pushImageFromArtifact(a, d.b); err != nil {
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing compiler tree")
}

@@ -251,7 +255,7 @@ func (d *dockerRepositoryGenerator) Generate(r *LuetSystemRepository, imagePrefi
return errors.Wrap(err, "failed adding Metadata file to repository")
}

if err := d.pushImageFromArtifact(a, d.b); err != nil {
if err := d.pushImageFromArtifact(a, d.b, false); err != nil {
return errors.Wrap(err, "error met while pushing docker image from artifact")
}
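The hunks above give `pushImageFromArtifact` an opt-in existence check: when the caller passes `checkIfExists`, pushing is enabled, and `--force-push` was not given, an image that is already available in the registry is skipped. A minimal, self-contained sketch of that control flow (the `fakeBackend` type and `pushUnlessPresent` helper are illustrative stand-ins, not names from the repository):

```go
package main

import "fmt"

// fakeBackend stands in for the compiler backend; only the two calls the
// sketch needs are modelled.
type fakeBackend struct{ existing map[string]bool }

func (f fakeBackend) ImageAvailable(image string) bool { return f.existing[image] }
func (f fakeBackend) Push(image string) error          { fmt.Println("pushing", image); return nil }

// pushUnlessPresent mirrors the condition used in the diff:
// checkIfExists && imagePush && ImageAvailable(image) && !force.
func pushUnlessPresent(b fakeBackend, image string, checkIfExists, imagePush, force bool) error {
	if checkIfExists && imagePush && b.ImageAvailable(image) && !force {
		fmt.Println("Image", image, "already present, skipping. use --force-push to override")
		return nil
	}
	return b.Push(image)
}

func main() {
	b := fakeBackend{existing: map[string]bool{"repo/tree:meta": true}}
	_ = pushUnlessPresent(b, "repo/tree:meta", true, true, false) // skipped: already present
	_ = pushUnlessPresent(b, "repo/tree:meta", true, true, true)  // forced push
}
```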
@@ -26,14 +26,15 @@ import (

"github.com/mudler/luet/pkg/compiler"
backend "github.com/mudler/luet/pkg/compiler/backend"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"

artifact "github.com/mudler/luet/pkg/compiler/types/artifact"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
config "github.com/mudler/luet/pkg/config"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"

. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
@@ -83,34 +84,34 @@ var _ = Describe("Repository", func() {

artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())

content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))

Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())

repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
})

It("Generate repository metadata of files ONLY referenced in a tree", func() {
@@ -156,41 +157,41 @@ var _ = Describe("Repository", func() {

artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())

artifact2, err := compiler2.Compile(false, spec2)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(artifact2.Path)).To(BeTrue())
Expect(fileHelper.Exists(artifact2.Path)).To(BeTrue())
Expect(helpers.Untar(artifact2.Path, tmpdir, false)).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())

content1, err := helpers.Read(spec.Rel("test5"))
content1, err := fileHelper.Read(spec.Rel("test5"))
Expect(err).ToNot(HaveOccurred())
content2, err := helpers.Read(spec.Rel("test6"))
content2, err := fileHelper.Read(spec.Rel("test6"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("artifact5\n"))
Expect(content2).To(Equal("artifact6\n"))

Expect(helpers.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(helpers.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("b-test-1.0.metadata.yaml"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("alpine-seed-1.0.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("alpine-seed-1.0.metadata.yaml"))).To(BeTrue())

repo, err := stubRepo(tmpdir, "../../tests/fixtures/buildable")
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(tmpdir, false, true)
Expect(err).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).To(BeTrue())

// We check now that the artifact not referenced in the tree
// (spec2) is not indexed in the repository
@@ -267,18 +268,18 @@ urls:

a, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(a.Path)).To(BeTrue())
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())

Expect(helpers.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(helpers.Exists(spec.Rel("test6"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())

repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/buildable", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())

@@ -295,7 +296,7 @@ urls:

f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
Expect(fileHelper.Read(f)).To(ContainSubstring("name: test"))

a, err = c.DownloadArtifact(&artifact.PackageArtifact{
Path: "test.tar",
@@ -310,7 +311,7 @@ urls:
Expect(err).ToNot(HaveOccurred())

Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())
Expect(helpers.Read(filepath.Join(extracted, "test6"))).To(Equal("artifact6\n"))
Expect(fileHelper.Read(filepath.Join(extracted, "test6"))).To(Equal("artifact6\n"))
})

It("generates images of virtual packages", func() {
@@ -341,15 +342,15 @@ urls:

a, err := localcompiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Exists(a.Path)).To(BeTrue())
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(helpers.Untar(a.Path, tmpdir, false)).ToNot(HaveOccurred())

repo, err := dockerStubRepo(tmpdir, "../../tests/fixtures/virtuals", repoImage, true, true)
Expect(err).ToNot(HaveOccurred())
Expect(repo.GetName()).To(Equal("test"))
Expect(helpers.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(helpers.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_SPECFILE))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(TREE_TARBALL + ".gz"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel(REPOSITORY_METAFILE + ".tar"))).ToNot(BeTrue())
err = repo.Write(repoImage, false, true)
Expect(err).ToNot(HaveOccurred())

@@ -366,7 +367,7 @@ urls:

f, err := c.DownloadFile("repository.yaml")
Expect(err).ToNot(HaveOccurred())
Expect(helpers.Read(f)).To(ContainSubstring("name: test"))
Expect(fileHelper.Read(f)).To(ContainSubstring("name: test"))

a, err = c.DownloadArtifact(&artifact.PackageArtifact{
Path: "test.tar",
@@ -382,7 +383,7 @@ urls:

Expect(a.Unpack(extracted, false)).ToNot(HaveOccurred())

Expect(helpers.DirectoryIsEmpty(extracted)).To(BeFalse())
Expect(fileHelper.DirectoryIsEmpty(extracted)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(extracted, ".virtual"))
Expect(err).ToNot(HaveOccurred())
@@ -3,6 +3,7 @@ package installer
import (
"github.com/hashicorp/go-multierror"
"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/mudler/luet/pkg/logger"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
@@ -23,7 +24,7 @@ func (s *System) ExecuteFinalizers(packs []pkg.Package) error {
var errs error
executedFinalizer := map[string]bool{}
for _, p := range packs {
if helpers.Exists(p.Rel(tree.FinalizerFile)) {
if fileHelper.Exists(p.Rel(tree.FinalizerFile)) {
out, err := helpers.RenderFiles(p.Rel(tree.FinalizerFile), p.Rel(tree.DefinitionFile))
if err != nil {
Warning("Failed rendering finalizer for ", p.HumanReadableString(), err.Error())
@@ -26,7 +26,8 @@ import (
"strconv"
"strings"

"github.com/mudler/luet/pkg/helpers"
"github.com/mudler/luet/pkg/helpers/docker"
"github.com/mudler/luet/pkg/helpers/match"
version "github.com/mudler/luet/pkg/versioner"

gentoo "github.com/Sabayon/pkgs-checker/pkg/gentoo"
@@ -118,6 +119,8 @@ type Package interface {
SetTreeDir(s string)
GetTreeDir() string

Mark() Package

JSON() ([]byte, error)
}

@@ -131,6 +134,19 @@ type Tree interface {

type Packages []Package

type DefaultPackages []*DefaultPackage

func (d DefaultPackages) Hash(salt string) string {

overallFp := ""
for _, c := range d {
overallFp = overallFp + c.HashFingerprint("join")
}
h := md5.New()
io.WriteString(h, fmt.Sprintf("%s-%s", overallFp, salt))
return fmt.Sprintf("%x", h.Sum(nil))
}

// >> Unmarshallers
// DefaultPackageFromYaml decodes a package from yaml bytes
func DefaultPackageFromYaml(yml []byte) (DefaultPackage, error) {
@@ -309,7 +325,7 @@ func (p *DefaultPackage) GetPackageName() string {
}

func (p *DefaultPackage) ImageID() string {
return strings.ReplaceAll(p.GetFingerPrint(), "+", "-")
return docker.StripInvalidStringsFromImage(p.GetFingerPrint())
}

// GetBuildTimestamp returns the package build timestamp
@@ -344,19 +360,19 @@ func (p *DefaultPackage) IsHidden() bool {
}

func (p *DefaultPackage) HasLabel(label string) bool {
return helpers.MapHasKey(&p.Labels, label)
return match.MapHasKey(&p.Labels, label)
}

func (p *DefaultPackage) MatchLabel(r *regexp.Regexp) bool {
return helpers.MapMatchRegex(&p.Labels, r)
return match.MapMatchRegex(&p.Labels, r)
}

func (p *DefaultPackage) HasAnnotation(label string) bool {
return helpers.MapHasKey(&p.Annotations, label)
return match.MapHasKey(&p.Annotations, label)
}

func (p *DefaultPackage) MatchAnnotation(r *regexp.Regexp) bool {
return helpers.MapMatchRegex(&p.Annotations, r)
return match.MapMatchRegex(&p.Annotations, r)
}

// AddUse adds a use to a package
@@ -492,6 +508,12 @@ func (p *DefaultPackage) Matches(m Package) bool {
return false
}

func (p *DefaultPackage) Mark() Package {
marked := p.Clone()
marked.SetName("@@" + marked.GetName())
return marked
}

func (p *DefaultPackage) Expand(definitiondb PackageDatabase) (Packages, error) {
var versionsInWorld Packages

@@ -645,6 +667,16 @@ func (set Packages) Best(v version.Versioner) Package {
return versionsMap[sorted[len(sorted)-1]]
}

func (set Packages) Find(packageName string) (Package, error) {
for _, p := range set {
if p.GetPackageName() == packageName {
return p, nil
}
}

return &DefaultPackage{}, errors.New("package not found")
}

func (set Packages) Unique() Packages {
var result Packages
uniq := make(map[string]Package)
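The new `DefaultPackages.Hash(salt)` in the hunk above concatenates a per-package hash fingerprint for every package in the set and mixes the result with a caller-supplied salt through md5. A standalone sketch of that scheme, using plain strings in place of the per-package `HashFingerprint("join")` call (the fingerprint values below are made up for illustration):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"io"
)

// hashWithSalt concatenates the given fingerprints and hashes them together
// with the salt, mirroring DefaultPackages.Hash in the diff above.
func hashWithSalt(fingerprints []string, salt string) string {
	overall := ""
	for _, fp := range fingerprints {
		overall += fp
	}
	h := md5.New()
	io.WriteString(h, fmt.Sprintf("%s-%s", overall, salt))
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// Same package set, different salts: different hashes.
	fps := []string{"a-test-1.0", "b-test-1.1"}
	fmt.Println(hashWithSalt(fps, "join"))
	fmt.Println(hashWithSalt(fps, "other"))
}
```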
@@ -260,24 +260,42 @@ func (a PackagesAssertions) TrueLen() int {
// and checks it's not the only one. if it's unique it marks it specially - so the hash
// which is generated is unique for the selected package
func (assertions PackagesAssertions) HashFrom(p pkg.Package) string {
return assertions.SaltedHashFrom(p, map[string]string{})
}

func (assertions PackagesAssertions) AssertionHash() string {
return assertions.SaltedAssertionHash(map[string]string{})
}

func (assertions PackagesAssertions) SaltedHashFrom(p pkg.Package, salts map[string]string) string {
var assertionhash string

// When we don't have any solution to hash for, we need to generate an UUID by ourselves
latestsolution := assertions.Drop(p)
if latestsolution.TrueLen() == 0 {
assertionhash = assertions.Mark(p).AssertionHash()
// Preserve the hash if supplied of marked packages
marked := p.Mark()
if markedHash, exists := salts[p.GetFingerPrint()]; exists {
salts[marked.GetFingerPrint()] = markedHash
}
assertionhash = assertions.Mark(p).SaltedAssertionHash(salts)
} else {
assertionhash = latestsolution.AssertionHash()
assertionhash = latestsolution.SaltedAssertionHash(salts)
}
return assertionhash
}

func (assertions PackagesAssertions) AssertionHash() string {
func (assertions PackagesAssertions) SaltedAssertionHash(salts map[string]string) string {
var fingerprint string
for _, assertion := range assertions { // Note: Always order them first!
if assertion.Value { // Tke into account only dependencies installed (get fingerprint of subgraph)
fingerprint += assertion.ToString() + "\n"
salt, exists := salts[assertion.Package.GetFingerPrint()]
if exists {
fingerprint += assertion.ToString() + salt + "\n"

} else {
fingerprint += assertion.ToString() + "\n"
}
}
}
hash := sha256.Sum256([]byte(fingerprint))
@@ -316,8 +334,7 @@ func (assertions PackagesAssertions) Mark(p pkg.Package) PackagesAssertions {

for _, a := range assertions {
if a.Package.Matches(p) {
marked := a.Package.Clone()
marked.SetName("@@" + marked.GetName())
marked := a.Package.Mark()
a = PackageAssert{Package: marked.(*pkg.DefaultPackage), Value: a.Value, Hash: a.Hash}
}
ass = append(ass, a)
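In the hunk above, `SaltedAssertionHash` appends each assertion's salt (when one is supplied for its fingerprint) to the assertion string before hashing the whole fingerprint with sha256, so the same solution hashes differently with and without salts; the decoder test hunk that follows checks exactly that. A minimal sketch of the idea, with assertions reduced to plain strings keyed by themselves (made-up values, not the real `PackageAssert` type):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// saltedHash builds one line per assertion, suffixing it with its salt when
// one exists, then hashes the concatenation, as SaltedAssertionHash does.
func saltedHash(assertions []string, salts map[string]string) string {
	fingerprint := ""
	for _, a := range assertions {
		if salt, ok := salts[a]; ok {
			fingerprint += a + salt + "\n"
		} else {
			fingerprint += a + "\n"
		}
	}
	return fmt.Sprintf("%x", sha256.Sum256([]byte(fingerprint)))
}

func main() {
	sol := []string{"a-test-1.0", "b-test-1.1"}
	plain := saltedHash(sol, nil)
	salted := saltedHash(sol, map[string]string{"a-test-1.0": "foo"})
	fmt.Println(plain != salted) // true: the salt changes the hash
}
```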
@@ -382,6 +382,9 @@ var _ = Describe("Decoder", func() {

Expect(solution.HashFrom(X)).ToNot(Equal(solution2.HashFrom(F)))
Expect(solution3.HashFrom(D)).To(Equal(solution.HashFrom(X)))
Expect(solution3.SaltedHashFrom(D, map[string]string{D.GetFingerPrint(): "foo"})).ToNot(Equal(solution3.HashFrom(D)))

Expect(solution4.SaltedHashFrom(Y, map[string]string{X.GetFingerPrint(): "foo"})).ToNot(Equal(solution4.HashFrom(Y)))

Expect(empty.AssertionHash()).ToNot(Equal(solution3.HashFrom(D)))
Expect(empty.AssertionHash()).ToNot(Equal(solution2.HashFrom(F)))
@@ -26,6 +26,7 @@ import (
"path/filepath"

"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/pkg/errors"
)
@@ -61,7 +62,7 @@ type CompilerRecipe struct {
// and the build context required for reproducible builds
func (r *CompilerRecipe) Save(path string) error {
for _, p := range r.SourcePath {
if err := helpers.CopyDir(p, filepath.Join(path, filepath.Base(p))); err != nil {
if err := fileHelper.CopyDir(p, filepath.Join(path, filepath.Base(p))); err != nil {
return errors.Wrap(err, "while copying source tree")
}
}
@@ -102,7 +103,7 @@ func (r *CompilerRecipe) Load(path string) error {

// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
if fileHelper.Exists(compileDefPath) {

dat, err := helpers.RenderFiles(compileDefPath, currentpath)
if err != nil {
@@ -149,7 +150,7 @@ func (r *CompilerRecipe) Load(path string) error {

// Instead of rdeps, have a different tree for build deps.
compileDefPath := pack.Rel(CompilerDefinitionFile)
if helpers.Exists(compileDefPath) {
if fileHelper.Exists(compileDefPath) {

raw := packsRaw.Find(pack.GetName(), pack.GetCategory(), pack.GetVersion())
buildyaml, err := ioutil.ReadFile(compileDefPath)
@@ -26,8 +26,9 @@ import (
"os"
"path/filepath"

"github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"

"github.com/pkg/errors"
)

@@ -61,8 +62,8 @@ func (r *InstallerRecipe) Save(path string) error {
}
// Instead of rdeps, have a different tree for build deps.
finalizerPath := p.Rel(FinalizerFile)
if helpers.Exists(finalizerPath) { // copy finalizer file from the source tree
helpers.CopyFile(finalizerPath, filepath.Join(dir, FinalizerFile))
if fileHelper.Exists(finalizerPath) { // copy finalizer file from the source tree
fileHelper.CopyFile(finalizerPath, filepath.Join(dir, FinalizerFile))
}

}
@@ -71,7 +72,7 @@ func (r *InstallerRecipe) Save(path string) error {

func (r *InstallerRecipe) Load(path string) error {

if !helpers.Exists(path) {
if !fileHelper.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exit.", path,
))
@@ -26,9 +26,10 @@ import (
"os"
"path/filepath"

helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
spectooling "github.com/mudler/luet/pkg/spectooling"

"github.com/pkg/errors"
)

@@ -77,7 +78,7 @@ func (r *Recipe) Load(path string) error {
// if err != nil {
// return err
// }
if !helpers.Exists(path) {
if !fileHelper.Exists(path) {
return errors.New(fmt.Sprintf(
"Path %s doesn't exit.", path,
))
tests/fixtures/copy/c/a/build.yaml (new file, vendored, 17 lines)
@@ -0,0 +1,17 @@
image: "alpine"

copy:
- package:
    name: "a"
    category: "test"
    version: ">=0"
  source: /test3
  destination: /test3
- image: "busybox"
  source: /bin/busybox
  destination: /busybox

steps:
- mkdir /bina
- cp /test3 /result
- cp -rf /busybox /bina/busybox

tests/fixtures/copy/c/a/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.2"

tests/fixtures/copy/cat/a/a/build.yaml (new file, vendored, 7 lines)
@@ -0,0 +1,7 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4

tests/fixtures/copy/cat/a/a/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.2"

tests/fixtures/copy/cat/b-1.1/build.yaml (new file, vendored, 13 lines)
@@ -0,0 +1,13 @@
requires:
- category: "test"
  name: "a"
  version: ">=0"

prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /newc
- echo artifact6 > /newnewc
- chmod +x generate.sh
- ./generate.sh

tests/fixtures/copy/cat/b-1.1/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.1"

tests/fixtures/copy/cat/b-1.1/generate.sh (new file, vendored, 1 line)
@@ -0,0 +1 @@
echo generated > /sonewc

tests/fixtures/docker_repo/c/build.yaml (new file, vendored, 11 lines)
@@ -0,0 +1,11 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo c > /c
- echo c > /cd
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/docker_repo/c/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.0"

tests/fixtures/docker_repo/cat/b/build.yaml (new file, vendored, 9 lines)
@@ -0,0 +1,9 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /test5
- echo artifact6 > /test6
- chmod +x generate.sh
- ./generate.sh

tests/fixtures/docker_repo/cat/b/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.0"

tests/fixtures/docker_repo/cat/b/generate.sh (new file, vendored, 1 line)
@@ -0,0 +1 @@
echo generated > /artifact42

tests/fixtures/docker_repo/cat/cat2/a/build.yaml (new file, vendored, 10 lines)
@@ -0,0 +1,10 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/docker_repo/cat/cat2/a/definition.yaml (new file, vendored, 8 lines)
@@ -0,0 +1,8 @@
category: "test"
name: "a"
version: "1.0"
requires:
- category: "test"
  name: "b"
  version: "1.0"

tests/fixtures/docker_repo/d/build.yaml (new file, vendored, 11 lines)
@@ -0,0 +1,11 @@
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo s > /d
- echo dd > /dd
requires:
- category: "test"
  name: "c"
  version: "1.0"

tests/fixtures/docker_repo/d/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "d"
version: "1.0"

tests/fixtures/docker_repo/interpolated/build.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
steps:
- echo s > /interpolated-{{.Values.foo}}-{{.Values.extra}}
image: "alpine"

tests/fixtures/docker_repo/interpolated/definition.yaml (new file, vendored, 4 lines)
@@ -0,0 +1,4 @@
category: "test"
name: "interpolated"
version: "1.0+2"
foo: "bar"

tests/fixtures/docker_repo/z/build.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
steps:
- echo s > /z
image: "alpine"

tests/fixtures/docker_repo/z/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "z"
version: "1.0+2"

tests/fixtures/join/c/a/build.yaml (new file, vendored, 8 lines)
@@ -0,0 +1,8 @@
join:
- name: "a"
  category: "test"
  version: ">=0"
- name: "b"
  category: "test"
  version: ">=0"
unpack: true

tests/fixtures/join/c/a/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.2"

tests/fixtures/join/cat/a/a/build.yaml (new file, vendored, 7 lines)
@@ -0,0 +1,7 @@
image: "alpine"
prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact3 > /test3
- echo artifact4 > /test4

tests/fixtures/join/cat/a/a/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.2"

tests/fixtures/join/cat/b-1.1/build.yaml (new file, vendored, 13 lines)
@@ -0,0 +1,13 @@
requires:
- category: "test"
  name: "a"
  version: ">=0"

prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /newc
- echo artifact6 > /newnewc
- chmod +x generate.sh
- ./generate.sh

tests/fixtures/join/cat/b-1.1/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.1"

tests/fixtures/join/cat/b-1.1/generate.sh (new file, vendored, 1 line)
@@ -0,0 +1 @@
echo generated > /sonewc

tests/fixtures/join_complex/c/c1/build.yaml (new file, vendored, 7 lines)
@@ -0,0 +1,7 @@
join:
- name: "a"
  category: "test"
  version: ">=0"
- name: "b"
  category: "test"
  version: ">=0"

tests/fixtures/join_complex/c/c1/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "c"
version: "1.2"

tests/fixtures/join_complex/c/f/build.yaml (new file, vendored, 8 lines)
@@ -0,0 +1,8 @@
join:
- name: "a"
  category: "test"
  version: ">=0"
- name: "b"
  category: "test"
  version: ">=0"
unpack: true

tests/fixtures/join_complex/c/f/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "f"
version: "1.2"

tests/fixtures/join_complex/cat/a/a/build.yaml (new file, vendored, 2 lines)
@@ -0,0 +1,2 @@
image: "alpine"
unpack: true

tests/fixtures/join_complex/cat/a/a/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "a"
version: "1.2"

tests/fixtures/join_complex/cat/b-1.1/build.yaml (new file, vendored, 13 lines)
@@ -0,0 +1,13 @@
requires:
- category: "test"
  name: "a"
  version: ">=0"

prelude:
- echo foo > /test
- echo bar > /test2
steps:
- echo artifact5 > /newc
- echo artifact6 > /newnewc
- chmod +x generate.sh
- ./generate.sh

tests/fixtures/join_complex/cat/b-1.1/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
category: "test"
name: "b"
version: "1.1"

tests/fixtures/join_complex/cat/b-1.1/generate.sh (new file, vendored, 1 line)
@@ -0,0 +1 @@
echo generated > /sonewc

tests/fixtures/join_complex/x/build.yaml (new file, vendored, 10 lines)
@@ -0,0 +1,10 @@
join:
- category: "test"
  name: "f"
  version: ">=0"
# this is actually a virtual, no files shipped in c
- category: "test"
  name: "c"
  version: ">=0"
steps:
- mv /newnewc /x

tests/fixtures/join_complex/x/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
name: "x"
category: "test"
version: "0.1"

tests/fixtures/join_complex/z/build.yaml (new file, vendored, 6 lines)
@@ -0,0 +1,6 @@
requires:
- category: "test"
  name: "c"
  version: ">=0"
steps:
- mv /newnewc /z

tests/fixtures/join_complex/z/definition.yaml (new file, vendored, 3 lines)
@@ -0,0 +1,3 @@
name: "z"
category: "test"
version: "0.1"
Some files were not shown because too many files have changed in this diff.