Compare commits

...

110 Commits

Author SHA1 Message Date
Ettore Di Giacinto
f0200018c7 🆕 Tag 0.22.5 2022-01-04 22:46:28 +01:00
Ettore Di Giacinto
6198eba3b8 ♻️ Drop travis file and chglog 2022-01-04 20:44:31 +01:00
Ettore Di Giacinto
9bd6730aeb 🤖 Adapt makefile/scripts to ginkgo changes 2022-01-04 18:47:21 +01:00
Ettore Di Giacinto
2bd623a61c 🔧 Drop containerd workaround
Partly reverts
37cc186c0b,
but re-enable test.
2022-01-04 17:03:21 +01:00
Ettore Di Giacinto
80bc5429bc 🆕 Tag 0.22.4 2021-12-28 22:08:04 +01:00
Ettore Di Giacinto
9274f87a80 🔧 ci: disable flaky test 2021-12-28 21:06:31 +01:00
Ettore Di Giacinto
1d651a5878 🔧 ci: disable -race on scripts/ginkgo.coverage.sh 2021-12-28 20:45:24 +01:00
Ettore Di Giacinto
f7357a60a6 🔧 ci: disable -race on tests
It seems race conditions are triggered by the underlying
go-containerregistry library.
2021-12-28 20:35:19 +01:00
Ettore Di Giacinto
57eedf8e7e 🆕 Tag 0.22.3 2021-12-28 19:02:20 +01:00
Ettore Di Giacinto
96aaf5235b 🔧 Update modules 2021-12-28 18:56:13 +01:00
Ettore Di Giacinto
196cdc5cfc 🔧 Extract common func into api function, also set sane defaults 2021-12-28 18:55:59 +01:00
Ettore Di Giacinto
719ef16161 🆕 Tag 0.22.2 2021-12-28 16:01:35 +01:00
Ettore Di Giacinto
1a9073a97a 🎨 Display installed packages in luet search
Fixes #236
2021-12-28 15:04:00 +01:00
Ettore Di Giacinto
7e825400e2 🔧 Use crane.Insecure while checking image availability
As those checks do not consume any digest, we just use them to assess
whether certain packages need to be built or not. The backend will refuse the
image if it is not configured appropriately
2021-12-28 14:54:11 +01:00
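A minimal sketch of such an availability probe using go-containerregistry's crane package; imageAvailable and the reference below are illustrative, not luet's actual code:

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

// imageAvailable returns true if the registry resolves a digest for the reference.
// The digest itself is discarded: we only care whether the image exists.
func imageAvailable(ref string) bool {
	_, err := crane.Digest(ref, crane.Insecure)
	return err == nil
}

func main() {
	fmt.Println(imageAvailable("registry.example.org/cache/foo:latest"))
}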
Ettore Di Giacinto
39e62f3321 🆕 Tag 0.22.1 2021-12-28 14:36:44 +01:00
Ettore Di Giacinto
9dcaeb0870 🔧 Defer write repository synctime 2021-12-28 12:06:09 +01:00
Ettore Di Giacinto
c4affb0f0e 🔧 Fixup live-output CLI parameter 2021-12-27 23:11:16 +01:00
Ludea
4c1b9b92af Unpack local image (#277)
* [WIP] Unpack local docker images

* unpack local image

* PR feedback + missing new function call

Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2021-12-26 20:06:15 +01:00
Ettore Di Giacinto
7f7e1418c1 Tag 0.22.0 2021-12-25 11:37:13 +01:00
Ettore Di Giacinto
e8c5e237b2 🎨 Display missing files in oscheck with --debug 2021-12-25 10:40:07 +01:00
Ettore Di Giacinto
a363b53043 🔧 Speedup package upgrades
Now we can just remove the necessary files and let the installation
handle the rest
2021-12-25 10:40:07 +01:00
Ettore Di Giacinto
c98f427156 🎨 Introduce contextualized logging
This commit is multi-fold, as it also refactors context and logger internally
as interfaces, so that it is easier to plug luet in as a library externally.

Introduces a garbage collector (related to #227) but doesn't yet handle
parallelism.

Closes #265
2021-12-21 21:54:14 +01:00
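A rough sketch of what logger-and-context-as-interfaces can look like when embedding luet as a library; the interface and type names below are illustrative, not luet's actual API:

package main

import "log"

// Logger is an illustrative logging interface a library consumer could satisfy.
type Logger interface {
	Info(args ...interface{})
	Debug(args ...interface{})
}

// Context carries the logger (and, in luet, also config, tmpdirs, ...) through calls.
type Context struct {
	Logger
}

type stdLogger struct{}

func (stdLogger) Info(args ...interface{})  { log.Println(append([]interface{}{"INFO"}, args...)...) }
func (stdLogger) Debug(args ...interface{}) { log.Println(append([]interface{}{"DEBUG"}, args...)...) }

func main() {
	ctx := &Context{Logger: stdLogger{}}
	ctx.Info("luet used as a library with a custom logger")
}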
Ettore Di Giacinto
fd90e0d627 🆕 Tag 0.21.2 2021-12-18 17:30:30 +01:00
Ettore Di Giacinto
20d01e43c7 🎨 Update repos automatically only if out-of-sync
Fixes #274
Fixes #212
2021-12-18 16:32:03 +01:00
Ettore Di Giacinto
ed63236516 🔧 Take multiple installs into account 2021-12-18 15:32:35 +01:00
Ettore Di Giacinto
50b23095b2 Tag 0.21.1 2021-12-17 23:58:36 +01:00
Ettore Di Giacinto
9665bc1481 🎨 Display generated ops, speedup filecheck 2021-12-17 23:58:36 +01:00
Ettore Di Giacinto
37f4289cdd 🔧 Allow specifying a snapshot ID #276 2021-12-17 15:41:17 +01:00
Ettore Di Giacinto
01638567a7 Tag 0.21.0 2021-12-16 00:22:17 +01:00
Ettore Di Giacinto
fbe9b038dd 🔧 Consider removals when appending packages to be uninstalled 2021-12-15 21:11:21 +01:00
Ettore Di Giacinto
0a90129e34 🔧 Restore tree imglist hash output
Fixes #271
2021-12-15 18:38:47 +01:00
Ettore Di Giacinto
b05b00c615 🔧 🎨 Enhance package upgrade strategy order
Enhance package upgrade ordering during swap, taking into account the files
shipped by packages.

This change also introduces a new method for clients to get the
underlying cache data, which the installer consumes to fix the progress bar display
2021-12-15 18:04:45 +01:00
Ettore Di Giacinto
938d41fe9e 🔧 Allow performing oscheck automatically after upgrades 2021-12-12 12:23:30 +01:00
Ettore Di Giacinto
163bd77d27 🔧 Emit post/pre upgrade events 2021-12-12 10:45:28 +01:00
Ettore Di Giacinto
309f5c0559 📒 update vendor/ 2021-12-07 18:26:35 +01:00
Ettore Di Giacinto
1f6d0cc66c 🆕 Update go-pluggable 2021-12-07 18:23:49 +01:00
Ettore Di Giacinto
07e37ea059 🔧 Add luet reinstall --installed
Fixes #273
2021-12-07 18:22:05 +01:00
Ettore Di Giacinto
432b1db116 🆕 Tag 0.20.13 2021-12-06 21:47:12 +01:00
Ettore Di Giacinto
8e16d3abd3 🔧 Use ImageID for generating dockerfile names
It is safer, and plays better with buildx
2021-12-06 21:46:15 +01:00
Ettore Di Giacinto
1f29fdd680 🔧 Add oscheck
Fixes #50
2021-12-05 23:22:56 +01:00
Ettore Di Giacinto
da85a7306f 🔧 Consistently use Tempdir in compiler 2021-12-04 21:48:43 +01:00
Ettore Di Giacinto
78307eef57 🔧 Add contextual logging accessors 2021-12-04 21:40:32 +01:00
Ettore Di Giacinto
e11521ddce 📒 Update CONTRIBUTING 2021-12-04 21:35:59 +01:00
Ettore Di Giacinto
1e6aca0ba1 🔧 CLI: add quiet mode 2021-12-04 21:35:34 +01:00
Ettore Di Giacinto
79e98af604 Handle error if we can't generate a compilation spec from a package 2021-11-27 21:12:14 +01:00
Ettore Di Giacinto
71d5b03382 Tag 0.20.12 2021-11-25 15:04:16 +01:00
Ettore Di Giacinto
a02ab16510 Don't load requires while parsing compilespecs that consume final images
Otherwise, when depending on those packages, we would try to compile the full
tree instead of reconstructing the image that results from a join, while
keeping the revdep tree invariant
2021-11-25 14:18:15 +01:00
Ettore Di Giacinto
ba0551caab Tag 0.20.11 2021-11-22 12:11:39 +01:00
Ettore Di Giacinto
44e66cc729 Use tarball.LayerFromOpener
tarball.LayerFromReader slurps the whole src into memory. The trade-off is
that we might read the file multiple times, as internally the opener is
called multiple times.
2021-11-22 11:27:46 +01:00
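For reference, a short sketch of the opener-based constructor; the file path is illustrative:

package main

import (
	"io"
	"os"

	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// LayerFromOpener re-opens the source each time the layer is read,
	// so the whole tarball never has to sit in memory at once.
	layer, err := tarball.LayerFromOpener(func() (io.ReadCloser, error) {
		return os.Open("package.tar")
	})
	if err != nil {
		panic(err)
	}
	digest, err := layer.Digest() // hashes the layer by streaming from the opener
	if err != nil {
		panic(err)
	}
	println(digest.String())
}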
Ettore Di Giacinto
80412e2e5d Add luet util pack 2021-11-18 15:33:18 +01:00
Ettore Di Giacinto
df2be8acfe Tag 0.20.10 2021-11-15 22:14:45 +01:00
Ettore Di Giacinto
a2d91a2aee fixup: sanitize metadata image names 2021-11-15 21:10:15 +01:00
Ettore Di Giacinto
bb88fe7e9c 🆕 Tag 0.20.9 2021-11-10 16:29:48 +01:00
Ettore Di Giacinto
702a9f17db Drop code which is already called by containerd
Also drop direct xattrs handling
2021-11-10 16:28:22 +01:00
Ettore Di Giacinto
c58a462e79 🆕 Tag 0.20.8 2021-11-05 23:44:27 +01:00
Ettore Di Giacinto
1e78570c50 Allow pushing final images while compiling
Add --push-final-images, --push-final-images-repository and
--push-final-images-force to luet build.

--push-final-images enables pushing final artifacts during build. Each
package built will be packed and pushed to the final repository
specified with --push-final-images-repository. By default, if no
final repository is specified and pushing final images is enabled, it
falls back to the cache repository.

--push-final-images-force allows force-pushing final images even if
they are already available on the specified registry
2021-11-05 23:03:29 +01:00
Ettore Di Giacinto
0589bead99 Tag 0.20.7 2021-11-04 13:02:23 +01:00
Ettore Di Giacinto
fba420865a 🔧 Preserve suid, sgid and sticky bits when extracting images 2021-11-04 11:34:36 +01:00
Ettore Di Giacinto
9857bea5ff 🎨 Lazy progressbar start 2021-10-31 21:21:08 +01:00
Ettore Di Giacinto
100c313804 Tag 0.20.6 2021-10-31 12:21:15 +01:00
Ettore Di Giacinto
d43b8c4af0 Attach platform data when creating images from tars 2021-10-31 10:48:35 +01:00
Ettore Di Giacinto
384ae8e833 Tag 0.20.5 2021-10-29 11:34:28 +02:00
Ettore Di Giacinto
c7f9708f90 Add CreateTar to image API
Add an API call which uses go-containerregistry to create OCI images from
standard tar archives.

Consume the new API when generating final images instead of docker-building
them, and adapt/add tests as necessary.

This change now allows carrying over xattrs to final images.

Fixes #266
2021-10-29 10:35:03 +02:00
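A hedged sketch of building an OCI image from a plain tar with go-containerregistry, in the spirit of the API described above; buildImageFromTar is an illustrative name, not the actual function added by this commit:

package image

import (
	"io"
	"os"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/empty"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// buildImageFromTar wraps a standard tar archive into a single-layer OCI image.
// xattrs recorded in the tar can be carried over, since the tar stream becomes
// the layer content as-is.
func buildImageFromTar(tarPath string) (v1.Image, error) {
	layer, err := tarball.LayerFromOpener(func() (io.ReadCloser, error) {
		return os.Open(tarPath)
	})
	if err != nil {
		return nil, err
	}
	return mutate.AppendLayers(empty.Image, layer)
}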
Itxaka
1b35a674ea Print plugin success messages + print plugin location on load (#267)
* Report plugin state on success

We have a state field in the plugin response that is not being used
for anything. This patch makes luet print the state reported by the
plugin, if it is not empty, as a way for plugins to report data on success
to users. If the field is empty it will be ignored.

Signed-off-by: Itxaka <igarcia@suse.com>

* Print plugin path

This patch adds the plugin location to the printed plugin list for a
richer view of the loaded plugins

Signed-off-by: Itxaka <igarcia@suse.com>
2021-10-28 11:42:56 +02:00
Ettore Di Giacinto
5e8a9c75dc Tag 0.20.4 2021-10-27 12:05:03 +02:00
Ettore Di Giacinto
b5def989ac Drop unused code 2021-10-27 11:22:06 +02:00
Ettore Di Giacinto
fdb49ce70d cli: render table/lists only on terminal output 2021-10-27 11:17:38 +02:00
Ettore Di Giacinto
37cc186c0b delta: trim path when computing src files set
The path contains a trailing "/" which wouldn't match when we walk dst, as
it's not there.

That had the unpleasant effect of creating empty folders in the
destinations
2021-10-27 11:00:10 +02:00
Ettore Di Giacinto
f2f85a2384 ci: Add back -race 2021-10-26 18:05:34 +02:00
Ettore Di Giacinto
9c17432ee9 Tag 0.20.3 2021-10-26 17:34:03 +02:00
Ettore Di Giacinto
9799b7c94b Add Image reference by pipe, refactor 2021-10-26 16:56:49 +02:00
Ettore Di Giacinto
5a7e97d0fb Update vendor 2021-10-26 16:56:49 +02:00
Ettore Di Giacinto
262d09dfbc Lower message levels 2021-10-26 16:56:49 +02:00
Ettore Di Giacinto
b974f44095 Add cache to avoid RAM consumption
When we have huge file lists we can burst too much into RAM, which would
cause OOMs on certain devices. Use instead a smart cache which
automatically drops to disk when necessary.
2021-10-26 16:56:49 +02:00
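An illustrative threshold cache in that spirit (not luet's implementation): small payloads stay in memory, larger ones spill to a temporary file.

package cache

import "os"

// Entry keeps small values in memory and larger ones in a temporary file.
type Entry struct {
	inMemory []byte
	onDisk   string
}

// Store spills to disk once the payload crosses threshold bytes.
func Store(data []byte, threshold int) (*Entry, error) {
	if len(data) <= threshold {
		return &Entry{inMemory: data}, nil
	}
	f, err := os.CreateTemp("", "cache-")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return nil, err
	}
	return &Entry{onDisk: f.Name()}, nil
}

// Bytes loads the payload back, reading from disk only when it was spilled.
func (e *Entry) Bytes() ([]byte, error) {
	if e.onDisk == "" {
		return e.inMemory, nil
	}
	return os.ReadFile(e.onDisk)
}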
Ettore Di Giacinto
35fcd868ee Switch to ondisk also when unpacking FS
From benchmarks it still seems to be faster. Add a note for a future
improvement
2021-10-26 16:56:49 +02:00
Ettore Di Giacinto
aea3cdff8d Use ondisk reference for deltas 2021-10-26 16:56:49 +02:00
Ettore Di Giacinto
daa9eb98d2 Walk destination only once when computing delta
Avoid the double pass by constructing the list on the fly
2021-10-26 16:56:49 +02:00
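A sketch of the single-pass idea: with the source file set already in hand, one walk of dst is enough to collect the additions (package and names here are illustrative):

package delta

import (
	"io/fs"
	"path/filepath"
)

// Additions walks dst once and returns every path not present in srcFiles.
func Additions(dst string, srcFiles map[string]struct{}) ([]string, error) {
	var added []string
	err := filepath.WalkDir(dst, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dst, path)
		if err != nil {
			return err
		}
		if _, ok := srcFiles[rel]; !ok {
			added = append(added, rel)
		}
		return nil
	})
	return added, err
}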
Itxaka
1f0324c452 Log debug before failing (#263)
If a plugin failed, we were skipping the debug info which is kind of
useful :=)

Signed-off-by: Itxaka <igarcia@suse.com>
2021-10-26 11:18:56 +02:00
Ettore Di Giacinto
e705c471eb Tag 0.20.2 2021-10-26 00:30:12 +02:00
Itxaka
7cd455fff4 Set proper error message on plugin failure
Currently we set the error message as a single run-together sentence with
no spaces, which is pretty ugly:

| FATA[0000] Pluginluet-cosignat/usr/local/bin/luet-cosignErrorerror while executing plugin: exit status 1

Signed-off-by: Itxaka <igarcia@suse.com>
2021-10-26 00:28:30 +02:00
Ettore Di Giacinto
144c409908 Disable buffer on docker remote
Otherwise this causes the full tarball to be loaded into memory
2021-10-25 23:57:09 +02:00
Ettore Di Giacinto
f6bb7a9405 Make sure to pull images before generating artifacts
Fixes #262
2021-10-25 23:56:38 +02:00
Ettore Di Giacinto
9d3af649f1 Tag 0.20.1 2021-10-24 22:31:20 +02:00
Ettore Di Giacinto
1b1ab6225c Use table lookup for checking addition files 2021-10-24 21:55:42 +02:00
Ettore Di Giacinto
bdcf26401c Prepare for tagging 0.20.0 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
21247331e0 Update README 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
b77b71f6cd cmd: Create output build dir if it doesn't exist already 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
bb40b5d1b7 update vendor 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
c220eac061 Move bus to api/core 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
67a07e7c5a Drop link to moby fork 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
c897bffdfc Drop untar 2021-10-24 19:07:41 +02:00
Ettore Di Giacinto
52ad2b5cfa Fixup config protect 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
6ff22d923c Make default build dir over context temp 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
37a9a3ef55 use containerd to uncompress 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
4a45b5410d Introduce lock for installation
It is used to ensure integrity and that we install one package at a
time. This ensures that we extract correctly, and that we are not
too I/O-intensive relative to the available CPU
2021-10-24 18:26:30 +02:00
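A toy illustration of serializing installs with a mutex; installOne is a hypothetical stand-in for the actual extraction, not luet's installer code:

package main

import (
	"fmt"
	"sync"
)

// installMu ensures only one package is extracted at a time,
// bounding concurrent I/O regardless of how many workers are running.
var installMu sync.Mutex

func installOne(name string) {
	installMu.Lock()
	defer installMu.Unlock()
	fmt.Println("extracting", name) // placeholder for the real extraction
}

func main() {
	var wg sync.WaitGroup
	for _, p := range []string{"system/foo", "shells/bash"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			installOne(p)
		}(p)
	}
	wg.Wait()
}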
Ettore Di Giacinto
6b7e77df65 test: drop --race 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
819271b9bd Fixup tests 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
063f704057 update vendor 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
ebbb3aad27 Use API also when pulling from helpers used in client 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
ad489c2157 tests: pull image before running 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
454a560f4c Take the OS separator into account in extraction 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
a0e7e9ba08 ci: split integration tests 2021-10-24 18:26:30 +02:00
Ettore Di Giacinto
acd685b927 Extract with new image API 2021-10-24 18:26:25 +02:00
Ettore Di Giacinto
ab251fefce update vendor 2021-10-24 18:26:25 +02:00
Ettore Di Giacinto
6a9f19941a Add crane-based methods for extraction
- create a new api package to encapsulate image manipulation
- use new api method to calculate delta

Fixes #258
Fixes #204
Fixes #90
2021-10-24 18:26:08 +02:00
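A hedged sketch of crane-based extraction, pulling an image and flattening it to a filesystem tar; extractImage is an illustrative helper, not the signature of the new api package:

package image

import (
	"os"

	"github.com/google/go-containerregistry/pkg/crane"
)

// extractImage pulls ref and writes its flattened filesystem to outTar.
func extractImage(ref, outTar string) error {
	img, err := crane.Pull(ref)
	if err != nil {
		return err
	}
	f, err := os.Create(outTar)
	if err != nil {
		return err
	}
	defer f.Close()
	// Export squashes all layers and streams the resulting rootfs as a tar.
	return crane.Export(img, f)
}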
Ettore Di Giacinto
d44befe9ff tests: add context unit tests 2021-10-23 15:26:45 +02:00
Ettore Di Giacinto
73c6cff15b Tag 0.19.2 2021-10-23 12:40:55 +02:00
Ettore Di Giacinto
65892f9bfc ux: Display only success on green 2021-10-23 12:40:55 +02:00
Ettore Di Giacinto
315bfb5a54 Move http timeout to the general configuration
Fixes https://github.com/mudler/luet/issues/250, as it is now documented
in the cli --help too.
2021-10-23 11:41:15 +02:00
Ettore Di Giacinto
57c8236184 fixup: cache miss with docker client 2021-10-23 01:50:01 +02:00
1831 changed files with 112382 additions and 78160 deletions

View File

@@ -1,49 +0,0 @@
{{ if .Versions -}}
{{ if .Unreleased.CommitGroups -}}
{{ range .Unreleased.CommitGroups -}}
### {{ .Title }}
{{ range .Commits -}}
- {{ if .Scope }}**{{ .Scope }}:** {{ end }}{{ .Subject }}
{{ end }}
{{ end -}}
{{ else }}
{{ range .Unreleased.Commits -}}
- {{ if .Scope }}**{{ .Scope }}:** {{ end }}{{ .Subject }}
{{ end }}
{{ end -}}
{{ end -}}
{{ range .Versions }}
<a name="{{ .Tag.Name }}"></a>
{{ if .CommitGroups -}}
{{ range .CommitGroups -}}
### {{ .Title }}
{{ range .Commits -}}
- {{ if .Scope }}**{{ .Scope }}:** {{ end }}{{ .Subject }}
{{ end }}
{{ end -}}
{{ else }}
{{ range .Commits -}}
- {{ if .Scope }}**{{ .Scope }}:** {{ end }}{{ .Subject }} (https://github.com/mudler/luet/commit/{{.Hash.Short}})
{{ end }}
{{ end -}}
{{- if .NoteGroups -}}
{{ range .NoteGroups -}}
### {{ .Title }}
{{ range .Notes }}
{{ .Body }}
{{ end }}
{{ end -}}
{{ end -}}
{{ end -}}
{{- if .Versions }}
[Unreleased]: {{ .Info.RepositoryURL }}/compare/{{ $latest := index .Versions 0 }}{{ $latest.Tag.Name }}...HEAD
{{ range .Versions -}}
{{ if .Tag.Previous -}}
[{{ .Tag.Name }}]: {{ $.Info.RepositoryURL }}/compare/{{ .Tag.Previous.Name }}...{{ .Tag.Name }}
{{ end -}}
{{ end -}}
{{ end -}}

View File

@@ -1,27 +0,0 @@
style: github
template: CHANGELOG.tpl.md
info:
title: CHANGELOG
repository_url: https://github.com/mudler/luet
options:
commits:
# filters:
# Type:
# - feat
# - fix
# - perf
# - refactor
commit_groups:
title_maps:
feat: Features
fix: Bug Fixes
perf: Performance Improvements
refactor: Code Refactoring
ci: Continous Integration
header:
pattern: "(.*)"
pattern_maps:
- Subject
notes:
keywords:
- BREAKING CHANGE

View File

@@ -2,7 +2,7 @@
on: pull_request
name: Build and Test
jobs:
tests-integration:
tests-integration-img:
strategy:
matrix:
go-version: [1.16.x]
@@ -24,6 +24,25 @@ jobs:
sudo chmod a+x "/usr/bin/img"
- name: Tests with Img backend
run: sudo -E env "PATH=$PATH" env "LUET_BACKEND=img" make test-integration
tests-integration:
strategy:
matrix:
go-version: [1.16.x]
platform: [ubuntu-latest]
runs-on: ${{ matrix.platform }}
steps:
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
- name: Checkout code
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Install deps
run: |
sudo apt-get install -y upx && sudo -E env "PATH=$PATH" make deps
- name: Tests
run: sudo -E env "PATH=$PATH" make test-integration
tests-unit:
@@ -55,4 +74,8 @@ jobs:
- name: Build
run: sudo -E env "PATH=$PATH" make multiarch-build-small
- name: Tests
run: sudo -E env "PATH=$PATH" make test-coverage
run: sudo -E env "PATH=$PATH" make coverage
- name: Codecov
uses: codecov/codecov-action@v2.1.0
with:
file: coverage.txt

View File

@@ -4,8 +4,8 @@ concurrency:
name: Build on push
jobs:
tests-integration:
name: Integration tests
tests-integration-img:
name: Integration tests with img
runs-on: ubuntu-latest
steps:
- name: Install Go
@@ -14,8 +14,6 @@ jobs:
go-version: 1.16.x
- name: Checkout code
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Login to quay
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo -E docker login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Install deps
@@ -30,6 +28,21 @@ jobs:
sudo -E env "PATH=$PATH" \
env "LUET_BACKEND=img" \
make test-integration
tests-integration:
name: Integration tests
runs-on: ubuntu-latest
steps:
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.16.x
- name: Checkout code
uses: actions/checkout@v2
- name: Login to quay
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo -E docker login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Install deps
run: |
sudo apt-get install -y upx && sudo -E env "PATH=$PATH" make deps
- name: Tests
run: |
sudo -E \
@@ -49,8 +62,6 @@ jobs:
go-version: 1.16.x
- name: Checkout code
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Login to quay
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo -E docker login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Install deps
@@ -73,4 +84,9 @@ jobs:
env "TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE_REPOSITORY=${{ secrets.DOCKER_TESTING_UNIT_TEST_IMAGE }}" \
make test-coverage
make coverage
- name: Codecov
uses: codecov/codecov-action@v2.1.0
with:
file: coverage.txt

View File

@@ -7,7 +7,7 @@ concurrency:
name: Test and Release on tag
jobs:
tests-integration:
tests-integration-img:
name: Integration tests
runs-on: ubuntu-latest
steps:
@@ -33,6 +33,23 @@ jobs:
sudo -E env "PATH=$PATH" \
env "LUET_BACKEND=img" \
make test-integration
tests-integration:
name: Integration tests
runs-on: ubuntu-latest
steps:
- name: Install Go
uses: actions/setup-go@v2
with:
go-version: 1.16.x
- name: Checkout code
uses: actions/checkout@v2
- name: setup-docker
uses: docker-practice/actions-setup-docker@0.0.1
- name: Login to quay
run: echo ${{ secrets.DOCKER_TESTING_PASSWORD }} | sudo -E docker login -u ${{ secrets.DOCKER_TESTING_USERNAME }} --password-stdin quay.io
- name: Install deps
run: |
sudo apt-get install -y upx && sudo -E env "PATH=$PATH" make deps
- name: Tests
run: |
sudo -E \
@@ -70,12 +87,16 @@ jobs:
env "TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE=${{ secrets.DOCKER_TESTING_IMAGE }}" \
env "UNIT_TEST_DOCKER_IMAGE_REPOSITORY=${{ secrets.DOCKER_TESTING_UNIT_TEST_IMAGE }}" \
make test-coverage
make coverage
- name: Codecov
uses: codecov/codecov-action@v2.1.0
with:
file: coverage.txt
release:
name: Release
runs-on: ubuntu-latest
needs: [ "tests-integration","tests-unit" ]
needs: [ "tests-integration-img", "tests-integration","tests-unit" ]
steps:
- name: Install Go
uses: actions/setup-go@v2
@@ -91,4 +112,4 @@ jobs:
version: latest
args: release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,35 +0,0 @@
dist: bionic
language: go
go:
- "1.14"
env:
global:
- "GO15VENDOREXPERIMENT=1"
jobs:
- "DOCKER_BUILDKIT=0"
- "DOCKER_BUILDKIT=1"
before_install:
- sudo rm -rf /var/lib/apt/lists/*
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) edge"
- sudo apt-get update
- echo '{"experimental":true}' | sudo tee /etc/docker/daemon.json
- export DOCKER_CLI_EXPERIMENTAL=enabled
- sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
- mkdir -vp ~/.docker/cli-plugins/
- curl --silent -L "https://github.com/docker/buildx/releases/download/v0.3.0/buildx-v0.3.0.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
- chmod a+x ~/.docker/cli-plugins/docker-buildx
- docker buildx version
- sudo -E env "PATH=$PATH" apt-get install -y libcap2-bin
- sudo -E env "PATH=$PATH" make deps
script:
- sudo -E env "PATH=$PATH" make multiarch-build test-integration test-coverage
#after_success:
# - |
# if [ -n "$TRAVIS_TAG" ] && [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
# sudo -E env "PATH=$PATH" git config --global user.name "Deployer" && git config --global user.email foo@bar.com
# sudo -E env "PATH=$PATH" go get github.com/tcnksm/ghr
# sudo -E env "PATH=$PATH" ghr -u mudler -r luet --replace $TRAVIS_TAG release/
# fi

View File

@@ -17,7 +17,7 @@ Join us in [slack](https://luet.slack.com/join/shared_invite/enQtOTQxMjcyNDQ0MDU
## All Code Changes Happen Through Pull Requests
Pull requests are the best way to propose changes to the codebase. We actively welcome your pull requests:
1. Fork the repo you want to contribute to and create your branch from `develop`.
1. Fork the repo you want to contribute to and create your branch from `master`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the [documentation](https://github.com/Luet-lab/docs).
4. Ensure the test suite passes.

View File

@@ -17,9 +17,10 @@ fmt:
.PHONY: test
test:
GO111MODULE=off go get github.com/onsi/ginkgo/ginkgo
GO111MODULE=off go get github.com/onsi/ginkgo/v2
go install github.com/onsi/ginkgo/v2/ginkgo
GO111MODULE=off go get github.com/onsi/gomega/...
ginkgo -race -r -flakeAttempts 3 ./...
ginkgo -r --flake-attempts=3 ./...
.PHONY: test-integration
test-integration:
@@ -27,11 +28,7 @@ test-integration:
.PHONY: coverage
coverage:
go test ./... -race -coverprofile=coverage.txt -covermode=atomic
.PHONY: test-coverage
test-coverage:
scripts/ginkgo.coverage.sh --codecov
ginkgo --flake-attempts=3 --fail-fast -cover -covermode=atomic -coverprofile=coverage.txt -r .
.PHONY: help
help:

View File

@@ -22,7 +22,7 @@ It is written entirely in Golang and where used as package manager, it can run i
## In a glance
- Luet can reuse Gentoo's portage tree hierarchy, and it is heavily inspired from it.
- It builds, installs, uninstalls and perform upgrades on machines
- It builds from containers, but installs, uninstalls and perform upgrades on machines
- Installer doesn't depend on anything ( 0 dep installer !), statically built
- You can install it aside also with your current distro package manager, and start building and distributing your packages
- [Support for packages as "layers"](https://luet-lab.github.io/docs/docs/concepts/packages/specfile/#building-strategies)
@@ -30,6 +30,7 @@ It is written entirely in Golang and where used as package manager, it can run i
- Support for [collections](https://luet-lab.github.io/docs/docs/concepts/packages/collections/) and [templated package definitions](https://luet-lab.github.io/docs/docs/concepts/packages/templates/)
- [Can be extended with Plugins and Extensions](https://luet-lab.github.io/docs/docs/concepts/plugins-and-extensions/)
- [Can build packages in Kubernetes (experimental)](https://github.com/mudler/luet-k8s)
- Uses containerd/go-containerregistry to manipulate images - works also daemonless with the img backend
## Install

View File

@@ -29,8 +29,8 @@ import (
"github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
fileHelpers "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
tree "github.com/mudler/luet/pkg/tree"
"github.com/spf13/cobra"
@@ -38,6 +38,10 @@ import (
)
var buildCmd = &cobra.Command{
// Skip processing output
Annotations: map[string]string{
util.CommandProcessOutput: "",
},
Use: "build <package name> <package name> <package name> ...",
Short: "build a package or a tree",
Long: `Builds one or more packages from a tree (current directory is implied):
@@ -82,9 +86,6 @@ Build packages specifying multiple definition trees:
viper.BindPFlag("wait", cmd.Flags().Lookup("wait"))
viper.BindPFlag("keep-images", cmd.Flags().Lookup("keep-images"))
util.BindSolverFlags(cmd)
viper.BindPFlag("general.show_build_output", cmd.Flags().Lookup("live-output"))
viper.BindPFlag("backend-args", cmd.Flags().Lookup("backend-args"))
},
@@ -92,7 +93,7 @@ Build packages specifying multiple definition trees:
treePaths := viper.GetStringSlice("tree")
dst := viper.GetString("destination")
concurrency := util.DefaultContext.Config.GetGeneral().Concurrency
concurrency := util.DefaultContext.Config.General.Concurrency
backendType := viper.GetString("backend")
privileged := viper.GetBool("privileged")
revdeps := viper.GetBool("revdeps")
@@ -109,14 +110,13 @@ Build packages specifying multiple definition trees:
onlyTarget, _ := cmd.Flags().GetBool("only-target-package")
full, _ := cmd.Flags().GetBool("full")
rebuild, _ := cmd.Flags().GetBool("rebuild")
pushFinalImages, _ := cmd.Flags().GetBool("push-final-images")
pushFinalImagesRepository, _ := cmd.Flags().GetString("push-final-images-repository")
pushFinalImagesForce, _ := cmd.Flags().GetBool("push-final-images-force")
var results Results
backendArgs := viper.GetStringSlice("backend-args")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
util.DefaultContext.Config.GetLogging().SetLogLevel("error")
}
pretend, _ := cmd.Flags().GetBool("pretend")
fromRepo, _ := cmd.Flags().GetBool("from-repositories")
@@ -144,17 +144,17 @@ Build packages specifying multiple definition trees:
util.DefaultContext.Info("Building in", dst)
opts := util.SetSolverConfig(util.DefaultContext)
pullRepo, _ := cmd.Flags().GetStringArray("pull-repository")
if !fileHelpers.Exists(dst) {
os.MkdirAll(dst, 0600)
util.DefaultContext.Debug("Creating destination folder", dst)
}
util.DefaultContext.Config.GetGeneral().ShowBuildOutput = viper.GetBool("general.show_build_output")
opts := util.DefaultContext.GetConfig().Solver
pullRepo, _ := cmd.Flags().GetStringArray("pull-repository")
util.DefaultContext.Debug("Solver", opts.CompactString())
opts.Options = solver.Options{Type: solver.SingleCoreSimple, Concurrency: concurrency}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(),
options.NoDeps(nodeps),
compileropts := []options.Option{options.NoDeps(nodeps),
options.WithBackendType(backendType),
options.PushImages(push),
options.WithBuildValues(values),
@@ -162,7 +162,7 @@ Build packages specifying multiple definition trees:
options.WithPushRepository(imageRepository),
options.Rebuild(rebuild),
options.WithTemplateFolder(util.TemplateFolders(util.DefaultContext, fromRepo, treePaths)),
options.WithSolverOptions(*opts),
options.WithSolverOptions(opts),
options.Wait(wait),
options.OnlyTarget(onlyTarget),
options.PullFirst(pull),
@@ -171,8 +171,21 @@ Build packages specifying multiple definition trees:
options.WithContext(util.DefaultContext),
options.BackendArgs(backendArgs),
options.Concurrency(concurrency),
options.WithCompressionType(compression.Implementation(compressionType)),
)
options.WithCompressionType(compression.Implementation(compressionType))}
if pushFinalImages {
compileropts = append(compileropts, options.EnablePushFinalImages)
if pushFinalImagesForce {
compileropts = append(compileropts, options.ForcePushFinalImages)
}
if pushFinalImagesRepository != "" {
compileropts = append(compileropts, options.WithFinalRepository(pushFinalImagesRepository))
} else if imageRepository != "" {
compileropts = append(compileropts, options.WithFinalRepository(imageRepository))
}
}
luetCompiler := compiler.NewLuetCompiler(compilerBackend, generalRecipe.GetDatabase(), compileropts...)
if full {
specs, err := luetCompiler.FromDatabase(generalRecipe.GetDatabase(), true, dst)
@@ -296,6 +309,11 @@ func init() {
buildCmd.Flags().Bool("privileged", true, "Privileged (Keep permissions)")
buildCmd.Flags().Bool("revdeps", false, "Build with revdeps")
buildCmd.Flags().Bool("all", false, "Build all specfiles in the tree")
buildCmd.Flags().Bool("push-final-images", false, "Push final images while building")
buildCmd.Flags().Bool("push-final-images-force", false, "Override existing images")
buildCmd.Flags().String("push-final-images-repository", "", "Repository where to push final images to")
buildCmd.Flags().Bool("full", false, "Build all packages (optimized)")
buildCmd.Flags().StringSlice("values", []string{}, "Build values file to interpolate with each package")
buildCmd.Flags().StringSliceP("backend-args", "a", []string{}, "Backend args")
@@ -315,7 +333,6 @@ func init() {
buildCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
buildCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
buildCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
buildCmd.Flags().Bool("live-output", util.DefaultContext.Config.GetGeneral().ShowBuildOutput, "Enable live output of the build phase.")
buildCmd.Flags().Bool("from-repositories", false, "Consume the user-defined repositories to pull specfiles from")
buildCmd.Flags().Bool("rebuild", false, "To combine with --pull. Allows to rebuild the target package even if an image is available, against a local values file")
buildCmd.Flags().Bool("pretend", false, "Just print what packages will be compiled")

View File

@@ -31,28 +31,23 @@ var cleanupCmd = &cobra.Command{
Use: "cleanup",
Short: "Clean packages cache.",
Long: `remove downloaded packages tarballs and clean cache directory`,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
},
Run: func(cmd *cobra.Command, args []string) {
var cleaned int = 0
util.SetSystemConfig(util.DefaultContext)
// Check if cache dir exists
if fileHelper.Exists(util.DefaultContext.Config.GetSystem().GetSystemPkgsCacheDirPath()) {
if fileHelper.Exists(util.DefaultContext.Config.System.PkgsCachePath) {
files, err := ioutil.ReadDir(util.DefaultContext.Config.GetSystem().GetSystemPkgsCacheDirPath())
files, err := ioutil.ReadDir(util.DefaultContext.Config.System.PkgsCachePath)
if err != nil {
util.DefaultContext.Fatal("Error on read cachedir ", err.Error())
}
for _, file := range files {
if util.DefaultContext.Config.GetGeneral().Debug {
util.DefaultContext.Info("Removing ", file.Name())
}
util.DefaultContext.Debug("Removing ", file.Name())
err := os.RemoveAll(
filepath.Join(util.DefaultContext.Config.GetSystem().GetSystemPkgsCacheDirPath(), file.Name()))
filepath.Join(util.DefaultContext.Config.System.PkgsCachePath, file.Name()))
if err != nil {
util.DefaultContext.Fatal("Error on removing", file.Name())
}
@@ -60,14 +55,11 @@ var cleanupCmd = &cobra.Command{
}
}
util.DefaultContext.Info(fmt.Sprintf("Cleaned: %d files from %s", cleaned, util.DefaultContext.Config.GetSystem().GetSystemPkgsCacheDirPath()))
util.DefaultContext.Info(fmt.Sprintf("Cleaned: %d files from %s", cleaned, util.DefaultContext.Config.System.PkgsCachePath))
},
}
func init() {
cleanupCmd.Flags().String("system-dbpath", "", "System db path")
cleanupCmd.Flags().String("system-target", "", "System rootpath")
cleanupCmd.Flags().String("system-engine", "", "System DB engine")
RootCmd.AddCommand(cleanupCmd)
}

View File

@@ -100,6 +100,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
helpers.CheckErr(err)
force := viper.GetBool("force-push")
imagePush := viper.GetBool("push-images")
snapshotID, _ := cmd.Flags().GetString("snapshot-id")
opts := []installer.RepositoryOption{
installer.WithSource(viper.GetString("packages")),
@@ -163,7 +164,7 @@ Create a repository from the metadata description defined in the luet.yaml confi
if metaName != "" {
metaFile.SetFileName(metaName)
}
repo.SetSnapshotID(snapshotID)
repo.SetRepositoryFile(installer.REPOFILE_TREE_KEY, treeFile)
repo.SetRepositoryFile(installer.REPOFILE_META_KEY, metaFile)
@@ -197,6 +198,7 @@ func init() {
createrepoCmd.Flags().String("meta-compression", "none", "Compression alg: none, gzip, zstd")
createrepoCmd.Flags().String("meta-filename", installer.REPOSITORY_METAFILE+".tar", "Repository metadata filename")
createrepoCmd.Flags().Bool("from-repositories", false, "Consume the user-defined repositories to pull specfiles from")
createrepoCmd.Flags().String("snapshot-id", "", "Unique ID to use when creating repository snapshots")
RootCmd.AddCommand(createrepoCmd)
}

View File

@@ -27,7 +27,7 @@ import (
)
func NewDatabaseCreateCommand() *cobra.Command {
var ans = &cobra.Command{
return &cobra.Command{
Use: "create <artifact_metadata1.yaml> <artifact_metadata1.yaml>",
Short: "Insert a package in the system DB",
Long: `Inserts a package in the system database:
@@ -42,12 +42,8 @@ The yaml must contain the package definition, and the file list at least.
For reference, inspect a "metadata.yaml" file generated while running "luet build"`,
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
},
Run: func(cmd *cobra.Command, args []string) {
util.SetSystemConfig(util.DefaultContext)
systemDB := util.DefaultContext.Config.GetSystemDB()
for _, a := range args {
@@ -81,9 +77,4 @@ For reference, inspect a "metadata.yaml" file generated while running "luet buil
},
}
ans.Flags().String("system-dbpath", "", "System db path")
ans.Flags().String("system-target", "", "System rootpath")
ans.Flags().String("system-engine", "", "System DB engine")
return ans
}

View File

@@ -36,12 +36,9 @@ func NewDatabaseGetCommand() *cobra.Command {
To return also files:
$ luet database get --files system/foo`,
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
},
Run: func(cmd *cobra.Command, args []string) {
showFiles, _ := cmd.Flags().GetBool("files")
util.SetSystemConfig(util.DefaultContext)
systemDB := util.DefaultContext.Config.GetSystemDB()
@@ -77,9 +74,6 @@ To return also files:
},
}
c.Flags().Bool("files", false, "Show package files.")
c.Flags().String("system-dbpath", "", "System db path")
c.Flags().String("system-target", "", "System rootpath")
c.Flags().String("system-engine", "", "System DB engine")
return c
}

View File

@@ -23,7 +23,7 @@ import (
)
func NewDatabaseRemoveCommand() *cobra.Command {
var ans = &cobra.Command{
return &cobra.Command{
Use: "remove [package1] [package2] ...",
Short: "Remove a package from the system DB (forcefully - you normally don't want to do that)",
Long: `Removes a package in the system database without actually uninstalling it:
@@ -33,11 +33,8 @@ func NewDatabaseRemoveCommand() *cobra.Command {
This commands takes multiple packages as arguments and prunes their entries from the system database.
`,
Args: cobra.OnlyValidArgs,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
},
Run: func(cmd *cobra.Command, args []string) {
util.SetSystemConfig(util.DefaultContext)
systemDB := util.DefaultContext.Config.GetSystemDB()
@@ -58,9 +55,5 @@ This commands takes multiple packages as arguments and prunes their entries from
},
}
ans.Flags().String("system-dbpath", "", "System db path")
ans.Flags().String("system-target", "", "System rootpath")
ans.Flags().String("system-engine", "", "System DB engine")
return ans
}

View File

@@ -18,7 +18,7 @@ package cmd_helpers_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -18,7 +18,7 @@ package cmd_helpers_test
import (
. "github.com/mudler/luet/cmd/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -15,9 +15,7 @@
package cmd
import (
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/cmd/util"
@@ -48,8 +46,6 @@ To force install a package:
`,
Aliases: []string{"i"},
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
@@ -71,28 +67,13 @@ To force install a package:
onlydeps := viper.GetBool("onlydeps")
yes := viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
finalizerEnvs, _ := cmd.Flags().GetStringArray("finalizer-env")
relax, _ := cmd.Flags().GetBool("relax")
util.SetSystemConfig(util.DefaultContext)
util.SetSolverConfig(util.DefaultContext)
util.DefaultContext.Config.GetSolverOptions().Implementation = solver.SingleCoreSimple
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
// Load config protect configs
util.DefaultContext.Config.LoadConfigProtect(util.DefaultContext)
// Load finalizer runtime environments
err := util.SetCliFinalizerEnvs(util.DefaultContext, finalizerEnvs)
if err != nil {
util.DefaultContext.Fatal(err.Error())
}
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
NoDeps: nodeps,
Force: force,
OnlyDeps: onlydeps,
@@ -104,8 +85,11 @@ To force install a package:
Context: util.DefaultContext,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
err = inst.Install(toInstall, system)
system := &installer.System{
Database: util.DefaultContext.Config.GetSystemDB(),
Target: util.DefaultContext.Config.System.Rootfs,
}
err := inst.Install(toInstall, system)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
}
@@ -113,14 +97,7 @@ To force install a package:
}
func init() {
installCmd.Flags().String("system-dbpath", "", "System db path")
installCmd.Flags().String("system-target", "", "System rootpath")
installCmd.Flags().String("system-engine", "", "System DB engine")
installCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
installCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
installCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
installCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
installCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
installCmd.Flags().Bool("relax", false, "Relax installation constraints")

cmd/oscheck.go Normal file
View File

@@ -0,0 +1,124 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package cmd
import (
"fmt"
"os"
"strings"
installer "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/cmd/util"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var osCheckCmd = &cobra.Command{
Use: "oscheck",
Short: "Checks packages integrity",
Long: `List packages that are installed in the system which files are missing in the system.
$ luet oscheck
To reinstall packages in the list:
$ luet oscheck --reinstall
`,
Aliases: []string{"i"},
PreRun: func(cmd *cobra.Command, args []string) {
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
Run: func(cmd *cobra.Command, args []string) {
force := viper.GetBool("force")
onlydeps := viper.GetBool("onlydeps")
yes := viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
system := &installer.System{
Database: util.DefaultContext.Config.GetSystemDB(),
Target: util.DefaultContext.Config.System.Rootfs,
}
packs := system.OSCheck(util.DefaultContext)
if !util.DefaultContext.Config.General.Quiet {
if len(packs) == 0 {
util.DefaultContext.Success("All good!")
os.Exit(0)
} else {
util.DefaultContext.Info("Following packages are missing files or are incomplete:")
for _, p := range packs {
util.DefaultContext.Info(p.HumanReadableString())
}
}
} else {
var s []string
for _, p := range packs {
s = append(s, p.HumanReadableString())
}
fmt.Println(strings.Join(s, " "))
}
reinstall, _ := cmd.Flags().GetBool("reinstall")
if reinstall {
// Strip version for reinstall
toInstall := pkg.Packages{}
for _, p := range packs {
new := p.Clone()
new.SetVersion(">=0")
toInstall = append(toInstall, new)
}
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
NoDeps: true,
Force: force,
OnlyDeps: onlydeps,
PreserveSystemEssentialData: true,
Ask: !yes,
DownloadOnly: downloadOnly,
Context: util.DefaultContext,
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
})
err := inst.Swap(packs, toInstall, system)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
}
}
},
}
func init() {
osCheckCmd.Flags().Bool("reinstall", false, "reinstall")
osCheckCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
osCheckCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
osCheckCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
osCheckCmd.Flags().Bool("download-only", false, "Download only")
RootCmd.AddCommand(osCheckCmd)
}

View File

@@ -52,7 +52,7 @@ Afterwards, you can use the content generated and associate it with a tree and a
dst := viper.GetString("destination")
compressionType := viper.GetString("compression")
concurrency := util.DefaultContext.Config.GetGeneral().Concurrency
concurrency := util.DefaultContext.Config.General.Concurrency
if len(args) != 1 {
util.DefaultContext.Fatal("You must specify a package name")

View File

@@ -26,7 +26,6 @@ var reclaimCmd = &cobra.Command{
Use: "reclaim",
Short: "Reclaim packages to Luet database from available repositories",
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
},
Long: `Reclaim tries to find association between packages in the online repositories and the system one.
@@ -36,21 +35,23 @@ var reclaimCmd = &cobra.Command{
It scans the target file system, and if finds a match with a package available in the repositories, it marks as installed in the system database.
`,
Run: func(cmd *cobra.Command, args []string) {
util.SetSystemConfig(util.DefaultContext)
force := viper.GetBool("force")
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
Concurrency: util.DefaultContext.Config.General.Concurrency,
Force: force,
PreserveSystemEssentialData: true,
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
Context: util.DefaultContext,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := &installer.System{
Database: util.DefaultContext.Config.GetSystemDB(),
Target: util.DefaultContext.Config.System.Rootfs,
}
err := inst.Reclaim(system)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
@@ -60,10 +61,6 @@ It scans the target file system, and if finds a match with a package available i
func init() {
reclaimCmd.Flags().String("system-dbpath", "", "System db path")
reclaimCmd.Flags().String("system-target", "", "System rootpath")
reclaimCmd.Flags().String("system-engine", "", "System DB engine")
reclaimCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
RootCmd.AddCommand(reclaimCmd)

View File

@@ -15,9 +15,7 @@
package cmd
import (
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/cmd/util"
@@ -35,8 +33,6 @@ var reinstallCmd = &cobra.Command{
$ luet reinstall -y system/busybox shells/bash system/coreutils ...
`,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
viper.BindPFlag("for", cmd.Flags().Lookup("for"))
@@ -52,30 +48,13 @@ var reinstallCmd = &cobra.Command{
yes := viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
installed, _ := cmd.Flags().GetBool("installed")
util.SetSystemConfig(util.DefaultContext)
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
util.DefaultContext.Fatal("Invalid package string ", a, ": ", err.Error())
}
toUninstall = append(toUninstall, pack)
toAdd = append(toAdd, pack)
}
util.SetSolverConfig(util.DefaultContext)
util.DefaultContext.Config.GetSolverOptions().Implementation = solver.SingleCoreSimple
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
// Load config protect configs
util.DefaultContext.Config.LoadConfigProtect(util.DefaultContext)
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
NoDeps: true,
Force: force,
OnlyDeps: onlydeps,
@@ -86,7 +65,26 @@ var reinstallCmd = &cobra.Command{
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.System.Rootfs}
if installed {
for _, p := range system.Database.World() {
toUninstall = append(toUninstall, p)
c := p.Clone()
c.SetVersion(">=0")
toAdd = append(toAdd, c)
}
} else {
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
util.DefaultContext.Fatal("Invalid package string ", a, ": ", err.Error())
}
toUninstall = append(toUninstall, pack)
toAdd = append(toAdd, pack)
}
}
err := inst.Swap(toUninstall, toAdd, system)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
@@ -95,18 +93,9 @@ var reinstallCmd = &cobra.Command{
}
func init() {
reinstallCmd.Flags().String("system-dbpath", "", "System db path")
reinstallCmd.Flags().String("system-target", "", "System rootpath")
reinstallCmd.Flags().String("system-engine", "", "System DB engine")
reinstallCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
reinstallCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
reinstallCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
reinstallCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
reinstallCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
reinstallCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
reinstallCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
reinstallCmd.Flags().Bool("installed", false, "Reinstall installed packages")
reinstallCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
reinstallCmd.Flags().Bool("download-only", false, "Download only")

View File

@@ -15,7 +15,6 @@
package cmd
import (
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
@@ -37,8 +36,6 @@ var replaceCmd = &cobra.Command{
$ luet replace -y system/busybox ... --for shells/bash --for system/coreutils ...
`,
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("onlydeps", cmd.Flags().Lookup("onlydeps"))
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
@@ -57,8 +54,6 @@ var replaceCmd = &cobra.Command{
yes := viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
util.SetSystemConfig(util.DefaultContext)
util.SetSolverConfig(util.DefaultContext)
for _, a := range args {
pack, err := helpers.ParsePackageStr(a)
if err != nil {
@@ -75,16 +70,13 @@ var replaceCmd = &cobra.Command{
toAdd = append(toAdd, pack)
}
util.DefaultContext.Config.GetSolverOptions().Implementation = solver.SingleCoreSimple
util.DefaultContext.Config.Solver.Implementation = solver.SingleCoreSimple
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
// Load config protect configs
util.DefaultContext.Config.LoadConfigProtect(util.DefaultContext)
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
NoDeps: nodeps,
Force: force,
OnlyDeps: onlydeps,
@@ -95,7 +87,7 @@ var replaceCmd = &cobra.Command{
Context: util.DefaultContext,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.System.Rootfs}
err := inst.Swap(toUninstall, toAdd, system)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
@@ -105,18 +97,9 @@ var replaceCmd = &cobra.Command{
func init() {
replaceCmd.Flags().String("system-dbpath", "", "System db path")
replaceCmd.Flags().String("system-target", "", "System rootpath")
replaceCmd.Flags().String("system-engine", "", "System DB engine")
replaceCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
replaceCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
replaceCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
replaceCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
replaceCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful!)")
replaceCmd.Flags().Bool("onlydeps", false, "Consider **only** package dependencies")
replaceCmd.Flags().Bool("force", false, "Skip errors and keep going (potentially harmful)")
replaceCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
replaceCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
replaceCmd.Flags().StringSlice("for", []string{}, "Packages that has to be installed in place of others")
replaceCmd.Flags().Bool("download-only", false, "Download only")

View File

@@ -68,7 +68,7 @@ func NewRepoListCommand() *cobra.Command {
repoText = pterm.LightYellow(repo.Urls[0])
}
repobasedir := util.DefaultContext.Config.GetSystem().GetRepoDatabaseDirPath(repo.Name)
repobasedir := util.DefaultContext.Config.System.GetRepoDatabaseDirPath(repo.Name)
if repo.Cached {
r := installer.NewSystemRepository(repo)

View File

@@ -1,5 +1,6 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
@@ -24,7 +25,7 @@ import (
)
func NewRepoUpdateCommand() *cobra.Command {
var ans = &cobra.Command{
var repoUpdate = &cobra.Command{
Use: "update [repo1] [repo2] [OPTIONS]",
Short: "Update a specific cached repository or all cached repositories.",
Example: `
@@ -72,8 +73,8 @@ $> luet repo update repo1 repo2
},
}
ans.Flags().BoolP("ignore-errors", "i", false, "Ignore errors on sync repositories.")
ans.Flags().BoolP("force", "f", false, "Force resync.")
repoUpdate.Flags().BoolP("ignore-errors", "i", false, "Ignore errors on sync repositories.")
repoUpdate.Flags().BoolP("force", "f", true, "Force resync.")
return ans
return repoUpdate
}

View File

@@ -20,7 +20,7 @@ import (
"os"
"github.com/mudler/luet/cmd/util"
bus "github.com/mudler/luet/pkg/bus"
bus "github.com/mudler/luet/pkg/api/core/bus"
"github.com/spf13/cobra"
"github.com/spf13/viper"
@@ -30,7 +30,7 @@ var cfgFile string
var Verbose bool
const (
LuetCLIVersion = "0.19.1"
LuetCLIVersion = "0.22.5"
LuetEnvPrefix = "LUET"
)
@@ -81,35 +81,31 @@ To build a package, from a tree definition:
`,
Version: version(),
PersistentPreRun: func(cmd *cobra.Command, args []string) {
err := util.InitContext(util.DefaultContext)
ctx, err := util.InitContext(cmd)
if err != nil {
util.DefaultContext.Error("failed to load configuration:", err.Error())
fmt.Println("failed to load configuration:", err.Error())
os.Exit(1)
}
util.DisplayVersionBanner(util.DefaultContext, util.IntroScreen, version, license)
// Initialize tmpdir prefix. TODO: Move this with LoadConfig
// directly on sub command to ensure the creation only when it's
// needed.
err = util.DefaultContext.Config.GetSystem().InitTmpDir()
if err != nil {
util.DefaultContext.Fatal("failed on init tmp basedir:", err.Error())
}
util.DefaultContext = ctx
util.DisplayVersionBanner(util.DefaultContext, util.IntroScreen, version, license)
viper.BindPFlag("plugin", cmd.Flags().Lookup("plugin"))
plugin := viper.GetStringSlice("plugin")
bus.Manager.Initialize(plugin...)
bus.Manager.Initialize(util.DefaultContext, plugin...)
if len(bus.Manager.Plugins) != 0 {
util.DefaultContext.Info(":lollipop:Enabled plugins:")
for _, p := range bus.Manager.Plugins {
util.DefaultContext.Info("\t:arrow_right:", p.Name)
util.DefaultContext.Info(fmt.Sprintf("\t:arrow_right: %s (at %s)", p.Name, p.Executable))
}
}
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
// Cleanup all tmp directories used by luet
err := util.DefaultContext.Config.GetSystem().CleanupTmpDir()
err := util.DefaultContext.Clean()
if err != nil {
util.DefaultContext.Warning("failed on cleanup tmpdir:", err.Error())
}
@@ -129,5 +125,5 @@ func Execute() {
}
func init() {
util.InitViper(util.DefaultContext, RootCmd)
util.InitViper(RootCmd)
}

View File

@@ -20,7 +20,6 @@ import (
"github.com/ghodss/yaml"
"github.com/mudler/luet/cmd/util"
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/pterm/pterm"
@@ -36,6 +35,7 @@ type PackageResult struct {
Target string `json:"target"`
Hidden bool `json:"hidden"`
Files []string `json:"files"`
Installed bool `json:"installed"`
}
type Results struct {
@@ -46,13 +46,13 @@ func (r PackageResult) String() string {
return fmt.Sprintf("%s/%s-%s required for %s", r.Category, r.Name, r.Version, r.Target)
}
var rows []string = []string{"Package", "Category", "Name", "Version", "Repository", "License"}
var rows []string = []string{"Package", "Category", "Name", "Version", "Repository", "License", "Installed"}
func packageToRow(repo string, p pkg.Package) []string {
return []string{p.HumanReadableString(), p.GetCategory(), p.GetName(), p.GetVersion(), repo, p.GetLicense()}
func packageToRow(repo string, p pkg.Package, installed bool) []string {
return []string{p.HumanReadableString(), p.GetCategory(), p.GetName(), p.GetVersion(), repo, p.GetLicense(), fmt.Sprintf("%t", installed)}
}
func packageToList(l *util.ListWriter, repo string, p pkg.Package) {
func packageToList(l *util.ListWriter, repo string, p pkg.Package, installed bool) {
l.AppendItem(pterm.BulletListItem{
Level: 0, Text: p.HumanReadableString(),
TextStyle: pterm.NewStyle(pterm.FgCyan), Bullet: ">", BulletStyle: pterm.NewStyle(pterm.FgYellow),
@@ -81,12 +81,32 @@ func packageToList(l *util.ListWriter, repo string, p pkg.Package) {
Level: 1, Text: fmt.Sprintf("Uri: %s ", strings.Join(p.GetURI(), " ")),
Bullet: "->", BulletStyle: pterm.NewStyle(pterm.FgDarkGray),
})
l.AppendItem(pterm.BulletListItem{
Level: 1, Text: fmt.Sprintf("Installed: %t ", installed),
Bullet: "->", BulletStyle: pterm.NewStyle(pterm.FgDarkGray),
})
}
var s *installer.System
func sys() *installer.System {
if s != nil {
return s
}
s = &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.System.Rootfs}
return s
}
func installed(p pkg.Package) bool {
s := sys()
_, err := s.Database.FindPackage(p)
return err == nil
}
func searchLocally(term string, l *util.ListWriter, t *util.TableWriter, label, labelMatch, revdeps, hidden bool) Results {
var results Results
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := sys()
var err error
iMatches := pkg.Packages{}
@@ -105,9 +125,8 @@ func searchLocally(term string, l *util.ListWriter, t *util.TableWriter, label,
for _, pack := range iMatches {
if !revdeps {
if !pack.IsHidden() || pack.IsHidden() && hidden {
t.AppendRow(packageToRow("system", pack))
packageToList(l, "system", pack)
t.AppendRow(packageToRow("system", pack, true))
packageToList(l, "system", pack, true)
f, _ := system.Database.GetPackageFiles(pack)
results.Packages = append(results.Packages,
PackageResult{
@@ -117,6 +136,7 @@ func searchLocally(term string, l *util.ListWriter, t *util.TableWriter, label,
Repository: "system",
Hidden: pack.IsHidden(),
Files: f,
Installed: true,
})
}
} else {
@@ -124,8 +144,9 @@ func searchLocally(term string, l *util.ListWriter, t *util.TableWriter, label,
packs, _ := system.Database.GetRevdeps(pack)
for _, revdep := range packs {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
t.AppendRow(packageToRow("system", pack))
packageToList(l, "system", pack)
i := installed(pack)
t.AppendRow(packageToRow("system", pack, i))
packageToList(l, "system", pack, i)
f, _ := system.Database.GetPackageFiles(revdep)
results.Packages = append(results.Packages,
PackageResult{
@@ -135,6 +156,7 @@ func searchLocally(term string, l *util.ListWriter, t *util.TableWriter, label,
Repository: "system",
Hidden: revdep.IsHidden(),
Files: f,
Installed: i,
})
}
}
@@ -148,8 +170,8 @@ func searchOnline(term string, l *util.ListWriter, t *util.TableWriter, label, l
inst := installer.NewLuetInstaller(
installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
Context: util.DefaultContext,
},
@@ -173,14 +195,16 @@ func searchOnline(term string, l *util.ListWriter, t *util.TableWriter, label, l
for _, m := range matches {
if !revdeps {
if !m.Package.IsHidden() || m.Package.IsHidden() && hidden {
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package))
packageToList(l, m.Repo.GetName(), m.Package)
i := installed(m.Package)
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package, i))
packageToList(l, m.Repo.GetName(), m.Package, i)
r := &PackageResult{
Name: m.Package.GetName(),
Version: m.Package.GetVersion(),
Category: m.Package.GetCategory(),
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
Installed: i,
}
if m.Artifact != nil {
r.Files = m.Artifact.Files
@@ -191,10 +215,12 @@ func searchOnline(term string, l *util.ListWriter, t *util.TableWriter, label, l
packs, _ := m.Repo.GetTree().GetDatabase().GetRevdeps(m.Package)
for _, revdep := range packs {
if !revdep.IsHidden() || revdep.IsHidden() && hidden {
t.AppendRow(packageToRow(m.Repo.GetName(), revdep))
packageToList(l, m.Repo.GetName(), revdep)
i := installed(revdep)
t.AppendRow(packageToRow(m.Repo.GetName(), revdep, i))
packageToList(l, m.Repo.GetName(), revdep, i)
r := &PackageResult{
Name: revdep.GetName(),
Installed: i,
Version: revdep.GetVersion(),
Category: revdep.GetCategory(),
Repository: m.Repo.GetName(),
@@ -216,8 +242,9 @@ func searchLocalFiles(term string, l *util.ListWriter, t *util.TableWriter) Resu
matches, _ := util.DefaultContext.Config.GetSystemDB().FindPackageByFile(term)
for _, pack := range matches {
t.AppendRow(packageToRow("system", pack))
packageToList(l, "system", pack)
i := installed(pack)
t.AppendRow(packageToRow("system", pack, i))
packageToList(l, "system", pack, i)
f, _ := util.DefaultContext.Config.GetSystemDB().GetPackageFiles(pack)
results.Packages = append(results.Packages,
PackageResult{
@@ -227,6 +254,7 @@ func searchLocalFiles(term string, l *util.ListWriter, t *util.TableWriter) Resu
Repository: "system",
Hidden: pack.IsHidden(),
Files: f,
Installed: i,
})
}
@@ -238,8 +266,8 @@ func searchFiles(term string, l *util.ListWriter, t *util.TableWriter) Results {
inst := installer.NewLuetInstaller(
installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
Context: util.DefaultContext,
},
@@ -256,8 +284,9 @@ func searchFiles(term string, l *util.ListWriter, t *util.TableWriter) Results {
matches = synced.SearchPackages(term, installer.FileSearch)
for _, m := range matches {
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package))
packageToList(l, m.Repo.GetName(), m.Package)
i := installed(m.Package)
t.AppendRow(packageToRow(m.Repo.GetName(), m.Package, i))
packageToList(l, m.Repo.GetName(), m.Package, i)
results.Packages = append(results.Packages,
PackageResult{
Name: m.Package.GetName(),
@@ -266,13 +295,18 @@ func searchFiles(term string, l *util.ListWriter, t *util.TableWriter) Results {
Repository: m.Repo.GetName(),
Hidden: m.Package.IsHidden(),
Files: m.Artifact.Files,
Installed: i,
})
}
return results
}
var searchCmd = &cobra.Command{
Use: "search <term>",
Use: "search <term>",
// Skip processing output
Annotations: map[string]string{
util.CommandProcessOutput: "",
},
Short: "Search packages",
Long: `Search for installed and available packages
@@ -309,8 +343,7 @@ Search can also return results in the terminal in different ways: as terminal ou
`,
Aliases: []string{"s"},
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("installed", cmd.Flags().Lookup("installed"))
},
Run: func(cmd *cobra.Command, args []string) {
@@ -329,18 +362,11 @@ Search can also return results in the terminal in different ways: as terminal ou
tableMode, _ := cmd.Flags().GetBool("table")
files, _ := cmd.Flags().GetBool("files")
util.SetSystemConfig(util.DefaultContext)
util.SetSolverConfig(util.DefaultContext)
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
util.DefaultContext.Config.GetLogging().SetLogLevel("error")
}
l := &util.ListWriter{}
t := &util.TableWriter{}
t.AppendRow(rows)
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
switch {
case files && installed:
@@ -353,12 +379,6 @@ Search can also return results in the terminal in different ways: as terminal ou
results = searchLocally(args[0], l, t, searchWithLabel, searchWithLabelMatch, revdeps, hidden)
}
if tableMode {
t.Render()
} else {
l.Render()
}
y, err := yaml.Marshal(results)
if err != nil {
fmt.Printf("err: %v\n", err)
@@ -374,22 +394,24 @@ Search can also return results in the terminal in different ways: as terminal ou
return
}
fmt.Println(string(j2))
default:
if tableMode {
t.Render()
} else if util.DefaultContext.Config.General.Quiet {
for _, tt := range results.Packages {
fmt.Printf("%s/%s-%s\n", tt.Category, tt.Name, tt.Version)
}
} else {
l.Render()
}
}
},
}
func init() {
searchCmd.Flags().String("system-dbpath", "", "System db path")
searchCmd.Flags().String("system-target", "", "System rootpath")
searchCmd.Flags().String("system-engine", "", "System DB engine")
searchCmd.Flags().Bool("installed", false, "Search between system packages")
searchCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
searchCmd.Flags().StringP("output", "o", "terminal", "Output format ( Defaults: terminal, available: json,yaml )")
searchCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
searchCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
searchCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
searchCmd.Flags().Bool("by-label", false, "Search packages through label")
searchCmd.Flags().Bool("by-label-regex", false, "Search packages through label regex")
searchCmd.Flags().Bool("revdeps", false, "Search package reverse dependencies")


@@ -38,7 +38,11 @@ import (
func NewTreeImageCommand() *cobra.Command {
var ans = &cobra.Command{
Use: "images [OPTIONS]",
Use: "images [OPTIONS]",
// Skip processing output
Annotations: map[string]string{
util.CommandProcessOutput: "",
},
Short: "List of the images of a package",
PreRun: func(cmd *cobra.Command, args []string) {
t, _ := cmd.Flags().GetStringArray("tree")
@@ -60,12 +64,7 @@ func NewTreeImageCommand() *cobra.Command {
imageRepository := viper.GetString("image-repository")
pullRepo, _ := cmd.Flags().GetStringArray("pull-repository")
values := util.ValuesFlags()
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
util.DefaultContext.Config.GetLogging().SetLogLevel("error")
}
reciper := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
for _, t := range treePath {
@@ -76,7 +75,7 @@ func NewTreeImageCommand() *cobra.Command {
}
compilerBackend := backend.NewSimpleDockerBackend(util.DefaultContext)
opts := *util.DefaultContext.Config.GetSolverOptions()
opts := util.DefaultContext.Config.Solver
opts.Options = solver.Options{Type: solver.SingleCoreSimple, Concurrency: 1}
luetCompiler := compiler.NewLuetCompiler(
compilerBackend,
@@ -102,7 +101,12 @@ func NewTreeImageCommand() *cobra.Command {
}
ht := compiler.NewHashTree(reciper.GetDatabase())
hashtree, err := ht.Query(luetCompiler, spec)
copy, err := compiler.CompilerFinalImages(luetCompiler)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
}
hashtree, err := ht.Query(copy, spec)
if err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
}


@@ -67,6 +67,10 @@ func NewTreePkglistCommand() *cobra.Command {
var matches []string
var ans = &cobra.Command{
// Skip processing output
Annotations: map[string]string{
util.CommandProcessOutput: "",
},
Use: "pkglist [OPTIONS]",
Short: "List of the packages found in tree.",
Args: cobra.NoArgs,
@@ -95,9 +99,6 @@ func NewTreePkglistCommand() *cobra.Command {
deps, _ := cmd.Flags().GetBool("deps")
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
util.DefaultContext.Config.GetLogging().SetLogLevel("error")
}
var reciper tree.Builder
if buildtime {


@@ -255,7 +255,7 @@ func validatePackage(p pkg.Package, checkType string, opts *ValidateOpts, recipe
r.GetCategory(), r.GetName(), r.GetVersion(),
))
if util.DefaultContext.Config.GetGeneral().Debug {
if util.DefaultContext.Config.General.Debug {
for idx, pa := range solution {
fmt.Println(fmt.Sprintf("[%9s] %s/%s-%s: solution %d: %s",
checkType,
@@ -426,7 +426,7 @@ func NewTreeValidateCommand() *cobra.Command {
Run: func(cmd *cobra.Command, args []string) {
var reciper tree.Builder
concurrency := util.DefaultContext.Config.GetGeneral().Concurrency
concurrency := util.DefaultContext.Config.General.Concurrency
withSolver, _ := cmd.Flags().GetBool("with-solver")
onlyRuntime, _ := cmd.Flags().GetBool("only-runtime")


@@ -17,7 +17,6 @@ package cmd
import (
helpers "github.com/mudler/luet/cmd/helpers"
"github.com/mudler/luet/cmd/util"
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/solver"
@@ -32,8 +31,7 @@ var uninstallCmd = &cobra.Command{
Long: `Uninstall packages`,
Aliases: []string{"rm", "un"},
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("nodeps", cmd.Flags().Lookup("nodeps"))
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
@@ -57,21 +55,15 @@ var uninstallCmd = &cobra.Command{
yes := viper.GetBool("yes")
keepProtected, _ := cmd.Flags().GetBool("keep-protected-files")
util.SetSystemConfig(util.DefaultContext)
util.SetSolverConfig(util.DefaultContext)
util.DefaultContext.Config.ConfigProtectSkip = !keepProtected
util.DefaultContext.Config.GetSolverOptions().Implementation = solver.SingleCoreSimple
util.DefaultContext.Config.Solver.Implementation = solver.SingleCoreSimple
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.GetSolverOptions().CompactString())
// Load config protect configs
util.DefaultContext.Config.LoadConfigProtect(util.DefaultContext)
util.DefaultContext.Debug("Solver", util.DefaultContext.Config.Solver.CompactString())
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
NoDeps: nodeps,
Force: force,
FullUninstall: full,
@@ -82,7 +74,7 @@ var uninstallCmd = &cobra.Command{
Context: util.DefaultContext,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.System.Rootfs}
if err := inst.Uninstall(system, toRemove...); err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
@@ -92,14 +84,6 @@ var uninstallCmd = &cobra.Command{
func init() {
uninstallCmd.Flags().String("system-dbpath", "", "System db path")
uninstallCmd.Flags().String("system-target", "", "System rootpath")
uninstallCmd.Flags().String("system-engine", "", "System DB engine")
uninstallCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
uninstallCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
uninstallCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
uninstallCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
uninstallCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful! overrides checkconflicts and full!)")
uninstallCmd.Flags().Bool("force", false, "Force uninstall")
uninstallCmd.Flags().Bool("full", false, "Attempts to remove as much packages as possible which aren't required (slow)")


@@ -16,7 +16,6 @@ package cmd
import (
"github.com/mudler/luet/cmd/util"
"github.com/mudler/luet/pkg/api/core/types"
installer "github.com/mudler/luet/pkg/installer"
"github.com/mudler/luet/pkg/solver"
@@ -29,8 +28,7 @@ var upgradeCmd = &cobra.Command{
Short: "Upgrades the system",
Aliases: []string{"u"},
PreRun: func(cmd *cobra.Command, args []string) {
util.BindSystemFlags(cmd)
util.BindSolverFlags(cmd)
viper.BindPFlag("force", cmd.Flags().Lookup("force"))
viper.BindPFlag("yes", cmd.Flags().Lookup("yes"))
},
@@ -43,22 +41,18 @@ var upgradeCmd = &cobra.Command{
universe, _ := cmd.Flags().GetBool("universe")
clean, _ := cmd.Flags().GetBool("clean")
sync, _ := cmd.Flags().GetBool("sync")
osCheck, _ := cmd.Flags().GetBool("oscheck")
yes := viper.GetBool("yes")
downloadOnly, _ := cmd.Flags().GetBool("download-only")
util.SetSystemConfig(util.DefaultContext)
opts := util.SetSolverConfig(util.DefaultContext)
util.DefaultContext.Config.Solver.Implementation = solver.SingleCoreSimple
util.DefaultContext.Config.GetSolverOptions().Implementation = solver.SingleCoreSimple
util.DefaultContext.Debug("Solver", opts.CompactString())
// Load config protect configs
util.DefaultContext.Config.LoadConfigProtect(util.DefaultContext)
util.DefaultContext.Debug("Solver", util.DefaultContext.GetConfig().Solver)
inst := installer.NewLuetInstaller(installer.LuetInstallerOptions{
Concurrency: util.DefaultContext.Config.GetGeneral().Concurrency,
SolverOptions: *util.DefaultContext.Config.GetSolverOptions(),
Concurrency: util.DefaultContext.Config.General.Concurrency,
SolverOptions: util.DefaultContext.Config.Solver,
Force: force,
FullUninstall: full,
NoDeps: nodeps,
@@ -67,12 +61,13 @@ var upgradeCmd = &cobra.Command{
UpgradeNewRevisions: sync,
PreserveSystemEssentialData: true,
Ask: !yes,
AutoOSCheck: osCheck,
DownloadOnly: downloadOnly,
PackageRepositories: util.DefaultContext.Config.SystemRepositories,
Context: util.DefaultContext,
})
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.GetSystem().Rootfs}
system := &installer.System{Database: util.DefaultContext.Config.GetSystemDB(), Target: util.DefaultContext.Config.System.Rootfs}
if err := inst.Upgrade(system); err != nil {
util.DefaultContext.Fatal("Error: " + err.Error())
}
@@ -80,14 +75,6 @@ var upgradeCmd = &cobra.Command{
}
func init() {
upgradeCmd.Flags().String("system-dbpath", "", "System db path")
upgradeCmd.Flags().String("system-target", "", "System rootpath")
upgradeCmd.Flags().String("system-engine", "", "System DB engine")
upgradeCmd.Flags().String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
upgradeCmd.Flags().Float32("solver-rate", 0.7, "Solver learning rate")
upgradeCmd.Flags().Float32("solver-discount", 1.0, "Solver discount rate")
upgradeCmd.Flags().Int("solver-attempts", 9000, "Solver maximum attempts")
upgradeCmd.Flags().Bool("force", false, "Force upgrade by ignoring errors")
upgradeCmd.Flags().Bool("nodeps", false, "Don't consider package dependencies (harmful! overrides checkconflicts and full!)")
upgradeCmd.Flags().Bool("full", false, "Attempts to remove as much packages as possible which aren't required (slow)")
@@ -97,6 +84,7 @@ func init() {
upgradeCmd.Flags().Bool("solver-concurrent", false, "Use concurrent solver (experimental)")
upgradeCmd.Flags().BoolP("yes", "y", false, "Don't ask questions")
upgradeCmd.Flags().Bool("download-only", false, "Download only")
upgradeCmd.Flags().Bool("oscheck", false, "Perform automatically oschecks after upgrades")
RootCmd.AddCommand(upgradeCmd)
}


@@ -19,23 +19,77 @@ import (
"fmt"
"os"
"path/filepath"
"runtime"
"github.com/docker/docker/api/types"
"github.com/docker/go-units"
"github.com/mudler/luet/pkg/api/core/image"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/pkg/errors"
"github.com/mudler/luet/cmd/util"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/helpers/docker"
"github.com/spf13/cobra"
)
func pack(ctx *context.Context, p, dst, imageName, arch, OS string) error {
tempimage, err := ctx.TempFile("tempimage")
if err != nil {
return errors.Wrap(err, "error while creating temporary image file for "+p)
}
defer os.RemoveAll(tempimage.Name()) // clean up
if err := image.CreateTar(p, tempimage.Name(), imageName, arch, OS); err != nil {
return errors.Wrap(err, "could not create image from tar")
}
return fileHelper.CopyFile(tempimage.Name(), dst)
}
func NewPackCommand() *cobra.Command {
c := &cobra.Command{
Use: "pack image src.tar dst.tar",
Short: "Pack a standard tar archive as a container image",
Long: `Pack creates a tar which can be loaded as an image from a standard flat tar archive, for e.g. with docker load.
It doesn't need the docker daemon to run, and allows to override default os/arch:
luet util pack --os arm64 image:tag src.tar dst.tar
`,
Args: cobra.MinimumNArgs(3),
Run: func(cmd *cobra.Command, args []string) {
image := args[0]
src := args[1]
dst := args[2]
arch, _ := cmd.Flags().GetString("arch")
os, _ := cmd.Flags().GetString("os")
err := pack(util.DefaultContext, src, dst, image, arch, os)
if err != nil {
util.DefaultContext.Fatal(err.Error())
}
util.DefaultContext.Info("Image packed as", image)
},
}
c.Flags().String("arch", runtime.GOARCH, "Image architecture")
c.Flags().String("os", runtime.GOOS, "Image OS")
return c
}
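For illustration, here is a minimal sketch of driving the pack helper above directly. The file names and image tag are hypothetical, and it assumes the snippet lives in the same cmd package so the unexported pack function is reachable; the resulting tar can then be loaded with docker load.
package cmd

import (
	"runtime"

	"github.com/mudler/luet/cmd/util"
)

// packExample turns src.tar into dst.tar, loadable with `docker load -i dst.tar`.
// Paths and image tag below are placeholders.
func packExample() error {
	return pack(util.DefaultContext, "src.tar", "dst.tar", "example/image:latest", runtime.GOARCH, runtime.GOOS)
}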
func NewUnpackCommand() *cobra.Command {
c := &cobra.Command{
Use: "unpack image path",
Short: "Unpack a docker image natively",
Long: `unpack doesn't need the docker daemon to run; it unpacks a docker image into the specified directory:
luet util unpack golang:alpine /alpine
`,
PreRun: func(cmd *cobra.Command, args []string) {
@@ -53,7 +107,7 @@ func NewUnpackCommand() *cobra.Command {
util.DefaultContext.Error("Invalid path %s", destination)
os.Exit(1)
}
local, _ := cmd.Flags().GetBool("local")
verify, _ := cmd.Flags().GetBool("verify")
user, _ := cmd.Flags().GetString("auth-username")
pass, _ := cmd.Flags().GetString("auth-password")
@@ -62,11 +116,6 @@ func NewUnpackCommand() *cobra.Command {
identity, _ := cmd.Flags().GetString("auth-identity-token")
registryToken, _ := cmd.Flags().GetString("auth-registry-token")
temp, err := util.DefaultContext.Config.GetSystem().TempDir("contentstore")
if err != nil {
util.DefaultContext.Fatal("Cannot create a tempdir", err.Error())
}
util.DefaultContext.Info("Downloading", image, "to", destination)
auth := &types.AuthConfig{
Username: user,
@@ -77,13 +126,22 @@ func NewUnpackCommand() *cobra.Command {
RegistryToken: registryToken,
}
info, err := docker.DownloadAndExtractDockerImage(temp, image, destination, auth, verify)
if err != nil {
util.DefaultContext.Error(err.Error())
os.Exit(1)
if !local {
info, err := docker.DownloadAndExtractDockerImage(util.DefaultContext, image, destination, auth, verify)
if err != nil {
util.DefaultContext.Error(err.Error())
os.Exit(1)
}
util.DefaultContext.Info(fmt.Sprintf("Pulled: %s %s", info.Target.Digest, info.Name))
util.DefaultContext.Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.Target.Size))))
} else {
info, err := docker.ExtractDockerImage(util.DefaultContext, image, destination)
if err != nil {
util.DefaultContext.Error(err.Error())
os.Exit(1)
}
util.DefaultContext.Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.Target.Size))))
}
util.DefaultContext.Info(fmt.Sprintf("Pulled: %s %s", info.Target.Digest, info.Name))
util.DefaultContext.Info(fmt.Sprintf("Size: %s", units.BytesSize(float64(info.Target.Size))))
},
}
@@ -94,6 +152,32 @@ func NewUnpackCommand() *cobra.Command {
c.Flags().String("auth-identity-token", "", "Authentication identity token")
c.Flags().String("auth-registry-token", "", "Authentication registry token")
c.Flags().Bool("verify", false, "Verify signed images to notary before to pull")
c.Flags().Bool("local", false, "Unpack local image")
return c
}
func NewExistCommand() *cobra.Command {
c := &cobra.Command{
Use: "image-exist image",
Short: "Check if an image exists",
Long: `Exits 0 if the image exists, otherwise exits with 1`,
PreRun: func(cmd *cobra.Command, args []string) {
if len(args) != 1 {
util.DefaultContext.Fatal("Expects an image")
}
},
Run: func(cmd *cobra.Command, args []string) {
if image.Available(args[0]) {
os.Exit(0)
} else {
os.Exit(1)
}
},
}
return c
}
@@ -107,5 +191,7 @@ func init() {
utilGroup.AddCommand(
NewUnpackCommand(),
NewPackCommand(),
NewExistCommand(),
)
}


@@ -16,7 +16,6 @@
package util
import (
"errors"
"os"
"path/filepath"
"strings"
@@ -26,28 +25,14 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/installer"
)
var DefaultContext = types.NewContext()
var lockedCommands = []string{"install", "uninstall", "upgrade"}
var bannerCommands = []string{"install", "build", "uninstall", "upgrade"}
func BindSystemFlags(cmd *cobra.Command) {
viper.BindPFlag("system.database_path", cmd.Flags().Lookup("system-dbpath"))
viper.BindPFlag("system.rootfs", cmd.Flags().Lookup("system-target"))
viper.BindPFlag("system.database_engine", cmd.Flags().Lookup("system-engine"))
}
func BindSolverFlags(cmd *cobra.Command) {
viper.BindPFlag("solver.type", cmd.Flags().Lookup("solver-type"))
viper.BindPFlag("solver.discount", cmd.Flags().Lookup("solver-discount"))
viper.BindPFlag("solver.rate", cmd.Flags().Lookup("solver-rate"))
viper.BindPFlag("solver.max_attempts", cmd.Flags().Lookup("solver-attempts"))
}
func BindValuesFlags(cmd *cobra.Command) {
viper.BindPFlag("values", cmd.Flags().Lookup("values"))
}
@@ -56,59 +41,14 @@ func ValuesFlags() []string {
return viper.GetStringSlice("values")
}
func SetSystemConfig(ctx *types.Context) {
dbpath := viper.GetString("system.database_path")
rootfs := viper.GetString("system.rootfs")
engine := viper.GetString("system.database_engine")
ctx.Config.System.DatabaseEngine = engine
ctx.Config.System.DatabasePath = dbpath
ctx.Config.System.SetRootFS(rootfs)
}
func SetSolverConfig(ctx *types.Context) (c *types.LuetSolverOptions) {
stype := viper.GetString("solver.type")
discount := viper.GetFloat64("solver.discount")
rate := viper.GetFloat64("solver.rate")
attempts := viper.GetInt("solver.max_attempts")
ctx.Config.GetSolverOptions().Type = stype
ctx.Config.GetSolverOptions().LearnRate = float32(rate)
ctx.Config.GetSolverOptions().Discount = float32(discount)
ctx.Config.GetSolverOptions().MaxAttempts = attempts
return &types.LuetSolverOptions{
Type: stype,
LearnRate: float32(rate),
Discount: float32(discount),
MaxAttempts: attempts,
}
}
func SetCliFinalizerEnvs(ctx *types.Context, finalizerEnvs []string) error {
if len(finalizerEnvs) > 0 {
for _, v := range finalizerEnvs {
idx := strings.Index(v, "=")
if idx < 0 {
return errors.New("Found invalid runtime finalizer environment: " + v)
}
ctx.Config.SetFinalizerEnv(v[0:idx], v[idx+1:])
}
}
return nil
}
// TemplateFolders returns the default folders which holds shared template between packages in a given tree path
func TemplateFolders(ctx *types.Context, fromRepo bool, treePaths []string) []string {
func TemplateFolders(ctx *context.Context, fromRepo bool, treePaths []string) []string {
templateFolders := []string{}
for _, t := range treePaths {
templateFolders = append(templateFolders, filepath.Join(t, "templates"))
}
if fromRepo {
for _, s := range installer.SystemRepositories(ctx.Config.SystemRepositories) {
for _, s := range installer.SystemRepositories(ctx.GetConfig().SystemRepositories) {
templateFolders = append(templateFolders, filepath.Join(s.TreePath, "templates"))
}
}
@@ -126,7 +66,7 @@ func IntroScreen() {
pterm.DefaultCenter.Print(pterm.DefaultHeader.WithFullWidth().WithBackgroundStyle(pterm.NewStyle(pterm.BgLightBlue)).WithMargin(10).Sprint("Luet - 0-deps container-based package manager"))
}
func HandleLock(c *types.Context) {
func HandleLock(c types.Context) {
if os.Getenv("LUET_NOLOCK") != "true" {
if len(os.Args) > 1 {
for _, lockedCmd := range lockedCommands {
@@ -146,7 +86,7 @@ func HandleLock(c *types.Context) {
}
}
func DisplayVersionBanner(c *types.Context, banner func(), version func() string, license []string) {
func DisplayVersionBanner(c *context.Context, banner func(), version func() string, license []string) {
display := false
if len(os.Args) > 1 {
for _, c := range bannerCommands {
@@ -156,11 +96,15 @@ func DisplayVersionBanner(c *types.Context, banner func(), version func() string
}
}
if display {
banner()
pterm.DefaultCenter.Print(version())
for _, l := range license {
pterm.DefaultCenter.Print(l)
if c.Config.General.Quiet {
pterm.Info.Printf("Luet %s\n", version())
pterm.Info.Println(strings.Join(license, "\n"))
} else {
banner()
pterm.DefaultCenter.Print(version())
for _, l := range license {
pterm.DefaultCenter.Print(l)
}
}
}
}


@@ -16,6 +16,7 @@
package util
import (
"errors"
"fmt"
"os"
"os/user"
@@ -23,8 +24,15 @@ import (
"runtime"
"strings"
"github.com/ipfs/go-log/v2"
extensions "github.com/mudler/cobra-extensions"
"github.com/mudler/luet/pkg/api/core/context"
gc "github.com/mudler/luet/pkg/api/core/garbagecollector"
"github.com/mudler/luet/pkg/api/core/logger"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/solver"
"github.com/pterm/pterm"
"go.uber.org/zap/zapcore"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
@@ -87,26 +95,117 @@ func initConfig() {
}
var DefaultContext *context.Context
// InitContext initializes the context by parsing the configuration from viper.
// It is meant to be run before each command so that any override from
// the CLI/ENV is picked up.
func InitContext(ctx *types.Context) (err error) {
func InitContext(cmd *cobra.Command) (ctx *context.Context, err error) {
err = viper.Unmarshal(&ctx.Config)
c := &types.LuetConfig{}
err = viper.Unmarshal(c)
if err != nil {
return
}
// Converts user-defined config into paths
// and creates the required directory on the system if necessary
c.Init()
finalizerEnvs, _ := cmd.Flags().GetStringArray("finalizer-env")
setCliFinalizerEnvs(c, finalizerEnvs)
c.Solver.Options = solver.Options{Type: solver.SingleCoreSimple, Concurrency: c.General.Concurrency}
ctx = context.NewContext(
context.WithConfig(c),
context.WithGarbageCollector(gc.GarbageCollector(c.System.TmpDirBase)),
)
// Inits the context with the loaded configuration.
// It reads system repositories, sets up logging, and prepares everything
// required to perform luet actions
err = ctx.Init()
if err != nil {
return
return ctx, initContext(cmd, ctx)
}
func setCliFinalizerEnvs(c *types.LuetConfig, finalizerEnvs []string) error {
if len(finalizerEnvs) > 0 {
for _, v := range finalizerEnvs {
idx := strings.Index(v, "=")
if idx < 0 {
return errors.New("Found invalid runtime finalizer environment: " + v)
}
c.SetFinalizerEnv(v[0:idx], v[idx+1:])
}
}
// no_spinner is not mapped in our configs
ctx.NoSpinner = viper.GetBool("no_spinner")
return nil
}
const (
CommandProcessOutput = "command.process.output"
)
func initContext(cmd *cobra.Command, c *context.Context) (err error) {
if logger.IsTerminal() {
if !c.Config.Logging.Color {
pterm.DisableColor()
}
} else {
pterm.DisableColor()
c.Debug("Not a terminal, colors disabled")
}
if c.Config.General.Quiet {
pterm.DisableColor()
pterm.DisableStyling()
}
level := c.Config.Logging.Level
if c.Config.General.Debug {
level = "debug"
}
if _, ok := cmd.Annotations[CommandProcessOutput]; ok {
// Note: create-repo output is different, so it is annotated in the create-repo cmd
// definition to avoid processing its output here.
out, _ := cmd.Flags().GetString("output")
if out != "terminal" {
level = zapcore.Level(log.LevelFatal).String()
}
}
// Init logging
opts := []logger.LoggerOptions{
logger.WithLevel(level),
}
if c.Config.Logging.NoSpinner {
opts = append(opts, logger.NoSpinner)
}
if c.Config.Logging.EnableLogFile && c.Config.Logging.Path != "" {
f := "console"
if c.Config.Logging.JsonFormat {
f = "json"
}
opts = append(opts, logger.WithFileLogging(c.Config.Logging.Path, f))
}
if c.Config.Logging.EnableEmoji {
opts = append(opts, logger.EnableEmoji())
}
l, err := logger.New(opts...)
c.Logger = l
c.Debug("System rootfs:", c.Config.System.Rootfs)
c.Debug("Colors", c.Config.Logging.Color)
c.Debug("Logging level", c.Config.Logging.Level)
c.Debug("Debug mode", c.Config.General.Debug)
return
}
@@ -120,8 +219,10 @@ func setDefaults(viper *viper.Viper) {
viper.SetDefault("general.concurrency", runtime.NumCPU())
viper.SetDefault("general.debug", false)
viper.SetDefault("general.show_build_output", false)
viper.SetDefault("general.quiet", false)
viper.SetDefault("general.show_build_output", true)
viper.SetDefault("general.fatal_warnings", false)
viper.SetDefault("general.http_timeout", 360)
u, err := user.Current()
// os/user doesn't work in from scratch environments
@@ -154,41 +255,56 @@ func setDefaults(viper *viper.Viper) {
// InitViper inits a new viper.
// This is meant to be run just once at the beginning to set up the root command.
func InitViper(ctx *types.Context, RootCmd *cobra.Command) {
func InitViper(RootCmd *cobra.Command) {
cobra.OnInitialize(initConfig)
pflags := RootCmd.PersistentFlags()
pflags.StringVar(&cfgFile, "config", "", "config file (default is $HOME/.luet.yaml)")
pflags.BoolP("debug", "d", false, "verbose output")
pflags.BoolP("debug", "d", false, "debug output")
pflags.BoolP("quiet", "q", false, "quiet output")
pflags.Bool("fatal", false, "Enables Warnings to exit")
pflags.Bool("enable-logfile", false, "Enable log to file")
pflags.Bool("no-spinner", false, "Disable spinner.")
pflags.Bool("color", ctx.Config.GetLogging().Color, "Enable/Disable color.")
pflags.Bool("emoji", ctx.Config.GetLogging().EnableEmoji, "Enable/Disable emoji.")
pflags.Bool("skip-config-protect", ctx.Config.ConfigProtectSkip,
"Disable config protect analysis.")
pflags.StringP("logfile", "l", ctx.Config.GetLogging().Path,
"Logfile path. Empty value disable log to file.")
pflags.Bool("color", true, "Enable/Disable color.")
pflags.Bool("emoji", true, "Enable/Disable emoji.")
pflags.Bool("skip-config-protect", true, "Disable config protect analysis.")
pflags.StringP("logfile", "l", "", "Logfile path. Empty value disable log to file.")
pflags.StringSlice("plugin", []string{}, "A list of runtime plugins to load")
// os/user doesn't work in from-scratch environments.
// Check if we can retrieve user information.
_, err := user.Current()
if err != nil {
ctx.Warning("failed to retrieve user identity:", err.Error())
}
pflags.Bool("same-owner", ctx.Config.GetGeneral().SameOwner, "Maintain same owner on uncompress.")
pflags.String("system-dbpath", "", "System db path")
pflags.String("system-target", "", "System rootpath")
pflags.String("system-engine", "", "System DB engine")
pflags.String("solver-type", "", "Solver strategy ( Defaults none, available: "+types.AvailableResolvers+" )")
pflags.Float32("solver-rate", 0.7, "Solver learning rate")
pflags.Float32("solver-discount", 1.0, "Solver discount rate")
pflags.Int("solver-attempts", 9000, "Solver maximum attempts")
pflags.Bool("live-output", true, "Show live output during build")
pflags.Bool("same-owner", true, "Maintain same owner on uncompress.")
pflags.Int("concurrency", runtime.NumCPU(), "Concurrency")
pflags.Int("http-timeout", 360, "Default timeout for http(s) requests")
viper.BindPFlag("system.database_path", pflags.Lookup("system-dbpath"))
viper.BindPFlag("system.rootfs", pflags.Lookup("system-target"))
viper.BindPFlag("system.database_engine", pflags.Lookup("system-engine"))
viper.BindPFlag("solver.type", pflags.Lookup("solver-type"))
viper.BindPFlag("solver.discount", pflags.Lookup("solver-discount"))
viper.BindPFlag("solver.rate", pflags.Lookup("solver-rate"))
viper.BindPFlag("solver.max_attempts", pflags.Lookup("solver-attempts"))
viper.BindPFlag("logging.color", pflags.Lookup("color"))
viper.BindPFlag("logging.enable_emoji", pflags.Lookup("emoji"))
viper.BindPFlag("logging.enable_logfile", pflags.Lookup("enable-logfile"))
viper.BindPFlag("logging.path", pflags.Lookup("logfile"))
viper.BindPFlag("logging.no_spinner", pflags.Lookup("no-spinner"))
viper.BindPFlag("general.concurrency", pflags.Lookup("concurrency"))
viper.BindPFlag("general.debug", pflags.Lookup("debug"))
viper.BindPFlag("general.quiet", pflags.Lookup("quiet"))
viper.BindPFlag("general.fatal_warnings", pflags.Lookup("fatal"))
viper.BindPFlag("general.same_owner", pflags.Lookup("same-owner"))
viper.BindPFlag("plugin", pflags.Lookup("plugin"))
viper.BindPFlag("general.http_timeout", pflags.Lookup("http-timeout"))
viper.BindPFlag("general.show_build_output", pflags.Lookup("live-output"))
// Currently I maintain this only from cli.
viper.BindPFlag("no_spinner", pflags.Lookup("no-spinner"))

go.mod (43 changed lines)

@@ -6,60 +6,56 @@ require (
github.com/DataDog/zstd v1.4.5 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver/v3 v3.1.1 // indirect
github.com/Microsoft/go-winio v0.5.0 // indirect
github.com/Sabayon/pkgs-checker v0.8.4
github.com/apex/log v1.9.0 // indirect
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef
github.com/asdine/storm v0.0.0-20190418133842-e0f77eada154
github.com/cavaliercoder/grab v1.0.1-0.20201108051000-98a5bfe305ec
github.com/containerd/cgroups v0.0.0-20200217135630-d732e370d46d // indirect
github.com/containerd/containerd v1.4.1-0.20201117152358-0edc412565dc
github.com/cpuguy83/go-md2man/v2 v2.0.1 // indirect
github.com/containerd/containerd v1.5.7
github.com/crillab/gophersat v1.3.2-0.20210701121804-72b19f5b6b38
github.com/docker/cli v20.10.0-beta1.0.20201029214301-1d20b15adc38+incompatible
github.com/docker/cli v20.10.10+incompatible
github.com/docker/distribution v2.7.1+incompatible
github.com/docker/docker v20.10.0-beta1.0.20201110211921-af34b94a78a1+incompatible
github.com/docker/docker v20.10.10+incompatible
github.com/docker/go-units v0.4.0
github.com/ecooper/qlearning v0.0.0-20160612200101-3075011a69fd
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/ghodss/yaml v1.0.0
github.com/go-sql-driver/mysql v1.6.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/go-cmp v0.5.6 // indirect
github.com/google/go-containerregistry v0.2.1
github.com/google/go-containerregistry v0.7.0
github.com/google/renameio v1.0.0
github.com/google/uuid v1.3.0 // indirect
github.com/gookit/color v1.5.0 // indirect
github.com/gookit/color v1.5.0
github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/go-version v1.3.0
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.12
github.com/ipfs/go-log/v2 v2.4.0
github.com/jinzhu/copier v0.0.0-20180308034124-7e38e58719c3
github.com/klauspost/compress v1.12.2
github.com/klauspost/pgzip v1.2.1
github.com/klauspost/compress v1.13.6
github.com/klauspost/pgzip v1.2.5
github.com/knqyf263/go-deb-version v0.0.0-20190517075300-09fca494f03d
github.com/kyokomi/emoji v2.1.0+incompatible
github.com/marcsauter/single v0.0.0-20181104081128-f8bf46f26ec0
github.com/mattn/go-isatty v0.0.14
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/hashstructure/v2 v2.0.1
github.com/mitchellh/mapstructure v1.4.2 // indirect
github.com/moby/moby v20.10.9+incompatible
github.com/moby/sys/mount v0.2.0 // indirect
github.com/mudler/cobra-extensions v0.0.0-20200612154940-31a47105fe3d
github.com/mudler/docker-companion v0.4.6-0.20200418093252-41846f112d87
github.com/mudler/go-pluggable v0.0.0-20210513155700-54c6443073af
github.com/mudler/go-pluggable v0.0.0-20211206135551-9263b05c562e
github.com/mudler/topsort v0.0.0-20201103161459-db5c7901c290
github.com/onsi/ginkgo v1.16.4
github.com/onsi/gomega v1.16.0
github.com/onsi/ginkgo/v2 v2.0.0
github.com/onsi/gomega v1.17.0
github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.0.1
github.com/opencontainers/runc v1.0.0-rc9.0.20200221051241-688cf6d43cc4 // indirect
github.com/opencontainers/image-spec v1.0.2-0.20210730191737-8e42a01fb1b7
github.com/otiai10/copy v1.2.1-0.20200916181228-26f84a0b1578
github.com/pelletier/go-toml v1.9.4 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible
github.com/philopon/go-toposort v0.0.0-20170620085441-9be86dbd762f
github.com/pkg/errors v0.9.1
github.com/pterm/pterm v0.12.32-0.20211002183613-ada9ef6790c3
github.com/rancher-sandbox/gofilecache v0.0.0-20210330135715-becdeff5df15
github.com/sirupsen/logrus v1.8.1
github.com/spf13/cast v1.4.1 // indirect
github.com/spf13/cobra v1.2.1
github.com/spf13/viper v1.8.1
@@ -69,17 +65,10 @@ require (
go.uber.org/zap v1.17.0
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97 // indirect
golang.org/x/mod v0.4.2
golang.org/x/oauth2 v0.0.0-20210810183815-faf39c7919d5 // indirect
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
google.golang.org/genproto v0.0.0-20210811021853-ddbe55d93216 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
gopkg.in/ini.v1 v1.63.2 // indirect
gopkg.in/yaml.v2 v2.4.0
gotest.tools/v3 v3.0.2 // indirect
helm.sh/helm/v3 v3.3.4
)
replace github.com/docker/docker => github.com/Luet-lab/moby v17.12.0-ce-rc1.0.20200605210607-749178b8f80d+incompatible

go.sum (554 changed lines): diff suppressed because it is too large


@@ -18,7 +18,7 @@ package client_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)


@@ -20,8 +20,8 @@ import (
"fmt"
"strings"
"github.com/google/go-containerregistry/pkg/crane"
"github.com/mudler/luet/pkg/api/client/utils"
image "github.com/mudler/luet/pkg/api/core/image"
)
func TreePackages(treedir string) (searchResult SearchResult, err error) {
@@ -35,11 +35,6 @@ func TreePackages(treedir string) (searchResult SearchResult, err error) {
return
}
func imageAvailable(image string) bool {
_, err := crane.Digest(image)
return err == nil
}
type SearchResult struct {
Packages []Package
}
@@ -66,7 +61,7 @@ func (p Package) ImageMetadata(repository string) string {
}
func (p Package) ImageAvailable(repository string) bool {
return imageAvailable(p.Image(repository))
return image.Available(p.Image(repository))
}
func (p Package) Equal(pp Package) bool {


@@ -17,7 +17,7 @@ package client_test
import (
. "github.com/mudler/luet/pkg/api/client"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)


@@ -1,8 +1,10 @@
package bus
import (
"fmt"
"github.com/mudler/go-pluggable"
"github.com/sirupsen/logrus"
"github.com/mudler/luet/pkg/api/core/types"
)
var (
@@ -12,6 +14,10 @@ var (
EventPackageInstall pluggable.EventType = "package.install"
// EventPackageUnInstall is the event fired when a new package is being uninstalled
EventPackageUnInstall pluggable.EventType = "package.uninstall"
// EventPreUpgrade is the event fired before an upgrade is attempted
EventPreUpgrade pluggable.EventType = "package.pre.upgrade"
// EventPostUpgrade is the event fired after an upgrade is done
EventPostUpgrade pluggable.EventType = "package.post.upgrade"
// Package build
@@ -61,6 +67,8 @@ var Manager *Bus = &Bus{
EventPackageInstall,
EventPackageUnInstall,
EventPackagePreBuild,
EventPreUpgrade,
EventPostUpgrade,
EventPackagePreBuildArtifact,
EventPackagePostBuildArtifact,
EventPackagePostBuild,
@@ -82,15 +90,12 @@ type Bus struct {
*pluggable.Manager
}
func (b *Bus) Initialize(plugin ...string) {
func (b *Bus) Initialize(ctx types.Context, plugin ...string) {
b.Manager.Load(plugin...).Register()
for _, e := range b.Manager.Events {
b.Manager.Response(e, func(p *pluggable.Plugin, r *pluggable.EventResponse) {
if r.Errored() {
logrus.Fatal("Plugin", p.Name, "at", p.Executable, "Error", r.Error)
}
logrus.Debug(
ctx.Debug(
"plugin_event",
"received from",
p.Name,
@@ -98,6 +103,16 @@ func (b *Bus) Initialize(plugin ...string) {
p.Executable,
r,
)
if r.Errored() {
err := fmt.Sprintf("Plugin %s at %s had an error: %s", p.Name, p.Executable, r.Error)
ctx.Fatal(err)
} else {
if r.State != "" {
message := fmt.Sprintf(":lollipop: Plugin %s at %s succeeded, state reported:", p.Name, p.Executable)
ctx.Success(message)
ctx.Info(r.State)
}
}
})
}
}


@@ -53,18 +53,10 @@ func NewConfigProtect(annotationDir string) *ConfigProtect {
}
return &ConfigProtect{
AnnotationDir: annotationDir,
MapProtected: make(map[string]bool, 0),
MapProtected: make(map[string]bool),
}
}
func (c *ConfigProtect) AddAnnotationDir(d string) {
c.AnnotationDir = d
}
func (c *ConfigProtect) GetAnnotationDir() string {
return c.AnnotationDir
}
func (c *ConfigProtect) Map(files []string, protected []ConfigProtectConfFile) {
for _, file := range files {
@@ -105,7 +97,7 @@ func (c *ConfigProtect) Protected(file string) bool {
func (c *ConfigProtect) GetProtectFiles(withSlash bool) []string {
ans := []string{}
for key, _ := range c.MapProtected {
for key := range c.MapProtected {
if withSlash {
ans = append(ans, key)
} else {


@@ -18,9 +18,9 @@ package config_test
import (
config "github.com/mudler/luet/pkg/api/core/config"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
@@ -29,7 +29,7 @@ var _ = Describe("Config", func() {
Context("Test config protect", func() {
It("Protect1", func() {
ctx := types.NewContext()
ctx := context.NewContext()
files := []string{
"etc/foo/my.conf",
"usr/bin/foo",
@@ -59,7 +59,7 @@ var _ = Describe("Config", func() {
})
It("Protect2", func() {
ctx := types.NewContext()
ctx := context.NewContext()
files := []string{
"etc/foo/my.conf",
@@ -86,7 +86,7 @@ var _ = Describe("Config", func() {
})
It("Protect3: Annotation dir without initial slash", func() {
ctx := types.NewContext()
ctx := context.NewContext()
files := []string{
"etc/foo/my.conf",


@@ -19,7 +19,7 @@ package config_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)


@@ -1,4 +1,4 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@sabayon.org>
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
@@ -13,18 +13,16 @@
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package docker_test
package context_test
import (
"github.com/mudler/luet/pkg/helpers/docker"
. "github.com/onsi/ginkgo"
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("StripInvalidStringsFromImage", func() {
Context("Image names", func() {
It("strips invalid chars", func() {
Expect(docker.StripInvalidStringsFromImage("foo+bar")).To(Equal("foo-bar"))
})
})
})
func TestContext(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Context Suite")
}


@@ -0,0 +1,159 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package context
import (
"context"
"os"
"path/filepath"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
gc "github.com/mudler/luet/pkg/api/core/garbagecollector"
"github.com/mudler/luet/pkg/api/core/logger"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/pkg/errors"
)
type Context struct {
*logger.Logger
context.Context
types.GarbageCollector
Config *types.LuetConfig
NoSpinner bool
annotations map[string]interface{}
}
// SetAnnotation sets generic annotations to hold in a context
func (c *Context) SetAnnotation(s string, i interface{}) {
c.annotations[s] = i
}
// GetAnnotation returns a generic annotation held in the context
func (c *Context) GetAnnotation(s string) interface{} {
return c.annotations[s]
}
type ContextOption func(c *Context) error
// WithLogger sets the logger
func WithLogger(l *logger.Logger) ContextOption {
return func(c *Context) error {
c.Logger = l
return nil
}
}
// WithConfig sets the luet config
func WithConfig(cc *types.LuetConfig) ContextOption {
return func(c *Context) error {
c.Config = cc
return nil
}
}
// NOTE: the GC needs to be instantiated from the system TmpDirBase when a new context is created
// WithGarbageCollector sets the Garbage collector for the given context
func WithGarbageCollector(l types.GarbageCollector) ContextOption {
return func(c *Context) error {
if !filepath.IsAbs(l.String()) {
abs, err := fileHelper.Rel2Abs(l.String())
if err != nil {
return errors.Wrap(err, "while converting relative path to absolute path")
}
l = gc.GarbageCollector(abs)
}
c.GarbageCollector = l
return nil
}
}
// NewContext returns a new context.
// It accepts a garbage collector, a config and a logger as options
func NewContext(opts ...ContextOption) *Context {
l, _ := logger.New()
d := &Context{
annotations: make(map[string]interface{}),
Logger: l,
GarbageCollector: gc.GarbageCollector(filepath.Join(os.TempDir(), "tmpluet")),
Config: &types.LuetConfig{
ConfigFromHost: true,
Logging: types.LuetLoggingConfig{},
General: types.LuetGeneralConfig{},
System: types.LuetSystemConfig{
DatabasePath: filepath.Join("var", "db"),
PkgsCachePath: filepath.Join("var", "db", "packages"),
},
Solver: types.LuetSolverOptions{},
},
}
for _, o := range opts {
o(d)
}
return d
}
// WithLoggingContext returns a copy of the context with a contextualized logger
func (c *Context) WithLoggingContext(name string) types.Context {
configCopy := *c.Config
configCopy.System = c.Config.System
configCopy.General = c.Config.General
configCopy.Logging = c.Config.Logging
ctx := *c
ctxCopy := &ctx
ctxCopy.Config = &configCopy
ctxCopy.annotations = ctx.annotations
ctxCopy.Logger, _ = c.Logger.Copy(logger.WithContext(name))
return ctxCopy
}
// Copy returns a context copy with a reset logging context
func (c *Context) Copy() types.Context {
return c.WithLoggingContext("")
}
func (c *Context) Warning(mess ...interface{}) {
c.Logger.Warn(mess...)
if c.Config.General.FatalWarns {
os.Exit(2)
}
}
func (c *Context) Warn(mess ...interface{}) {
c.Warning(mess...)
}
func (c *Context) Warnf(t string, mess ...interface{}) {
c.Logger.Warnf(t, mess...)
if c.Config.General.FatalWarns {
os.Exit(2)
}
}
func (c *Context) Warningf(t string, mess ...interface{}) {
c.Warnf(t, mess...)
}
func (c *Context) GetConfig() types.LuetConfig {
return *c.Config
}
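As a rough usage sketch of the new context package, based only on the options and methods shown in this diff (the garbage-collector path and the logging context name are hypothetical):
package main

import (
	"github.com/mudler/luet/pkg/api/core/context"
	gc "github.com/mudler/luet/pkg/api/core/garbagecollector"
)

func main() {
	// Build a context with the defaults plus an explicit garbage collector;
	// a custom config could also be injected with context.WithConfig.
	ctx := context.NewContext(
		context.WithGarbageCollector(gc.GarbageCollector("/tmp/luet-example")),
	)
	defer ctx.Clean() // removes anything created through ctx.TempDir/TempFile

	// Derive a copy of the context with a contextualized logger for a sub-task.
	sub := ctx.WithLoggingContext("repository-sync")
	sub.Info("syncing") // logged with the "repository-sync" context
}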


@@ -0,0 +1,59 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package gc
import (
"io/ioutil"
"os"
)
type GarbageCollector string
func (c GarbageCollector) String() string {
return string(c)
}
func (c GarbageCollector) init() error {
if _, err := os.Stat(string(c)); err != nil {
if os.IsNotExist(err) {
err = os.MkdirAll(string(c), os.ModePerm)
if err != nil {
return err
}
}
}
return nil
}
func (c GarbageCollector) Clean() error {
return os.RemoveAll(string(c))
}
func (c GarbageCollector) TempDir(pattern string) (string, error) {
err := c.init()
if err != nil {
return "", err
}
return ioutil.TempDir(string(c), pattern)
}
func (c GarbageCollector) TempFile(s string) (*os.File, error) {
err := c.init()
if err != nil {
return nil, err
}
return ioutil.TempFile(string(c), s)
}
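A minimal sketch of using the garbage collector on its own, assuming only the API above (the base directory is hypothetical):
package main

import (
	"fmt"

	gc "github.com/mudler/luet/pkg/api/core/garbagecollector"
)

func main() {
	collector := gc.GarbageCollector("/tmp/luet-gc-example")

	// TempDir/TempFile lazily create the base directory on first use.
	dir, err := collector.TempDir("unpack")
	if err != nil {
		panic(err)
	}
	fmt.Println("working in", dir)

	// Clean removes the base directory and everything below it.
	if err := collector.Clean(); err != nil {
		panic(err)
	}
}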

pkg/api/core/image/cache.go (new file, 168 lines)

@@ -0,0 +1,168 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image
import (
"encoding/json"
"os"
"strings"
"github.com/peterbourgon/diskv"
)
// Cache represents a key-value store which is capable of upgrading to disk when it
// reaches a pre-defined threshold.
type Cache struct {
store *diskv.Diskv
memory map[string]string
dir string
onDisk bool
maxmemorySize, maxItemSize int
}
// NewCache creates a new key-value cache.
// The cache acts in memory as long as maxItemsize is not reached.
// Once the threshold is met the cache is offloaded to disk automatically,
// with a buffer of maxmemorySize kept in memory.
func NewCache(path string, maxmemorySize, maxItemsize int) *Cache {
disk := diskv.New(diskv.Options{
BasePath: path,
CacheSizeMax: uint64(maxmemorySize), // in-memory buffer for the disk store
})
return &Cache{
memory: make(map[string]string),
store: disk,
dir: path,
maxmemorySize: maxmemorySize,
maxItemSize: maxItemsize,
}
}
// cleanKey is needed because the disk cache stores each entry as a separate file,
// so key names must not contain the path separator.
// XXX: This is inconvenient because, while iterating over results, we can no
// longer rely on the original key name.
// We don't do any hashing to avoid any performance impact
func cleanKey(s string) string {
return strings.ReplaceAll(s, string(os.PathSeparator), "_")
}
// Count returns the number of items in the cache.
// If the cache is on disk this might be an expensive call.
func (c *Cache) Count() int {
if !c.onDisk {
return len(c.memory)
}
count := 0
for range c.store.Keys(nil) {
count++
}
return count
}
// Get attempts to retrieve a value for a key
func (c *Cache) Get(key string) (value string, found bool) {
if !c.onDisk {
v, ok := c.memory[key]
return v, ok
}
v, err := c.store.Read(cleanKey(key))
if err == nil {
found = true
}
value = string(v)
return
}
func (c *Cache) flushToDisk() {
for k, v := range c.memory {
c.store.Write(cleanKey(k), []byte(v))
}
c.memory = make(map[string]string)
c.onDisk = true
}
// Set updates or inserts a new value
func (c *Cache) Set(key, value string) error {
if !c.onDisk && c.Count() >= c.maxItemSize && c.maxItemSize != 0 {
c.flushToDisk()
}
if c.onDisk {
return c.store.Write(cleanKey(key), []byte(value))
}
c.memory[key] = value
return nil
}
// SetValue updates or inserts a new value by marshalling it into JSON.
func (c *Cache) SetValue(key string, value interface{}) error {
dat, err := json.Marshal(value)
if err != nil {
return err
}
return c.Set(cleanKey(key), string(dat))
}
// CacheResult represents the key/value result when
// iterating over the cache
type CacheResult struct {
key, value string
}
// Value returns the underlying value
func (c CacheResult) Value() string {
return c.value
}
// Key returns the cache result key
func (c CacheResult) Key() string {
return c.key
}
// Unmarshal decodes the result into the given interface. Use it to retrieve data
// set with SetValue
func (c CacheResult) Unmarshal(i interface{}) error {
return json.Unmarshal([]byte(c.Value()), i)
}
// All iterates over the cache by key
func (c *Cache) All(fn func(CacheResult)) {
if !c.onDisk {
for k, v := range c.memory {
fn(CacheResult{key: k, value: v})
}
return
}
for key := range c.store.Keys(nil) {
val, _ := c.store.Read(key)
fn(CacheResult{key: key, value: string(val)})
}
}
// Clean empties the cache and removes its backing directory
func (c *Cache) Clean() {
c.memory = make(map[string]string)
c.onDisk = false
os.RemoveAll(c.dir)
}
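A short sketch of the memory-to-disk behavior described above, using only the API from this file (directory and thresholds are hypothetical; the test file below exercises the same paths):
package main

import (
	"fmt"

	"github.com/mudler/luet/pkg/api/core/image"
)

func main() {
	// Keep at most 2 entries in memory, then transparently offload to disk,
	// buffering up to 10MB of the disk store back in memory.
	cache := image.NewCache("/tmp/luet-cache-example", 10*1024*1024, 2)
	defer cache.Clean()

	cache.Set("foo", "bar")
	cache.SetValue("baz", map[string]string{"k": "v"}) // stored as JSON

	if v, found := cache.Get("foo"); found {
		fmt.Println("foo =", v)
	}

	cache.All(func(r image.CacheResult) {
		fmt.Println(r.Key(), "=>", r.Value())
	})
}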


@@ -0,0 +1,98 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image_test
import (
"path/filepath"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Cache", func() {
ctx := context.NewContext()
Context("used as k/v store", func() {
cache := &Cache{}
var dir string
BeforeEach(func() {
ctx = context.NewContext()
var err error
dir, err = ctx.TempDir("foo")
Expect(err).ToNot(HaveOccurred())
cache = NewCache(dir, 10*1024*1024, 1) // 10MB Cache when upgrading to files. Max volatile memory of 1 row.
})
AfterEach(func() {
cache.Clean()
})
It("does handle automatically memory upgrade", func() {
cache.Set("foo", "bar")
v, found := cache.Get("foo")
Expect(found).To(BeTrue())
Expect(v).To(Equal("bar"))
Expect(file.Exists(filepath.Join(dir, "foo"))).To(BeFalse())
cache.Set("baz", "bar")
Expect(file.Exists(filepath.Join(dir, "foo"))).To(BeTrue())
Expect(file.Exists(filepath.Join(dir, "baz"))).To(BeTrue())
v, found = cache.Get("foo")
Expect(found).To(BeTrue())
Expect(v).To(Equal("bar"))
Expect(cache.Count()).To(Equal(2))
})
It("does CRUD", func() {
cache.Set("foo", "bar")
v, found := cache.Get("foo")
Expect(found).To(BeTrue())
Expect(v).To(Equal("bar"))
hit := false
cache.All(func(c CacheResult) {
hit = true
Expect(c.Key()).To(Equal("foo"))
Expect(c.Value()).To(Equal("bar"))
})
Expect(hit).To(BeTrue())
})
It("Unmarshals values", func() {
type testStruct struct {
Test string
}
cache.SetValue("foo", &testStruct{Test: "baz"})
n := &testStruct{}
cache.All(func(cr CacheResult) {
err := cr.Unmarshal(n)
Expect(err).ToNot(HaveOccurred())
})
Expect(n.Test).To(Equal("baz"))
})
})
})

View File

@@ -0,0 +1,97 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image
import (
"io"
"os"
containerdCompression "github.com/containerd/containerd/archive/compression"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/empty"
"github.com/google/go-containerregistry/pkg/v1/mutate"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"github.com/pkg/errors"
)
func imageFromTar(imagename, architecture, OS string, opener func() (io.ReadCloser, error)) (name.Reference, v1.Image, error) {
newRef, err := name.ParseReference(imagename)
if err != nil {
return nil, nil, err
}
layer, err := tarball.LayerFromOpener(opener)
if err != nil {
return nil, nil, err
}
baseImage := empty.Image
cfg, err := baseImage.ConfigFile()
if err != nil {
return nil, nil, err
}
cfg.Architecture = architecture
cfg.OS = OS
baseImage, err = mutate.ConfigFile(baseImage, cfg)
if err != nil {
return nil, nil, err
}
img, err := mutate.Append(baseImage, mutate.Addendum{
Layer: layer,
History: v1.History{
CreatedBy: "luet",
Comment: "Custom image",
},
})
if err != nil {
return nil, nil, err
}
return newRef, img, nil
}
// CreateTar creates an image tarball from a standard tarball
func CreateTar(srctar, dstimageTar, imagename, architecture, OS string) error {
dstFile, err := os.Create(dstimageTar)
if err != nil {
return errors.Wrap(err, "Cannot create "+dstimageTar)
}
defer dstFile.Close()
newRef, img, err := imageFromTar(imagename, architecture, OS, func() (io.ReadCloser, error) {
f, err := os.Open(srctar)
if err != nil {
return nil, errors.Wrap(err, "Cannot open "+srctar)
}
decompressed, err := containerdCompression.DecompressStream(f)
if err != nil {
return nil, errors.Wrap(err, "Cannot open "+srctar)
}
return decompressed, nil
})
if err != nil {
return err
}
// NOTE: We might also stream that back to the daemon with daemon.Write(tag, img)
return tarball.Write(newRef, img, dstFile)
}
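
A hedged sketch of how CreateTar could be driven (not part of this changeset); the paths and image name are placeholders.

package main

import (
	"log"
	"runtime"

	"github.com/mudler/luet/pkg/api/core/image"
)

func main() {
	// Wrap an existing rootfs tarball into a loadable image tarball
	// for the current architecture/OS.
	err := image.CreateTar("/tmp/rootfs.tar.gz", "/tmp/image.tar",
		"example/image:latest", runtime.GOARCH, runtime.GOOS)
	if err != nil {
		log.Fatal(err)
	}
	// The result can then be loaded into a daemon, e.g. `docker load -i /tmp/image.tar`.
}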

View File

@@ -0,0 +1,80 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image_test
import (
"os"
"path/filepath"
"runtime"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/api/core/types/artifact"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Create", func() {
Context("Creates an OCI image from a standard tar", func() {
It("creates an image which is loadable", func() {
ctx := context.NewContext()
dst, err := ctx.TempFile("dst")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(dst.Name())
srcTar, err := ctx.TempFile("srcTar")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(srcTar.Name())
b := backend.NewSimpleDockerBackend(ctx)
b.DownloadImage(backend.Options{ImageName: "alpine"})
img, err := b.ImageReference("alpine", false)
Expect(err).ToNot(HaveOccurred())
_, dir, err := Extract(ctx, img, nil)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(dir)
Expect(file.Touch(filepath.Join(dir, "test"))).ToNot(HaveOccurred())
Expect(file.Exists(filepath.Join(dir, "bin"))).To(BeTrue())
a := artifact.NewPackageArtifact(srcTar.Name())
a.Compress(dir, 1)
// Unfortunately there is no other easy way to test this
err = CreateTar(srcTar.Name(), dst.Name(), "testimage", runtime.GOARCH, runtime.GOOS)
Expect(err).ToNot(HaveOccurred())
b.LoadImage(dst.Name())
Expect(b.ImageExists("testimage")).To(BeTrue())
img, err = b.ImageReference("testimage", false)
Expect(err).ToNot(HaveOccurred())
_, dir, err = Extract(ctx, img, nil)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(dir)
Expect(file.Exists(filepath.Join(dir, "bin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(dir, "test"))).To(BeTrue())
})
})
})

pkg/api/core/image/delta.go
View File

@@ -0,0 +1,111 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image
import (
"archive/tar"
"io"
"regexp"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/mutate"
)
func compileRegexes(regexes []string) []*regexp.Regexp {
var result []*regexp.Regexp
for _, i := range regexes {
r, e := regexp.Compile(i)
if e != nil {
continue
}
result = append(result, r)
}
return result
}
type ImageDiffNode struct {
Name string `json:"Name"`
Size int `json:"Size"`
}
type ImageDiff struct {
Additions []ImageDiffNode `json:"Adds"`
Deletions []ImageDiffNode `json:"Dels"`
Changes []ImageDiffNode `json:"Mods"`
}
func Delta(srcimg, dstimg v1.Image) (res ImageDiff, err error) {
srcReader := mutate.Extract(srcimg)
defer srcReader.Close()
dstReader := mutate.Extract(dstimg)
defer dstReader.Close()
filesSrc, filesDst := map[string]int64{}, map[string]int64{}
srcTar := tar.NewReader(srcReader)
dstTar := tar.NewReader(dstReader)
for {
var hdr *tar.Header
hdr, err = srcTar.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
return
}
filesSrc[hdr.Name] = hdr.Size
}
for {
var hdr *tar.Header
hdr, err = dstTar.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
return
}
filesDst[hdr.Name] = hdr.Size
}
err = nil
for f, size := range filesDst {
if size2, exist := filesSrc[f]; exist && size2 != size {
res.Changes = append(res.Changes, ImageDiffNode{
Name: f,
Size: int(size),
})
} else if !exist {
res.Additions = append(res.Additions, ImageDiffNode{
Name: f,
Size: int(size),
})
}
}
for f, size := range filesSrc {
if _, exist := filesDst[f]; !exist {
res.Deletions = append(res.Deletions, ImageDiffNode{
Name: f,
Size: int(size),
})
}
}
return
}
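
A minimal sketch showing how Delta could be used to compare two images from the local daemon (not part of this changeset); the image names are placeholders and must already be pulled.

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/mudler/luet/pkg/api/core/image"
)

func main() {
	srcRef, _ := name.ParseReference("alpine")
	dstRef, _ := name.ParseReference("golang:alpine")

	src, err := daemon.Image(srcRef)
	if err != nil {
		log.Fatal(err)
	}
	dst, err := daemon.Image(dstRef)
	if err != nil {
		log.Fatal(err)
	}

	// Compute files added, changed, and removed going from src to dst.
	diff, err := image.Delta(src, dst)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("added=%d changed=%d deleted=%d\n",
		len(diff.Additions), len(diff.Changes), len(diff.Deletions))
}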

View File

@@ -0,0 +1,139 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image_test
import (
"io/ioutil"
"os"
"path/filepath"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
daemon "github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Delta", func() {
Context("Generates deltas of images", func() {
It("computes delta", func() {
ref, err := name.ParseReference("alpine")
Expect(err).ToNot(HaveOccurred())
img, err := daemon.Image(ref)
Expect(err).ToNot(HaveOccurred())
layers, err := Delta(img, img)
Expect(err).ToNot(HaveOccurred())
Expect(len(layers.Changes)).To(Equal(0))
Expect(len(layers.Additions)).To(Equal(0))
Expect(len(layers.Deletions)).To(Equal(0))
})
Context("ExtractDeltaFiles", func() {
ctx := context.NewContext()
var tmpfile *os.File
var ref, ref2 name.Reference
var img, img2 v1.Image
var err error
ref, _ = name.ParseReference("alpine")
ref2, _ = name.ParseReference("golang:alpine")
img, _ = daemon.Image(ref)
img2, _ = daemon.Image(ref2)
BeforeEach(func() {
ctx = context.NewContext()
tmpfile, err = ioutil.TempFile("", "delta")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpfile.Name()) // clean up
})
It("Extract all deltas", func() {
f, err := ExtractDeltaAdditionsFiles(ctx, img, []string{}, []string{})
Expect(err).ToNot(HaveOccurred())
_, tmpdir, err := Extract(
ctx,
img2,
f,
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
// No extra dirs are present
Expect(file.Exists(filepath.Join(tmpdir, "home"))).To(BeFalse())
// Cache from go
Expect(file.Exists(filepath.Join(tmpdir, "root", ".cache"))).To(BeTrue())
// sh is present from alpine, hence not in the result
Expect(file.Exists(filepath.Join(tmpdir, "bin", "sh"))).To(BeFalse())
// /usr/local/go is part of golang:alpine
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go", "bin"))).To(BeTrue())
})
It("Extract deltas and excludes /usr/local/go", func() {
f, err := ExtractDeltaAdditionsFiles(ctx, img, []string{}, []string{"usr/local/go"})
Expect(err).ToNot(HaveOccurred())
Expect(err).ToNot(HaveOccurred())
_, tmpdir, err := Extract(
ctx,
img2,
f,
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go"))).To(BeFalse())
})
It("Extract deltas and excludes /usr/local/go/bin, but includes /usr/local/go", func() {
f, err := ExtractDeltaAdditionsFiles(ctx, img, []string{"usr/local/go"}, []string{"usr/local/go/bin"})
Expect(err).ToNot(HaveOccurred())
_, tmpdir, err := Extract(
ctx,
img2,
f,
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go", "bin"))).To(BeFalse())
})
It("Extract deltas and includes /usr/local/go", func() {
f, err := ExtractDeltaAdditionsFiles(ctx, img, []string{"usr/local/go"}, []string{})
Expect(err).ToNot(HaveOccurred())
_, tmpdir, err := Extract(
ctx,
img2,
f,
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "local", "go"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "root", ".cache"))).To(BeFalse())
})
})
})
})

View File

@@ -0,0 +1,42 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image
import (
"crypto/tls"
"net/http"
"github.com/google/go-containerregistry/pkg/crane"
"github.com/google/go-containerregistry/pkg/v1/remote"
)
// Available checks if the image is available in the remote endpoint.
func Available(image string, opt ...crane.Option) bool {
// We use crane.Insecure because we only check whether the image is available.
// It's the daemon's duty to decide whether to use it, based on the host settings.
transport := remote.DefaultTransport.Clone()
transport.TLSClientConfig = &tls.Config{
InsecureSkipVerify: true, //nolint: gosec
}
var rt http.RoundTripper = transport
if len(opt) == 0 {
opt = append(opt, crane.Insecure, crane.WithTransport(rt))
}
_, err := crane.Digest(image, opt...)
return err == nil
}
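
A small sketch of how Available could gate a build step (not part of this changeset); the image reference is a placeholder.

package main

import (
	"fmt"

	"github.com/mudler/luet/pkg/api/core/image"
)

func main() {
	// Available only resolves a digest on the remote endpoint,
	// it never pulls the image itself.
	if image.Available("quay.io/example/foo:1.0") {
		fmt.Println("image already published, skipping build")
	} else {
		fmt.Println("image missing, needs to be built")
	}
}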

View File

@@ -0,0 +1,222 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image
import (
"archive/tar"
"context"
"io"
"os"
"path/filepath"
"strings"
containerdarchive "github.com/containerd/containerd/archive"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/mutate"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/pkg/errors"
)
// ExtractDeltaAdditionsFiles returns a filter that computes the delta against
// the given source image, considering added files only, and further filters
// them based on the regexes in the includes and excludes lists.
func ExtractDeltaAdditionsFiles(
ctx types.Context,
srcimg v1.Image,
includes []string, excludes []string,
) (func(h *tar.Header) (bool, error), error) {
includeRegexp := compileRegexes(includes)
excludeRegexp := compileRegexes(excludes)
srcfilesd, err := ctx.TempDir("srcfiles")
if err != nil {
return nil, err
}
filesSrc := NewCache(srcfilesd, 50*1024*1024, 10000)
srcReader := mutate.Extract(srcimg)
defer srcReader.Close()
srcTar := tar.NewReader(srcReader)
for {
var hdr *tar.Header
hdr, err := srcTar.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
return nil, err
}
filesSrc.Set(hdr.Name, "")
}
return func(h *tar.Header) (bool, error) {
fileName := filepath.Join(string(os.PathSeparator), h.Name)
_, exists := filesSrc.Get(h.Name)
if exists {
return false, nil
}
switch {
case len(includes) == 0 && len(excludes) != 0:
for _, i := range excludeRegexp {
if i.MatchString(filepath.Join(string(os.PathSeparator), h.Name)) &&
fileName == filepath.Join(string(os.PathSeparator), h.Name) {
return false, nil
}
}
ctx.Debug("Adding name", fileName)
return true, nil
case len(includes) > 0 && len(excludes) == 0:
for _, i := range includeRegexp {
if i.MatchString(filepath.Join(string(os.PathSeparator), h.Name)) && fileName == filepath.Join(string(os.PathSeparator), h.Name) {
ctx.Debug("Adding name", fileName)
return true, nil
}
}
return false, nil
case len(includes) != 0 && len(excludes) != 0:
for _, i := range includeRegexp {
if i.MatchString(filepath.Join(string(os.PathSeparator), h.Name)) && fileName == filepath.Join(string(os.PathSeparator), h.Name) {
for _, e := range excludeRegexp {
if e.MatchString(fileName) {
return false, nil
}
}
ctx.Debug("Adding name", fileName)
return true, nil
}
}
return false, nil
default:
ctx.Debug("Adding name", fileName)
return true, nil
}
}, nil
}
// ExtractFiles returns a filter that extracts files from the given prefix path (if not empty).
// It then filters files by include and exclude lists, whose entries can be regexes.
func ExtractFiles(
ctx types.Context,
prefixPath string,
includes []string, excludes []string,
) func(h *tar.Header) (bool, error) {
includeRegexp := compileRegexes(includes)
excludeRegexp := compileRegexes(excludes)
return func(h *tar.Header) (bool, error) {
fileName := filepath.Join(string(os.PathSeparator), h.Name)
switch {
case len(includes) == 0 && len(excludes) != 0:
for _, i := range excludeRegexp {
if i.MatchString(filepath.Join(prefixPath, fileName)) {
return false, nil
}
}
if prefixPath != "" {
return strings.HasPrefix(fileName, prefixPath), nil
}
ctx.Debug("Adding name", fileName)
return true, nil
case len(includes) > 0 && len(excludes) == 0:
for _, i := range includeRegexp {
if i.MatchString(filepath.Join(prefixPath, fileName)) {
if prefixPath != "" {
return strings.HasPrefix(fileName, prefixPath), nil
}
ctx.Debug("Adding name", fileName)
return true, nil
}
}
return false, nil
case len(includes) != 0 && len(excludes) != 0:
for _, i := range includeRegexp {
if i.MatchString(filepath.Join(prefixPath, fileName)) {
for _, e := range excludeRegexp {
if e.MatchString(filepath.Join(prefixPath, fileName)) {
return false, nil
}
}
if prefixPath != "" {
return strings.HasPrefix(fileName, prefixPath), nil
}
ctx.Debug("Adding name", fileName)
return true, nil
}
}
return false, nil
default:
if prefixPath != "" {
return strings.HasPrefix(fileName, prefixPath), nil
}
return true, nil
}
}
}
// ExtractReader performs the extraction over the io.ReadCloser,
// extracting the files into output. It accepts an optional filter
// and additional containerd ApplyOpts.
func ExtractReader(ctx types.Context, reader io.ReadCloser, output string, filter func(h *tar.Header) (bool, error), opts ...containerdarchive.ApplyOpt) (int64, string, error) {
defer reader.Close()
// If no filter is specified, grab all.
if filter == nil {
filter = func(h *tar.Header) (bool, error) { return true, nil }
}
opts = append(opts, containerdarchive.WithFilter(filter))
// Handle the extraction
c, err := containerdarchive.Apply(context.Background(), output, reader, opts...)
if err != nil {
return 0, "", err
}
return c, output, nil
}
// Extract is syntactic sugar around ExtractReader. It extracts an image into a temporary directory.
func Extract(ctx types.Context, img v1.Image, filter func(h *tar.Header) (bool, error), opts ...containerdarchive.ApplyOpt) (int64, string, error) {
tmpdiffs, err := ctx.TempDir("extraction")
if err != nil {
return 0, "", errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
return ExtractReader(ctx, mutate.Extract(img), tmpdiffs, filter, opts...)
}
// ExtractTo is syntactic sugar around ExtractReader, extracting the image into the given output directory.
func ExtractTo(ctx types.Context, img v1.Image, output string, filter func(h *tar.Header) (bool, error), opts ...containerdarchive.ApplyOpt) (int64, string, error) {
return ExtractReader(ctx, mutate.Extract(img), output, filter, opts...)
}
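
A minimal sketch combining ExtractFiles with Extract, mirroring the arguments used in the tests (not part of this changeset); the image must be present in the local daemon.

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/api/core/image"
)

func main() {
	ctx := context.NewContext()

	ref, err := name.ParseReference("alpine")
	if err != nil {
		log.Fatal(err)
	}
	img, err := daemon.Image(ref)
	if err != nil {
		log.Fatal(err)
	}

	// Extract only /usr, keeping paths matching "bin" and dropping those matching "sbin".
	filter := image.ExtractFiles(ctx, "/usr", []string{"bin"}, []string{"sbin"})
	size, dir, err := image.Extract(ctx, img, filter)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("extracted", size, "bytes into", dir)
}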

View File

@@ -0,0 +1,111 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image_test
import (
"io/ioutil"
"os"
"path/filepath"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
daemon "github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Extract", func() {
Context("extract files from images", func() {
Context("ExtractFiles", func() {
ctx := context.NewContext()
var tmpfile *os.File
var ref name.Reference
var img v1.Image
var err error
BeforeEach(func() {
ctx = context.NewContext()
tmpfile, err = ioutil.TempFile("", "extract")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpfile.Name()) // clean up
ref, err = name.ParseReference("alpine")
Expect(err).ToNot(HaveOccurred())
img, err = daemon.Image(ref)
Expect(err).ToNot(HaveOccurred())
})
It("Extract all files", func() {
_, tmpdir, err := Extract(
ctx,
img,
ExtractFiles(ctx, "", []string{}, []string{}),
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "bin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "bin", "sh"))).To(BeTrue())
})
It("Extract specific dir", func() {
_, tmpdir, err := Extract(
ctx,
img,
ExtractFiles(ctx, "/usr", []string{}, []string{}),
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "sbin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "usr", "bin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "bin", "sh"))).To(BeFalse())
})
It("Extract a dir with includes/excludes", func() {
_, tmpdir, err := Extract(
ctx,
img,
ExtractFiles(ctx, "/usr", []string{"bin"}, []string{"sbin"}),
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "bin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "bin", "sh"))).To(BeFalse())
Expect(file.Exists(filepath.Join(tmpdir, "usr", "sbin"))).To(BeFalse())
})
It("Extract with includes/excludes", func() {
_, tmpdir, err := Extract(
ctx,
img,
ExtractFiles(ctx, "", []string{"/usr|/usr/bin"}, []string{"^/bin"}),
)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
Expect(file.Exists(filepath.Join(tmpdir, "usr", "bin"))).To(BeTrue())
Expect(file.Exists(filepath.Join(tmpdir, "bin", "sh"))).To(BeFalse())
})
})
})
})

View File

@@ -0,0 +1,34 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package image_test
import (
"testing"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestImageApi(t *testing.T) {
RegisterFailHandler(Fail)
b := backend.NewSimpleDockerBackend(context.NewContext())
b.DownloadImage(backend.Options{ImageName: "alpine"})
b.DownloadImage(backend.Options{ImageName: "golang:alpine"})
RunSpecs(t, "Image API Suite")
}

View File

@@ -0,0 +1,28 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package logger_test
import (
"testing"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func TestAPITypes(t *testing.T) {
RegisterFailHandler(Fail)
RunSpecs(t, "Types Suite")
}

View File

@@ -0,0 +1,367 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package logger
import (
"fmt"
"os"
"path"
"regexp"
"runtime"
"strings"
"sync"
log "github.com/ipfs/go-log/v2"
"github.com/kyokomi/emoji"
"github.com/pterm/pterm"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
// Logger is the default logger
type Logger struct {
level log.LogLevel
emoji bool
logToFile bool
noSpinner bool
fileLogger *zap.Logger
context string
spinnerLock sync.Mutex
s *pterm.SpinnerPrinter
}
// LogLevel represents a log severity level. Use the package variables as an
// enum.
type LogLevel zapcore.Level
type LoggerOptions func(*Logger) error
var NoSpinner LoggerOptions = func(l *Logger) error {
l.noSpinner = true
return nil
}
func WithLevel(level string) LoggerOptions {
return func(l *Logger) error {
lvl, _ := log.LevelFromString(level) // Defaults to Info
l.level = lvl
if l.level == log.LevelDebug {
pterm.EnableDebugMessages()
}
return nil
}
}
func WithContext(c string) LoggerOptions {
return func(l *Logger) error {
l.context = c
return nil
}
}
func WithFileLogging(p, encoding string) LoggerOptions {
return func(l *Logger) error {
if encoding == "" {
encoding = "console"
}
l.logToFile = true
var err error
cfg := zap.NewProductionConfig()
cfg.OutputPaths = []string{p}
cfg.Level = zap.NewAtomicLevelAt(zapcore.Level(l.level))
cfg.ErrorOutputPaths = []string{}
cfg.Encoding = encoding
cfg.DisableCaller = true
cfg.DisableStacktrace = true
cfg.EncoderConfig.TimeKey = "time"
cfg.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
l.fileLogger, err = cfg.Build()
return err
}
}
var EnableEmoji = func() LoggerOptions {
return func(l *Logger) error {
l.emoji = true
return nil
}
}
func New(opts ...LoggerOptions) (*Logger, error) {
l := &Logger{
level: log.LevelDebug,
s: pterm.DefaultSpinner.WithShowTimer(false).WithRemoveWhenDone(true),
}
for _, o := range opts {
if err := o(l); err != nil {
return nil, err
}
}
return l, nil
}
func (l *Logger) Copy(opts ...LoggerOptions) (*Logger, error) {
c := *l
copy := &c
for _, o := range opts {
if err := o(copy); err != nil {
return nil, err
}
}
return copy, nil
}
func joinMsg(args ...interface{}) (message string) {
for _, m := range args {
message += " " + fmt.Sprintf("%v", m)
}
return
}
func (l *Logger) enabled(lvl log.LogLevel) bool {
return lvl >= l.level
}
var emojiStrip = regexp.MustCompile(`[:][\w]+[:]`)
func (l *Logger) transform(args ...interface{}) (sanitized []interface{}) {
for _, a := range args {
var aString string
// Render emoji codes if enabled, otherwise strip them
if l.emoji {
aString = emoji.Sprint(a)
} else {
aString = emojiStrip.ReplaceAllString(joinMsg(a), "")
}
sanitized = append(sanitized, aString)
}
if l.context != "" {
sanitized = append([]interface{}{fmt.Sprintf("(%s)", l.context)}, sanitized...)
}
return
}
func prefixCodeLine(args ...interface{}) []interface{} {
pc, file, line, ok := runtime.Caller(3)
if ok {
args = append([]interface{}{fmt.Sprintf("(%s:#%d:%v)",
path.Base(file), line, runtime.FuncForPC(pc).Name())}, args...)
}
return args
}
func (l *Logger) send(ll log.LogLevel, f string, args ...interface{}) {
if !l.enabled(ll) {
return
}
sanitizedArgs := joinMsg(l.transform(args...)...)
sanitizedF := joinMsg(l.transform(f)...)
formatDefined := f != ""
switch {
case log.LevelDebug == ll && !formatDefined:
pterm.Debug.Println(prefixCodeLine(sanitizedArgs)...)
if l.logToFile {
l.fileLogger.Debug(joinMsg(prefixCodeLine(sanitizedArgs)...))
}
case log.LevelDebug == ll && formatDefined:
pterm.Debug.Printfln(joinMsg(prefixCodeLine(sanitizedF)...), args...)
if l.logToFile {
l.fileLogger.Sugar().Debugf(joinMsg(prefixCodeLine(sanitizedF)...), args...)
}
case log.LevelError == ll && !formatDefined:
pterm.Error.Println(pterm.LightRed(sanitizedArgs))
if l.logToFile {
l.fileLogger.Error(sanitizedArgs)
}
case log.LevelError == ll && formatDefined:
pterm.Error.Printfln(pterm.LightRed(sanitizedF), args...)
if l.logToFile {
l.fileLogger.Sugar().Errorf(sanitizedF, args...)
}
case log.LevelFatal == ll && !formatDefined:
pterm.Error.Println(sanitizedArgs)
if l.logToFile {
l.fileLogger.Error(sanitizedArgs)
}
case log.LevelFatal == ll && formatDefined:
pterm.Error.Printfln(sanitizedF, args...)
if l.logToFile {
l.fileLogger.Sugar().Errorf(sanitizedF, args...)
}
//INFO
case log.LevelInfo == ll && !formatDefined:
pterm.Info.Println(sanitizedArgs)
if l.logToFile {
l.fileLogger.Info(sanitizedArgs)
}
case log.LevelInfo == ll && formatDefined:
pterm.Info.Printfln(sanitizedF, args...)
if l.logToFile {
l.fileLogger.Sugar().Infof(sanitizedF, args...)
}
//WARN
case log.LevelWarn == ll && !formatDefined:
pterm.Warning.Println(sanitizedArgs)
if l.logToFile {
l.fileLogger.Warn(sanitizedArgs)
}
case log.LevelWarn == ll && formatDefined:
pterm.Warning.Printfln(sanitizedF, args...)
if l.logToFile {
l.fileLogger.Sugar().Warnf(sanitizedF, args...)
}
}
}
func (l *Logger) Debug(args ...interface{}) {
l.send(log.LevelDebug, "", args...)
}
func (l *Logger) Error(args ...interface{}) {
l.send(log.LevelError, "", args...)
}
func (l *Logger) Trace(args ...interface{}) {
l.send(log.LevelDebug, "", args...)
}
func (l *Logger) Tracef(t string, args ...interface{}) {
l.send(log.LevelDebug, t, args...)
}
func (l *Logger) Fatal(args ...interface{}) {
l.send(log.LevelFatal, "", args...)
os.Exit(1)
}
func (l *Logger) Info(args ...interface{}) {
l.send(log.LevelInfo, "", args...)
}
func (l *Logger) Success(args ...interface{}) {
l.Info(append([]interface{}{"SUCCESS"}, args...)...)
}
func (l *Logger) Panic(args ...interface{}) {
l.Fatal(args...)
}
func (l *Logger) Warn(args ...interface{}) {
l.send(log.LevelWarn, "", args...)
}
func (l *Logger) Warning(args ...interface{}) {
l.Warn(args...)
}
func (l *Logger) Debugf(f string, args ...interface{}) {
l.send(log.LevelDebug, f, args...)
}
func (l *Logger) Errorf(f string, args ...interface{}) {
l.send(log.LevelError, f, args...)
}
func (l *Logger) Fatalf(f string, args ...interface{}) {
l.send(log.LevelFatal, f, args...)
}
func (l *Logger) Infof(f string, args ...interface{}) {
l.send(log.LevelInfo, f, args...)
}
func (l *Logger) Panicf(f string, args ...interface{}) {
l.Fatalf(joinMsg(f), args...)
}
func (l *Logger) Warnf(f string, args ...interface{}) {
l.send(log.LevelWarn, f, args...)
}
func (l *Logger) Warningf(f string, args ...interface{}) {
l.Warnf(f, args...)
}
func (l *Logger) Ask() bool {
var input string
l.Info("Do you want to continue with this operation? [y/N]: ")
_, err := fmt.Scanln(&input)
if err != nil {
return false
}
input = strings.ToLower(input)
if input == "y" || input == "yes" {
return true
}
return false
}
// Spinner starts the spinner
func (l *Logger) Spinner() {
if !IsTerminal() || l.noSpinner {
return
}
l.spinnerLock.Lock()
defer l.spinnerLock.Unlock()
if l.s != nil && !l.s.IsActive {
l.s, _ = l.s.Start()
}
}
func (l *Logger) Screen(text string) {
pterm.DefaultHeader.WithBackgroundStyle(pterm.NewStyle(pterm.BgLightBlue)).WithMargin(2).Println(text)
}
func (l *Logger) SpinnerText(suffix, prefix string) {
if !IsTerminal() || l.noSpinner {
return
}
l.spinnerLock.Lock()
defer l.spinnerLock.Unlock()
if l.level == log.LevelDebug {
fmt.Printf("%s %s\n",
suffix, prefix,
)
} else {
l.s.UpdateText(suffix + prefix)
}
}
func (l *Logger) SpinnerStop() {
if !IsTerminal() || l.noSpinner {
return
}
l.spinnerLock.Lock()
defer l.spinnerLock.Unlock()
if l.s != nil {
l.s.Success()
}
}
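
A minimal sketch of building and using the logger above (not part of this changeset); the log file path and context tags are placeholders.

package main

import (
	"github.com/mudler/luet/pkg/api/core/logger"
)

func main() {
	// Build a debug-level logger that also mirrors output to a file.
	l, err := logger.New(
		logger.WithLevel("debug"),
		logger.WithContext("build"),
		logger.WithFileLogging("/tmp/luet.log", ""),
	)
	if err != nil {
		panic(err)
	}

	l.Info("starting")
	l.Debugf("processing %d packages", 3)

	// Derive a copy that logs under a different context tag.
	dl, _ := l.Copy(logger.WithContext("docker"))
	dl.Warning("falling back to insecure registry check")
}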

View File

@@ -0,0 +1,131 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package logger_test
import (
"io"
"io/ioutil"
"os"
"github.com/gookit/color"
"github.com/mudler/luet/pkg/api/core/logger"
. "github.com/mudler/luet/pkg/api/core/logger"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
func captureStdout(f func(w io.Writer)) string {
originalStdout := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
color.SetOutput(w)
f(w)
_ = w.Close()
out, _ := ioutil.ReadAll(r)
os.Stdout = originalStdout
color.SetOutput(os.Stdout)
_ = r.Close()
return string(out)
}
var _ = Describe("Context and logging", func() {
Context("Context", func() {
It("detect if is a terminal", func() {
Expect(captureStdout(func(w io.Writer) {
_, _, err := GetTerminalSize()
Expect(err).To(HaveOccurred())
Expect(err.Error()).To(Equal("size not detectable"))
os.Stdout.Write([]byte(err.Error()))
})).To(ContainSubstring("size not detectable"))
})
It("respects loglevel", func() {
l, err := New(WithLevel("info"))
Expect(err).ToNot(HaveOccurred())
Expect(captureStdout(func(w io.Writer) {
l.Debug("")
})).To(Equal(""))
l, err = New(WithLevel("debug"))
Expect(err).ToNot(HaveOccurred())
Expect(captureStdout(func(w io.Writer) {
l.Debug("foo")
})).To(ContainSubstring("foo"))
})
It("logs with context", func() {
l, err := New(WithLevel("debug"), WithContext("foo"))
Expect(err).ToNot(HaveOccurred())
Expect(captureStdout(func(w io.Writer) {
l.Debug("bar")
})).To(ContainSubstring("(foo) bar"))
})
It("returns copies with logged context", func() {
l, err := New(WithLevel("debug"))
l, _ = l.Copy(logger.WithContext("bazzz"))
Expect(err).ToNot(HaveOccurred())
Expect(captureStdout(func(w io.Writer) {
l.Debug("bar")
})).To(ContainSubstring("(bazzz) bar"))
})
It("logs to file", func() {
t, err := ioutil.TempFile("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(t.Name()) // clean up
l, err := New(WithLevel("debug"), WithFileLogging(t.Name(), ""))
Expect(err).ToNot(HaveOccurred())
// ctx.Init()
Expect(captureStdout(func(w io.Writer) {
l.Info("foot")
})).To(And(ContainSubstring("INFO"), ContainSubstring("foot")))
Expect(captureStdout(func(w io.Writer) {
l.Success("test")
})).To(And(ContainSubstring("SUCCESS"), ContainSubstring("test")))
Expect(captureStdout(func(w io.Writer) {
l.Error("foobar")
})).To(And(ContainSubstring("ERROR"), ContainSubstring("foobar")))
Expect(captureStdout(func(w io.Writer) {
l.Warning("foowarn")
})).To(And(ContainSubstring("WARNING"), ContainSubstring("foowarn")))
ll, err := ioutil.ReadFile(t.Name())
Expect(err).ToNot(HaveOccurred())
logs := string(ll)
Expect(logs).To(ContainSubstring("foot"))
Expect(logs).To(ContainSubstring("test"))
Expect(logs).To(ContainSubstring("foowarn"))
Expect(logs).To(ContainSubstring("foobar"))
})
})
})

View File

@@ -0,0 +1,43 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package logger
import (
"errors"
"os"
"github.com/mattn/go-isatty"
"golang.org/x/term"
)
func IsTerminal() bool {
return isatty.IsTerminal(os.Stdout.Fd())
}
// GetTerminalSize returns the width and the height of the active terminal.
func GetTerminalSize() (width, height int, err error) {
w, h, err := term.GetSize(int(os.Stdout.Fd()))
if w <= 0 {
w = 0
}
if h <= 0 {
h = 0
}
if err != nil {
err = errors.New("size not detectable")
}
return w, h, err
}
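
A short sketch of the terminal helpers above (not part of this changeset).

package main

import (
	"fmt"

	"github.com/mudler/luet/pkg/api/core/logger"
)

func main() {
	if logger.IsTerminal() {
		// Width/height are 0 and an error is returned when the size is not detectable.
		w, h, err := logger.GetTerminalSize()
		if err == nil {
			fmt.Printf("terminal is %dx%d\n", w, h)
		}
	} else {
		fmt.Println("not attached to a terminal")
	}
}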

View File

@@ -27,18 +27,21 @@ import (
"os"
"path"
"path/filepath"
"regexp"
"runtime"
"github.com/docker/docker/pkg/pools"
v1 "github.com/google/go-containerregistry/pkg/v1"
zstd "github.com/klauspost/compress/zstd"
gzip "github.com/klauspost/pgzip"
//"strconv"
"strings"
"sync"
containerdCompression "github.com/containerd/containerd/archive/compression"
bus "github.com/mudler/luet/pkg/api/core/bus"
config "github.com/mudler/luet/pkg/api/core/config"
types "github.com/mudler/luet/pkg/api/core/types"
bus "github.com/mudler/luet/pkg/bus"
"github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/api/core/types"
backend "github.com/mudler/luet/pkg/compiler/backend"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
@@ -67,6 +70,22 @@ type PackageArtifact struct {
Runtime *pkg.DefaultPackage `json:"runtime,omitempty"`
}
func ImageToArtifact(ctx types.Context, img v1.Image, t compression.Implementation, output string, filter func(h *tar.Header) (bool, error)) (*PackageArtifact, error) {
_, tmpdiffs, err := image.Extract(ctx, img, filter)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tmpdiffs) // clean up
a := NewPackageArtifact(output)
a.CompressionType = t
err = a.Compress(tmpdiffs, 1)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
return a, nil
}
func (p *PackageArtifact) ShallowCopy() *PackageArtifact {
copy := *p
return &copy
@@ -156,19 +175,13 @@ func (a *PackageArtifact) GetFileName() string {
return path.Base(a.Path)
}
func (a *PackageArtifact) genDockerfile() string {
return `
FROM scratch
COPY . /`
}
// CreateArtifactForFile creates a new artifact from the given file
func CreateArtifactForFile(ctx *types.Context, s string, opts ...func(*PackageArtifact)) (*PackageArtifact, error) {
func CreateArtifactForFile(ctx types.Context, s string, opts ...func(*PackageArtifact)) (*PackageArtifact, error) {
if _, err := os.Stat(s); os.IsNotExist(err) {
return nil, errors.Wrap(err, "artifact path doesn't exist")
}
fileName := path.Base(s)
archive, err := ctx.Config.GetSystem().TempDir("archive")
archive, err := ctx.TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
}
@@ -178,7 +191,7 @@ func CreateArtifactForFile(ctx *types.Context, s string, opts ...func(*PackageAr
return nil, errors.Wrapf(err, "error while copying %s to %s", s, dst)
}
artifact, err := ctx.Config.GetSystem().TempDir("artifact")
artifact, err := ctx.TempDir("artifact")
if err != nil {
return nil, errors.Wrap(err, "error met while creating tempdir for "+s)
}
@@ -193,52 +206,26 @@ func CreateArtifactForFile(ctx *types.Context, s string, opts ...func(*PackageAr
type ImageBuilder interface {
BuildImage(backend.Options) error
LoadImage(path string) error
}
// GenerateFinalImage takes an artifact and builds a Docker image with its content
func (a *PackageArtifact) GenerateFinalImage(ctx *types.Context, imageName string, b ImageBuilder, keepPerms bool) (backend.Options, error) {
builderOpts := backend.Options{}
archive, err := ctx.Config.GetSystem().TempDir("archive")
func (a *PackageArtifact) GenerateFinalImage(ctx types.Context, imageName string, b ImageBuilder, keepPerms bool) error {
tempimage, err := ctx.TempFile("tempimage")
if err != nil {
return builderOpts, errors.Wrap(err, "error met while creating tempdir for "+a.Path)
return errors.Wrap(err, "error met while creating tempdir for "+a.Path)
}
defer os.RemoveAll(archive) // clean up
defer os.RemoveAll(tempimage.Name()) // clean up
uncompressedFiles := filepath.Join(archive, "files")
dockerFile := filepath.Join(archive, "Dockerfile")
if err := os.MkdirAll(uncompressedFiles, os.ModePerm); err != nil {
return builderOpts, errors.Wrap(err, "error met while creating tempdir for "+a.Path)
if err := image.CreateTar(a.Path, tempimage.Name(), imageName, runtime.GOARCH, runtime.GOOS); err != nil {
return errors.Wrap(err, "could not create image from tar")
}
if err := a.Unpack(ctx, uncompressedFiles, keepPerms); err != nil {
return builderOpts, errors.Wrap(err, "error met while uncompressing artifact "+a.Path)
if err := b.LoadImage(tempimage.Name()); err != nil {
return errors.Wrap(err, "while loading image")
}
empty, err := fileHelper.DirectoryIsEmpty(uncompressedFiles)
if err != nil {
return builderOpts, errors.Wrap(err, "error met while checking if directory is empty "+uncompressedFiles)
}
// See https://github.com/moby/moby/issues/38039.
// We can't generate FROM scratch empty images. Docker will refuse to export them
// workaround: Inject a .virtual empty file
if empty {
fileHelper.Touch(filepath.Join(uncompressedFiles, ".virtual"))
}
data := a.genDockerfile()
if err := ioutil.WriteFile(dockerFile, []byte(data), 0644); err != nil {
return builderOpts, errors.Wrap(err, "error met while rendering artifact dockerfile "+a.Path)
}
builderOpts = backend.Options{
ImageName: imageName,
SourcePath: archive,
DockerFileName: dockerFile,
Context: uncompressedFiles,
}
return builderOpts, b.BuildImage(builderOpts)
return nil
}
// Compress is responsible to archive and compress to the artifact Path.
@@ -369,7 +356,79 @@ func hashFileContent(path string) (string, error) {
return base64.URLEncoding.EncodeToString(h.Sum(nil)), nil
}
func tarModifierWrapperFunc(ctx *types.Context) func(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
func replaceFileTarWrapper(dst string, inputTarStream io.ReadCloser, mods []string, fn func(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error)) io.ReadCloser {
pipeReader, pipeWriter := io.Pipe()
go func() {
tarReader := tar.NewReader(inputTarStream)
tarWriter := tar.NewWriter(pipeWriter)
defer inputTarStream.Close()
defer tarWriter.Close()
modify := func(name string, original *tar.Header, tarReader io.Reader) error {
header, data, err := fn(dst, name, original, tarReader)
switch {
case err != nil:
return err
case header == nil:
return nil
}
if header.Name == "" {
header.Name = name
}
header.Size = int64(len(data))
if err := tarWriter.WriteHeader(header); err != nil {
return err
}
if len(data) != 0 {
if _, err := tarWriter.Write(data); err != nil {
return err
}
}
return nil
}
// var remaining []string
var err error
var originalHeader *tar.Header
for {
originalHeader, err = tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
pipeWriter.CloseWithError(err)
return
}
if !helpers.Contains(mods, originalHeader.Name) {
// No modifiers for this file, copy the header and data
if err := tarWriter.WriteHeader(originalHeader); err != nil {
pipeWriter.CloseWithError(err)
return
}
if _, err := pools.Copy(tarWriter, tarReader); err != nil {
pipeWriter.CloseWithError(err)
return
}
continue
}
if err := modify(originalHeader.Name, originalHeader, tarReader); err != nil {
pipeWriter.CloseWithError(err)
return
}
}
// Apply the modifiers that haven't matched any files in the archive
pipeWriter.Close()
}()
return pipeReader
}
func tarModifierWrapperFunc(ctx types.Context) func(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
return func(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists I rename target file name with postfix.
var destPath string
@@ -404,11 +463,15 @@ func tarModifierWrapperFunc(ctx *types.Context) func(dst, path string, header *t
}
}
ctx.Debug("Existing file hash: ", existingHash, "Tar file hashsum: ", tarHash)
ctx.Debug(destPath, "- existing file hash: ", existingHash, "Tar file hashsum: ", tarHash)
if fileHelper.Exists(destPath) {
ctx.Debug(destPath, "already exists")
}
// We want to protect the file only if the file hashes differ OR the file sizes differ
differs := (existingHash != "" && existingHash != tarHash) || (err != nil && f != nil && header.Size != f.Size())
// Check if exists
if fileHelper.Exists(destPath) && differs {
ctx.Debug(destPath, "already exists and differs")
for i := 1; i < 1000; i++ {
name := filepath.Join(filepath.Join(filepath.Dir(path),
fmt.Sprintf("._cfg%04d_%s", i, filepath.Base(path))))
@@ -432,11 +495,10 @@ func tarModifierWrapperFunc(ctx *types.Context) func(dst, path string, header *t
}
}
func (a *PackageArtifact) GetProtectFiles(ctx *types.Context) []string {
ans := []string{}
func (a *PackageArtifact) GetProtectFiles(ctx types.Context) (res []string) {
annotationDir := ""
if !ctx.Config.ConfigProtectSkip {
if !ctx.GetConfig().ConfigProtectSkip {
// a.CompileSpec could be nil when artifact.Unpack is used for tree tarball
if a.CompileSpec != nil &&
@@ -449,162 +511,76 @@ func (a *PackageArtifact) GetProtectFiles(ctx *types.Context) []string {
// TODO: check if skip this if we have a.CompileSpec nil
cp := config.NewConfigProtect(annotationDir)
cp.Map(a.Files, ctx.Config.GetConfigProtectConfFiles())
cp.Map(a.Files, ctx.GetConfig().ConfigProtectConfFiles)
// NOTE: for unpack we need files path without initial /
ans = cp.GetProtectFiles(false)
res = cp.GetProtectFiles(false)
}
return ans
return
}
// Unpack Untar and decompress (TODO) to the given path
func (a *PackageArtifact) Unpack(ctx *types.Context, dst string, keepPerms bool) error {
if !strings.HasPrefix(dst, "/") {
func (a *PackageArtifact) Unpack(ctx types.Context, dst string, keepPerms bool) error {
if !strings.HasPrefix(dst, string(os.PathSeparator)) {
return errors.New("destination must be an absolute path")
}
// Create
protectedFiles := a.GetProtectFiles(ctx)
tarModifier := helpers.NewTarModifierWrapper(dst, tarModifierWrapperFunc(ctx))
mod := tarModifierWrapperFunc(ctx)
//tarModifier := helpers.NewTarModifierWrapper(dst, mod)
switch a.CompressionType {
case compression.Zstandard:
// Create the uncompressed archive
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return err
}
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
if err != nil {
return errors.Wrap(err, "Cannot open "+a.Path)
}
defer original.Close()
bufferedReader := bufio.NewReader(original)
d, err := zstd.NewReader(bufferedReader)
if err != nil {
return err
}
defer d.Close()
_, err = io.Copy(archive, d)
if err != nil {
return errors.Wrap(err, "Cannot copy to "+a.Path+".uncompressed")
}
err = helpers.UntarProtect(a.Path+".uncompressed", dst,
ctx.Config.GetGeneral().SameOwner, protectedFiles, tarModifier)
if err != nil {
return err
}
return nil
case compression.GZip:
// Create the uncompressed archive
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return err
}
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
if err != nil {
return errors.Wrap(err, "Cannot open "+a.Path)
}
defer original.Close()
bufferedReader := bufio.NewReader(original)
r, err := gzip.NewReader(bufferedReader)
if err != nil {
return err
}
defer r.Close()
_, err = io.Copy(archive, r)
if err != nil {
return errors.Wrap(err, "Cannot copy to "+a.Path+".uncompressed")
}
err = helpers.UntarProtect(a.Path+".uncompressed", dst,
ctx.Config.GetGeneral().SameOwner, protectedFiles, tarModifier)
if err != nil {
return err
}
return nil
// Defaults to tar only (covers when "none" is supplied)
default:
return helpers.UntarProtect(a.Path, dst, ctx.Config.GetGeneral().SameOwner,
protectedFiles, tarModifier)
archiveFile, err := os.Open(a.Path)
if err != nil {
return errors.Wrap(err, "Cannot open "+a.Path)
}
return errors.New("Compression type must be supplied")
defer archiveFile.Close()
decompressed, err := containerdCompression.DecompressStream(archiveFile)
if err != nil {
return errors.Wrap(err, "Cannot open "+a.Path)
}
replacerArchive := replaceFileTarWrapper(dst, decompressed, protectedFiles, mod)
// or with filter?
// func(header *tar.Header) (bool, error) {
// if helpers.Contains(protectedFiles, header.Name) {
// newHead, _, err := mod(dst, header.Name, header, decompressed)
// if err != nil {
// return false, err
// }
// header.Name = newHead.Name
// // Override target path
// //target = filepath.Join(dest, header.Name)
// }
// // tarModifier.Modifier()
// return true, nil
// },
_, _, err = image.ExtractReader(ctx, replacerArchive, dst, nil)
return err
}
// FileList generates the list of file of a package from the local archive
func (a *PackageArtifact) FileList() ([]string, error) {
var tr *tar.Reader
switch a.CompressionType {
case compression.Zstandard:
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return []string{}, err
}
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
if err != nil {
return []string{}, errors.Wrap(err, "Cannot open "+a.Path)
}
defer original.Close()
bufferedReader := bufio.NewReader(original)
r, err := zstd.NewReader(bufferedReader)
if err != nil {
return []string{}, err
}
defer r.Close()
tr = tar.NewReader(r)
case compression.GZip:
// Create the uncompressed archive
archive, err := os.Create(a.Path + ".uncompressed")
if err != nil {
return []string{}, err
}
defer os.RemoveAll(a.Path + ".uncompressed")
defer archive.Close()
original, err := os.Open(a.Path)
if err != nil {
return []string{}, errors.Wrap(err, "Cannot open "+a.Path)
}
defer original.Close()
bufferedReader := bufio.NewReader(original)
r, err := gzip.NewReader(bufferedReader)
if err != nil {
return []string{}, err
}
defer r.Close()
tr = tar.NewReader(r)
// Defaults to tar only (covers when "none" is supplied)
default:
tarFile, err := os.Open(a.Path)
if err != nil {
return []string{}, errors.Wrap(err, "Could not open package archive")
}
defer tarFile.Close()
tr = tar.NewReader(tarFile)
}
var files []string
archiveFile, err := os.Open(a.Path)
if err != nil {
return files, errors.Wrap(err, "Cannot open "+a.Path)
}
defer archiveFile.Close()
decompressed, err := containerdCompression.DecompressStream(archiveFile)
if err != nil {
return files, errors.Wrap(err, "Cannot open "+a.Path)
}
defer decompressed.Close()
tr := tar.NewReader(decompressed)
// untar each segment
for {
hdr, err := tr.Next()
@@ -626,181 +602,3 @@ func (a *PackageArtifact) FileList() ([]string, error) {
}
return files, nil
}
type CopyJob struct {
Src, Dst string
Artifact string
}
func worker(ctx *types.Context, i int, wg *sync.WaitGroup, s <-chan CopyJob) {
defer wg.Done()
for job := range s {
_, err := os.Lstat(job.Dst)
if err != nil {
ctx.Debug("Copying ", job.Src)
if err := fileHelper.DeepCopyFile(job.Src, job.Dst); err != nil {
ctx.Warning("Error copying", job, err)
}
}
}
}
func compileRegexes(regexes []string) []*regexp.Regexp {
var result []*regexp.Regexp
for _, i := range regexes {
r, e := regexp.Compile(i)
if e != nil {
continue
}
result = append(result, r)
}
return result
}
type ArtifactNode struct {
Name string `json:"Name"`
Size int `json:"Size"`
}
type ArtifactDiffs struct {
Additions []ArtifactNode `json:"Adds"`
Deletions []ArtifactNode `json:"Dels"`
Changes []ArtifactNode `json:"Mods"`
}
type ArtifactLayer struct {
FromImage string `json:"Image1"`
ToImage string `json:"Image2"`
Diffs ArtifactDiffs `json:"Diff"`
}
// ExtractArtifactFromDelta extracts deltas from ArtifactLayer from an image in tar format
func ExtractArtifactFromDelta(ctx *types.Context, src, dst string, layers []ArtifactLayer, concurrency int, keepPerms bool, includes []string, excludes []string, t compression.Implementation) (*PackageArtifact, error) {
archive, err := ctx.Config.GetSystem().TempDir("archive")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for archive")
}
defer os.RemoveAll(archive) // clean up
if strings.HasSuffix(src, ".tar") {
rootfs, err := ctx.Config.GetSystem().TempDir("rootfs")
if err != nil {
return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(rootfs) // clean up
err = helpers.Untar(src, rootfs, keepPerms)
if err != nil {
return nil, errors.Wrap(err, "Error met while unpacking rootfs")
}
src = rootfs
}
toCopy := make(chan CopyJob)
var wg = new(sync.WaitGroup)
for i := 0; i < concurrency; i++ {
wg.Add(1)
go worker(ctx, i, wg, toCopy)
}
// Handle includes in spec. If specified they filter what gets in the package
if len(includes) > 0 && len(excludes) == 0 {
includeRegexp := compileRegexes(includes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
ADDS:
for _, a := range l.Diffs.Additions {
for _, i := range includeRegexp {
if i.MatchString(a.Name) {
toCopy <- CopyJob{Src: filepath.Join(src, a.Name), Dst: filepath.Join(archive, a.Name), Artifact: a.Name}
continue ADDS
}
}
}
for _, a := range l.Diffs.Changes {
ctx.Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
ctx.Debug("File ", a.Name, " deleted")
}
}
} else if len(includes) == 0 && len(excludes) != 0 {
excludeRegexp := compileRegexes(excludes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
ADD:
for _, a := range l.Diffs.Additions {
for _, i := range excludeRegexp {
if i.MatchString(a.Name) {
continue ADD
}
}
toCopy <- CopyJob{Src: filepath.Join(src, a.Name), Dst: filepath.Join(archive, a.Name), Artifact: a.Name}
}
for _, a := range l.Diffs.Changes {
ctx.Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
ctx.Debug("File ", a.Name, " deleted")
}
}
} else if len(includes) != 0 && len(excludes) != 0 {
includeRegexp := compileRegexes(includes)
excludeRegexp := compileRegexes(excludes)
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
EXCLUDES:
for _, a := range l.Diffs.Additions {
for _, i := range includeRegexp {
if i.MatchString(a.Name) {
for _, e := range excludeRegexp {
if e.MatchString(a.Name) {
continue EXCLUDES
}
}
toCopy <- CopyJob{Src: filepath.Join(src, a.Name), Dst: filepath.Join(archive, a.Name), Artifact: a.Name}
continue EXCLUDES
}
}
}
for _, a := range l.Diffs.Changes {
ctx.Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
ctx.Debug("File ", a.Name, " deleted")
}
}
} else {
// Otherwise just grab all
for _, l := range layers {
// Consider d.Additions (and d.Changes? - warn at least) only
for _, a := range l.Diffs.Additions {
ctx.Debug("File ", a.Name, " added")
toCopy <- CopyJob{Src: filepath.Join(src, a.Name), Dst: filepath.Join(archive, a.Name), Artifact: a.Name}
}
for _, a := range l.Diffs.Changes {
ctx.Debug("File ", a.Name, " changed")
}
for _, a := range l.Diffs.Deletions {
ctx.Debug("File ", a.Name, " deleted")
}
}
}
close(toCopy)
wg.Wait()
a := NewPackageArtifact(dst)
a.CompressionType = t
err = a.Compress(archive, concurrency)
if err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
return a, nil
}
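The four branches above apply one filtering rule to the layer additions: when include patterns are given, a file must match at least one of them; when exclude patterns are given, a matching file is dropped. A small self-contained sketch of that rule, with hypothetical patterns and file names:

package main

import (
	"fmt"
	"regexp"
)

// keep reports whether a file name passes the include/exclude regex filters,
// mirroring the branch logic above: includes (when present) must match,
// and excludes (when present) must not match.
func keep(name string, includes, excludes []*regexp.Regexp) bool {
	if len(includes) > 0 {
		matched := false
		for _, r := range includes {
			if r.MatchString(name) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	for _, r := range excludes {
		if r.MatchString(name) {
			return false
		}
	}
	return true
}

func main() {
	includes := []*regexp.Regexp{regexp.MustCompile(`^/usr/bin/`)}
	excludes := []*regexp.Regexp{regexp.MustCompile(`\.debug$`)}

	for _, f := range []string{"/usr/bin/luet", "/usr/bin/luet.debug", "/etc/passwd"} {
		fmt.Println(f, "->", keep(f, includes, excludes))
	}
}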

View File

@@ -18,7 +18,7 @@ package artifact_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -20,27 +20,25 @@ import (
"os"
"path/filepath"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/api/core/image"
. "github.com/mudler/luet/pkg/api/core/types/artifact"
"github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
backend "github.com/mudler/luet/pkg/compiler/backend"
compression "github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
. "github.com/mudler/luet/pkg/compiler"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Artifact", func() {
Context("Simple package build definition", func() {
ctx := types.NewContext()
ctx := context.NewContext()
It("Generates a verified delta", func() {
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
@@ -50,7 +48,7 @@ var _ = Describe("Artifact", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
cc := NewLuetCompiler(nil, generalRecipe.GetDatabase(), options.WithContext(types.NewContext()))
cc := NewLuetCompiler(nil, generalRecipe.GetDatabase(), options.WithContext(context.NewContext()))
lspec, err := cc.FromPackage(&pkg.DefaultPackage{Name: "enman", Category: "app-admin", Version: "1.4.0"})
Expect(err).ToNot(HaveOccurred())
@@ -84,7 +82,7 @@ WORKDIR /luetbuild
ENV PACKAGE_NAME=enman
ENV PACKAGE_VERSION=1.4.0
ENV PACKAGE_CATEGORY=app-admin`))
b := NewSimpleDockerBackend(ctx)
b := backend.NewSimpleDockerBackend(ctx)
opts := backend.Options{
ImageName: "luet/base",
SourcePath: tmpdir,
@@ -117,57 +115,11 @@ RUN echo bar > /test2`))
}
Expect(b.BuildImage(opts2)).ToNot(HaveOccurred())
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
diffs, err := compiler.GenerateChanges(ctx, b, opts, opts2)
Expect(err).ToNot(HaveOccurred())
artifacts := []ArtifactNode{{
Name: "/luetbuild/LuetDockerfile",
Size: 175,
}}
if os.Getenv("DOCKER_BUILDKIT") == "1" {
artifacts = append(artifacts, ArtifactNode{Name: "/etc/resolv.conf", Size: 0})
}
artifacts = append(artifacts, ArtifactNode{Name: "/test", Size: 4})
artifacts = append(artifacts, ArtifactNode{Name: "/test2", Size: 4})
Expect(diffs).To(Equal(
[]ArtifactLayer{{
FromImage: "luet/base",
ToImage: "test",
Diffs: ArtifactDiffs{
Additions: artifacts,
},
}}))
err = b.ExtractRootfs(backend.Options{ImageName: "test", Destination: rootfs}, false)
Expect(err).ToNot(HaveOccurred())
a, err := ExtractArtifactFromDelta(ctx, rootfs, filepath.Join(tmpdir, "package.tar"), diffs, 2, false, []string{}, []string{}, compression.None)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "package.tar"))).To(BeTrue())
err = helpers.Untar(a.Path, unpacked, false)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(unpacked, "test"))).To(BeTrue())
Expect(fileHelper.Exists(filepath.Join(unpacked, "test2"))).To(BeTrue())
content1, err := fileHelper.Read(filepath.Join(unpacked, "test"))
Expect(err).ToNot(HaveOccurred())
Expect(content1).To(Equal("foo\n"))
content2, err := fileHelper.Read(filepath.Join(unpacked, "test2"))
Expect(err).ToNot(HaveOccurred())
Expect(content2).To(Equal("bar\n"))
err = a.Hash()
Expect(err).ToNot(HaveOccurred())
err = a.Verify()
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.CopyFile(filepath.Join(tmpdir, "output2.tar"), filepath.Join(tmpdir, "package.tar"))).ToNot(HaveOccurred())
err = a.Verify()
Expect(err).To(HaveOccurred())
})
It("Generates packages images", func() {
b := NewSimpleDockerBackend(ctx)
b := backend.NewSimpleDockerBackend(ctx)
imageprefix := "foo/"
testString := []byte(`funky test data`)
@@ -193,9 +145,8 @@ RUN echo bar > /test2`))
err = a.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := a.GenerateFinalImage(ctx, resultingImage, b, false)
err = a.GenerateFinalImage(ctx, resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
Expect(b.ImageExists(resultingImage)).To(BeTrue())
@@ -203,7 +154,14 @@ RUN echo bar > /test2`))
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(backend.Options{ImageName: resultingImage, Destination: result}, false)
img, err := b.ImageReference(resultingImage, true)
Expect(err).ToNot(HaveOccurred())
_, _, err = image.ExtractTo(
ctx,
img,
result,
nil,
)
Expect(err).ToNot(HaveOccurred())
content, err := ioutil.ReadFile(filepath.Join(result, "test"))
@@ -218,7 +176,7 @@ RUN echo bar > /test2`))
})
It("Generates empty packages images", func() {
b := NewSimpleDockerBackend(ctx)
b := backend.NewSimpleDockerBackend(ctx)
imageprefix := "foo/"
tmpdir, err := ioutil.TempDir(os.TempDir(), "artifact")
@@ -235,9 +193,8 @@ RUN echo bar > /test2`))
err = a.Compress(tmpdir, 1)
Expect(err).ToNot(HaveOccurred())
resultingImage := imageprefix + "foo--1.0"
opts, err := a.GenerateFinalImage(ctx, resultingImage, b, false)
err = a.GenerateFinalImage(ctx, resultingImage, b, false)
Expect(err).ToNot(HaveOccurred())
Expect(opts.ImageName).To(Equal(resultingImage))
Expect(b.ImageExists(resultingImage)).To(BeTrue())
@@ -245,14 +202,17 @@ RUN echo bar > /test2`))
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(result) // clean up
err = b.ExtractRootfs(backend.Options{ImageName: resultingImage, Destination: result}, false)
img, err := b.ImageReference(resultingImage, false)
Expect(err).ToNot(HaveOccurred())
_, _, err = image.ExtractTo(
ctx,
img,
result,
nil,
)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.DirectoryIsEmpty(result)).To(BeFalse())
content, err := ioutil.ReadFile(filepath.Join(result, ".virtual"))
Expect(err).ToNot(HaveOccurred())
Expect(string(content)).To(Equal(""))
Expect(fileHelper.DirectoryIsEmpty(result)).To(BeTrue())
})
It("Retrieves uncompressed name", func() {

View File

@@ -20,13 +20,13 @@ import (
"os"
"path/filepath"
types "github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/api/core/types/artifact"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
@@ -59,7 +59,7 @@ var _ = Describe("Cache", func() {
Expect(err).ToNot(HaveOccurred())
b := NewPackageArtifact(path)
ctx := types.NewContext()
ctx := context.NewContext()
err = b.Unpack(ctx, tmpdir, false)
Expect(err).ToNot(HaveOccurred())

View File

@@ -45,7 +45,7 @@ type HashOptions struct {
func (c Checksums) List() (res [][]string) {
keys := make([]string, 0)
for k, _ := range c {
for k := range c {
keys = append(keys, k)
}
sort.Strings(keys)

View File

@@ -21,7 +21,7 @@ import (
. "github.com/mudler/luet/pkg/api/core/types/artifact"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -1,6 +1,4 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
// 2021 Ettore Di Giacinto <mudler@mocaccino.org>
// Copyright © 2019-2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
@@ -39,19 +37,22 @@ var AvailableResolvers = strings.Join([]string{solver.QLearningResolverType}, "
type LuetLoggingConfig struct {
// Path of the logfile
Path string `mapstructure:"path"`
Path string `yaml:"path" mapstructure:"path"`
// Enable/Disable logging to file
EnableLogFile bool `mapstructure:"enable_logfile"`
EnableLogFile bool `yaml:"enable_logfile" mapstructure:"enable_logfile"`
// Enable JSON format logging in file
JsonFormat bool `mapstructure:"json_format"`
JsonFormat bool `yaml:"json_format" mapstructure:"json_format"`
// Log level
Level LogLevel `mapstructure:"level"`
Level string `yaml:"level" mapstructure:"level"`
// Enable emoji
EnableEmoji bool `mapstructure:"enable_emoji"`
EnableEmoji bool `yaml:"enable_emoji" mapstructure:"enable_emoji"`
// Enable/Disable color in logging
Color bool `mapstructure:"color"`
Color bool `yaml:"color" mapstructure:"color"`
// NoSpinner disable spinner
NoSpinner bool `yaml:"no_spinner" mapstructure:"no_spinner"`
}
type LuetGeneralConfig struct {
@@ -60,6 +61,8 @@ type LuetGeneralConfig struct {
Debug bool `yaml:"debug,omitempty" mapstructure:"debug"`
ShowBuildOutput bool `yaml:"show_build_output,omitempty" mapstructure:"show_build_output"`
FatalWarns bool `yaml:"fatal_warnings,omitempty" mapstructure:"fatal_warnings"`
HTTPTimeout int `yaml:"http_timeout,omitempty" mapstructure:"http_timeout"`
Quiet bool `yaml:"quiet" mapstructure:"quiet"`
}
type LuetSolverOptions struct {
@@ -106,8 +109,42 @@ type LuetSystemConfig struct {
TmpDirBase string `yaml:"tmpdir_base" mapstructure:"tmpdir_base"`
}
func (s *LuetSystemConfig) SetRootFS(path string) error {
p, err := fileHelper.Rel2Abs(path)
// Init reads the config and replaces user-defined paths with
// absolute paths where necessary, and constructs the paths for the cache
// and database on the real system
func (c *LuetConfig) Init() error {
if err := c.System.init(); err != nil {
return err
}
if err := c.loadConfigProtect(); err != nil {
return err
}
// Load repositories
if err := c.loadRepositories(); err != nil {
return err
}
return nil
}
func (s *LuetSystemConfig) init() error {
if err := s.setRootfs(); err != nil {
return err
}
if err := s.setDBPath(); err != nil {
return err
}
s.setCachePath()
return nil
}
func (s *LuetSystemConfig) setRootfs() error {
p, err := fileHelper.Rel2Abs(s.Rootfs)
if err != nil {
return err
}
@@ -116,9 +153,8 @@ func (s *LuetSystemConfig) SetRootFS(path string) error {
return nil
}
func (sc *LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
dbpath := filepath.Join(sc.Rootfs, sc.DatabasePath)
dbpath = filepath.Join(dbpath, "repos/"+name)
func (sc LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
dbpath := filepath.Join(sc.DatabasePath, "repos/"+name)
err := os.MkdirAll(dbpath, os.ModePerm)
if err != nil {
panic(err)
@@ -126,43 +162,48 @@ func (sc *LuetSystemConfig) GetRepoDatabaseDirPath(name string) string {
return dbpath
}
func (sc *LuetSystemConfig) GetSystemRepoDatabaseDirPath() string {
func (sc *LuetSystemConfig) setDBPath() error {
dbpath := filepath.Join(sc.Rootfs,
sc.DatabasePath)
err := os.MkdirAll(dbpath, os.ModePerm)
if err != nil {
panic(err)
return err
}
return dbpath
sc.DatabasePath = dbpath
return nil
}
func (sc *LuetSystemConfig) GetSystemPkgsCacheDirPath() (ans string) {
func (sc *LuetSystemConfig) setCachePath() {
var cachepath string
if sc.PkgsCachePath != "" {
cachepath = sc.PkgsCachePath
if !filepath.IsAbs(cachepath) {
cachepath = filepath.Join(sc.DatabasePath, sc.PkgsCachePath)
os.MkdirAll(cachepath, os.ModePerm)
} else {
cachepath = sc.PkgsCachePath
}
} else {
// Create dynamic cache for test suites
cachepath, _ = ioutil.TempDir(os.TempDir(), "cachepkgs")
}
if filepath.IsAbs(cachepath) {
ans = cachepath
} else {
ans = filepath.Join(sc.GetSystemRepoDatabaseDirPath(), cachepath)
}
return
sc.PkgsCachePath = cachepath // Be consistent with the path we set
}
func (sc *LuetSystemConfig) GetRootFsAbs() (string, error) {
return filepath.Abs(sc.Rootfs)
}
type LuetKV struct {
type FinalizerEnv struct {
Key string `json:"key" yaml:"key" mapstructure:"key"`
Value string `json:"value" yaml:"value" mapstructure:"value"`
}
type Finalizers []FinalizerEnv
func (f Finalizers) Slice() (sl []string) {
for _, kv := range f {
sl = append(sl, fmt.Sprintf("%s=%s", kv.Key, kv.Value))
}
return
}
type LuetConfig struct {
Logging LuetLoggingConfig `yaml:"logging,omitempty" mapstructure:"logging"`
General LuetGeneralConfig `yaml:"general,omitempty" mapstructure:"general"`
@@ -175,16 +216,16 @@ type LuetConfig struct {
ConfigFromHost bool `yaml:"config_from_host,omitempty" mapstructure:"config_from_host"`
SystemRepositories LuetRepositories `yaml:"repositories,omitempty" mapstructure:"repositories"`
FinalizerEnvs []LuetKV `json:"finalizer_envs,omitempty" yaml:"finalizer_envs,omitempty" mapstructure:"finalizer_envs,omitempty"`
FinalizerEnvs Finalizers `json:"finalizer_envs,omitempty" yaml:"finalizer_envs,omitempty" mapstructure:"finalizer_envs,omitempty"`
ConfigProtectConfFiles []config.ConfigProtectConfFile `yaml:"-" mapstructure:"-"`
}
func (c *LuetConfig) GetSystemDB() pkg.PackageDatabase {
switch c.GetSystem().DatabaseEngine {
switch c.System.DatabaseEngine {
case "boltdb":
return pkg.NewBoltDatabase(
filepath.Join(c.GetSystem().GetSystemRepoDatabaseDirPath(), "luet.db"))
filepath.Join(c.System.DatabasePath, "luet.db"))
default:
return pkg.NewInMemoryDatabase(true)
}
@@ -194,83 +235,30 @@ func (c *LuetConfig) AddSystemRepository(r LuetRepository) {
c.SystemRepositories = append(c.SystemRepositories, r)
}
func (c *LuetConfig) GetFinalizerEnvsMap() map[string]string {
ans := make(map[string]string)
for _, kv := range c.FinalizerEnvs {
ans[kv.Key] = kv.Value
}
return ans
}
func (c *LuetConfig) SetFinalizerEnv(k, v string) {
keyPresent := false
envs := []LuetKV{}
envs := []FinalizerEnv{}
for _, kv := range c.FinalizerEnvs {
if kv.Key == k {
keyPresent = true
envs = append(envs, LuetKV{Key: kv.Key, Value: v})
envs = append(envs, FinalizerEnv{Key: kv.Key, Value: v})
} else {
envs = append(envs, kv)
}
}
if !keyPresent {
envs = append(envs, LuetKV{Key: k, Value: v})
envs = append(envs, FinalizerEnv{Key: k, Value: v})
}
c.FinalizerEnvs = envs
}
func (c *LuetConfig) GetFinalizerEnvs() []string {
ans := []string{}
for _, kv := range c.FinalizerEnvs {
ans = append(ans, fmt.Sprintf("%s=%s", kv.Key, kv.Value))
}
return ans
}
func (c *LuetConfig) GetFinalizerEnv(k string) (string, error) {
keyNotPresent := true
ans := ""
for _, kv := range c.FinalizerEnvs {
if kv.Key == k {
keyNotPresent = false
ans = kv.Value
}
}
if keyNotPresent {
return "", errors.New("Finalizer key " + k + " not found")
}
return ans, nil
}
func (c *LuetConfig) GetLogging() *LuetLoggingConfig {
return &c.Logging
}
func (c *LuetConfig) GetGeneral() *LuetGeneralConfig {
return &c.General
}
func (c *LuetConfig) GetSystem() *LuetSystemConfig {
return &c.System
}
func (c *LuetConfig) GetSolverOptions() *LuetSolverOptions {
return &c.Solver
}
func (c *LuetConfig) YAML() ([]byte, error) {
return yaml.Marshal(c)
}
func (c *LuetConfig) GetConfigProtectConfFiles() []config.ConfigProtectConfFile {
return c.ConfigProtectConfFiles
}
func (c *LuetConfig) AddConfigProtectConfFile(file *config.ConfigProtectConfFile) {
func (c *LuetConfig) addProtectFile(file *config.ConfigProtectConfFile) {
if c.ConfigProtectConfFiles == nil {
c.ConfigProtectConfFiles = []config.ConfigProtectConfFile{*file}
} else {
@@ -278,28 +266,21 @@ func (c *LuetConfig) AddConfigProtectConfFile(file *config.ConfigProtectConfFile
}
}
func (c *LuetConfig) LoadRepositories(ctx *Context) error {
func (c *LuetConfig) loadRepositories() error {
var regexRepo = regexp.MustCompile(`.yml$|.yaml$`)
var err error
rootfs := ""
// Respect the rootfs param when reading repositories
if !c.ConfigFromHost {
rootfs, err = c.GetSystem().GetRootFsAbs()
if err != nil {
return err
}
rootfs = c.System.Rootfs
}
for _, rdir := range c.RepositoriesConfDir {
rdir = filepath.Join(rootfs, rdir)
ctx.Debug("Parsing Repository Directory", rdir, "...")
files, err := ioutil.ReadDir(rdir)
if err != nil {
ctx.Debug("Skip dir", rdir, ":", err.Error())
continue
}
@@ -309,27 +290,20 @@ func (c *LuetConfig) LoadRepositories(ctx *Context) error {
}
if !regexRepo.MatchString(file.Name()) {
ctx.Debug("File", file.Name(), "skipped.")
continue
}
content, err := ioutil.ReadFile(path.Join(rdir, file.Name()))
if err != nil {
ctx.Warning("On read file", file.Name(), ":", err.Error())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
r, err := LoadRepository(content)
if err != nil {
ctx.Warning("On parse file", file.Name(), ":", err.Error())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
if r.Name == "" || len(r.Urls) == 0 || r.Type == "" {
ctx.Warning("Invalid repository ", file.Name())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
@@ -355,28 +329,20 @@ func (c *LuetConfig) GetSystemRepository(name string) (*LuetRepository, error) {
return ans, nil
}
func (c *LuetConfig) LoadConfigProtect(ctx *Context) error {
func (c *LuetConfig) loadConfigProtect() error {
var regexConfs = regexp.MustCompile(`.yml$`)
var err error
rootfs := ""
// Respect the rootfs param when reading config protect files
if !c.ConfigFromHost {
rootfs, err = c.GetSystem().GetRootFsAbs()
if err != nil {
return err
}
rootfs = c.System.Rootfs
}
for _, cdir := range c.ConfigProtectConfDir {
cdir = filepath.Join(rootfs, cdir)
ctx.Debug("Parsing Config Protect Directory", cdir, "...")
files, err := ioutil.ReadDir(cdir)
if err != nil {
ctx.Debug("Skip dir", cdir, ":", err.Error())
continue
}
@@ -386,38 +352,31 @@ func (c *LuetConfig) LoadConfigProtect(ctx *Context) error {
}
if !regexConfs.MatchString(file.Name()) {
ctx.Debug("File", file.Name(), "skipped.")
continue
}
content, err := ioutil.ReadFile(path.Join(cdir, file.Name()))
if err != nil {
ctx.Warning("On read file", file.Name(), ":", err.Error())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
r, err := loadConfigProtectConFile(file.Name(), content)
r, err := loadConfigProtectConfFile(file.Name(), content)
if err != nil {
ctx.Warning("On parse file", file.Name(), ":", err.Error())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
if r.Name == "" || len(r.Directories) == 0 {
ctx.Warning("Invalid config protect file", file.Name())
ctx.Warning("File", file.Name(), "skipped.")
continue
}
c.AddConfigProtectConfFile(r)
c.addProtectFile(r)
}
}
return nil
}
func loadConfigProtectConFile(filename string, data []byte) (*config.ConfigProtectConfFile, error) {
func loadConfigProtectConfFile(filename string, data []byte) (*config.ConfigProtectConfFile, error) {
ans := config.NewConfigProtectConfFile(filename)
err := yaml.Unmarshal(data, &ans)
if err != nil {
@@ -425,47 +384,3 @@ func loadConfigProtectConFile(filename string, data []byte) (*config.ConfigProte
}
return ans, nil
}
func (c *LuetLoggingConfig) SetLogLevel(s LogLevel) {
c.Level = s
}
func (c *LuetSystemConfig) InitTmpDir() error {
if !filepath.IsAbs(c.TmpDirBase) {
abs, err := fileHelper.Rel2Abs(c.TmpDirBase)
if err != nil {
return errors.Wrap(err, "while converting relative path to absolute path")
}
c.TmpDirBase = abs
}
if _, err := os.Stat(c.TmpDirBase); err != nil {
if os.IsNotExist(err) {
err = os.MkdirAll(c.TmpDirBase, os.ModePerm)
if err != nil {
return err
}
}
}
return nil
}
func (c *LuetSystemConfig) CleanupTmpDir() error {
return os.RemoveAll(c.TmpDirBase)
}
func (c *LuetSystemConfig) TempDir(pattern string) (string, error) {
err := c.InitTmpDir()
if err != nil {
return "", err
}
return ioutil.TempDir(c.TmpDirBase, pattern)
}
func (c *LuetSystemConfig) TempFile(pattern string) (*os.File, error) {
err := c.InitTmpDir()
if err != nil {
return nil, err
}
return ioutil.TempFile(c.TmpDirBase, pattern)
}
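Putting the new Init/init/setCachePath flow above together: a minimal usage sketch, assuming only the exported fields shown here (Rootfs, DatabasePath, PkgsCachePath) and following the path layout asserted in the config test later in this changeset; the concrete paths are illustrative:

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"

	"github.com/mudler/luet/pkg/api/core/types"
)

func main() {
	// Hypothetical rootfs; Init turns the relative DatabasePath and
	// PkgsCachePath into absolute paths rooted under it.
	rootfs, err := ioutil.TempDir("", "luet-rootfs")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(rootfs)

	c := &types.LuetConfig{
		System: types.LuetSystemConfig{
			Rootfs:        rootfs,
			DatabasePath:  "var/db/packages",
			PkgsCachePath: "cache",
		},
	}
	if err := c.Init(); err != nil {
		log.Fatal(err)
	}

	fmt.Println("database:", c.System.DatabasePath)  // <rootfs>/var/db/packages
	fmt.Println("cache:   ", c.System.PkgsCachePath) // <rootfs>/var/db/packages/cache
}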

View File

@@ -18,7 +18,7 @@ package types_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -17,53 +17,86 @@
package types_test
import (
"io/ioutil"
"os"
"path/filepath"
"strings"
types "github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/api/core/types"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Config", func() {
Context("Load Repository1", func() {
ctx := types.NewContext()
ctx.Config.RepositoriesConfDir = []string{
"../../../../tests/fixtures/repos.conf.d",
}
err := ctx.Config.LoadRepositories(ctx)
Context("Inits paths", func() {
t, _ := ioutil.TempDir("", "tests")
defer os.RemoveAll(t)
c := &types.LuetConfig{
System: types.LuetSystemConfig{
Rootfs: t,
PkgsCachePath: "foo",
DatabasePath: "baz",
}}
It("sets default", func() {
err := c.Init()
Expect(err).ToNot(HaveOccurred())
Expect(c.System.Rootfs).To(Equal(t))
Expect(c.System.PkgsCachePath).To(Equal(filepath.Join(t, "baz", "foo")))
Expect(c.System.DatabasePath).To(Equal(filepath.Join(t, "baz")))
})
})
Context("Load Repository1", func() {
var ctx *context.Context
BeforeEach(func() {
ctx = context.NewContext(context.WithConfig(&types.LuetConfig{
RepositoriesConfDir: []string{
"../../../../tests/fixtures/repos.conf.d",
},
}))
ctx.Config.Init()
})
It("Check Load Repository 1", func() {
Expect(err).Should(BeNil())
Expect(len(ctx.Config.SystemRepositories)).Should(Equal(2))
Expect(ctx.Config.SystemRepositories[0].Name).Should(Equal("test1"))
Expect(ctx.Config.SystemRepositories[0].Priority).Should(Equal(999))
Expect(ctx.Config.SystemRepositories[0].Type).Should(Equal("disk"))
Expect(len(ctx.Config.SystemRepositories[0].Urls)).Should(Equal(1))
Expect(ctx.Config.SystemRepositories[0].Urls[0]).Should(Equal("tests/repos/test1"))
Expect(len(ctx.GetConfig().SystemRepositories)).Should(Equal(2))
Expect(ctx.GetConfig().SystemRepositories[0].Name).Should(Equal("test1"))
Expect(ctx.GetConfig().SystemRepositories[0].Priority).Should(Equal(999))
Expect(ctx.GetConfig().SystemRepositories[0].Type).Should(Equal("disk"))
Expect(len(ctx.GetConfig().SystemRepositories[0].Urls)).Should(Equal(1))
Expect(ctx.GetConfig().SystemRepositories[0].Urls[0]).Should(Equal("tests/repos/test1"))
})
It("Chec Load Repository 2", func() {
Expect(err).Should(BeNil())
Expect(len(ctx.Config.SystemRepositories)).Should(Equal(2))
Expect(ctx.Config.SystemRepositories[1].Name).Should(Equal("test2"))
Expect(ctx.Config.SystemRepositories[1].Priority).Should(Equal(1000))
Expect(ctx.Config.SystemRepositories[1].Type).Should(Equal("disk"))
Expect(len(ctx.Config.SystemRepositories[1].Urls)).Should(Equal(1))
Expect(ctx.Config.SystemRepositories[1].Urls[0]).Should(Equal("tests/repos/test2"))
Expect(len(ctx.GetConfig().SystemRepositories)).Should(Equal(2))
Expect(ctx.GetConfig().SystemRepositories[1].Name).Should(Equal("test2"))
Expect(ctx.GetConfig().SystemRepositories[1].Priority).Should(Equal(1000))
Expect(ctx.GetConfig().SystemRepositories[1].Type).Should(Equal("disk"))
Expect(len(ctx.GetConfig().SystemRepositories[1].Urls)).Should(Equal(1))
Expect(ctx.GetConfig().SystemRepositories[1].Urls[0]).Should(Equal("tests/repos/test2"))
})
})
Context("Simple temporary directory creation", func() {
ctx := context.NewContext(context.WithConfig(&types.LuetConfig{
System: types.LuetSystemConfig{
TmpDirBase: os.TempDir() + "/tmpluet",
},
}))
BeforeEach(func() {
ctx = context.NewContext(context.WithConfig(&types.LuetConfig{
System: types.LuetSystemConfig{
TmpDirBase: os.TempDir() + "/tmpluet",
},
}))
})
It("Create Temporary directory", func() {
ctx := types.NewContext()
ctx.Config.GetSystem().TmpDirBase = os.TempDir() + "/tmpluet"
tmpDir, err := ctx.Config.GetSystem().TempDir("test1")
tmpDir, err := ctx.TempDir("test1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpDir, filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(fileHelper.Exists(tmpDir)).To(BeTrue())
@@ -72,11 +105,7 @@ var _ = Describe("Config", func() {
})
It("Create Temporary file", func() {
ctx := types.NewContext()
ctx.Config.GetSystem().TmpDirBase = os.TempDir() + "/tmpluet"
tmpFile, err := ctx.Config.GetSystem().TempFile("testfile1")
tmpFile, err := ctx.TempFile("testfile1")
Expect(err).ToNot(HaveOccurred())
Expect(strings.HasPrefix(tmpFile.Name(), filepath.Join(os.TempDir(), "tmpluet"))).To(BeTrue())
Expect(fileHelper.Exists(tmpFile.Name())).To(BeTrue())

View File

@@ -15,400 +15,16 @@
package types
import (
"context"
"fmt"
"os"
"path"
"path/filepath"
"regexp"
"runtime"
"strings"
"sync"
type Context interface {
Logger
GarbageCollector
GetConfig() LuetConfig
Copy() Context
// SetAnnotation sets generic annotations to hold in a context
SetAnnotation(s string, i interface{})
"github.com/kyokomi/emoji"
"github.com/mudler/luet/pkg/helpers/terminal"
"github.com/pkg/errors"
"github.com/pterm/pterm"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
"golang.org/x/term"
)
// GetAnnotation gets generic annotations to hold in a context
GetAnnotation(s string) interface{}
const (
ErrorLevel LogLevel = "error"
WarningLevel LogLevel = "warning"
InfoLevel LogLevel = "info"
SuccessLevel LogLevel = "success"
FatalLevel LogLevel = "fatal"
)
type Context struct {
context.Context
Config *LuetConfig
IsTerminal bool
NoSpinner bool
s *pterm.SpinnerPrinter
spinnerLock *sync.Mutex
z *zap.Logger
AreaPrinter *pterm.AreaPrinter
ProgressBar *pterm.ProgressbarPrinter
}
func NewContext() *Context {
return &Context{
spinnerLock: &sync.Mutex{},
IsTerminal: terminal.IsTerminal(os.Stdout),
Config: &LuetConfig{
ConfigFromHost: true,
Logging: LuetLoggingConfig{},
General: LuetGeneralConfig{},
System: LuetSystemConfig{
DatabasePath: filepath.Join("var", "db", "packages"),
TmpDirBase: filepath.Join(os.TempDir(), "tmpluet")},
Solver: LuetSolverOptions{},
},
s: pterm.DefaultSpinner.WithShowTimer(false).WithRemoveWhenDone(true),
}
}
func (c *Context) Copy() *Context {
configCopy := *c.Config
configCopy.System = *c.Config.GetSystem()
configCopy.General = *c.Config.GetGeneral()
configCopy.Logging = *c.Config.GetLogging()
ctx := *c
ctxCopy := &ctx
ctxCopy.Config = &configCopy
return ctxCopy
}
// GetTerminalSize returns the width and the height of the active terminal.
func (c *Context) GetTerminalSize() (width, height int, err error) {
w, h, err := term.GetSize(int(os.Stdout.Fd()))
if w <= 0 {
w = 0
}
if h <= 0 {
h = 0
}
if err != nil {
err = errors.New("size not detectable")
}
return w, h, err
}
func (c *Context) Init() (err error) {
if c.IsTerminal {
if !c.Config.Logging.Color {
c.Debug("Disabling colors")
c.NoColor()
}
} else {
c.Debug("Not a terminal, disabling colors")
c.NoColor()
}
c.Debug("Colors", c.Config.GetLogging().Color)
c.Debug("Logging level", c.Config.GetLogging().Level)
c.Debug("Debug mode", c.Config.GetGeneral().Debug)
if c.Config.GetLogging().EnableLogFile && c.Config.GetLogging().Path != "" {
// Init zap logger
err = c.InitZap()
if err != nil {
return
}
}
// Load repositories
err = c.Config.LoadRepositories(c)
if err != nil {
return
}
return
}
func (c *Context) NoColor() {
pterm.DisableColor()
}
func (c *Context) Ask() bool {
var input string
c.Info("Do you want to continue with this operation? [y/N]: ")
_, err := fmt.Scanln(&input)
if err != nil {
return false
}
input = strings.ToLower(input)
if input == "y" || input == "yes" {
return true
}
return false
}
func (c *Context) InitZap() error {
var err error
if c.z == nil {
// TODO: test permission for open logfile.
cfg := zap.NewProductionConfig()
cfg.OutputPaths = []string{c.Config.GetLogging().Path}
cfg.Level = c.Config.GetLogging().Level.ZapLevel()
cfg.ErrorOutputPaths = []string{}
if c.Config.GetLogging().JsonFormat {
cfg.Encoding = "json"
} else {
cfg.Encoding = "console"
}
cfg.DisableCaller = true
cfg.DisableStacktrace = true
cfg.EncoderConfig.TimeKey = "time"
cfg.EncoderConfig.EncodeTime = zapcore.ISO8601TimeEncoder
c.z, err = cfg.Build()
if err != nil {
fmt.Fprint(os.Stderr, "Error initializing file logger: "+err.Error()+"\n")
return err
}
}
return nil
}
// Spinner starts the spinner
func (c *Context) Spinner() {
if !c.IsTerminal || c.NoSpinner {
return
}
c.spinnerLock.Lock()
defer c.spinnerLock.Unlock()
var confLevel int
if c.Config.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = c.Config.GetLogging().Level.ToNumber()
}
if 2 > confLevel {
return
}
if !c.s.IsActive {
c.s, _ = c.s.Start()
}
}
func (c *Context) Screen(text string) {
pterm.DefaultHeader.WithBackgroundStyle(pterm.NewStyle(pterm.BgLightBlue)).WithMargin(2).Println(text)
//pterm.DefaultCenter.Print(pterm.DefaultHeader.WithFullWidth().WithBackgroundStyle(pterm.NewStyle(pterm.BgLightBlue)).WithMargin(10).Sprint(text))
}
func (c *Context) SpinnerText(suffix, prefix string) {
if !c.IsTerminal || c.NoSpinner {
return
}
c.spinnerLock.Lock()
defer c.spinnerLock.Unlock()
if c.Config.GetGeneral().Debug {
fmt.Printf("%s %s\n",
suffix, prefix,
)
} else {
c.s.UpdateText(suffix + prefix)
}
}
func (c *Context) SpinnerStop() {
if !c.IsTerminal {
return
}
c.spinnerLock.Lock()
defer c.spinnerLock.Unlock()
var confLevel int
if c.Config.GetGeneral().Debug {
confLevel = 3
} else {
confLevel = c.Config.GetLogging().Level.ToNumber()
}
if 2 > confLevel {
return
}
if c.s != nil {
c.s.Success()
}
}
func (c *Context) log2File(level LogLevel, msg string) {
switch level {
case FatalLevel:
c.z.Fatal(msg)
case ErrorLevel:
c.z.Error(msg)
case WarningLevel:
c.z.Warn(msg)
case InfoLevel, SuccessLevel:
c.z.Info(msg)
default:
c.z.Debug(msg)
}
}
func (c *Context) Msg(level LogLevel, ln bool, msg ...interface{}) {
var message string
var confLevel, msgLevel int
if c.Config.GetGeneral().Debug {
confLevel = 3
pterm.EnableDebugMessages()
} else {
confLevel = c.Config.GetLogging().Level.ToNumber()
}
msgLevel = level.ToNumber()
if msgLevel > confLevel {
return
}
for _, m := range msg {
message += " " + fmt.Sprintf("%v", m)
}
// Color message
levelMsg := message
if c.Config.GetLogging().Color {
switch level {
case WarningLevel:
levelMsg = pterm.LightYellow(":construction: warning" + message)
case InfoLevel, SuccessLevel:
levelMsg = pterm.LightGreen(message)
case ErrorLevel:
levelMsg = pterm.Red(message)
default:
levelMsg = pterm.Blue(message)
}
}
// Strip emoji if needed
if c.Config.GetLogging().EnableEmoji && c.IsTerminal {
levelMsg = emoji.Sprint(levelMsg)
} else {
re := regexp.MustCompile(`[:][\w]+[:]`)
levelMsg = re.ReplaceAllString(levelMsg, "")
}
if c.z != nil {
c.log2File(level, message)
}
// Print the message based on the level
switch level {
case SuccessLevel:
if ln {
pterm.Success.Println(levelMsg)
} else {
pterm.Success.Print(levelMsg)
}
case InfoLevel:
if ln {
pterm.Info.Println(levelMsg)
} else {
pterm.Info.Print(levelMsg)
}
case WarningLevel:
if ln {
pterm.Warning.Println(levelMsg)
} else {
pterm.Warning.Print(levelMsg)
}
case ErrorLevel:
if ln {
pterm.Error.Println(levelMsg)
} else {
pterm.Error.Print(levelMsg)
}
case FatalLevel:
if ln {
pterm.Fatal.Println(levelMsg)
} else {
pterm.Fatal.Print(levelMsg)
}
default:
if ln {
pterm.Debug.Println(levelMsg)
} else {
pterm.Debug.Print(levelMsg)
}
}
}
func (c *Context) Warning(mess ...interface{}) {
c.Msg("warning", true, mess...)
if c.Config.GetGeneral().FatalWarns {
os.Exit(2)
}
}
func (c *Context) Debug(mess ...interface{}) {
pc, file, line, ok := runtime.Caller(1)
if ok {
mess = append([]interface{}{fmt.Sprintf("(%s:#%d:%v)",
path.Base(file), line, runtime.FuncForPC(pc).Name())}, mess...)
}
c.Msg("debug", true, mess...)
}
func (c *Context) Info(mess ...interface{}) {
c.Msg("info", true, mess...)
}
func (c *Context) Success(mess ...interface{}) {
c.Msg("success", true, mess...)
}
func (c *Context) Error(mess ...interface{}) {
c.Msg("error", true, mess...)
}
func (c *Context) Fatal(mess ...interface{}) {
c.Error(mess...)
os.Exit(1)
}
type LogLevel string
func (level LogLevel) ToNumber() int {
switch level {
case ErrorLevel, FatalLevel:
return 0
case WarningLevel:
return 1
case InfoLevel, SuccessLevel:
return 2
default: // debug
return 3
}
}
func (level LogLevel) ZapLevel() zap.AtomicLevel {
switch level {
case FatalLevel:
return zap.NewAtomicLevelAt(zap.FatalLevel)
case ErrorLevel:
return zap.NewAtomicLevelAt(zap.ErrorLevel)
case WarningLevel:
return zap.NewAtomicLevelAt(zap.WarnLevel)
case InfoLevel, SuccessLevel:
return zap.NewAtomicLevelAt(zap.InfoLevel)
default:
return zap.NewAtomicLevelAt(zap.DebugLevel)
}
WithLoggingContext(s string) Context
}
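With Context reduced to an interface, callers can depend on types.Context and construct the default implementation from the new context package, as the updated tests do. A hedged sketch of that usage; the WithConfig option and the GetConfig/TempDir calls are taken from the tests in this changeset:

package main

import (
	"log"

	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/api/core/types"
)

// useContext depends only on the types.Context interface, so it works with
// the default implementation from the context package or any other one.
func useContext(ctx types.Context) {
	ctx.Info("rootfs is", ctx.GetConfig().System.Rootfs)

	tmp, err := ctx.TempDir("example") // provided by the GarbageCollector part of the interface
	if err != nil {
		ctx.Fatal(err)
	}
	ctx.Debug("created temporary directory", tmp)
}

func main() {
	cfg := &types.LuetConfig{System: types.LuetSystemConfig{Rootfs: "/"}}
	if err := cfg.Init(); err != nil {
		log.Fatal(err)
	}
	useContext(context.NewContext(context.WithConfig(cfg)))
}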

View File

@@ -0,0 +1,25 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package types
import "os"
type GarbageCollector interface {
Clean() error
TempDir(pattern string) (string, error)
TempFile(s string) (*os.File, error)
String() string
}

View File

@@ -0,0 +1,41 @@
// Copyright © 2021 Ettore Di Giacinto <mudler@mocaccino.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package types
// Logger is a standard logging interface
type Logger interface {
Info(...interface{})
Success(...interface{})
Warning(...interface{})
Warn(...interface{})
Debug(...interface{})
Error(...interface{})
Fatal(...interface{})
Panic(...interface{})
Trace(...interface{})
Infof(string, ...interface{})
Warnf(string, ...interface{})
Debugf(string, ...interface{})
Errorf(string, ...interface{})
Fatalf(string, ...interface{})
Panicf(string, ...interface{})
Tracef(string, ...interface{})
SpinnerStop()
Spinner()
Ask() bool
Screen(string)
}
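Code that only needs logging or scratch space can now ask for the narrow Logger or GarbageCollector interfaces instead of the full Context. A small sketch under the assumption that the default context implementation satisfies both, as its embedding in types.Context above suggests; the helper and file names are hypothetical:

package main

import (
	"io/ioutil"
	"path/filepath"

	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/api/core/types"
)

// stageFile only needs logging and scratch space, so it asks for the two
// narrow interfaces instead of the full types.Context.
func stageFile(log types.Logger, gc types.GarbageCollector, data []byte) (string, error) {
	dir, err := gc.TempDir("stage")
	if err != nil {
		return "", err
	}
	p := filepath.Join(dir, "payload")
	if err := ioutil.WriteFile(p, data, 0o644); err != nil {
		return "", err
	}
	log.Debug("staged payload in", p)
	return p, nil
}

func main() {
	ctx := context.NewContext() // satisfies both interfaces
	defer ctx.Clean()           // remove everything created through the garbage collector

	if _, err := stageFile(ctx, ctx, []byte("hello")); err != nil {
		ctx.Fatal(err)
	}
	ctx.Success("done")
}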

View File

@@ -19,7 +19,7 @@ import (
"runtime"
types "github.com/mudler/luet/pkg/api/core/types"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -1,20 +1,14 @@
package compiler
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/mudler/luet/pkg/api/core/types"
artifact "github.com/mudler/luet/pkg/api/core/types/artifact"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/pkg/errors"
)
func NewBackend(ctx *types.Context, s string) (CompilerBackend, error) {
func NewBackend(ctx types.Context, s string) (CompilerBackend, error) {
var compilerBackend CompilerBackend
switch s {
@@ -32,9 +26,9 @@ func NewBackend(ctx *types.Context, s string) (CompilerBackend, error) {
type CompilerBackend interface {
BuildImage(backend.Options) error
ExportImage(backend.Options) error
LoadImage(string) error
RemoveImage(backend.Options) error
ImageDefinitionToTar(backend.Options) error
ExtractRootfs(opts backend.Options, keepPerms bool) error
CopyImage(string, string) error
DownloadImage(opts backend.Options) error
@@ -42,204 +36,6 @@ type CompilerBackend interface {
Push(opts backend.Options) error
ImageAvailable(string) bool
ImageReference(img1 string, ondisk bool) (v1.Image, error)
ImageExists(string) bool
}
// GenerateChanges generates changes between two images using a backend by leveraging export/extractrootfs methods
// example of json return: [
// {
// "Image1": "luet/base",
// "Image2": "alpine",
// "DiffType": "File",
// "Diff": {
// "Adds": null,
// "Dels": [
// {
// "Name": "/luetbuild",
// "Size": 5830706
// },
// {
// "Name": "/luetbuild/Dockerfile",
// "Size": 50
// },
// {
// "Name": "/luetbuild/output1",
// "Size": 5830656
// }
// ],
// "Mods": null
// }
// }
// ]
func GenerateChanges(ctx *types.Context, b CompilerBackend, fromImage, toImage backend.Options) ([]artifact.ArtifactLayer, error) {
res := artifact.ArtifactLayer{FromImage: fromImage.ImageName, ToImage: toImage.ImageName}
tmpdiffs, err := ctx.Config.GetSystem().TempDir("extraction")
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tmpdiffs) // clean up
srcRootFS, err := ioutil.TempDir(tmpdiffs, "src")
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(srcRootFS) // clean up
dstRootFS, err := ioutil.TempDir(tmpdiffs, "dst")
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(dstRootFS) // clean up
srcImageExtract := backend.Options{
ImageName: fromImage.ImageName,
Destination: srcRootFS,
}
ctx.Debug("Extracting source image", fromImage.ImageName)
err = b.ExtractRootfs(srcImageExtract, false) // No need to keep permissions as we just collect file diffs
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking src image "+fromImage.ImageName)
}
dstImageExtract := backend.Options{
ImageName: toImage.ImageName,
Destination: dstRootFS,
}
ctx.Debug("Extracting destination image", toImage.ImageName)
err = b.ExtractRootfs(dstImageExtract, false)
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while unpacking dst image "+toImage.ImageName)
}
// Get Additions/Changes. dst -> src
err = filepath.Walk(dstRootFS, func(path string, info os.FileInfo, err error) error {
if info.IsDir() {
return nil
}
realpath := strings.Replace(path, dstRootFS, "", -1)
fileInfo, err := os.Lstat(filepath.Join(srcRootFS, realpath))
if err == nil {
var sizeA, sizeB int64
sizeA = fileInfo.Size()
if s, err := os.Lstat(filepath.Join(dstRootFS, realpath)); err == nil {
sizeB = s.Size()
}
if sizeA != sizeB {
// fmt.Println("File changed", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Changes = append(res.Diffs.Changes, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
} else {
// fmt.Println("File already exists", path, filepath.Join(srcRootFS, realpath))
}
} else {
var sizeB int64
if s, err := os.Lstat(filepath.Join(dstRootFS, realpath)); err == nil {
sizeB = s.Size()
}
res.Diffs.Additions = append(res.Diffs.Additions, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
Size: int(sizeB),
})
// fmt.Println("File created", path, filepath.Join(srcRootFS, realpath))
}
return nil
})
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image destination")
}
// Get deletions. src -> dst
err = filepath.Walk(srcRootFS, func(path string, info os.FileInfo, err error) error {
if info.IsDir() {
return nil
}
realpath := strings.Replace(path, srcRootFS, "", -1)
if _, err = os.Lstat(filepath.Join(dstRootFS, realpath)); err != nil {
// fmt.Println("File deleted", path, filepath.Join(srcRootFS, realpath))
res.Diffs.Deletions = append(res.Diffs.Deletions, artifact.ArtifactNode{
Name: filepath.Join("/", realpath),
})
}
return nil
})
if err != nil {
return []artifact.ArtifactLayer{}, errors.Wrap(err, "Error met while walking image source")
}
diffs := []artifact.ArtifactLayer{res}
if ctx.Config.GetGeneral().Debug {
summary := ComputeArtifactLayerSummary(diffs)
for _, l := range summary.Layers {
ctx.Debug(fmt.Sprintf("Diff %s -> %s: add %d (%d bytes), del %d (%d bytes), change %d (%d bytes)",
l.FromImage, l.ToImage,
l.AddFiles, l.AddSizes,
l.DelFiles, l.DelSizes,
l.ChangeFiles, l.ChangeSizes))
}
}
return diffs, nil
}
type ArtifactLayerSummary struct {
FromImage string `json:"image1"`
ToImage string `json:"image2"`
AddFiles int `json:"add_files"`
AddSizes int64 `json:"add_sizes"`
DelFiles int `json:"del_files"`
DelSizes int64 `json:"del_sizes"`
ChangeFiles int `json:"change_files"`
ChangeSizes int64 `json:"change_sizes"`
}
type ArtifactLayersSummary struct {
Layers []ArtifactLayerSummary `json:"summary"`
}
func ComputeArtifactLayerSummary(diffs []artifact.ArtifactLayer) ArtifactLayersSummary {
ans := ArtifactLayersSummary{
Layers: make([]ArtifactLayerSummary, 0),
}
for _, layer := range diffs {
sum := ArtifactLayerSummary{
FromImage: layer.FromImage,
ToImage: layer.ToImage,
AddFiles: 0,
AddSizes: 0,
DelFiles: 0,
DelSizes: 0,
ChangeFiles: 0,
ChangeSizes: 0,
}
for _, a := range layer.Diffs.Additions {
sum.AddFiles++
sum.AddSizes += int64(a.Size)
}
for _, d := range layer.Diffs.Deletions {
sum.DelFiles++
sum.DelSizes += int64(d.Size)
}
for _, c := range layer.Diffs.Changes {
sum.ChangeFiles++
sum.ChangeSizes += int64(c.Size)
}
ans.Layers = append(ans.Layers, sum)
}
return ans
}
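GenerateChanges above boils down to two filesystem walks: files only in the destination tree are additions, size mismatches are changes, files only in the source tree are deletions. A self-contained sketch of that walk-and-compare rule with hypothetical directories (the real function also wraps the results into ArtifactLayer values and logs a per-layer summary):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

type change struct {
	Name string
	Size int64
}

// diffTrees classifies files the way GenerateChanges above does: present only
// in dst means added, present in both with a different size means changed,
// present only in src means deleted.
func diffTrees(src, dst string) (adds, mods, dels []change, err error) {
	err = filepath.Walk(dst, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, dst)
		if s, err := os.Lstat(filepath.Join(src, rel)); err == nil {
			if s.Size() != info.Size() {
				mods = append(mods, change{Name: rel, Size: info.Size()})
			}
		} else {
			adds = append(adds, change{Name: rel, Size: info.Size()})
		}
		return nil
	})
	if err != nil {
		return
	}
	err = filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, src)
		if _, err := os.Lstat(filepath.Join(dst, rel)); err != nil {
			dels = append(dels, change{Name: rel})
		}
		return nil
	})
	return
}

func main() {
	adds, mods, dels, err := diffTrees("/tmp/rootfs-old", "/tmp/rootfs-new") // hypothetical paths
	if err != nil {
		panic(err)
	}
	fmt.Printf("add %d, change %d, delete %d\n", len(adds), len(mods), len(dels))
}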

View File

@@ -18,7 +18,7 @@ package backend_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -19,8 +19,6 @@ import (
"os/exec"
"github.com/mudler/luet/pkg/api/core/types"
"github.com/google/go-containerregistry/pkg/crane"
"github.com/pkg/errors"
)
@@ -29,11 +27,6 @@ const (
DockerBackend = "docker"
)
func imageAvailable(image string) bool {
_, err := crane.Digest(image)
return err == nil
}
type Options struct {
ImageName string
SourcePath string
@@ -43,9 +36,9 @@ type Options struct {
BackendArgs []string
}
func runCommand(ctx *types.Context, cmd *exec.Cmd) error {
func runCommand(ctx types.Context, cmd *exec.Cmd) error {
output := ""
buffered := !ctx.Config.GetGeneral().ShowBuildOutput
buffered := !ctx.GetConfig().General.ShowBuildOutput
writer := NewBackendWriter(buffered, ctx)
cmd.Stdout = writer

View File

@@ -1,4 +1,4 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
// Copyright © 2019-2021 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
@@ -16,29 +16,27 @@
package backend
import (
"encoding/json"
"io/ioutil"
"os"
"io"
"os/exec"
"path/filepath"
"strings"
"github.com/google/go-containerregistry/pkg/crane"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/google/go-containerregistry/pkg/v1/tarball"
bus "github.com/mudler/luet/pkg/api/core/bus"
"github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/api/core/types"
bus "github.com/mudler/luet/pkg/bus"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
capi "github.com/mudler/docker-companion/api"
"github.com/mudler/luet/pkg/helpers"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/pkg/errors"
)
type SimpleDocker struct {
ctx *types.Context
ctx types.Context
}
func NewSimpleDockerBackend(ctx *types.Context) *SimpleDocker {
func NewSimpleDockerBackend(ctx types.Context) *SimpleDocker {
return &SimpleDocker{ctx: ctx}
}
@@ -74,6 +72,17 @@ func (s *SimpleDocker) CopyImage(src, dst string) error {
return nil
}
func (s *SimpleDocker) LoadImage(path string) error {
s.ctx.Debug(":whale: Loading image:", path)
cmd := exec.Command("docker", "load", "-i", path)
out, err := cmd.CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed loading image: "+string(out))
}
s.ctx.Success(":whale: Loaded image:", path)
return nil
}
func (s *SimpleDocker) DownloadImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePull, opts)
@@ -110,7 +119,7 @@ func (s *SimpleDocker) ImageExists(imagename string) bool {
}
func (*SimpleDocker) ImageAvailable(imagename string) bool {
return imageAvailable(imagename)
return image.Available(imagename)
}
func (s *SimpleDocker) RemoveImage(opts Options) error {
@@ -144,6 +153,68 @@ func (s *SimpleDocker) Push(opts Options) error {
return nil
}
func (s *SimpleDocker) imagefromDaemon(a string) (v1.Image, error) {
ref, err := name.ParseReference(a)
if err != nil {
return nil, err
}
img, err := daemon.Image(ref, daemon.WithUnbufferedOpener())
if err != nil {
return nil, err
}
return img, nil
}
// TODO: Make it possible to optionally use this?
// It might be less safe, as it relies on the pipe.
// imageFromCLIPipe returns a new image from a tarball by providing a reader from the docker stdout pipe.
// See also the daemon.Image implementation below for an example (which returns the tarball stream
// from the HTTP api endpoint instead).
func (s *SimpleDocker) imageFromCLIPipe(a string) (v1.Image, error) {
return tarball.Image(func() (io.ReadCloser, error) {
buildarg := []string{"save", a}
s.ctx.Spinner()
defer s.ctx.SpinnerStop()
c := exec.Command("docker", buildarg...)
p, err := c.StdoutPipe()
if err != nil {
return nil, err
}
err = c.Start()
if err != nil {
return nil, err
}
go func() { c.Wait() }()
return p, nil
}, nil)
}
func (s *SimpleDocker) imageFromDisk(a string) (v1.Image, error) {
f, err := s.ctx.TempFile("snapshot")
if err != nil {
return nil, err
}
buildarg := []string{"save", a, "-o", f.Name()}
s.ctx.Spinner()
defer s.ctx.SpinnerStop()
out, err := exec.Command("docker", buildarg...).CombinedOutput()
if err != nil {
return nil, errors.Wrap(err, "Failed saving image: "+string(out))
}
return crane.Load(f.Name())
}
func (s *SimpleDocker) ImageReference(a string, ondisk bool) (v1.Image, error) {
if ondisk {
return s.imageFromDisk(a)
}
return s.imagefromDaemon(a)
}
func (s *SimpleDocker) ImageDefinitionToTar(opts Options) error {
if err := s.BuildImage(opts); err != nil {
return errors.Wrap(err, "Failed building image")
@@ -179,102 +250,3 @@ func (s *SimpleDocker) ExportImage(opts Options) error {
type ManifestEntry struct {
Layers []string `json:"Layers"`
}
func (b *SimpleDocker) ExtractRootfs(opts Options, keepPerms bool) error {
name := opts.ImageName
dst := opts.Destination
if !b.ImageExists(name) {
if err := b.DownloadImage(opts); err != nil {
return errors.Wrap(err, "failed pulling image "+name+" during extraction")
}
}
tempexport, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tempexport) // clean up
imageExport := filepath.Join(tempexport, "image.tar")
b.ctx.Spinner()
defer b.ctx.SpinnerStop()
if err := b.ExportImage(Options{ImageName: name, Destination: imageExport}); err != nil {
return errors.Wrap(err, "failed while extracting rootfs for "+name)
}
src := imageExport
if src == "" && opts.ImageName != "" {
tempUnpack, err := ioutil.TempDir(dst, "tempUnpack")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(tempUnpack) // clean up
imageExport := filepath.Join(tempUnpack, "image.tar")
if err := b.ExportImage(Options{ImageName: opts.ImageName, Destination: imageExport}); err != nil {
return errors.Wrap(err, "while exporting image before extraction")
}
src = imageExport
}
rootfs, err := ioutil.TempDir(dst, "tmprootfs")
if err != nil {
return errors.Wrap(err, "Error met while creating tempdir for rootfs")
}
defer os.RemoveAll(rootfs) // clean up
// TODO: Following as option if archive as output?
// archive, err := ioutil.TempDir(os.TempDir(), "archive")
// if err != nil {
// return nil, errors.Wrap(err, "Error met while creating tempdir for rootfs")
// }
// defer os.RemoveAll(archive) // clean up
err = helpers.Untar(src, rootfs, keepPerms)
if err != nil {
return errors.Wrap(err, "Error met while unpacking rootfs")
}
manifest, err := fileHelper.Read(filepath.Join(rootfs, "manifest.json"))
if err != nil {
return errors.Wrap(err, "Error met while reading image manifest")
}
// Unpack all layers
var manifestData []ManifestEntry
if err := json.Unmarshal([]byte(manifest), &manifestData); err != nil {
return errors.Wrap(err, "Error met while unmarshalling manifest")
}
layers_sha := []string{}
for _, data := range manifestData {
for _, l := range data.Layers {
if strings.Contains(l, "layer.tar") {
layers_sha = append(layers_sha, strings.Replace(l, "/layer.tar", "", -1))
}
}
}
// TODO: Drop capi in favor of the img approach already used in pkg/installer/repository
export, err := capi.CreateExport(rootfs)
if err != nil {
return err
}
err = export.UnPackLayers(layers_sha, dst, "containerd")
if err != nil {
return err
}
// err = helpers.Tar(archive, dst)
// if err != nil {
// return nil, errors.Wrap(err, "Error met while creating package archive")
// }
return nil
}
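The removal of ExtractRootfs above matches the new pattern used by the tests in this changeset: obtain a v1.Image via ImageReference and hand it to the image package for extraction. A hedged sketch of that flow; the image name and destination path are illustrative, and ExtractTo's first two return values are ignored here as in the tests:

package main

import (
	"log"

	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/api/core/image"
	"github.com/mudler/luet/pkg/compiler/backend"
)

func main() {
	ctx := context.NewContext()

	b := backend.NewSimpleDockerBackend(ctx)
	name := "alpine:latest" // illustrative image

	// Make sure the image is present locally before asking for a reference.
	if !b.ImageExists(name) {
		if err := b.DownloadImage(backend.Options{ImageName: name}); err != nil {
			log.Fatal(err)
		}
	}

	// ondisk=true goes through `docker save` into a temporary tarball,
	// ondisk=false reads the image straight from the daemon.
	img, err := b.ImageReference(name, true)
	if err != nil {
		log.Fatal(err)
	}

	// Extract the whole filesystem; the last argument filters files and can be nil.
	if _, _, err := image.ExtractTo(ctx, img, "/tmp/alpine-rootfs", nil); err != nil {
		log.Fatal(err)
	}
	ctx.Success("rootfs extracted")
}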

View File

@@ -16,9 +16,7 @@
package backend_test
import (
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/types/artifact"
"github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/compiler"
"github.com/mudler/luet/pkg/compiler/backend"
. "github.com/mudler/luet/pkg/compiler/backend"
@@ -30,13 +28,13 @@ import (
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Docker backend", func() {
Context("Simple Docker backend satisfies main interface functionalities", func() {
ctx := types.NewContext()
ctx := context.NewContext()
It("Builds and generate tars", func() {
generalRecipe := tree.NewGeneralRecipe(pkg.NewInMemoryDatabase(false))
@@ -107,35 +105,6 @@ RUN echo bar > /test2`))
Expect(b.ExportImage(opts2)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output2.tar"))).To(BeTrue())
artifacts := []artifact.ArtifactNode{{
Name: "/luetbuild/LuetDockerfile",
Size: 175,
}}
if os.Getenv("DOCKER_BUILDKIT") == "1" {
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/etc/resolv.conf", Size: 0})
}
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/test", Size: 4})
artifacts = append(artifacts, artifact.ArtifactNode{Name: "/test2", Size: 4})
Expect(compiler.GenerateChanges(ctx, b, opts, opts2)).To(Equal(
[]artifact.ArtifactLayer{{
FromImage: "luet/base",
ToImage: "test",
Diffs: artifact.ArtifactDiffs{
Additions: artifacts,
},
}}))
opts2 = backend.Options{
ImageName: "test",
SourcePath: tmpdir,
DockerFileName: "LuetDockerfile",
Destination: filepath.Join(tmpdir, "output3.tar"),
}
Expect(b.ImageDefinitionToTar(opts2)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(tmpdir, "output3.tar"))).To(BeTrue())
Expect(b.ImageExists(opts2.ImageName)).To(BeFalse())
})
It("Detects available images", func() {

View File

@@ -16,24 +16,30 @@
package backend
import (
"os"
"os/exec"
"strings"
"github.com/google/go-containerregistry/pkg/crane"
v1 "github.com/google/go-containerregistry/pkg/v1"
bus "github.com/mudler/luet/pkg/api/core/bus"
"github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/api/core/types"
bus "github.com/mudler/luet/pkg/bus"
"github.com/pkg/errors"
)
type SimpleImg struct {
ctx *types.Context
ctx types.Context
}
func NewSimpleImgBackend(ctx *types.Context) *SimpleImg {
func NewSimpleImgBackend(ctx types.Context) *SimpleImg {
return &SimpleImg{ctx: ctx}
}
func (s *SimpleImg) LoadImage(string) error {
return errors.New("Not supported")
}
// TODO: Still missing: labels and build args expansion
func (s *SimpleImg) BuildImage(opts Options) error {
name := opts.ImageName
@@ -70,6 +76,29 @@ func (s *SimpleImg) RemoveImage(opts Options) error {
return nil
}
func (s *SimpleImg) ImageReference(a string, ondisk bool) (v1.Image, error) {
f, err := s.ctx.TempFile("snapshot")
if err != nil {
return nil, err
}
buildarg := []string{"save", a, "-o", f.Name()}
s.ctx.Spinner()
defer s.ctx.SpinnerStop()
out, err := exec.Command("img", buildarg...).CombinedOutput()
if err != nil {
return nil, errors.Wrap(err, "Failed saving image: "+string(out))
}
img, err := crane.Load(f.Name())
if err != nil {
return nil, err
}
return img, nil
}
func (s *SimpleImg) DownloadImage(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePull, opts)
@@ -107,7 +136,7 @@ func (s *SimpleImg) CopyImage(src, dst string) error {
}
func (s *SimpleImg) ImageAvailable(imagename string) bool {
return imageAvailable(imagename)
return image.Available(imagename)
}
// ImageExists check if the given image is available locally
@@ -153,33 +182,6 @@ func (s *SimpleImg) ExportImage(opts Options) error {
return nil
}
// ExtractRootfs extracts the docker image content inside the destination
func (s *SimpleImg) ExtractRootfs(opts Options, keepPerms bool) error {
name := opts.ImageName
path := opts.Destination
if !s.ImageExists(name) {
if err := s.DownloadImage(opts); err != nil {
return errors.Wrap(err, "failed pulling image "+name+" during extraction")
}
}
os.RemoveAll(path)
buildarg := []string{"unpack", "-o", path, name}
s.ctx.Debug(":tea: Extracting image " + name)
s.ctx.Spinner()
defer s.ctx.SpinnerStop()
out, err := exec.Command("img", buildarg...).CombinedOutput()
if err != nil {
return errors.Wrap(err, "Failed extracting image: "+string(out))
}
s.ctx.Debug(":tea: Image " + name + " extracted")
return nil
}
func (s *SimpleImg) Push(opts Options) error {
name := opts.ImageName
bus.Manager.Publish(bus.EventImagePrePush, opts)

View File

@@ -25,10 +25,10 @@ import (
type BackendWriter struct {
BufferedOutput bool
Buffer *bytes.Buffer
ctx *types.Context
ctx types.Context
}
func NewBackendWriter(buffered bool, ctx *types.Context) *BackendWriter {
func NewBackendWriter(buffered bool, ctx types.Context) *BackendWriter {
return &BackendWriter{
BufferedOutput: buffered,
Buffer: &bytes.Buffer{},
@@ -41,7 +41,7 @@ func (b *BackendWriter) Write(p []byte) (int, error) {
return b.Buffer.Write(p)
}
b.ctx.Msg("info", false, (string(p)))
b.ctx.Info((string(p)))
return len(p), nil
}

View File

@@ -1,79 +0,0 @@
// Copyright © 2019 Ettore Di Giacinto <mudler@gentoo.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package compiler_test
import (
"github.com/mudler/luet/pkg/api/core/types"
. "github.com/mudler/luet/pkg/compiler"
. "github.com/mudler/luet/pkg/compiler/backend"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
var _ = Describe("Docker image diffs", func() {
var b CompilerBackend
ctx := types.NewContext()
BeforeEach(func() {
b = NewSimpleDockerBackend(ctx)
})
Context("Generate diffs from docker images", func() {
It("Detect no changes", func() {
opts := Options{
ImageName: "alpine:latest",
}
err := b.DownloadImage(opts)
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(ctx, b, opts, opts)
Expect(err).ToNot(HaveOccurred())
Expect(len(layers)).To(Equal(1))
Expect(len(layers[0].Diffs.Additions)).To(Equal(0))
Expect(len(layers[0].Diffs.Changes)).To(Equal(0))
Expect(len(layers[0].Diffs.Deletions)).To(Equal(0))
})
It("Detects additions and changed files", func() {
err := b.DownloadImage(Options{
ImageName: "quay.io/mocaccino/micro",
})
Expect(err).ToNot(HaveOccurred())
err = b.DownloadImage(Options{
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())
layers, err := GenerateChanges(ctx, b, Options{
ImageName: "quay.io/mocaccino/micro",
}, Options{
ImageName: "quay.io/mocaccino/extra",
})
Expect(err).ToNot(HaveOccurred())
Expect(len(layers)).To(Equal(1))
Expect(len(layers[0].Diffs.Changes) > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Changes[0].Name) > 0).To(BeTrue())
Expect(layers[0].Diffs.Changes[0].Size > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Additions) > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Additions[0].Name) > 0).To(BeTrue())
Expect(layers[0].Diffs.Additions[0].Size > 0).To(BeTrue())
Expect(len(layers[0].Diffs.Deletions)).To(Equal(0))
})
})
})

View File

@@ -29,9 +29,10 @@ import (
"sync"
"time"
"github.com/mudler/luet/pkg/api/core/types"
bus "github.com/mudler/luet/pkg/api/core/bus"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/api/core/image"
artifact "github.com/mudler/luet/pkg/api/core/types/artifact"
bus "github.com/mudler/luet/pkg/bus"
"github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
@@ -81,7 +82,7 @@ func NewLuetCompiler(backend CompilerBackend, db pkg.PackageDatabase, compilerOp
c := NewCompiler(compilerOpts...)
if c.Options.Context == nil {
c.Options.Context = types.NewContext()
c.Options.Context = context.NewContext()
}
// c.Options.BackendType
c.Backend = backend
@@ -227,35 +228,44 @@ func (cs *LuetCompiler) stripFromRootfs(includes []string, rootfs string, includ
func (cs *LuetCompiler) unpackFs(concurrency int, keepPermissions bool, p *compilerspec.LuetCompilationSpec, runnerOpts backend.Options) (*artifact.PackageArtifact, error) {
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
if !cs.Backend.ImageExists(runnerOpts.ImageName) {
if err := cs.Backend.DownloadImage(runnerOpts); err != nil {
return nil, errors.Wrap(err, "failed pulling image "+runnerOpts.ImageName+" during extraction")
}
}
img, err := cs.Backend.ImageReference(runnerOpts.ImageName, true)
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
return nil, err
}
ctx := cs.Options.Context.WithLoggingContext(fmt.Sprintf("extract %s", runnerOpts.ImageName))
_, rootfs, err := image.Extract(
ctx,
img,
image.ExtractFiles(
cs.Options.Context,
p.GetPackageDir(),
p.GetIncludes(),
p.GetExcludes(),
),
)
if err != nil {
return nil, err
}
defer os.RemoveAll(rootfs) // clean up
err = cs.Backend.ExtractRootfs(backend.Options{
ImageName: runnerOpts.ImageName, Destination: rootfs}, keepPermissions)
if err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
toUnpack := rootfs
if p.PackageDir != "" {
toUnpack = filepath.Join(toUnpack, p.PackageDir)
}
if p.GetPackageDir() != "" {
cs.Options.Context.Info(":tophat: Packing from output dir", p.GetPackageDir())
rootfs = filepath.Join(rootfs, p.GetPackageDir())
}
if len(p.GetIncludes()) > 0 {
// strip from includes
cs.stripFromRootfs(p.GetIncludes(), rootfs, true)
}
if len(p.GetExcludes()) > 0 {
// strip from excludes
cs.stripFromRootfs(p.GetExcludes(), rootfs, false)
}
a := artifact.NewPackageArtifact(p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar"))
a.CompressionType = cs.Options.CompressionType
if err := a.Compress(rootfs, concurrency); err != nil {
if err := a.Compress(toUnpack, concurrency); err != nil {
return nil, errors.Wrap(err, "Error met while creating package archive")
}
@@ -265,38 +275,65 @@ func (cs *LuetCompiler) unpackFs(concurrency int, keepPermissions bool, p *compi
func (cs *LuetCompiler) unpackDelta(concurrency int, keepPermissions bool, p *compilerspec.LuetCompilationSpec, builderOpts, runnerOpts backend.Options) (*artifact.PackageArtifact, error) {
rootfs, err := ioutil.TempDir(p.GetOutputPath(), "rootfs")
rootfs, err := cs.Options.Context.TempDir("rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
defer os.RemoveAll(rootfs)
pkgTag := ":package: " + p.GetPackage().HumanReadableString()
if cs.Options.PullFirst && !cs.Backend.ImageExists(builderOpts.ImageName) && cs.Backend.ImageAvailable(builderOpts.ImageName) {
err := cs.Backend.DownloadImage(builderOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not pull image")
if cs.Options.PullFirst {
if !cs.Backend.ImageExists(builderOpts.ImageName) {
err := cs.Backend.DownloadImage(builderOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not pull image")
}
}
if !cs.Backend.ImageExists(runnerOpts.ImageName) {
err := cs.Backend.DownloadImage(runnerOpts)
if err != nil {
return nil, errors.Wrap(err, "Could not pull image")
}
}
}
cs.Options.Context.Info(pkgTag, ":hammer: Generating delta")
diffs, err := GenerateChanges(cs.Options.Context, cs.Backend, builderOpts, runnerOpts)
cs.Options.Context.Debug(pkgTag, ":hammer: Retrieving reference for", builderOpts.ImageName)
ref, err := cs.Backend.ImageReference(builderOpts.ImageName, true)
if err != nil {
return nil, errors.Wrap(err, "Could not generate changes from layers")
return nil, err
}
cs.Options.Context.Debug("Extracting image to grab files from delta")
if err := cs.Backend.ExtractRootfs(backend.Options{
ImageName: runnerOpts.ImageName, Destination: rootfs}, keepPermissions); err != nil {
return nil, errors.Wrap(err, "Could not extract rootfs")
}
artifact, err := artifact.ExtractArtifactFromDelta(cs.Options.Context, rootfs, p.Rel(p.GetPackage().GetFingerPrint()+".package.tar"), diffs, concurrency, keepPermissions, p.GetIncludes(), p.GetExcludes(), cs.Options.CompressionType)
cs.Options.Context.Debug(pkgTag, ":hammer: Retrieving reference for", runnerOpts.ImageName)
ref2, err := cs.Backend.ImageReference(runnerOpts.ImageName, true)
if err != nil {
return nil, errors.Wrap(err, "Could not generate deltas")
return nil, err
}
artifact.CompileSpec = p
return artifact, nil
cs.Options.Context.Debug(pkgTag, ":hammer: Generating filters for extraction")
filter, err := image.ExtractDeltaAdditionsFiles(cs.Options.Context, ref, p.GetIncludes(), p.GetExcludes())
if err != nil {
return nil, errors.Wrap(err, "failed generating filter for extraction")
}
cs.Options.Context.Info(pkgTag, ":hammer: Extracting artifact from image", runnerOpts.ImageName)
a, err := artifact.ImageToArtifact(
cs.Options.Context,
ref2,
cs.Options.CompressionType,
p.Rel(fmt.Sprintf("%s%s", p.GetPackage().GetFingerPrint(), ".package.tar")),
filter,
)
if err != nil {
return nil, err
}
a.CompileSpec = p
return a, nil
}
func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImage string,
@@ -315,15 +352,11 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
p.SetSeedImage(image) // In this case, we ignore the build deps as we suppose that the image has them - otherwise we recompose the tree with a solver,
// and we build all the images first.
err := os.MkdirAll(p.Rel("build"), os.ModePerm)
buildDir, err := cs.Options.Context.TempDir("build")
if err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Error met while creating tempdir for building")
return builderOpts, runnerOpts, err
}
buildDir, err := ioutil.TempDir(p.Rel("build"), "pack")
if err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Error met while creating tempdir for building")
}
defer os.RemoveAll(buildDir) // clean up
defer os.RemoveAll(buildDir)
// First we copy the source definitions into the output - we create a copy which the builds will need (we need to cache this phase somehow)
err = fileHelper.CopyDir(p.GetPackage().GetPath(), buildDir)
@@ -340,7 +373,7 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
}
// First we create the builder image
if err := p.WriteBuildImageDefinition(filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+"-builder.dockerfile")); err != nil {
if err := p.WriteBuildImageDefinition(filepath.Join(buildDir, p.GetPackage().ImageID()+"-builder.dockerfile")); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not generate image definition")
}
@@ -355,21 +388,21 @@ func (cs *LuetCompiler) buildPackageImage(image, buildertaggedImage, packageImag
// steps in prelude are == 0 those are equivalent.
// Then we write the step image, which uses the builder one
if err := p.WriteStepImageDefinition(buildertaggedImage, filepath.Join(buildDir, p.GetPackage().GetFingerPrint()+".dockerfile")); err != nil {
if err := p.WriteStepImageDefinition(buildertaggedImage, filepath.Join(buildDir, p.GetPackage().ImageID()+".dockerfile")); err != nil {
return builderOpts, runnerOpts, errors.Wrap(err, "Could not generate image definition")
}
builderOpts = backend.Options{
ImageName: buildertaggedImage,
SourcePath: buildDir,
DockerFileName: p.GetPackage().GetFingerPrint() + "-builder.dockerfile",
DockerFileName: p.GetPackage().ImageID() + "-builder.dockerfile",
Destination: p.Rel(p.GetPackage().GetFingerPrint() + "-builder.image.tar"),
BackendArgs: cs.Options.BackendArgs,
}
runnerOpts = backend.Options{
ImageName: packageImage,
SourcePath: buildDir,
DockerFileName: p.GetPackage().GetFingerPrint() + ".dockerfile",
DockerFileName: p.GetPackage().ImageID() + ".dockerfile",
Destination: p.Rel(p.GetPackage().GetFingerPrint() + ".image.tar"),
BackendArgs: cs.Options.BackendArgs,
}
@@ -427,11 +460,11 @@ func (cs *LuetCompiler) genArtifact(p *compilerspec.LuetCompilationSpec, builder
if p.EmptyPackage() {
fakePackage := p.Rel(p.GetPackage().GetFingerPrint() + ".package.tar")
rootfs, err = ioutil.TempDir(p.GetOutputPath(), "rootfs")
rootfs, err = cs.Options.Context.TempDir("rootfs")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(rootfs) // clean up
defer os.RemoveAll(rootfs)
a := artifact.NewPackageArtifact(fakePackage)
a.CompressionType = cs.Options.CompressionType
@@ -442,12 +475,16 @@ func (cs *LuetCompiler) genArtifact(p *compilerspec.LuetCompilationSpec, builder
a.CompileSpec = p
a.CompileSpec.GetPackage().SetBuildTimestamp(time.Now().String())
err = a.WriteYAML(p.GetOutputPath())
if err != nil {
return a, errors.Wrap(err, "Failed while writing metadata file")
}
cs.Options.Context.Success(pkgTag, " :white_check_mark: done (empty virtual package)")
if cs.Options.PushFinalImages {
if err := cs.pushFinalArtifact(a, p, keepPermissions); err != nil {
return nil, err
}
}
return a, nil
}
@@ -477,11 +514,55 @@ func (cs *LuetCompiler) genArtifact(p *compilerspec.LuetCompilationSpec, builder
if err != nil {
return a, errors.Wrap(err, "Failed while writing metadata file")
}
cs.Options.Context.Success(pkgTag, " :white_check_mark: Done")
cs.Options.Context.Success(pkgTag, " :white_check_mark: Done building")
if cs.Options.PushFinalImages {
if err := cs.pushFinalArtifact(a, p, keepPermissions); err != nil {
return nil, err
}
}
return a, nil
}
// TODO: A small readaptation of repository_docker.go pushImageFromArtifact()
// Move this to a common place
func (cs *LuetCompiler) pushFinalArtifact(a *artifact.PackageArtifact, p *compilerspec.LuetCompilationSpec, keepPermissions bool) error {
cs.Options.Context.Info("Pushing final image for", a.CompileSpec.Package.HumanReadableString())
imageID := fmt.Sprintf("%s:%s", cs.Options.PushFinalImagesRepository, a.CompileSpec.Package.ImageID())
// First push the package image
if !cs.Backend.ImageAvailable(imageID) || cs.Options.PushFinalImagesForce {
cs.Options.Context.Info("Generating and pushing final image for", a.CompileSpec.Package.HumanReadableString(), "as", imageID)
if err := a.GenerateFinalImage(cs.Options.Context, imageID, cs.GetBackend(), true); err != nil {
return errors.Wrap(err, "while creating final image")
}
if err := cs.Backend.Push(backend.Options{ImageName: imageID}); err != nil {
return errors.Wrapf(err, "Could not push image: %s", imageID)
}
}
// Then the image ID
metadataImageID := fmt.Sprintf("%s:%s", cs.Options.PushFinalImagesRepository, helpers.SanitizeImageString(a.CompileSpec.GetPackage().GetMetadataFilePath()))
if !cs.Backend.ImageAvailable(metadataImageID) || cs.Options.PushFinalImagesForce {
cs.Options.Context.Info("Generating metadata image for", a.CompileSpec.Package.HumanReadableString(), metadataImageID)
a := artifact.NewPackageArtifact(filepath.Join(p.GetOutputPath(), a.CompileSpec.GetPackage().GetMetadataFilePath()))
metadataArchive, err := artifact.CreateArtifactForFile(cs.Options.Context, a.Path)
if err != nil {
return errors.Wrap(err, "failed generating checksums for tree")
}
if err := metadataArchive.GenerateFinalImage(cs.Options.Context, metadataImageID, cs.Backend, keepPermissions); err != nil {
return errors.Wrap(err, "Failed generating metadata tree "+metadataImageID)
}
if err = cs.Backend.Push(backend.Options{ImageName: metadataImageID}); err != nil {
return errors.Wrapf(err, "Could not push image: %s", metadataImageID)
}
}
return nil
}
func (cs *LuetCompiler) waitForImages(images []string) {
if cs.Options.PullFirst && cs.Options.Wait {
available, _ := oneOfImagesAvailable(images, cs.Backend)
@@ -860,11 +941,11 @@ func (cs *LuetCompiler) resolveFinalImages(concurrency int, keepPermissions bool
}
// otherwise, generate it and push it aside
joinDir, err := ioutil.TempDir(p.GetOutputPath(), "join")
joinDir, err := cs.Options.Context.TempDir("join")
if err != nil {
return errors.Wrap(err, "could not create tempdir for joining images")
}
defer os.RemoveAll(joinDir) // clean up
defer os.RemoveAll(joinDir)
for _, p := range fromPackages {
cs.Options.Context.Info(joinTag, ":arrow_right_hook:", p.HumanReadableString(), ":leaves:")
@@ -899,11 +980,11 @@ func (cs *LuetCompiler) resolveFinalImages(concurrency int, keepPermissions bool
}
}
artifactDir, err := ioutil.TempDir(p.GetOutputPath(), "artifact")
artifactDir, err := cs.Options.Context.TempDir("join")
if err != nil {
return errors.Wrap(err, "could not create tempdir for final artifact")
}
defer os.RemoveAll(joinDir) // clean up
defer os.RemoveAll(artifactDir)
cs.Options.Context.Info(joinTag, ":droplet: generating artifact for source image of", p.GetPackage().HumanReadableString())
@@ -916,14 +997,14 @@ func (cs *LuetCompiler) resolveFinalImages(concurrency int, keepPermissions bool
joinImageName := fmt.Sprintf("%s:%s", cs.Options.PushImageRepository, overallFp)
cs.Options.Context.Info(joinTag, ":droplet: generating image from artifact", joinImageName)
opts, err := a.GenerateFinalImage(cs.Options.Context, joinImageName, cs.Backend, keepPermissions)
err = a.GenerateFinalImage(cs.Options.Context, joinImageName, cs.Backend, keepPermissions)
if err != nil {
return errors.Wrap(err, "could not create final image")
}
if cs.Options.Push {
cs.Options.Context.Info(joinTag, ":droplet: pushing image from artifact", joinImageName)
if err = cs.Backend.Push(opts); err != nil {
return errors.Wrapf(err, "Could not push image: %s %s", image, opts.DockerFileName)
if err = cs.Backend.Push(backend.Options{ImageName: joinImageName}); err != nil {
return errors.Wrapf(err, "Could not push image: %s", joinImageName)
}
}
cs.Options.Context.Info(joinTag, ":droplet: Consuming image", joinImageName)
@@ -974,6 +1055,36 @@ func (cs *LuetCompiler) resolveMultiStageImages(concurrency int, keepPermissions
return nil
}
func CompilerFinalImages(cs *LuetCompiler) (*LuetCompiler, error) {
// When computing the hash tree, we need to take into consideration
// that packages that require final images have to be seen as packages without deps
// This is because we don't really want to calculate their deptree,
// as it is handled already when we are creating the images in resolveFinalImages().
c := *cs
copy := &c
memDB := pkg.NewInMemoryDatabase(false)
// Create a copy to avoid races
dbCopy := pkg.NewInMemoryDatabase(false)
err := cs.Database.Clone(dbCopy)
if err != nil {
return nil, errors.Wrap(err, "failed cloning db")
}
for _, p := range dbCopy.World() {
copy := p.Clone()
spec, err := cs.FromPackage(p)
if err != nil {
return nil, errors.Wrap(err, "failed getting compile spec for package "+p.HumanReadableString())
}
if spec.RequiresFinalImages {
copy.Requires([]*pkg.DefaultPackage{})
}
memDB.CreatePackage(copy)
}
copy.Database = memDB
return copy, nil
}
func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, generateFinalArtifact *bool, generateDependenciesFinalArtifact *bool, p *compilerspec.LuetCompilationSpec) (*artifact.PackageArtifact, error) {
cs.Options.Context.Info(":package: Compiling", p.GetPackage().HumanReadableString(), ".... :coffee:")
@@ -997,8 +1108,11 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, generateF
}
ht := NewHashTree(cs.Database)
packageHashTree, err := ht.Query(cs, p)
copy, err := CompilerFinalImages(cs)
if err != nil {
return nil, err
}
packageHashTree, err := ht.Query(copy, p)
if err != nil {
return nil, errors.Wrap(err, "failed querying hashtree")
}
@@ -1056,6 +1170,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, generateF
buildTarget := !cs.Options.OnlyDeps
if buildDeps {
cs.Options.Context.Info(":deciduous_tree: Build dependencies for " + p.GetPackage().HumanReadableString())
for _, assertion := range dependencies { //highly dependent on the order
depsN++
@@ -1071,6 +1186,7 @@ func (cs *LuetCompiler) compile(concurrency int, keepPermissions bool, generateF
return nil, errors.Wrap(err, "Error while generating compilespec for "+assertion.Package.GetName())
}
compileSpec.BuildOptions.PullImageRepository = append(compileSpec.BuildOptions.PullImageRepository, p.BuildOptions.PullImageRepository...)
cs.Options.Context.Debug("PullImage repos:", compileSpec.BuildOptions.PullImageRepository)
compileSpec.SetOutputPath(p.GetOutputPath())
@@ -1227,11 +1343,12 @@ func (cs *LuetCompiler) templatePackage(vals []map[string]interface{}, pack pkg.
} else {
bv := cs.Options.BuildValuesFile
if len(vals) > 0 {
valuesdir, err := ioutil.TempDir("", "genvalues")
valuesdir, err := cs.Options.Context.TempDir("genvalues")
if err != nil {
return nil, errors.Wrap(err, "Could not create tempdir")
}
defer os.RemoveAll(valuesdir) // clean up
defer os.RemoveAll(valuesdir)
for _, b := range vals {
out, err := yaml.Marshal(b)
if err != nil {

View File
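
To summarize the most invasive change above: unpackFs no longer extracts a rootfs through the backend's ExtractRootfs; it asks the backend for an image reference and extracts it through pkg/api/core/image with a file filter. A condensed, hedged sketch (calls and argument order follow the diff; the v1.Image type for the reference is an assumption based on the go-containerregistry usage elsewhere in this changeset):

package sketch

import (
	"fmt"
	"os"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/mudler/luet/pkg/api/core/image"
	"github.com/mudler/luet/pkg/api/core/types"
)

// extractPackageRootfs mirrors the new unpackFs flow: extract the built image
// into a temporary rootfs, keeping only packageDir/includes and dropping
// excludes while the layers are unpacked.
func extractPackageRootfs(ctx types.Context, img v1.Image, name, packageDir string, includes, excludes []string) error {
	// Contextualized logging, introduced by the "contextualized logging" commit.
	lctx := ctx.WithLoggingContext(fmt.Sprintf("extract %s", name))

	_, rootfs, err := image.Extract(lctx, img, image.ExtractFiles(ctx, packageDir, includes, excludes))
	if err != nil {
		return err
	}
	defer os.RemoveAll(rootfs) // the extracted tree is a throwaway staging dir

	// ... compress rootfs (or rootfs/packageDir) into the package tar here,
	// as unpackFs does with artifact.NewPackageArtifact(...).Compress(...).
	return nil
}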

@@ -18,7 +18,7 @@ package compiler_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File
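
Several *_suite_test.go files in this changeset change only the Ginkgo import path to v2; the suite bootstrap itself stays the same. For reference, a typical bootstrap (the suite description string is illustrative):

package compiler_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestCompiler(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Compiler Suite")
}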

@@ -16,25 +16,31 @@
package compiler_test
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/mudler/luet/pkg/api/core/types"
helpers "github.com/mudler/luet/tests/helpers"
"github.com/mudler/luet/pkg/api/core/context"
"github.com/mudler/luet/pkg/api/core/image"
"github.com/mudler/luet/pkg/api/core/types/artifact"
. "github.com/mudler/luet/pkg/compiler"
sd "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/compression"
"github.com/mudler/luet/pkg/compiler/types/options"
compilerspec "github.com/mudler/luet/pkg/compiler/types/spec"
helpers "github.com/mudler/luet/pkg/helpers"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Compiler", func() {
ctx := types.NewContext()
ctx := context.NewContext()
Context("Simple package build definition", func() {
It("Compiles it correctly", func() {
@@ -45,7 +51,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -64,7 +70,7 @@ var _ = Describe("Compiler", func() {
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test6"))).To(BeTrue())
@@ -88,7 +94,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
Expect(err).ToNot(HaveOccurred())
@@ -104,7 +110,7 @@ var _ = Describe("Compiler", func() {
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("result"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("bina/busybox"))).To(BeTrue())
@@ -118,7 +124,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.2"})
Expect(err).ToNot(HaveOccurred())
@@ -134,7 +140,7 @@ var _ = Describe("Compiler", func() {
artifact, err := compiler.Compile(false, spec)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("newc"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test4"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
@@ -150,7 +156,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(1), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(1), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -169,7 +175,7 @@ var _ = Describe("Compiler", func() {
Expect(errs).To(BeNil())
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
})
@@ -184,7 +190,7 @@ var _ = Describe("Compiler", func() {
err = generalRecipe.Load("../../tests/fixtures/templates")
Expect(err).ToNot(HaveOccurred())
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(context.NewContext()))
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(1))
pkg, err := generalRecipe.GetDatabase().FindPackage(&pkg.DefaultPackage{Name: "b", Category: "test", Version: "1.0"})
@@ -208,7 +214,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(4))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -232,7 +238,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("test3"))).To(BeTrue())
@@ -264,7 +270,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(1), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(1), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "extra", Category: "layer", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -282,12 +288,12 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
for _, artifact := range artifacts2 {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("etc/hosts"))).To(BeTrue())
@@ -318,10 +324,9 @@ var _ = Describe("Compiler", func() {
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
@@ -355,7 +360,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
@@ -390,7 +395,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
@@ -424,7 +429,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
@@ -457,7 +462,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("marvin"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test5"))).To(BeTrue())
@@ -490,7 +495,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(fileHelper.Exists(spec.Rel("var/lib/udhcpd"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("marvin"))).To(BeTrue())
@@ -528,9 +533,9 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(spec.Rel("extra-layer-0.1.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
@@ -569,9 +574,9 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(spec.Rel("c-test-1.0.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
@@ -612,9 +617,9 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("c-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(spec.Rel("c-test-1.0.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("d"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("dd"))).To(BeTrue())
@@ -652,9 +657,9 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("extra-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(spec.Rel("extra-layer-0.1.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("extra-layer"))).To(BeTrue())
@@ -714,9 +719,9 @@ var _ = Describe("Compiler", func() {
Expect(len(artifacts[0].Dependencies)).To(Equal(6))
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
Expect(helpers.Untar(spec.Rel("vhba-sys-fs-5.4.2-20190410.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(spec.Rel("vhba-sys-fs-5.4.2-20190410.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("sabayon-build-portage-layer-0.20191126.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("build-layer-0.1.package.tar"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("build-sabayon-overlay-layer-0.20191212.package.tar"))).To(BeTrue())
@@ -749,7 +754,7 @@ var _ = Describe("Compiler", func() {
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
// A deps on B, so A artifacts are here:
@@ -789,7 +794,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(3))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "c", Category: "test", Version: "1.0"})
Expect(err).ToNot(HaveOccurred())
@@ -804,13 +809,13 @@ var _ = Describe("Compiler", func() {
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec))
Expect(errs).To(BeNil())
for _, artifact := range artifacts {
Expect(fileHelper.Exists(artifact.Path)).To(BeTrue())
Expect(helpers.Untar(artifact.Path, tmpdir, false)).ToNot(HaveOccurred())
for _, a := range artifacts {
Expect(fileHelper.Exists(a.Path)).To(BeTrue())
Expect(a.Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
for _, d := range artifact.Dependencies {
for _, d := range a.Dependencies {
Expect(fileHelper.Exists(d.Path)).To(BeTrue())
Expect(helpers.Untar(d.Path, tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(d.Path).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
}
}
@@ -848,10 +853,72 @@ var _ = Describe("Compiler", func() {
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(1))
Expect(len(artifacts[0].Dependencies)).To(Equal(1))
Expect(helpers.Untar(spec.Rel("runtime-layer-0.1.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(filepath.Join(tmpdir, "runtime-layer-0.1.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("bin/busybox"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("var"))).ToNot(BeTrue())
})
It("Pushes final images along", func() {
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
randString := strings.ToLower(helpers.String(10))
imageName := fmt.Sprintf("ttl.sh/%s", randString)
b := sd.NewSimpleDockerBackend(ctx)
err := generalRecipe.Load("../../tests/fixtures/packagelayers")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(b, generalRecipe.GetDatabase(),
options.EnablePushFinalImages, options.ForcePushFinalImages, options.WithFinalRepository(imageName))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
spec2, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "build", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
Expect(spec.GetPackage().GetPath()).ToNot(Equal(""))
tmpdir, err := ioutil.TempDir("", "tree")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(tmpdir) // clean up
spec.SetOutputPath(tmpdir)
artifacts, errs := compiler.CompileParallel(false, compilerspec.NewLuetCompilationspecs(spec, spec2))
Expect(errs).To(BeNil())
Expect(len(artifacts)).To(Equal(2))
//Expect(len(artifacts[0].Dependencies)).To(Equal(1))
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", imageName, artifacts[0].Runtime.ImageID()))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", imageName, artifacts[0].Runtime.GetMetadataFilePath()))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", imageName, artifacts[1].Runtime.ImageID()))).To(BeTrue())
Expect(b.ImageAvailable(fmt.Sprintf("%s:%s", imageName, artifacts[1].Runtime.GetMetadataFilePath()))).To(BeTrue())
img, err := b.ImageReference(fmt.Sprintf("%s:%s", imageName, artifacts[0].Runtime.ImageID()), true)
Expect(err).ToNot(HaveOccurred())
_, path, err := image.Extract(ctx, img, nil)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(path) // clean up
Expect(fileHelper.Exists(filepath.Join(path, "bin/busybox"))).To(BeTrue())
img, err = b.ImageReference(fmt.Sprintf("%s:%s", imageName, artifacts[1].Runtime.GetMetadataFilePath()), true)
Expect(err).ToNot(HaveOccurred())
_, path, err = image.Extract(ctx, img, nil)
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(path) // clean up
meta := filepath.Join(path, artifacts[1].Runtime.GetMetadataFilePath())
Expect(fileHelper.Exists(meta)).To(BeTrue())
d, err := ioutil.ReadFile(meta)
Expect(err).ToNot(HaveOccurred())
Expect(string(d)).To(ContainSubstring(artifacts[1].CompileSpec.GetPackage().GetName()))
})
})
Context("Packages which conents are a package folder", func() {
@@ -896,11 +963,11 @@ var _ = Describe("Compiler", func() {
Expect(len(artifacts)).To(Equal(2))
Expect(len(artifacts[0].Dependencies)).To(Equal(0))
Expect(helpers.Untar(spec.Rel("dironly-test-1.0.package.tar"), tmpdir, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(filepath.Join(tmpdir, "dironly-test-1.0.package.tar")).Unpack(ctx, tmpdir, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec.Rel("test1"))).To(BeTrue())
Expect(fileHelper.Exists(spec.Rel("test2"))).To(BeTrue())
Expect(helpers.Untar(spec2.Rel("dironly_filter-test-1.0.package.tar"), tmpdir2, false)).ToNot(HaveOccurred())
Expect(artifact.NewPackageArtifact(filepath.Join(tmpdir2, "dironly_filter-test-1.0.package.tar")).Unpack(ctx, tmpdir2, false)).ToNot(HaveOccurred())
Expect(fileHelper.Exists(spec2.Rel("test5"))).To(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("test6"))).ToNot(BeTrue())
Expect(fileHelper.Exists(spec2.Rel("artifact42"))).ToNot(BeTrue())
@@ -916,7 +983,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())
@@ -951,7 +1018,7 @@ var _ = Describe("Compiler", func() {
err := generalRecipe.Load("../../tests/fixtures/includeimage")
Expect(err).ToNot(HaveOccurred())
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(context.NewContext()))
specs, err := compiler.FromDatabase(generalRecipe.GetDatabase(), true, "")
Expect(err).ToNot(HaveOccurred())
@@ -970,7 +1037,7 @@ var _ = Describe("Compiler", func() {
Expect(len(generalRecipe.GetDatabase().GetPackages())).To(Equal(2))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(types.NewContext()))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.WithContext(context.NewContext()))
spec, err := compiler.FromPackage(&pkg.DefaultPackage{Name: "runtime", Category: "layer", Version: "0.1"})
Expect(err).ToNot(HaveOccurred())

View File
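
Most of the test churn above is mechanical: helpers.Untar(path, dir, false) becomes an Unpack call on a PackageArtifact. A small sketch of the new pattern (paths illustrative):

package sketch

import (
	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/api/core/types/artifact"
)

// unpackPackageTar is what the specs above now do in place of helpers.Untar:
// wrap the tarball in a PackageArtifact and let it unpack itself.
func unpackPackageTar(tarball, destdir string) error {
	ctx := context.NewContext()
	a := artifact.NewPackageArtifact(tarball)
	// The last argument mirrors the old sameOwner flag of helpers.Untar.
	return a.Unpack(ctx, destdir, false)
}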

@@ -16,18 +16,18 @@
package compiler_test
import (
"github.com/mudler/luet/pkg/api/core/types"
"github.com/mudler/luet/pkg/api/core/context"
. "github.com/mudler/luet/pkg/compiler"
sd "github.com/mudler/luet/pkg/compiler/backend"
"github.com/mudler/luet/pkg/compiler/types/options"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("ImageHashTree", func() {
ctx := types.NewContext()
ctx := context.NewContext()
generalRecipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))
compiler := NewLuetCompiler(sd.NewSimpleDockerBackend(ctx), generalRecipe.GetDatabase(), options.Concurrency(2))
hashtree := NewHashTree(generalRecipe.GetDatabase())

View File

@@ -47,7 +47,13 @@ type Compiler struct {
// TemplatesFolder. should default to tree/templates
TemplatesFolder []string
Context *types.Context
// Tells whether to push final container images after building
PushFinalImages bool
PushFinalImagesForce bool
// Image repository to push to
PushFinalImagesRepository string
Context types.Context
}
func NewDefaultCompiler() *Compiler {
@@ -85,6 +91,25 @@ func WithOptions(opt *Compiler) func(cfg *Compiler) error {
}
}
// WithFinalRepository sets the final repository to which
// images of built artifacts are pushed
func WithFinalRepository(r string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.PushFinalImagesRepository = r
return nil
}
}
func EnablePushFinalImages(cfg *Compiler) error {
cfg.PushFinalImages = true
return nil
}
func ForcePushFinalImages(cfg *Compiler) error {
cfg.PushFinalImagesForce = true
return nil
}
func WithBackendType(r string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.BackendType = r
@@ -113,6 +138,8 @@ func WithPullRepositories(r []string) func(cfg *Compiler) error {
}
}
// WithPushRepository sets the image reference to which
// cache images are pushed
func WithPushRepository(r string) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
if len(cfg.PullImageRepository) == 0 {
@@ -210,7 +237,7 @@ func WithSolverOptions(c types.LuetSolverOptions) func(cfg *Compiler) error {
}
}
func WithContext(c *types.Context) func(cfg *Compiler) error {
func WithContext(c types.Context) func(cfg *Compiler) error {
return func(cfg *Compiler) error {
cfg.Context = c
return nil

View File
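
The new options above are exercised by the "Pushes final images along" spec earlier in this diff. A hedged construction sketch (import paths are the ones used in the tests; the repository string is illustrative):

package sketch

import (
	"github.com/mudler/luet/pkg/api/core/context"
	"github.com/mudler/luet/pkg/compiler"
	"github.com/mudler/luet/pkg/compiler/backend"
	"github.com/mudler/luet/pkg/compiler/types/options"
	pkg "github.com/mudler/luet/pkg/package"
	"github.com/mudler/luet/pkg/tree"
)

// newPushingCompiler builds a compiler that, after each package build, also
// pushes a final image and a metadata image to the given repository.
func newPushingCompiler(repo string) {
	ctx := context.NewContext()
	recipe := tree.NewCompilerRecipe(pkg.NewInMemoryDatabase(false))

	c := compiler.NewLuetCompiler(
		backend.NewSimpleDockerBackend(ctx),
		recipe.GetDatabase(),
		options.WithContext(ctx),
		options.WithFinalRepository(repo), // e.g. "ttl.sh/<random>"
		options.EnablePushFinalImages,     // push <repo>:<ImageID> per package
		options.ForcePushFinalImages,      // re-push even if the tag already exists
	)
	_ = c // use c.FromPackage / c.CompileParallel as in the spec above
}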

@@ -18,7 +18,7 @@ package compilerspec_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -27,7 +27,7 @@ import (
. "github.com/mudler/luet/pkg/compiler"
pkg "github.com/mudler/luet/pkg/package"
"github.com/mudler/luet/pkg/tree"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -16,14 +16,10 @@
package helpers
import (
"archive/tar"
"bytes"
"io"
"os"
"path/filepath"
"github.com/docker/docker/pkg/archive"
"github.com/pkg/errors"
"github.com/moby/moby/pkg/archive"
)
func Tar(src, dest string) error {
@@ -50,168 +46,3 @@ func Tar(src, dest string) error {
}
return err
}
type TarModifierWrapperFunc func(path, dst string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error)
type TarModifierWrapper struct {
DestinationPath string
Modifier TarModifierWrapperFunc
}
func NewTarModifierWrapper(dst string, modifier TarModifierWrapperFunc) *TarModifierWrapper {
return &TarModifierWrapper{
DestinationPath: dst,
Modifier: modifier,
}
}
func (m *TarModifierWrapper) GetModifier() archive.TarModifierFunc {
return func(path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
return m.Modifier(m.DestinationPath, path, header, content)
}
}
func UntarProtect(src, dst string, sameOwner bool, protectedFiles []string, modifier *TarModifierWrapper) error {
var ans error
if len(protectedFiles) <= 0 {
return Untar(src, dst, sameOwner)
}
// POST: we have files to protect. I create a ReplaceFileTarWrapper
in, err := os.Open(src)
if err != nil {
return err
}
defer in.Close()
// Create modifier map
mods := make(map[string]archive.TarModifierFunc)
for _, file := range protectedFiles {
mods[file] = modifier.GetModifier()
}
if sameOwner {
// we do have root permissions, so we can extract keeping the same permissions.
replacerArchive := archive.ReplaceFileTarWrapper(in, mods)
opts := &archive.TarOptions{
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
ans = archive.Untar(replacerArchive, dst, opts)
} else {
ans = unTarIgnoreOwner(dst, in, mods)
}
return ans
}
func unTarIgnoreOwner(dest string, in io.ReadCloser, mods map[string]archive.TarModifierFunc) error {
tr := tar.NewReader(in)
for {
header, err := tr.Next()
var data []byte
var headerReplaced = false
switch {
case err == io.EOF:
goto tarEof
case err != nil:
return err
case header == nil:
continue
}
// the target location where the dir/file should be created
target := filepath.Join(dest, header.Name)
if mods != nil {
modifier, ok := mods[header.Name]
if ok {
header, data, err = modifier(header.Name, header, tr)
if err != nil {
return errors.Wrap(err, "running modifier wrapper")
}
// Override target path
target = filepath.Join(dest, header.Name)
headerReplaced = true
}
}
// Check the file type
switch header.Typeflag {
// if its a dir and it doesn't exist create it
case tar.TypeDir:
if _, err := os.Stat(target); err != nil {
if err := os.MkdirAll(target, 0755); err != nil {
return err
}
}
// handle creation of file
case tar.TypeReg:
f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
if err != nil {
return errors.Wrap(err, "creating destination")
}
// copy over contents
if headerReplaced {
_, err = io.Copy(f, bytes.NewReader(data))
} else {
_, err = io.Copy(f, tr)
}
if err != nil {
return err
}
// manually close here after each file operation; defering would cause each
// file close to wait until all operations have completed.
f.Close()
case tar.TypeSymlink:
source := header.Linkname
err := os.Symlink(source, target)
if err != nil {
return err
}
}
}
tarEof:
return nil
}
// Untar is just a wrapper around the docker functions
func Untar(src, dest string, sameOwner bool) (err error) {
in, err := os.Open(src)
if err != nil {
return errors.Wrap(err, "while opening "+src+" for untar ")
}
defer in.Close()
if sameOwner {
opts := &archive.TarOptions{
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
err = archive.Untar(in, dest, opts)
} else {
err = unTarIgnoreOwner(dest, in, nil)
}
if err != nil {
err = errors.Wrap(err, "while untarring "+src+" into "+dest)
}
return
}

View File

@@ -1,136 +0,0 @@
// Copyright © 2019-2020 Ettore Di Giacinto <mudler@gentoo.org>
// Daniele Rondina <geaaru@sabayonlinux.org>
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this program; if not, see <http://www.gnu.org/licenses/>.
package helpers_test
import (
"archive/tar"
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
"github.com/docker/docker/pkg/archive"
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
)
// Code from moby/moby pkg/archive/archive_test
func prepareUntarSourceDirectory(numberOfFiles int, targetPath string, makeLinks bool) (int, error) {
fileData := []byte("fooo")
for n := 0; n < numberOfFiles; n++ {
fileName := fmt.Sprintf("file-%d", n)
if err := ioutil.WriteFile(filepath.Join(targetPath, fileName), fileData, 0700); err != nil {
return 0, err
}
if makeLinks {
if err := os.Link(filepath.Join(targetPath, fileName), filepath.Join(targetPath, fileName+"-link")); err != nil {
return 0, err
}
}
}
totalSize := numberOfFiles * len(fileData)
return totalSize, nil
}
func tarModifierWrapperFunc(dst, path string, header *tar.Header, content io.Reader) (*tar.Header, []byte, error) {
// If the destination path already exists, the target file name is renamed with a postfix.
var basePath string
// Read data. TODO: We need change archive callback to permit to return a Reader
buffer := bytes.Buffer{}
if content != nil {
if _, err := buffer.ReadFrom(content); err != nil {
return nil, nil, err
}
}
if header != nil {
switch header.Typeflag {
case tar.TypeReg:
basePath = filepath.Base(path)
default:
// Nothing to do. I return original reader
return header, buffer.Bytes(), nil
}
if basePath == "file-0" {
name := filepath.Join(filepath.Join(filepath.Dir(path), fmt.Sprintf("._cfg%04d_%s", 1, basePath)))
return &tar.Header{
Mode: header.Mode,
Typeflag: header.Typeflag,
PAXRecords: header.PAXRecords,
Name: name,
}, buffer.Bytes(), nil
} else if basePath == "file-1" {
return header, []byte("newcontent"), nil
}
// else file not present
}
return header, buffer.Bytes(), nil
}
var _ = Describe("Helpers Archive", func() {
Context("Untar Protect", func() {
It("Detect existing and not-existing files", func() {
archiveSourceDir, err := ioutil.TempDir("", "archive-source")
Expect(err).ToNot(HaveOccurred())
defer os.RemoveAll(archiveSourceDir)
_, err = prepareUntarSourceDirectory(10, archiveSourceDir, false)
Expect(err).ToNot(HaveOccurred())
targetDir, err := ioutil.TempDir("", "archive-target")
Expect(err).ToNot(HaveOccurred())
// defer os.RemoveAll(targetDir)
sourceArchive, err := archive.TarWithOptions(archiveSourceDir, &archive.TarOptions{})
Expect(err).ToNot(HaveOccurred())
defer sourceArchive.Close()
tarModifier := NewTarModifierWrapper(targetDir, tarModifierWrapperFunc)
mods := make(map[string]archive.TarModifierFunc)
mods["file-0"] = tarModifier.GetModifier()
mods["file-1"] = tarModifier.GetModifier()
mods["file-9999"] = tarModifier.GetModifier()
replacerArchive := archive.ReplaceFileTarWrapper(sourceArchive, mods)
//replacerArchive := archive.ReplaceFileTarWrapper(sourceArchive, mods)
opts := &archive.TarOptions{
// NOTE: NoLchown boolean is used for chmod of the symlink
// Probably it's needed set this always to true.
NoLchown: true,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
ContinueOnError: true,
}
err = archive.Untar(replacerArchive, targetDir, opts)
Expect(err).ToNot(HaveOccurred())
Expect(fileHelper.Exists(filepath.Join(targetDir, "._cfg0001_file-0"))).Should(Equal(true))
})
})
})

View File

@@ -18,25 +18,23 @@ package docker
import (
"context"
"encoding/hex"
"fmt"
"os"
"strings"
"github.com/containerd/containerd/images"
luetimages "github.com/mudler/luet/pkg/api/core/image"
luettypes "github.com/mudler/luet/pkg/api/core/types"
fileHelper "github.com/mudler/luet/pkg/helpers/file"
continerdarchive "github.com/containerd/containerd/archive"
"github.com/docker/cli/cli/trust"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/registry"
"github.com/google/go-containerregistry/pkg/authn"
"github.com/google/go-containerregistry/pkg/name"
v1 "github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/mutate"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/mudler/luet/pkg/bus"
"github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/mudler/luet/pkg/api/core/bus"
"github.com/opencontainers/go-digest"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -129,44 +127,8 @@ type UnpackEventData struct {
Dest string
}
// UnarchiveLayers extract layers with archive.Untar from docker instead of containerd
func UnarchiveLayers(temp string, img v1.Image, image, dest string, auth *types.AuthConfig, verify bool) (int64, error) {
layers, err := img.Layers()
if err != nil {
return 0, fmt.Errorf("reading layers from '%s' image failed: %v", image, err)
}
bus.Manager.Publish(bus.EventImagePreUnPack, UnpackEventData{Image: image, Dest: dest})
var size int64
for _, l := range layers {
s, err := l.Size()
if err != nil {
return 0, fmt.Errorf("reading layer size from '%s' image failed: %v", image, err)
}
size += s
layerReader, err := l.Uncompressed()
if err != nil {
return 0, fmt.Errorf("reading uncompressed layer from '%s' image failed: %v", image, err)
}
defer layerReader.Close()
// Unpack the tarfile to the rootfs path.
// FROM: https://godoc.org/github.com/moby/moby/pkg/archive#TarOptions
if err := archive.Untar(layerReader, dest, &archive.TarOptions{
NoLchown: false,
ExcludePatterns: []string{"dev/"}, // prevent 'operation not permitted'
}); err != nil {
return 0, fmt.Errorf("extracting '%s' image to directory %s failed: %v", image, dest, err)
}
}
bus.Manager.Publish(bus.EventImagePostUnPack, UnpackEventData{Image: image, Dest: dest})
return size, nil
}
// DownloadAndExtractDockerImage extracts a container image natively. It supports privileged/unprivileged mode
func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthConfig, verify bool) (*images.Image, error) {
func DownloadAndExtractDockerImage(ctx luettypes.Context, image, dest string, auth *types.AuthConfig, verify bool) (*images.Image, error) {
if verify {
img, err := verifyImage(image, auth)
if err != nil {
@@ -206,13 +168,15 @@ func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthCon
return nil, err
}
reader := mutate.Extract(img)
defer reader.Close()
defer os.RemoveAll(temp)
bus.Manager.Publish(bus.EventImagePreUnPack, UnpackEventData{Image: image, Dest: dest})
c, err := continerdarchive.Apply(context.TODO(), dest, reader)
var c int64
c, _, err = luetimages.ExtractTo(
ctx,
img,
dest,
nil,
)
if err != nil {
return nil, err
}
@@ -230,6 +194,59 @@ func DownloadAndExtractDockerImage(temp, image, dest string, auth *types.AuthCon
}, nil
}
func StripInvalidStringsFromImage(s string) string {
return strings.ReplaceAll(s, "+", "-")
func ExtractDockerImage(ctx luettypes.Context, local, dest string) (*images.Image, error) {
if !fileHelper.Exists(dest) {
if err := os.MkdirAll(dest, os.ModePerm); err != nil {
return nil, errors.Wrapf(err, "cannot create destination directory")
}
}
ref, err := name.ParseReference(local)
if err != nil {
return nil, err
}
img, err := daemon.Image(ref)
if err != nil {
return nil, err
}
m, err := img.Manifest()
if err != nil {
return nil, err
}
mt, err := img.MediaType()
if err != nil {
return nil, err
}
d, err := img.Digest()
if err != nil {
return nil, err
}
var c int64
c, _, err = luetimages.ExtractTo(
ctx,
img,
dest,
nil,
)
if err != nil {
return nil, err
}
bus.Manager.Publish(bus.EventImagePostUnPack, UnpackEventData{Image: local, Dest: dest})
return &images.Image{
Name: local,
Labels: m.Annotations,
Target: specs.Descriptor{
MediaType: string(mt),
Digest: digest.Digest(d.String()),
Size: c,
},
}, nil
}

View File
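
The new ExtractDockerImage above resolves an image from the local Docker daemon and unpacks it through the shared image API, replacing the removed UnarchiveLayers helper. A condensed sketch using only the calls shown in the diff, with the manifest/digest bookkeeping left out:

package sketch

import (
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/mudler/luet/pkg/api/core/context"
	luetimages "github.com/mudler/luet/pkg/api/core/image"
)

// extractLocalImage unpacks an image that already exists in the local Docker
// daemon into dest, without contacting a registry.
func extractLocalImage(local, dest string) error {
	ctx := context.NewContext()

	ref, err := name.ParseReference(local)
	if err != nil {
		return err
	}
	img, err := daemon.Image(ref) // read from the local daemon
	if err != nil {
		return err
	}
	// nil filter: extract every file into dest.
	_, _, err = luetimages.ExtractTo(ctx, img, dest, nil)
	return err
}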

@@ -22,7 +22,7 @@ import (
fileHelper "github.com/mudler/luet/pkg/helpers/file"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -21,7 +21,7 @@ import (
"path/filepath"
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -18,7 +18,7 @@ package helpers_test
import (
"testing"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)

View File

@@ -17,8 +17,9 @@
package helpers
import (
"github.com/asaskevich/govalidator"
"strings"
"github.com/asaskevich/govalidator"
)
func StripRegistryFromImage(image string) string {
@@ -28,3 +29,7 @@ func StripRegistryFromImage(image string) string {
}
return image
}
func SanitizeImageString(s string) string {
return strings.ReplaceAll(s, "+", "-")
}

View File
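
SanitizeImageString exists because '+' is not a valid character in an image tag, while it does appear in package versions and metadata file names that pushFinalArtifact (see the compiler diff above) uses as tags. A tiny usage sketch with an illustrative file name:

package sketch

import "github.com/mudler/luet/pkg/helpers"

// metadataTag turns a metadata file name into a tag that registries accept,
// e.g. "foo-1.0+2.metadata.yaml" -> "foo-1.0-2.metadata.yaml".
func metadataTag(metadataFile string) string {
	return helpers.SanitizeImageString(metadataFile)
}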

@@ -18,11 +18,16 @@ package helpers_test
import (
. "github.com/mudler/luet/pkg/helpers"
. "github.com/onsi/ginkgo"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Helpers", func() {
Context("Image names", func() {
It("strips invalid chars", func() {
Expect(SanitizeImageString("foo+bar")).To(Equal("foo-bar"))
})
})
Context("StripRegistryFromImage", func() {
It("Strips the domain name", func() {
out := StripRegistryFromImage("valid.domain.org/base/image:tag")

View File

@@ -1,12 +0,0 @@
// +build darwin dragonfly freebsd netbsd openbsd
package terminal
import "golang.org/x/sys/unix"
const ioctlReadTermios = unix.TIOCGETA
func isTerminal(fd int) bool {
_, err := unix.IoctlGetTermios(fd, ioctlReadTermios)
return err == nil
}

View File

@@ -1,17 +0,0 @@
// +build !windows,!nacl,!plan9
package terminal
import (
"io"
"os"
)
func IsTerminal(w io.Writer) bool {
switch v := w.(type) {
case *os.File:
return isTerminal(int(v.Fd()))
default:
return false
}
}

View File

@@ -1,11 +0,0 @@
// +build js nacl plan9
package terminal
import (
"io"
)
func IsTerminal(w io.Writer) bool {
return false
}

Some files were not shown because too many files have changed in this diff.