cross build packages

Signed-off-by: Avi Deitcher <avi@deitcher.net>
Avi Deitcher 2021-03-23 12:23:53 +02:00
parent 60919fee96
commit c8ef7d0eb0
20 changed files with 1210 additions and 365 deletions


@ -9,7 +9,7 @@ you should be able to build a LinuxKit package.
All official LinuxKit packages are:
- Enabled with multi-arch manifests to work on multiple architectures.
- Derived from well-known sources for repeatable builds.
- Built with multi-stage builds to minimise their size.
@ -67,33 +67,172 @@ during the build.
### Build Targets
LinuxKit builds packages as docker images. It deposits the built package as a docker image in one or both of two targets:

* the linuxkit cache, which is at `~/.linuxkit/cache/` (configurable)
* the docker image cache (optional)

The package is _always_ built and saved in the linuxkit cache. However, you can _also_ load the package for the current
architecture, if available, into the docker image cache.
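The linuxkit cache is laid out as a standard OCI image layout on disk, so you can inspect it directly. A quick sketch (the entries shown are what an OCI layout typically contains; your cache may hold more):

```bash
ls ~/.linuxkit/cache/
# blobs  index.json  oci-layout
```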
If you want to build images and test and run them _in a standalone_ fashion locally, then you should add the docker image cache.
Otherwise, you don't need anything more than the default linuxkit cache. LinuxKit defaults to building OS images using docker
images from this cache, only looking in the docker cache if instructed to via `linuxkit build --docker`.
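For example, when building an OS image you can tell linuxkit to also consult the docker image cache (the YAML file name below is illustrative):

```bash
linuxkit build --docker myos.yml
```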
In the linuxkit cache, it creates all of the layers, the manifest that can be uploaded
to a registry, and the multi-architecture index. If an image already exists for a different architecture in the cache,
it updates the index to include additional manifests created.
The order of building is as follows:

1. Build the image to the linuxkit cache
1. If `--docker` is provided, load the image into the docker image cache

For example:

```bash
linuxkit pkg build pkg/foo          # builds pkg/foo and places it in the linuxkit cache
linuxkit pkg build pkg/foo --docker # builds pkg/foo and places it in the linuxkit cache and also loads it into docker
```
#### Build Platforms

By default, `linuxkit pkg build` builds for all supported platforms listed in the package's `build.yml`, whose syntax is described
[here][Package source]. If no platforms are provided in `build.yml`, it builds for all platforms that linuxkit supports.
As of this writing, those are:

* `linux/amd64`
* `linux/arm64`
* `linux/s390x`

You can override the target build platforms by passing the `--platforms` option:

```
linuxkit pkg build --platforms <platform1,platform2,...platformN>
```
The options for `--platforms` are identical to those for [docker build](https://docs.docker.com/engine/reference/commandline/build/).
An example is available in the official [buildx documentation](https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images).
Given that this is linuxkit, i.e. all builds are for linux, the `OS` part might seem redundant, and it would be tempting to pass just `--platforms arm64`. However, for complete consistency, the _entire_ platform must be provided, e.g. `--platforms linux/amd64,linux/arm64`.
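For example, to build the hypothetical package `pkg/foo` from the earlier examples for both Intel and Arm:

```bash
linuxkit pkg build --platforms linux/amd64,linux/arm64 pkg/foo
```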
#### Where it builds
You are running the `linuxkit pkg build` command on a single platform, e.g. a local linux machine or cloud instance running on `amd64`, or
a MacBook with Apple Silicon running on `arm64`.
How does linuxkit determine where to build the target images?
linuxkit uses a combination of buildx builders and docker contexts, controlled by your input, to determine where to build.
Upon startup, it looks for a buildx builder named `linuxkit`. If it cannot find that builder, it creates it.
When linuxkit needs to build a package for a particular architecture:
1. If a context for that architecture was provided, use that context.
1. If no context for that architecture was provided, use the default `linuxkit` builder in the default context.
The actual build will then be one of:

1. native, if the provided context has the same architecture as the target build architecture; else
1. cross-build, if the provided context has a different architecture, but the package's `Dockerfile` supports cross-building; else
1. emulated build, using docker's qemu binfmt capabilities (see the check after this list)
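To check which of these will apply, you can list what your builders support and, if necessary, enable emulation. Both commands below are standard buildx/binfmt tooling rather than part of linuxkit itself, shown here as a sketch:

```bash
# list buildx builders and the platforms each one can target
docker buildx ls
# if the desired platform is missing, one common way to install qemu binfmt handlers:
docker run --privileged --rm tonistiigi/binfmt --install all
```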
Cross-building, i.e. building on one platform using that platform's binaries to create outputs for a different platform,
depends on the package's `Dockerfile`. Details are available in the
[official Docker buildx docs](https://docs.docker.com/buildx/working-with-buildx/#build-multi-platform-images).
* if the image is just `FROM something`, then it runs it under qemu using binfmt
* if the image is `FROM --platform=$BUILDPLATFORM something`, then it runs it using the local architecture, invoking cross-builders
Read the official docs to learn more about how to leverage cross-building with buildx.
**Important:** When building, if the local architecture is not one of those being built,
selecting `--docker` to load the images into the docker image cache will result in an error.
You _must_ be building for the local architecture - optionally for others as well - in order to
pass the `--docker` option.
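For example, on an `amd64` machine (again using the hypothetical `pkg/foo`):

```bash
# works: the local platform linux/amd64 is among the build targets
linuxkit pkg build --platforms linux/amd64,linux/arm64 --docker pkg/foo
# fails: --docker was requested, but linux/amd64 is not being built
linuxkit pkg build --platforms linux/arm64 --docker pkg/foo
```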
#### Providing native builder nodes
linuxkit is capable of using native build nodes to do the build, even remotely. To do so, you must:
1. Create a [docker context](https://docs.docker.com/engine/context/working-with-contexts/) that references the build node
1. Tell linuxkit to use that context for that architecture
linuxkit will then use that provided context to create a buildx builder and use it for that architecture.
linuxkit looks for contexts in the following descending order of priority:
1. CLI option `--builders <platform>=<context>,<platform>=<context>`, e.g. `--builders linux/arm64=linuxkit-arm64,linux/amd64=default`
1. Environment variable `LINUXKIT_BUILDERS=<platform>=<context>,<platform>=<context>`, e.g. `LINUXKIT_BUILDERS=linux/arm64=linuxkit-arm64,linux/amd64=default`
1. Existing context named `linuxkit-<platform>`, e.g. `linuxkit-linux-arm64` or `linuxkit-linux-s390x`, with "/" replaced by "-", as "/" is an invalid character.
1. Default builder named `linuxkit`, created by linuxkit, running in the default context
If a context is specified for a particular platform but does not exist, it is treated as a fatal error.
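For example, assuming a remote arm64 machine with docker installed and reachable over SSH (the hostname and user below are hypothetical), you could register it and use it as the arm64 builder:

```bash
# create a docker context that points at the remote arm64 node
docker context create my-remote-arm64 --docker "host=ssh://builder@arm64.example.com"

# tell linuxkit to use that context when building linux/arm64
linuxkit pkg build --platforms linux/arm64 --builders linux/arm64=my-remote-arm64 pkg/foo

# or, equivalently, via the environment variable
LINUXKIT_BUILDERS=linux/arm64=my-remote-arm64 linuxkit pkg build --platforms linux/arm64 pkg/foo
```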
#### Examples
##### Simple build
There are no contexts starting with `linuxkit-`, no environment variable `LINUXKIT_BUILDERS`, no command-line argument `--builders`.
linuxkit will build any requested packages using `docker buildx` on the local platform, with a builder (created, if necessary) named `linuxkit`.
Builds for the local architecture will be native; builds for other platforms will use either cross-building or qemu emulation.
##### Specified target
You create a context named `my-remote-arm64` and then run:
```bash
linuxkit pkg build --platforms=linux/arm64,linux/amd64 --builders linux/arm64=my-remote-arm64
```
linuxkit will build:
* for arm64 using the context `my-remote-arm64`, since you specified in `--builders` to use `my-remote-arm64` for `linux/arm64`
* for amd64 using the context `default` and the `linuxkit` builder, as that is the default fallback
The same would happen if you used `LINUXKIT_BUILDERS=linux/arm64=my-remote-arm64` instead of the `--builders` flag.
##### Named context
You create a context named `linuxkit-linux-arm64` and then run:
```bash
linuxkit pkg build --platforms=linux/arm64,linux/amd64
```
linuxkit will build:
* for arm64 using the context `linuxkit-linux-arm64`, since there is a context with the name `linuxkit-<platform>`, and you did not override it using `--builders` or the environment variable `LINUXKIT_BUILDERS`
* for amd64 using the context `default` and the `linuxkit` builder, as that is the default fallback
##### Combination
You create a context named `linuxkit-linux-arm64`, and another named `my-remote-builder-amd64`, and then run:
```bash
linuxkit pkg build --platforms=linux/arm64,linux/amd64 --builders linux/amd64=my-remote-builder-amd64
```
linuxkit will build:
* for arm64 using the context `linuxkit-linux-arm64`, since there is a context with the name `linuxkit-<platform>`, and you did not override that particular platform using `--builders` or the environment variable `LINUXKIT_BUILDERS`
* for amd64 using the context `my-remote-builder-amd64`, since you specified for that architecture using `--builders`
The same would happen if you used `LINUXKIT_BUILDERS=linux/amd64=my-remote-builder-amd64` instead of the `--builders` flag.
##### Missing context
You do not have a context named `my-remote-arm64`, and run:
```bash
linuxkit pkg build --platforms=linux/arm64 --builders linux/arm64=my-remote-arm64
```
linuxkit will try to build for `linux/arm64` using the context `my-remote-arm64`. Since that context does not exist, you will get an error.
### Build packages as a maintainer
All official LinuxKit packages are multi-arch manifests and most of
@ -103,78 +242,33 @@ them are available for the following platforms:
* `linux/arm64`
* `linux/s390x`
Official images *must* be built for all architectures for which they are available.

Pushing out a package as a maintainer involves two stages:

1. Building and pushing out the platform-specific images
1. Creating and pushing out the multi-arch manifest, a.k.a. OCI image index
The `linuxkit pkg` command contains automation which performs all of the steps.
Note that `«path-to-package»` is the path to the package's source directory
(containing at least `build.yml` and `Dockerfile`). It can be `.` if
the package is in the current directory.
To build and push out the package:

```
linuxkit pkg push «path-to-package»
```

This will do the following:

1. Determine the name and tag for the image as follows:
   * The tag is from the hash of the git tree for that package. You can see it by doing `linuxkit pkg show-tag «path-to-package»`.
   * The name for the image is from `«path-to-package»/build.yml`
   * The organization for the package is given on the command-line, defaulting to `linuxkit`.
1. Build the package in the given path using your local docker instance for all the platforms in `«path-to-package»/build.yml`
1. Save the built image in the linuxkit cache
1. Tag each built image as `«image-name»:«hash»-«arch»`
1. Create a multi-arch manifest called `«image-name»:«hash»` (note no `-«arch»`)
1. Push the manifest and all of the images to the hub

The LinuxKit YAML files should consume the package as the multi-arch manifest:
`linuxkit/<image>:<hash>`.
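If you are also cutting a release, `linuxkit pkg push` accepts a `--release` flag that additionally tags and pushes the package under the given version (the version string below is illustrative):

```bash
linuxkit pkg push --release v0.9 pkg/foo
```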
#### Prerequisites


@ -4,7 +4,6 @@ import (
"fmt"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/match"
"github.com/google/go-containerregistry/pkg/v1/partial"
)
@ -27,8 +26,8 @@ func matchPlatformsOSArch(platforms ...v1.Platform) match.Matcher {
}
}
func findImage(p layout.Path, imageName, architecture string) (v1.Image, error) {
root, err := findRootFromLayout(p, imageName)
func (p *Provider) findImage(imageName, architecture string) (v1.Image, error) {
root, err := p.FindRoot(imageName)
if err != nil {
return nil, err
}
@ -50,12 +49,8 @@ func findImage(p layout.Path, imageName, architecture string) (v1.Image, error)
}
// FindDescriptor get the first descriptor pointed to by the image name
func FindDescriptor(dir string, name string) (*v1.Descriptor, error) {
p, err := Get(dir)
if err != nil {
return nil, err
}
index, err := p.ImageIndex()
func (p *Provider) FindDescriptor(name string) (*v1.Descriptor, error) {
index, err := p.cache.ImageIndex()
// if there is no root index, we are broken
if err != nil {
return nil, fmt.Errorf("invalid image cache: %v", err)

src/cmd/linuxkit/cache/provider.go (new file, 19 lines added)

@ -0,0 +1,19 @@
package cache
import (
"github.com/google/go-containerregistry/pkg/v1/layout"
)
// Provider cache implementation of cacheProvider
type Provider struct {
cache layout.Path
}
// NewProvider creates a new CacheProvider based on the provided directory
func NewProvider(dir string) (*Provider, error) {
p, err := Get(dir)
if err != nil {
return nil, err
}
return &Provider{p}, nil
}


@ -7,11 +7,12 @@ import (
"github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/validate"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
)
// ValidateImage given a reference, validate that it is complete. If not, pull down missing
// components as necessary.
func ValidateImage(ref *reference.Spec, cacheDir, architecture string) (ImageSource, error) {
func (p *Provider) ValidateImage(ref *reference.Spec, architecture string) (lktspec.ImageSource, error) {
var (
imageIndex v1.ImageIndex
image v1.Image
@ -19,7 +20,7 @@ func ValidateImage(ref *reference.Spec, cacheDir, architecture string) (ImageSou
desc *v1.Descriptor
)
// next try the local cache
root, err := FindRoot(cacheDir, imageName)
root, err := p.FindRoot(imageName)
if err == nil {
img, err := root.Image()
if err == nil {
@ -49,9 +50,8 @@ func ValidateImage(ref *reference.Spec, cacheDir, architecture string) (ImageSou
case imageIndex != nil:
// we found a local index, just make sure it is up to date and, if not, download it
if err := validate.Index(imageIndex); err == nil {
return NewSource(
return p.NewSource(
ref,
cacheDir,
architecture,
desc,
), nil
@ -60,9 +60,8 @@ func ValidateImage(ref *reference.Spec, cacheDir, architecture string) (ImageSou
case image != nil:
// we found a local image, just make sure it is up to date
if err := validate.Image(image); err == nil {
return NewSource(
return p.NewSource(
ref,
cacheDir,
architecture,
desc,
), nil


@ -10,16 +10,11 @@ import (
)
// PushWithManifest push an image along with, optionally, a multi-arch index.
func PushWithManifest(dir string, name, suffix string, pushImage, pushManifest bool) error {
func (p *Provider) PushWithManifest(name, suffix string, pushImage, pushManifest bool) error {
var (
err error
options []remote.Option
)
p, err := Get(dir)
if err != nil {
return err
}
imageName := name + suffix
ref, err := namepkg.ParseReference(imageName)
if err != nil {
@ -29,7 +24,7 @@ func PushWithManifest(dir string, name, suffix string, pushImage, pushManifest b
if pushImage {
fmt.Printf("Pushing %s\n", imageName)
// do we even have the given one?
root, err := findRootFromLayout(p, imageName)
root, err := p.FindRoot(imageName)
if err != nil {
return err
}
@ -70,3 +65,40 @@ func PushWithManifest(dir string, name, suffix string, pushImage, pushManifest b
}
return nil
}
// Push pushes an image along with a multi-arch index.
func (p *Provider) Push(name string) error {
var (
err error
options []remote.Option
)
ref, err := namepkg.ParseReference(name)
if err != nil {
return err
}
fmt.Printf("Pushing %s\n", name)
// do we even have the given one?
root, err := p.FindRoot(name)
if err != nil {
return err
}
options = append(options, remote.WithAuthFromKeychain(authn.DefaultKeychain))
img, err1 := root.Image()
ii, err2 := root.ImageIndex()
switch {
case err1 == nil:
if err := remote.Write(ref, img, options...); err != nil {
return err
}
fmt.Printf("Pushed image %s\n", name)
case err2 == nil:
if err := remote.WriteIndex(ref, ii, options...); err != nil {
return err
}
fmt.Printf("Pushed index %s\n", name)
default:
return fmt.Errorf("name %s unknown in cache", name)
}
return nil
}


@ -4,7 +4,6 @@ import (
"fmt"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/match"
"github.com/google/go-containerregistry/pkg/v1/partial"
)
@ -43,17 +42,9 @@ func (l layoutIndex) ImageIndex() (v1.ImageIndex, error) {
// FindRoot find the root ResolvableDescriptor, representing an Image or Index, for
// a given imageName.
func FindRoot(dir, imageName string) (ResolvableDescriptor, error) {
p, err := Get(dir)
if err != nil {
return nil, err
}
return findRootFromLayout(p, imageName)
}
func findRootFromLayout(p layout.Path, imageName string) (ResolvableDescriptor, error) {
func (p *Provider) FindRoot(imageName string) (ResolvableDescriptor, error) {
matcher := match.Name(imageName)
rootIndex, err := p.ImageIndex()
rootIndex, err := p.cache.ImageIndex()
// if there is no root index, we are broken
if err != nil {
return nil, fmt.Errorf("invalid image cache: %v", err)


@ -7,27 +7,26 @@ import (
"github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/layout"
"github.com/google/go-containerregistry/pkg/v1/mutate"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
// ImageSource a source for an image in the OCI distribution cache.
// Implements a moby.ImageSource.
// Implements a spec.ImageSource.
type ImageSource struct {
ref *reference.Spec
cache layout.Path
provider *Provider
architecture string
descriptor *v1.Descriptor
}
// NewSource return an ImageSource for a specific ref and architecture in the given
// cache directory.
func NewSource(ref *reference.Spec, dir string, architecture string, descriptor *v1.Descriptor) ImageSource {
p, _ := Get(dir)
func (p *Provider) NewSource(ref *reference.Spec, architecture string, descriptor *v1.Descriptor) lktspec.ImageSource {
return ImageSource{
ref: ref,
cache: p,
provider: p,
architecture: architecture,
descriptor: descriptor,
}
@ -37,7 +36,7 @@ func NewSource(ref *reference.Spec, dir string, architecture string, descriptor
// architecture, if necessary.
func (c ImageSource) Config() (imagespec.ImageConfig, error) {
imageName := c.ref.String()
image, err := findImage(c.cache, imageName, c.architecture)
image, err := c.provider.findImage(imageName, c.architecture)
if err != nil {
return imagespec.ImageConfig{}, err
}
@ -63,7 +62,7 @@ func (c ImageSource) TarReader() (io.ReadCloser, error) {
imageName := c.ref.String()
// get a reference to the image
image, err := findImage(c.cache, imageName, c.architecture)
image, err := c.provider.findImage(imageName, c.architecture)
if err != nil {
return nil, err
}


@ -7,8 +7,6 @@ import (
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"github.com/containerd/containerd/reference"
@ -20,6 +18,7 @@ import (
"github.com/google/go-containerregistry/pkg/v1/partial"
"github.com/google/go-containerregistry/pkg/v1/remote"
"github.com/google/go-containerregistry/pkg/v1/types"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
log "github.com/sirupsen/logrus"
)
@ -28,20 +27,16 @@ const (
linux = "linux"
)
// ImageWrite takes an image name and pulls it down, writing it locally. It should be
// ImagePull takes an image name and pulls it down, writing it locally. It should be
// efficient and only write missing blobs, based on their content hash.
func ImageWrite(dir string, ref *reference.Spec, trustedRef, architecture string) (ImageSource, error) {
p, err := Get(dir)
if err != nil {
return ImageSource{}, err
}
func (p *Provider) ImagePull(ref *reference.Spec, trustedRef, architecture string) (lktspec.ImageSource, error) {
image := ref.String()
pullImageName := image
remoteOptions := []remote.Option{remote.WithAuthFromKeychain(authn.DefaultKeychain)}
if trustedRef != "" {
pullImageName = trustedRef
}
log.Debugf("ImageWrite to cache %s trusted reference %s", image, pullImageName)
log.Debugf("ImagePull to cache %s trusted reference %s", image, pullImageName)
remoteRef, err := name.ParseReference(pullImageName)
if err != nil {
return ImageSource{}, fmt.Errorf("invalid image name %s: %v", pullImageName, err)
@ -61,7 +56,7 @@ func ImageWrite(dir string, ref *reference.Spec, trustedRef, architecture string
ii, err := desc.ImageIndex()
if err == nil {
log.Debugf("ImageWrite retrieved %s is index, saving", pullImageName)
err = p.ReplaceIndex(ii, match.Name(image), layout.WithAnnotations(annotations))
err = p.cache.ReplaceIndex(ii, match.Name(image), layout.WithAnnotations(annotations))
} else {
var im v1.Image
// try an image
@ -70,27 +65,21 @@ func ImageWrite(dir string, ref *reference.Spec, trustedRef, architecture string
return ImageSource{}, fmt.Errorf("provided image is neither an image nor an index: %s", image)
}
log.Debugf("ImageWrite retrieved %s is image, saving", pullImageName)
err = p.ReplaceImage(im, match.Name(image), layout.WithAnnotations(annotations))
err = p.cache.ReplaceImage(im, match.Name(image), layout.WithAnnotations(annotations))
}
if err != nil {
return ImageSource{}, fmt.Errorf("unable to save image to cache: %v", err)
}
return NewSource(
return p.NewSource(
ref,
dir,
architecture,
&desc.Descriptor,
), nil
}
// ImageWriteTar takes an OCI format image tar stream and writes it locally. It should be
// ImageLoad takes an OCI format image tar stream and writes it locally. It should be
// efficient and only write missing blobs, based on their content hash.
func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Reader) (ImageSource, error) {
p, err := Get(dir)
if err != nil {
return ImageSource{}, err
}
func (p *Provider) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
var (
tr = tar.NewReader(r)
index bytes.Buffer
@ -147,7 +136,7 @@ func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Re
return ImageSource{}, fmt.Errorf("invalid hash filename for %s: %v", filename, err)
}
log.Debugf("writing %s as hash %s", filename, hash)
if err := p.WriteBlob(hash, ioutil.NopCloser(tr)); err != nil {
if err := p.cache.WriteBlob(hash, ioutil.NopCloser(tr)); err != nil {
return ImageSource{}, fmt.Errorf("error reading data for file %s : %v", filename, err)
}
}
@ -164,7 +153,7 @@ func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Re
if len(im.Manifests) != 1 {
return ImageSource{}, fmt.Errorf("currently only support OCI tar stream that has a single image")
}
if err := p.RemoveDescriptors(match.Name(imageName)); err != nil {
if err := p.cache.RemoveDescriptors(match.Name(imageName)); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old descriptors for %s: %v", imageName, err)
}
for _, desc := range im.Manifests {
@ -176,7 +165,7 @@ func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Re
descriptor = &desc
log.Debugf("appending descriptor %#v", descriptor)
if err := p.AppendDescriptor(desc); err != nil {
if err := p.cache.AppendDescriptor(desc); err != nil {
return ImageSource{}, fmt.Errorf("error appending descriptor to layout index: %v", err)
}
}
@ -187,9 +176,8 @@ func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Re
Architecture: architecture,
}
}
return NewSource(
return p.NewSource(
ref,
dir,
architecture,
descriptor,
), nil
@ -199,28 +187,24 @@ func ImageWriteTar(dir string, ref *reference.Spec, architecture string, r io.Re
// does not pull down any images; entirely assumes that the subjects of the manifests are present.
// If a reference to the provided already exists and it is an index, updates the manifests in the
// existing index.
func IndexWrite(dir string, ref *reference.Spec, descriptors ...v1.Descriptor) (ImageSource, error) {
p, err := Get(dir)
if err != nil {
return ImageSource{}, err
}
func (p *Provider) IndexWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (lktspec.ImageSource, error) {
image := ref.String()
log.Debugf("writing an index for %s", image)
ii, err := p.ImageIndex()
ii, err := p.cache.ImageIndex()
if err != nil {
return ImageSource{}, fmt.Errorf("unable to get root index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("unable to get root index: %v", err)
}
images, err := partial.FindImages(ii, match.Name(image))
if err != nil {
return ImageSource{}, fmt.Errorf("error parsing index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("error parsing index: %v", err)
}
if err == nil && len(images) > 0 {
return ImageSource{}, fmt.Errorf("image named %s already exists in cache at %s and is not an index", image, dir)
return ImageSource{}, fmt.Errorf("image named %s already exists in cache and is not an index", image)
}
indexes, err := partial.FindIndexes(ii, match.Name(image))
if err != nil {
return ImageSource{}, fmt.Errorf("error parsing index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("error parsing index: %v", err)
}
var im v1.IndexManifest
// do we update an existing one? Or create a new one?
@ -260,7 +244,7 @@ func IndexWrite(dir string, ref *reference.Spec, descriptors ...v1.Descriptor) (
manifest.Manifests = manifests
im = *manifest
// remove the old index
if err := p.RemoveBlob(oldhash); err != nil {
if err := p.cache.RemoveBlob(oldhash); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old index file: %v", err)
}
@ -282,11 +266,11 @@ func IndexWrite(dir string, ref *reference.Spec, descriptors ...v1.Descriptor) (
if err != nil {
return ImageSource{}, fmt.Errorf("error calculating hash of index json: %v", err)
}
if err := p.WriteBlob(hash, ioutil.NopCloser(bytes.NewReader(b))); err != nil {
if err := p.cache.WriteBlob(hash, ioutil.NopCloser(bytes.NewReader(b))); err != nil {
return ImageSource{}, fmt.Errorf("error writing new index to json: %v", err)
}
// finally update the descriptor in the root
if err := p.RemoveDescriptors(match.Name(image)); err != nil {
if err := p.cache.RemoveDescriptors(match.Name(image)); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old descriptor from index.json: %v", err)
}
desc := v1.Descriptor{
@ -297,41 +281,36 @@ func IndexWrite(dir string, ref *reference.Spec, descriptors ...v1.Descriptor) (
imagespec.AnnotationRefName: image,
},
}
if err := p.AppendDescriptor(desc); err != nil {
if err := p.cache.AppendDescriptor(desc); err != nil {
return ImageSource{}, fmt.Errorf("unable to append new descriptor to index.json: %v", err)
}
return NewSource(
return p.NewSource(
ref,
dir,
"",
&desc,
), nil
}
// DescriptorWrite writes a name for a given descriptor
func DescriptorWrite(dir string, ref *reference.Spec, descriptors ...v1.Descriptor) (ImageSource, error) {
p, err := Get(dir)
if err != nil {
return ImageSource{}, err
}
func (p *Provider) DescriptorWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (lktspec.ImageSource, error) {
image := ref.String()
log.Debugf("writing descriptors for image %s: %v", image, descriptors)
ii, err := p.ImageIndex()
ii, err := p.cache.ImageIndex()
if err != nil {
return ImageSource{}, fmt.Errorf("unable to get root index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("unable to get root index: %v", err)
}
images, err := partial.FindImages(ii, match.Name(image))
if err != nil {
return ImageSource{}, fmt.Errorf("error parsing index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("error parsing index: %v", err)
}
if err == nil && len(images) > 0 {
return ImageSource{}, fmt.Errorf("image named %s already exists in cache at %s and is not an index", image, dir)
return ImageSource{}, fmt.Errorf("image named %s already exists in cache and is not an index", image)
}
indexes, err := partial.FindIndexes(ii, match.Name(image))
if err != nil {
return ImageSource{}, fmt.Errorf("error parsing index at %s: %v", dir, err)
return ImageSource{}, fmt.Errorf("error parsing index: %v", err)
}
var im v1.IndexManifest
// do we update an existing one? Or create a new one?
@ -368,13 +347,9 @@ func DescriptorWrite(dir string, ref *reference.Spec, descriptors ...v1.Descript
}
im.Manifests = manifests
// remove the old index - unfortunately, there is no "RemoveBlob" option in the library
// once https://github.com/google/go-containerregistry/pull/936/ is in, we can get rid of some of this
oldfile := path.Join(dir, oldhash.Algorithm, oldhash.Hex)
if err := os.RemoveAll(oldfile); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old file %s: %v", oldfile, err)
if err := p.cache.RemoveBlob(oldhash); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old index blob: %v", err)
}
} else {
// we did not have one, so create an index, store it, update the root index.json, and return
im = v1.IndexManifest{
@ -393,11 +368,11 @@ func DescriptorWrite(dir string, ref *reference.Spec, descriptors ...v1.Descript
if err != nil {
return ImageSource{}, fmt.Errorf("error calculating hash of index json: %v", err)
}
if err := p.WriteBlob(hash, ioutil.NopCloser(bytes.NewReader(b))); err != nil {
if err := p.cache.WriteBlob(hash, ioutil.NopCloser(bytes.NewReader(b))); err != nil {
return ImageSource{}, fmt.Errorf("error writing new index to json: %v", err)
}
// finally update the descriptor in the root
if err := p.RemoveDescriptors(match.Name(image)); err != nil {
if err := p.cache.RemoveDescriptors(match.Name(image)); err != nil {
return ImageSource{}, fmt.Errorf("unable to remove old descriptor from index.json: %v", err)
}
desc := v1.Descriptor{
@ -408,13 +383,12 @@ func DescriptorWrite(dir string, ref *reference.Spec, descriptors ...v1.Descript
imagespec.AnnotationRefName: image,
},
}
if err := p.AppendDescriptor(desc); err != nil {
if err := p.cache.AppendDescriptor(desc); err != nil {
return ImageSource{}, fmt.Errorf("unable to append new descriptor to index.json: %v", err)
}
return NewSource(
return p.NewSource(
ref,
dir,
"",
&desc,
), nil


@ -6,6 +6,7 @@ import (
"io"
"github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/v1"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
@ -75,3 +76,8 @@ func (d ImageSource) TarReader() (io.ReadCloser, error) {
},
}, nil
}
// Descriptor return the descriptor of the image.
func (d ImageSource) Descriptor() *v1.Descriptor {
return nil
}


@ -12,7 +12,6 @@ import (
"strings"
"github.com/containerd/containerd/reference"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/runtime-spec/specs-go"
log "github.com/sirupsen/logrus"
)
@ -24,13 +23,6 @@ type tarWriter interface {
WriteHeader(hdr *tar.Header) error
}
// ImageSource interface to an image. It can have its config read, and a its containers
// can be read via an io.ReadCloser tar stream.
type ImageSource interface {
Config() (imagespec.ImageConfig, error)
TarReader() (io.ReadCloser, error)
}
// This uses Docker to convert a Docker image into a tarball. It would be an improvement if we
// used the containerd libraries to do this instead locally direct from a local image
// cache as it would be much simpler.


@ -4,13 +4,14 @@ import (
"github.com/containerd/containerd/reference"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/cache"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/docker"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
)
// imagePull pull an image from the OCI registry to the cache.
// If the image root already is in the cache, use it, unless
// the option pull is set to true.
// if alwaysPull, then do not even bother reading locally
func imagePull(ref *reference.Spec, alwaysPull bool, cacheDir string, dockerCache bool, architecture string) (ImageSource, error) {
func imagePull(ref *reference.Spec, alwaysPull bool, cacheDir string, dockerCache bool, architecture string) (lktspec.ImageSource, error) {
// several possibilities:
// - alwaysPull: try to pull it down from the registry to linuxkit cache, then fail
// - !alwaysPull && dockerCache: try to read it from docker, then try linuxkit cache, then try to pull from registry, then fail
@ -25,7 +26,11 @@ func imagePull(ref *reference.Spec, alwaysPull bool, cacheDir string, dockerCach
// next try the local cache
if !alwaysPull {
if image, err := cache.ValidateImage(ref, cacheDir, architecture); err == nil {
c, err := cache.NewProvider(cacheDir)
if err != nil {
return nil, err
}
if image, err := c.ValidateImage(ref, architecture); err == nil {
return image, nil
}
}
@ -35,7 +40,11 @@ func imagePull(ref *reference.Spec, alwaysPull bool, cacheDir string, dockerCach
}
// imageLayoutWrite takes an image name and pulls it down, writing it locally
func imageLayoutWrite(cacheDir string, ref *reference.Spec, architecture string) (ImageSource, error) {
func imageLayoutWrite(cacheDir string, ref *reference.Spec, architecture string) (lktspec.ImageSource, error) {
image := ref.String()
return cache.ImageWrite(cacheDir, ref, image, architecture)
c, err := cache.NewProvider(cacheDir)
if err != nil {
return nil, err
}
return c.ImagePull(ref, image, architecture)
}


@ -5,8 +5,14 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/pkglib"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
buildersEnvVar = "LINUXKIT_BUILDERS"
)
func pkgBuild(args []string) {
@ -21,6 +27,8 @@ func pkgBuild(args []string) {
force := flags.Bool("force", false, "Force rebuild")
docker := flags.Bool("docker", false, "Store the built image in the docker image cache instead of the default linuxkit cache")
platforms := flags.String("platforms", "", "Which platforms to build for, defaults to all of those for which the package can be built")
builders := flags.String("builders", "", "Which builders to use for which platforms, e.g. linux/arm64=docker-context-arm64, overrides defaults and environment variables, see https://github.com/linuxkit/linuxkit/blob/master/docs/packages.md#Providing-native-builder-nodes")
buildCacheDir := flags.String("cache", defaultLinuxkitCache(), "Directory for storing built image, incompatible with --docker")
p, err := pkglib.NewFromCLI(flags, args...)
@ -39,8 +47,60 @@ func pkgBuild(args []string) {
if *docker {
opts = append(opts, pkglib.WithBuildTargetDockerCache())
}
// if platforms requested is blank, use all from the config
var plats []imagespec.Platform
if *platforms == "" {
for _, a := range p.Arches() {
plats = append(plats, imagespec.Platform{OS: "linux", Architecture: a})
}
} else {
for _, p := range strings.Split(*platforms, ",") {
parts := strings.SplitN(p, "/", 2)
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
fmt.Fprintf(os.Stderr, "invalid target platform specification '%s'\n", p)
os.Exit(1)
}
plats = append(plats, imagespec.Platform{OS: parts[0], Architecture: parts[1]})
}
}
opts = append(opts, pkglib.WithBuildPlatforms(plats...))
// build the builders map
buildersMap := map[string]string{}
// look for builders env var
buildersMap, err = buildPlatformBuildersMap(os.Getenv(buildersEnvVar), buildersMap)
if err != nil {
fmt.Fprintf(os.Stderr, "%s in environment variable %s\n", err.Error(), buildersEnvVar)
os.Exit(1)
}
// any CLI options override env var
buildersMap, err = buildPlatformBuildersMap(*builders, buildersMap)
if err != nil {
fmt.Fprintf(os.Stderr, "%s in --builders flag\n", err.Error())
os.Exit(1)
}
opts = append(opts, pkglib.WithBuildBuilders(buildersMap))
if err := p.Build(opts...); err != nil {
fmt.Fprintf(os.Stderr, "%v\n", err)
os.Exit(1)
}
}
func buildPlatformBuildersMap(inputs string, existing map[string]string) (map[string]string, error) {
if inputs == "" {
return existing, nil
}
for _, platformBuilder := range strings.Split(inputs, ",") {
parts := strings.SplitN(platformBuilder, "=", 2)
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
return existing, fmt.Errorf("invalid platform=builder specification '%s'", platformBuilder)
}
platform, builder := parts[0], parts[1]
parts = strings.SplitN(platform, "/", 2)
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
return existing, fmt.Errorf("invalid platform specification '%s'", platform)
}
existing[platform] = builder
}
return existing, nil
}


@ -5,8 +5,10 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/pkglib"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
func pkgPush(args []string) {
@ -22,6 +24,9 @@ func pkgPush(args []string) {
force := flags.Bool("force", false, "Force rebuild")
release := flags.String("release", "", "Release the given version")
nobuild := flags.Bool("nobuild", false, "Skip the build")
docker := flags.Bool("docker", false, "Store the built image in the docker image cache instead of the default linuxkit cache")
platforms := flags.String("platforms", "", "Which platforms to build for, defaults to all of those for which the package can be built")
builders := flags.String("builders", "", "Which builders to use for which platforms, e.g. linux/arm64=docker-context-arm64, overrides defaults and environment variables, see https://github.com/linuxkit/linuxkit/blob/master/docs/packages.md#Providing-native-builder-nodes")
manifest := flags.Bool("manifest", true, "Create and push multi-arch manifest")
image := flags.Bool("image", true, "Build and push image for the current platform")
buildCacheDir := flags.String("cache", defaultLinuxkitCache(), "Directory for storing built image, incompatible with --docker")
@ -50,6 +55,41 @@ func pkgPush(args []string) {
opts = append(opts, pkglib.WithBuildImage())
}
opts = append(opts, pkglib.WithBuildCacheDir(*buildCacheDir))
if *docker {
opts = append(opts, pkglib.WithBuildTargetDockerCache())
}
// if platforms requested is blank, use all from the config
var plats []imagespec.Platform
if *platforms == "" {
for _, a := range p.Arches() {
plats = append(plats, imagespec.Platform{OS: "linux", Architecture: a})
}
} else {
for _, p := range strings.Split(*platforms, ",") {
parts := strings.SplitN(p, "/", 2)
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
fmt.Fprintf(os.Stderr, "invalid target platform specification '%s'\n", p)
os.Exit(1)
}
plats = append(plats, imagespec.Platform{OS: parts[0], Architecture: parts[1]})
}
}
opts = append(opts, pkglib.WithBuildPlatforms(plats...))
// build the builders map
buildersMap := map[string]string{}
// look for builders env var
buildersMap, err = buildPlatformBuildersMap(os.Getenv(buildersEnvVar), buildersMap)
if err != nil {
fmt.Fprintf(os.Stderr, "%s in environment variable %s\n", err.Error(), buildersEnvVar)
os.Exit(1)
}
// any CLI options override env var
buildersMap, err = buildPlatformBuildersMap(*builders, buildersMap)
if err != nil {
fmt.Fprintf(os.Stderr, "%s in --builders flag\n", err.Error())
os.Exit(1)
}
opts = append(opts, pkglib.WithBuildBuilders(buildersMap))
if *nobuild {
fmt.Printf("Pushing %q without building\n", p.Tag())


@ -9,11 +9,14 @@ import (
"os"
"path/filepath"
"runtime"
"strings"
"github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/cache"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/version"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
@ -23,14 +26,19 @@ const (
)
type buildOpts struct {
skipBuild bool
force bool
push bool
release string
manifest bool
image bool
targetDocker bool
cache string
skipBuild bool
force bool
push bool
release string
manifest bool
image bool
targetDocker bool
cacheDir string
cacheProvider lktspec.CacheProvider
platforms []imagespec.Platform
builders map[string]string
runner dockerRunner
writer io.Writer
}
// BuildOpt allows callers to specify options to Build
@ -95,7 +103,47 @@ func WithBuildTargetDockerCache() BuildOpt {
// WithBuildCacheDir provide a build cache directory to use
func WithBuildCacheDir(dir string) BuildOpt {
return func(bo *buildOpts) error {
bo.cache = dir
bo.cacheDir = dir
return nil
}
}
// WithBuildPlatforms which platforms to build for
func WithBuildPlatforms(platforms ...imagespec.Platform) BuildOpt {
return func(bo *buildOpts) error {
bo.platforms = platforms
return nil
}
}
// WithBuildBuilders which builders, as named contexts per platform, to use
func WithBuildBuilders(builders map[string]string) BuildOpt {
return func(bo *buildOpts) error {
bo.builders = builders
return nil
}
}
// WithBuildDocker provides a docker runner to use. If nil, defaults to the current platform
func WithBuildDocker(runner dockerRunner) BuildOpt {
return func(bo *buildOpts) error {
bo.runner = runner
return nil
}
}
// WithBuildCacheProvider provides a cacheProvider to use. If nil, defaults to the one shipped with linuxkit
func WithBuildCacheProvider(c lktspec.CacheProvider) BuildOpt {
return func(bo *buildOpts) error {
bo.cacheProvider = c
return nil
}
}
// WithBuildOutputWriter set the output writer for messages. If nil, defaults to stdout
func WithBuildOutputWriter(w io.Writer) BuildOpt {
return func(bo *buildOpts) error {
bo.writer = w
return nil
}
}
@ -109,31 +157,44 @@ func (p Pkg) Build(bos ...BuildOpt) error {
}
}
arch := runtime.GOARCH
if !p.archSupported(arch) {
fmt.Printf("Arch %s not supported by this package, skipping build.\n", arch)
return nil
writer := bo.writer
if writer == nil {
writer = os.Stdout
}
arch := runtime.GOARCH
ref, err := reference.Parse(p.Tag())
if err != nil {
return fmt.Errorf("could not resolve references for image %s: %v", p.Tag(), err)
}
for _, platform := range bo.platforms {
if !p.archSupported(platform.Architecture) {
return fmt.Errorf("arch %s not supported by this package, skipping build", platform.Architecture)
}
}
if err := p.cleanForBuild(); err != nil {
return err
}
var (
desc *v1.Descriptor
suffix string
)
switch arch {
case "amd64", "arm64", "s390x":
suffix = "-" + arch
default:
return fmt.Errorf("Unknown arch %q", arch)
// did we have the build cache dir provided?
if bo.cacheDir == "" {
return errors.New("must provide linuxkit build cache directory")
}
// did we have the build cache dir provided? Yes, there is a default, but that is at the CLI level,
// and expected to be provided at this function level
if bo.cache == "" && !bo.targetDocker {
return errors.New("must provide linuxkit build cache directory when not targeting docker")
// if targeting docker, be sure local arch is a build target
if bo.targetDocker {
var found bool
for _, platform := range bo.platforms {
if platform.Architecture == arch {
found = true
break
}
}
if !found {
return fmt.Errorf("must build for local platform 'linux/%s' when targeting docker", arch)
}
}
if p.git != nil && bo.push && bo.release == "" {
@ -148,42 +209,36 @@ func (p Pkg) Build(bos ...BuildOpt) error {
return fmt.Errorf("Cannot release %q if not pushing", bo.release)
}
d := newDockerRunner(p.cache)
d := bo.runner
if d == nil {
d = newDockerRunner(p.cache)
}
c := bo.cacheProvider
if c == nil {
c, err = cache.NewProvider(bo.cacheDir)
if err != nil {
return err
}
}
if err := d.buildkitCheck(); err != nil {
return fmt.Errorf("buildkit not supported, check docker version: %v", err)
}
if !bo.force {
if bo.targetDocker {
ok, err := d.pull(p.Tag())
// any error returns
if err != nil {
return err
}
// if we already have it, do not bother building any more
if ok {
return nil
}
} else {
ref, err := reference.Parse(p.Tag())
if err != nil {
return fmt.Errorf("could not resolve references for image %s: %v", p.Tag(), err)
}
if _, err := cache.ImageWrite(bo.cache, &ref, "", arch); err == nil {
fmt.Printf("image already found %s", ref)
return nil
}
if _, err := c.ImagePull(&ref, "", arch); err == nil {
fmt.Fprintf(writer, "image already found %s", ref)
return nil
}
fmt.Println("No image pulled, continuing with build")
fmt.Fprintln(writer, "No image pulled, continuing with build")
}
if bo.image && !bo.skipBuild {
var args []string
if err := p.dockerDepends.Do(d); err != nil {
return err
}
var (
args []string
descs []v1.Descriptor
)
if p.git != nil && p.gitRepo != "" {
args = append(args, "--label", "org.opencontainers.image.source="+p.gitRepo)
@ -205,111 +260,70 @@ func (p Pkg) Build(bos ...BuildOpt) error {
if err != nil {
return err
}
args = append(args, "--label=org.mobyproject.config="+string(b))
}
args = append(args, "--label=org.mobyproject.linuxkit.version="+version.Version)
args = append(args, "--label=org.mobyproject.linuxkit.revision="+version.GitCommit)
d.ctx = &buildCtx{sources: p.sources}
// set the target
var (
buildxOutput string
stdout io.WriteCloser
tag = p.Tag()
tagArch = tag + suffix
eg errgroup.Group
stdoutCloser = func() {
if stdout != nil {
stdout.Close()
}
// build for each arch and save in the linuxkit cache
for _, platform := range bo.platforms {
desc, err := p.buildArch(d, c, platform.Architecture, args, writer, bo)
if err != nil {
return fmt.Errorf("error building for arch %s: %v", platform.Architecture, err)
}
)
ref, err := reference.Parse(tag)
if desc == nil {
return fmt.Errorf("no valid descriptor returned for image for arch %s", platform.Architecture)
}
descs = append(descs, *desc)
}
// after build is done:
// - create multi-arch manifest
// - potentially push
// - potentially load into docker
// - potentially create a release, including push and load into docker
// create a multi-arch index
if _, err := c.IndexWrite(&ref, descs...); err != nil {
return err
}
}
// get descriptor for root of manifest
desc, err := c.FindDescriptor(p.Tag())
if err != nil {
return err
}
// if requested docker, load the image up
if bo.targetDocker {
cacheSource := c.NewSource(&ref, arch, desc)
reader, err := cacheSource.TarReader()
if err != nil {
return fmt.Errorf("could not resolve references for image %s: %v", tagArch, err)
return fmt.Errorf("unable to get reader from cache: %v", err)
}
if bo.targetDocker {
buildxOutput = "type=docker"
stdout = nil
// there is no gofunc processing for simple output to docker
} else {
// we are writing to local, so we need to catch the tar output stream and place the right files in the right place
buildxOutput = "type=oci"
piper, pipew := io.Pipe()
stdout = pipew
eg.Go(func() error {
source, err := cache.ImageWriteTar(bo.cache, &ref, arch, piper)
// send the error down the channel
if err != nil {
fmt.Printf("cache.ImageWriteTar goroutine ended with error: %v\n", err)
}
desc = source.Descriptor()
piper.Close()
return err
})
}
args = append(args, fmt.Sprintf("--output=%s", buildxOutput))
if err := d.build(tagArch, p.path, stdout, args...); err != nil {
stdoutCloser()
if err := d.load(reader); err != nil {
return err
}
stdoutCloser()
}
// wait for the processor to finish
if err := eg.Wait(); err != nil {
return err
}
// create the arch-less image
switch {
case bo.targetDocker:
// if in docker, use a tag
if err := d.tag(tagArch, tag); err != nil {
return err
}
case desc == nil:
return errors.New("no valid descriptor returned for image")
default:
// if in the proper linuxkit cache, create a multi-arch index
if _, err := cache.IndexWrite(bo.cache, &ref, *desc); err != nil {
return err
}
}
if !bo.push {
fmt.Printf("Build complete, not pushing, all done.\n")
return nil
}
if !bo.push {
fmt.Fprintf(writer, "Build complete, not pushing, all done.\n")
return nil
}
if p.dirty {
return fmt.Errorf("build complete, refusing to push dirty package")
}
// If !bo.force then could do a `docker pull` here, to check
// if there is something on hub so as not to override.
// TODO(ijc) old make based system did this. Not sure if it
// matters given we do either pull or build above in the
// !force case.
if bo.targetDocker {
if err := d.pushWithManifest(p.Tag(), suffix, bo.image, bo.manifest); err != nil {
return err
}
} else {
if err := cache.PushWithManifest(bo.cache, p.Tag(), suffix, bo.image, bo.manifest); err != nil {
return err
}
// push the manifest
if err := c.Push(p.Tag()); err != nil {
return err
}
if bo.release == "" {
fmt.Printf("Build and push complete, not releasing, all done.\n")
fmt.Fprintf(writer, "Build and push complete, not releasing, all done.\n")
return nil
}
@ -318,39 +332,123 @@ func (p Pkg) Build(bos ...BuildOpt) error {
return err
}
if bo.targetDocker {
if err := d.tag(p.Tag()+suffix, relTag+suffix); err != nil {
return err
}
ref, err = reference.Parse(relTag)
if err != nil {
return err
}
if _, err := c.DescriptorWrite(&ref, *desc); err != nil {
return err
}
if err := c.Push(relTag); err != nil {
return err
}
if err := d.pushWithManifest(relTag, suffix, bo.image, bo.manifest); err != nil {
return err
}
} else {
// must make sure descriptor is available
if desc == nil {
desc, err = cache.FindDescriptor(bo.cache, p.Tag()+suffix)
if err != nil {
return err
}
}
ref, err := reference.Parse(relTag + suffix)
if err != nil {
return err
}
if _, err := cache.DescriptorWrite(bo.cache, &ref, *desc); err != nil {
return err
}
if err := cache.PushWithManifest(bo.cache, relTag, suffix, bo.image, bo.manifest); err != nil {
// tag in docker, if requested
if bo.targetDocker {
if err := d.tag(p.Tag(), relTag); err != nil {
return err
}
}
fmt.Printf("Build, push and release of %q complete, all done.\n", bo.release)
fmt.Fprintf(writer, "Build, push and release of %q complete, all done.\n", bo.release)
return nil
}
// buildArch builds the package for a single arch
func (p Pkg) buildArch(d dockerRunner, c lktspec.CacheProvider, arch string, args []string, writer io.Writer, bo buildOpts) (*v1.Descriptor, error) {
var (
desc *v1.Descriptor
tagArch string
tag = p.Tag()
)
switch arch {
case "amd64", "arm64", "s390x":
tagArch = tag + "-" + arch
default:
return nil, fmt.Errorf("Unknown arch %q", arch)
}
fmt.Fprintf(writer, "Building for arch %s as %s\n", arch, tagArch)
if !bo.force {
ref, err := reference.Parse(p.Tag())
if err != nil {
return nil, fmt.Errorf("could not resolve references for image %s: %v", p.Tag(), err)
}
if _, err := c.ImagePull(&ref, "", arch); err == nil {
fmt.Fprintf(writer, "image already found %s for arch %s", ref, arch)
desc, err := c.FindDescriptor(ref.String())
if err != nil {
return nil, fmt.Errorf("could not find root descriptor for %s: %v", ref, err)
}
return desc, nil
}
fmt.Fprintf(writer, "No image pulled for arch %s, continuing with build\n", arch)
}
if err := p.dockerDepends.Do(d); err != nil {
return nil, err
}
// find the desired builder
builderName := getBuilderForPlatform(arch, bo.builders)
d.setBuildCtx(&buildCtx{sources: p.sources})
// set the target
var (
buildxOutput string
stdout io.WriteCloser
eg errgroup.Group
stdoutCloser = func() {
if stdout != nil {
stdout.Close()
}
}
)
ref, err := reference.Parse(tag)
if err != nil {
return nil, fmt.Errorf("could not resolve references for image %s: %v", tagArch, err)
}
// we are writing to local, so we need to catch the tar output stream and place the right files in the right place
buildxOutput = "type=oci"
piper, pipew := io.Pipe()
stdout = pipew
eg.Go(func() error {
source, err := c.ImageLoad(&ref, arch, piper)
// send the error down the channel
if err != nil {
fmt.Fprintf(stdout, "cache.ImageLoad goroutine ended with error: %v\n", err)
} else {
desc = source.Descriptor()
}
piper.Close()
return err
})
args = append(args, fmt.Sprintf("--output=%s", buildxOutput))
platform := fmt.Sprintf("linux/%s", arch)
archArgs := append(args, "--platform")
archArgs = append(archArgs, platform)
if err := d.build(tagArch, p.path, builderName, platform, stdout, archArgs...); err != nil {
stdoutCloser()
if strings.Contains(err.Error(), "executor failed running [/dev/.buildkit_qemu_emulator") {
return nil, fmt.Errorf("buildkit was unable to emulate %s. check binfmt has been set up and works for this platform: %v", platform, err)
}
return nil, err
}
stdoutCloser()
// wait for the processor to finish
if err := eg.Wait(); err != nil {
return nil, err
}
return desc, nil
}
type buildCtx struct {
sources []pkgSource
}
@ -419,3 +517,9 @@ func (c *buildCtx) Copy(w io.WriteCloser) error {
return nil
}
// getBuilderForPlatform given an arch, find the context for the desired builder.
// If it does not exist, return "".
func getBuilderForPlatform(arch string, builders map[string]string) string {
return builders[fmt.Sprintf("linux/%s", arch)]
}


@ -0,0 +1,367 @@
package pkglib
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"runtime"
"strings"
"testing"
"github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/v1"
"github.com/google/go-containerregistry/pkg/v1/types"
lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
type dockerMocker struct {
supportBuildKit bool
images map[string][]byte
enableTag bool
enableBuild bool
enablePull bool
ctx buildContext
fixedReadName string
builds []buildLog
}
type buildLog struct {
tag string
pkg string
dockerContext string
platform string
opts []string
}
func (d *dockerMocker) buildkitCheck() error {
if d.supportBuildKit {
return nil
}
return errors.New("buildkit unsupported")
}
func (d *dockerMocker) tag(ref, tag string) error {
if !d.enableTag {
return errors.New("tags not allowed")
}
d.images[tag] = d.images[ref]
return nil
}
func (d *dockerMocker) build(tag, pkg, dockerContext, platform string, stdout io.Writer, opts ...string) error {
if !d.enableBuild {
return errors.New("build disabled")
}
d.builds = append(d.builds, buildLog{tag, pkg, dockerContext, platform, opts})
return nil
}
func (d *dockerMocker) save(tgt string, refs ...string) error {
var b []byte
for _, ref := range refs {
if data, ok := d.images[ref]; ok {
b = append(b, data...)
continue
}
return fmt.Errorf("do not have image %s", ref)
}
return ioutil.WriteFile(tgt, b, 0666)
}
func (d *dockerMocker) load(src io.Reader) error {
b, err := ioutil.ReadAll(src)
if err != nil {
return err
}
d.images[d.fixedReadName] = b
return nil
}
func (d *dockerMocker) pull(img string) (bool, error) {
if d.enablePull {
b := make([]byte, 256)
rand.Read(b)
d.images[img] = b
return true, nil
}
return false, errors.New("failed to pull")
}
func (d *dockerMocker) setBuildCtx(ctx buildContext) {
d.ctx = ctx
}
type cacheMocker struct {
enablePush bool
enabledDescriptorWrite bool
enableImagePull bool
enableImageLoad bool
enableIndexWrite bool
images map[string][]v1.Descriptor
hashes map[string][]byte
}
func (c *cacheMocker) ImagePull(ref *reference.Spec, trustedRef, architecture string) (lktspec.ImageSource, error) {
if !c.enableImagePull {
return nil, errors.New("ImagePull disabled")
}
// make some random data for a layer
b := make([]byte, 256)
rand.Read(b)
return c.imageWriteStream(ref, architecture, bytes.NewReader(b))
}
func (c *cacheMocker) ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
if !c.enableImageLoad {
return nil, errors.New("ImageLoad disabled")
}
return c.imageWriteStream(ref, architecture, r)
}
func (c *cacheMocker) imageWriteStream(ref *reference.Spec, architecture string, r io.Reader) (lktspec.ImageSource, error) {
image := ref.String()
// read the layer data from the provided reader
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("error reading data: %v", err)
}
hash, size, err := v1.SHA256(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("error calculating hash of layer: %v", err)
}
c.assignHash(hash.String(), b)
im := v1.Manifest{
MediaType: types.OCIManifestSchema1,
Layers: []v1.Descriptor{
{MediaType: types.OCILayer, Size: size, Digest: hash},
},
SchemaVersion: 2,
}
// marshal the manifest and compute its descriptor
b, err = json.Marshal(im)
if err != nil {
return nil, fmt.Errorf("unable to marshal new image to json: %v", err)
}
hash, size, err = v1.SHA256(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("error calculating hash of index json: %v", err)
}
c.assignHash(hash.String(), b)
desc := v1.Descriptor{
MediaType: types.OCIManifestSchema1,
Size: size,
Digest: hash,
Annotations: map[string]string{
imagespec.AnnotationRefName: image,
},
}
c.appendImage(image, desc)
return c.NewSource(ref, "", &desc), nil
}
func (c *cacheMocker) IndexWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (lktspec.ImageSource, error) {
if !c.enableIndexWrite {
return nil, errors.New("disabled")
}
image := ref.String()
im := v1.IndexManifest{
MediaType: types.OCIImageIndex,
Manifests: descriptors,
SchemaVersion: 2,
}
// marshal the new index and compute its descriptor
b, err := json.Marshal(im)
if err != nil {
return nil, fmt.Errorf("unable to marshal new index to json: %v", err)
}
hash, size, err := v1.SHA256(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("error calculating hash of index json: %v", err)
}
c.assignHash(hash.String(), b)
desc := v1.Descriptor{
MediaType: types.OCIImageIndex,
Size: size,
Digest: hash,
Annotations: map[string]string{
imagespec.AnnotationRefName: image,
},
}
c.appendImage(image, desc)
return c.NewSource(ref, "", &desc), nil
}
func (c *cacheMocker) Push(name string) error {
if !c.enablePush {
return errors.New("push disabled")
}
if _, ok := c.images[name]; !ok {
return fmt.Errorf("unknown image %s", name)
}
return nil
}
func (c *cacheMocker) DescriptorWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (lktspec.ImageSource, error) {
if !c.enabledDescriptorWrite {
return nil, errors.New("descriptor disabled")
}
var (
image = ref.String()
im = v1.IndexManifest{
MediaType: types.OCIImageIndex,
Manifests: descriptors,
SchemaVersion: 2,
}
)
// marshal the new index and compute its descriptor
b, err := json.Marshal(im)
if err != nil {
return nil, fmt.Errorf("unable to marshal new index to json: %v", err)
}
hash, size, err := v1.SHA256(bytes.NewReader(b))
if err != nil {
return nil, fmt.Errorf("error calculating hash of index json: %v", err)
}
c.assignHash(hash.String(), b)
root := v1.Descriptor{
MediaType: types.OCIImageIndex,
Size: size,
Digest: hash,
Annotations: map[string]string{
imagespec.AnnotationRefName: image,
},
}
c.appendImage(image, root)
return c.NewSource(ref, "", &root), nil
}
func (c *cacheMocker) FindDescriptor(name string) (*v1.Descriptor, error) {
if desc, ok := c.images[name]; ok && len(desc) > 0 {
return &desc[0], nil
}
return nil, fmt.Errorf("not found %s", name)
}
func (c *cacheMocker) NewSource(ref *reference.Spec, architecture string, descriptor *v1.Descriptor) lktspec.ImageSource {
return cacheMockerSource{c, ref, architecture, descriptor}
}
func (c *cacheMocker) assignHash(hash string, b []byte) {
if c.hashes == nil {
c.hashes = map[string][]byte{}
}
c.hashes[hash] = b
}
func (c *cacheMocker) appendImage(image string, root v1.Descriptor) {
if c.images == nil {
c.images = map[string][]v1.Descriptor{}
}
c.images[image] = append(c.images[image], root)
}
type cacheMockerSource struct {
c *cacheMocker
ref *reference.Spec
architecture string
descriptor *v1.Descriptor
}
func (c cacheMockerSource) Config() (imagespec.ImageConfig, error) {
return imagespec.ImageConfig{}, errors.New("unsupported")
}
func (c cacheMockerSource) TarReader() (io.ReadCloser, error) {
return nil, errors.New("unsupported")
}
func (c cacheMockerSource) Descriptor() *v1.Descriptor {
return c.descriptor
}
func TestBuild(t *testing.T) {
var (
nonLocal string
cacheDir = "somecachedir"
)
if runtime.GOARCH == "amd64" {
nonLocal = "arm64"
} else {
nonLocal = "amd64"
}
tests := []struct {
msg string
p Pkg
options []BuildOpt
targets []string
runner *dockerMocker
cache *cacheMocker
err string
}{
{"missing tag", Pkg{}, nil, nil, &dockerMocker{}, &cacheMocker{}, "could not resolve references"},
{"invalid tag", Pkg{image: "docker.io/foo/bar:abc:def:ghi"}, nil, nil, &dockerMocker{}, &cacheMocker{}, "could not resolve references"},
{"mismatched platforms", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"arm64"}}, nil, []string{"amd64"}, nil, nil, fmt.Sprintf("arch %s not supported", "amd64")},
{"not at head", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64"}, commitHash: "foo"}, nil, []string{"amd64"}, &dockerMocker{supportBuildKit: false}, &cacheMocker{}, "Cannot build from commit hash != HEAD"},
{"no build cache", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64"}, commitHash: "HEAD"}, nil, []string{"amd64"}, &dockerMocker{supportBuildKit: false}, &cacheMocker{}, "must provide linuxkit build cache"},
{"unsupported buildkit", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64"}, commitHash: "HEAD"}, []BuildOpt{WithBuildCacheDir(cacheDir)}, []string{"amd64"}, &dockerMocker{supportBuildKit: false}, &cacheMocker{}, "buildkit not supported, check docker version"},
{"load docker without local platform", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64", "arm64"}, commitHash: "HEAD"}, []BuildOpt{WithBuildCacheDir(cacheDir), WithBuildTargetDockerCache()}, []string{nonLocal}, &dockerMocker{supportBuildKit: false}, &cacheMocker{}, "must build for local platform"},
{"amd64", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64", "arm64"}, commitHash: "HEAD"}, []BuildOpt{WithBuildCacheDir(cacheDir), WithBuildImage()}, []string{"amd64"}, &dockerMocker{supportBuildKit: true, enableBuild: true}, &cacheMocker{enableImagePull: false, enableImageLoad: true, enableIndexWrite: true}, ""},
{"arm64", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64", "arm64"}, commitHash: "HEAD"}, []BuildOpt{WithBuildCacheDir(cacheDir), WithBuildImage()}, []string{"arm64"}, &dockerMocker{supportBuildKit: true, enableBuild: true}, &cacheMocker{enableImagePull: false, enableImageLoad: true, enableIndexWrite: true}, ""},
{"amd64 and arm64", Pkg{org: "foo", image: "bar", hash: "abc", arches: []string{"amd64", "arm64"}, commitHash: "HEAD"}, []BuildOpt{WithBuildCacheDir(cacheDir), WithBuildImage()}, []string{"amd64", "arm64"}, &dockerMocker{supportBuildKit: true, enableBuild: true}, &cacheMocker{enableImagePull: false, enableImageLoad: true, enableIndexWrite: true}, ""},
}
for _, tt := range tests {
t.Run(tt.msg, func(t *testing.T) {
opts := append(tt.options, WithBuildDocker(tt.runner), WithBuildCacheProvider(tt.cache), WithBuildOutputWriter(ioutil.Discard))
// build our build options
if len(tt.targets) > 0 {
var targets []imagespec.Platform
for _, arch := range tt.targets {
targets = append(targets, imagespec.Platform{OS: "linux", Architecture: arch})
}
opts = append(opts, WithBuildPlatforms(targets...))
}
err := tt.p.Build(opts...)
switch {
case (tt.err == "" && err != nil) || (tt.err != "" && err == nil) || (tt.err != "" && err != nil && !strings.HasPrefix(err.Error(), tt.err)):
t.Errorf("mismatched errors actual '%v', expected '%v'", err, tt.err)
case tt.err == "" && len(tt.runner.builds) != len(tt.targets):
// need to make sure that it was called the correct number of times with the correct arguments
t.Errorf("mismatched call to runners, should be %d was %d: %#v", len(tt.targets), len(tt.runner.builds), tt.runner.builds)
case tt.err == "":
// check that all of our platforms were called
platformMap := map[string]bool{}
for _, arch := range tt.targets {
platformMap[fmt.Sprintf("linux/%s", arch)] = false
}
for _, build := range tt.runner.builds {
if err := testCheckBuildRun(build, platformMap); err != nil {
t.Errorf("mismatch in build: '%v', %#v", err, build)
}
}
}
})
}
}
// testCheckBuildRun checks the output of a build run
func testCheckBuildRun(build buildLog, platforms map[string]bool) error {
for i, arg := range build.opts {
switch {
case arg == "--platform", arg == "-platform":
if i+1 >= len(build.opts) {
return errors.New("provided arg --platform with no next argument")
}
platform := build.opts[i+1]
used, ok := platforms[platform]
if !ok {
return fmt.Errorf("requested unknown platform: %s", platform)
}
if used {
return fmt.Errorf("tried to use platform twice: %s", platform)
}
platforms[platform] = true
return nil
}
}
return errors.New("missing platform argument")
}
View File
@ -5,13 +5,16 @@ package pkglib
//go:generate ./gen
import (
"bufio"
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"strings"
versioncompare "github.com/hashicorp/go-version"
"github.com/linuxkit/linuxkit/src/cmd/linuxkit/registry"
@ -28,7 +31,17 @@ var platforms = []string{
"linux/amd64", "linux/arm64", "linux/s390x",
}
type dockerRunner struct {
type dockerRunner interface {
buildkitCheck() error
tag(ref, tag string) error
build(tag, pkg, dockerContext, platform string, stdout io.Writer, opts ...string) error
save(tgt string, refs ...string) error
load(src io.Reader) error
pull(img string) (bool, error)
setBuildCtx(ctx buildContext)
}
type dockerRunnerImpl struct {
cache bool
// Optional build context to use
@ -41,7 +54,7 @@ type buildContext interface {
}
func newDockerRunner(cache bool) dockerRunner {
return dockerRunner{cache: cache}
return &dockerRunnerImpl{cache: cache}
}
func isExecErrNotFound(err error) bool {
@ -67,7 +80,7 @@ var proxyEnvVars = []string{
"ALL_PROXY",
}
func (dr dockerRunner) command(stdout, stderr io.Writer, args ...string) error {
func (dr *dockerRunnerImpl) command(stdout, stderr io.Writer, args ...string) error {
cmd := exec.Command("docker", args...)
if stdout == nil {
stdout = os.Stdout
@ -124,7 +137,7 @@ func (dr dockerRunner) command(stdout, stderr io.Writer, args ...string) error {
// versionCheck returns the client version and server version, and compares them both
// against the minimum required version.
func (dr dockerRunner) versionCheck(version string) (string, string, error) {
func (dr *dockerRunnerImpl) versionCheck(version string) (string, string, error) {
var stdout bytes.Buffer
if err := dr.command(&stdout, nil, "version", "--format", "json"); err != nil {
return "", "", err
@ -186,22 +199,102 @@ func (dr dockerRunner) versionCheck(version string) (string, string, error) {
// buildkitCheck checks if buildkit is supported. This is necessary because github uses some strange versions
// of docker in Actions, which makes it difficult to tell if buildkit is supported.
// See https://github.community/t/what-really-is-docker-3-0-6/16171
func (dr dockerRunner) buildkitCheck() error {
return dr.command(nil, nil, "buildx", "ls")
func (dr *dockerRunnerImpl) buildkitCheck() error {
return dr.command(ioutil.Discard, ioutil.Discard, "buildx", "ls")
}
// builder ensure that a builder of the given name exists
func (dr dockerRunner) builder(name string) error {
if err := dr.command(nil, nil, "buildx", "inspect", name); err == nil {
// if no error, then we have a builder already
return nil
// builder ensures that a suitable buildx builder exists and returns its name. It works as follows:
// 1. if dockerContext is provided, try to find or create a builder for that context; if it succeeds, we are done; if not, return an error.
// 2. otherwise, look for an existing docker context matching the platform-specific naming pattern and try to use or create a builder for it; if that fails, continue to the next step.
// 3. fall back to a generic builder using the default context named "linuxkit".
// (An illustrative sketch of the resulting builder names follows this function.)
func (dr *dockerRunnerImpl) builder(dockerContext, platform string) (string, error) {
var (
builderName string
args = []string{"buildx", "create", "--driver", "docker-container", "--buildkitd-flags", "--allow-insecure-entitlement network.host"}
)
// if we were given a context, we must find a builder and use it, or create one and use it
if dockerContext != "" {
// does the context exist?
if err := dr.command(ioutil.Discard, ioutil.Discard, "context", "inspect", dockerContext); err != nil {
return "", fmt.Errorf("provided docker context '%s' not found", dockerContext)
}
builderName = fmt.Sprintf("%s-%s-%s-builder", buildkitBuilderName, dockerContext, strings.ReplaceAll(platform, "/", "-"))
if err := dr.builderEnsureContainer(builderName, platform, dockerContext, args...); err != nil {
return "", fmt.Errorf("error preparing builder based on context '%s': %v", dockerContext, err)
}
return builderName, nil
}
// create a builder
return dr.command(nil, nil, "buildx", "create", "--name", name, "--driver", "docker-container", "--buildkitd-flags", "--allow-insecure-entitlement network.host")
// no provided dockerContext, so look for one based on platform-specific name
dockerContext = fmt.Sprintf("%s-%s", buildkitBuilderName, strings.ReplaceAll(platform, "/", "-"))
if err := dr.command(ioutil.Discard, ioutil.Discard, "context", "inspect", dockerContext); err == nil {
// we found an appropriately named context, so let us try to use it or error out
builderName = fmt.Sprintf("%s-builder", dockerContext)
if err := dr.builderEnsureContainer(builderName, platform, dockerContext, args...); err == nil {
return builderName, nil
}
}
// create a generic builder
builderName = buildkitBuilderName
if err := dr.builderEnsureContainer(builderName, "", "", args...); err != nil {
return "", fmt.Errorf("error ensuring default builder '%s': %v", builderName, err)
}
return builderName, nil
}
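
To make the naming scheme above concrete, here is a small, self-contained sketch that reproduces how the builder names are derived in each of the three cases. It assumes `buildkitBuilderName` is `"linuxkit"` (as the comment above suggests); the context name `mycontext` is purely hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

const buildkitBuilderName = "linuxkit" // assumed value of the default builder name

func main() {
	platform := "linux/arm64"
	p := strings.ReplaceAll(platform, "/", "-")

	// 1. an explicit docker context, e.g. "mycontext", yields a context-and-platform-specific builder
	fmt.Printf("%s-%s-%s-builder\n", buildkitBuilderName, "mycontext", p) // linuxkit-mycontext-linux-arm64-builder

	// 2. with no context given, a platform-specific context is looked up and the builder name derived from it
	dockerContext := fmt.Sprintf("%s-%s", buildkitBuilderName, p) // linuxkit-linux-arm64
	fmt.Printf("%s-builder\n", dockerContext)                     // linuxkit-linux-arm64-builder

	// 3. otherwise, fall back to the generic builder on the default context
	fmt.Println(buildkitBuilderName) // linuxkit
}
```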
func (dr dockerRunner) pull(img string) (bool, error) {
// builderEnsureContainer, provided the name of a builder, ensures that a builder of that name exists with the
// docker-container driver, and if not, creates it based on the provided docker context for the target platform.
// Assumes the dockerContext, if given, already exists. (A sketch of the driver-type check follows this function.)
func (dr *dockerRunnerImpl) builderEnsureContainer(name, platform, dockerContext string, args ...string) error {
// inspect the named builder: if the inspect succeeds, we already have a builder and only need to verify its driver type below;
// if it fails, we need to create the builder
var b bytes.Buffer
if err := dr.command(&b, ioutil.Discard, "buildx", "inspect", name); err != nil {
// we did not have the named builder, so create the builder
args = append(args, "--name", name)
msg := fmt.Sprintf("creating builder '%s'", name)
if platform != "" {
args = append(args, "--platform", platform)
msg = fmt.Sprintf("%s for platform '%s'", msg, platform)
} else {
msg = fmt.Sprintf("%s for all supported platforms", msg)
}
if dockerContext != "" {
args = append(args, dockerContext)
msg = fmt.Sprintf("%s based on docker context '%s'", msg, dockerContext)
}
fmt.Println(msg)
return dr.command(ioutil.Discard, ioutil.Discard, args...)
}
// if we got here, we found a builder already, so let us check its type
var (
scanner = bufio.NewScanner(&b)
driver string
)
for scanner.Scan() {
fields := strings.Fields(scanner.Text())
if len(fields) < 2 {
continue
}
if fields[0] != "Driver:" {
continue
}
driver = fields[1]
break
}
switch driver {
case "":
return fmt.Errorf("builder '%s' exists but has no driver type", name)
case "docker-container":
return nil
default:
return fmt.Errorf("builder '%s' exists but has wrong driver type '%s'", name, driver)
}
}
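
As a rough illustration of the driver-type check, the sketch below applies the same scanning logic to a sample of `docker buildx inspect` output; the sample text is an assumption about the output format rather than captured output:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// driverOf scans buildx-inspect style output for a "Driver:" line and returns its value.
func driverOf(inspectOutput string) string {
	scanner := bufio.NewScanner(strings.NewReader(inspectOutput))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "Driver:" {
			return fields[1]
		}
	}
	return ""
}

func main() {
	// hypothetical inspect output for an existing builder
	sample := "Name:   linuxkit\nDriver: docker-container\nNodes:  ...\n"
	switch driver := driverOf(sample); driver {
	case "docker-container":
		fmt.Println("existing builder is usable")
	case "":
		fmt.Println("builder exists but has no driver type")
	default:
		fmt.Printf("builder exists but has wrong driver type %q\n", driver)
	}
}
```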
func (dr *dockerRunnerImpl) pull(img string) (bool, error) {
err := dr.command(nil, nil, "image", "pull", img)
if err == nil {
return true, nil
@ -214,11 +307,11 @@ func (dr dockerRunner) pull(img string) (bool, error) {
}
}
func (dr dockerRunner) push(img string) error {
func (dr *dockerRunnerImpl) push(img string) error {
return dr.command(nil, nil, "image", "push", img)
}
func (dr dockerRunner) pushWithManifest(img, suffix string, pushImage, pushManifest bool) error {
func (dr *dockerRunnerImpl) pushWithManifest(img, suffix string, pushImage, pushManifest bool) error {
var err error
if pushImage {
fmt.Printf("Pushing %s\n", img+suffix)
@ -246,14 +339,15 @@ func (dr dockerRunner) pushWithManifest(img, suffix string, pushImage, pushManif
return nil
}
func (dr dockerRunner) tag(ref, tag string) error {
func (dr *dockerRunnerImpl) tag(ref, tag string) error {
fmt.Printf("Tagging %s as %s\n", ref, tag)
return dr.command(nil, nil, "image", "tag", ref, tag)
}
func (dr dockerRunner) build(tag, pkg string, stdout io.Writer, opts ...string) error {
func (dr *dockerRunnerImpl) build(tag, pkg, dockerContext, platform string, stdout io.Writer, opts ...string) error {
// ensure we have a builder
if err := dr.builder(buildkitBuilderName); err != nil {
builderName, err := dr.builder(dockerContext, platform)
if err != nil {
return fmt.Errorf("unable to ensure proper buildx builder: %v", err)
}
@ -262,12 +356,24 @@ func (dr dockerRunner) build(tag, pkg string, stdout io.Writer, opts ...string)
args = append(args, "--no-cache")
}
args = append(args, opts...)
args = append(args, fmt.Sprintf("--builder=%s", buildkitBuilderName))
args = append(args, fmt.Sprintf("--builder=%s", builderName))
args = append(args, "-t", tag, pkg)
fmt.Printf("building for platform %s using builder %s\n", platform, builderName)
return dr.command(stdout, nil, args...)
}
func (dr dockerRunner) save(tgt string, refs ...string) error {
func (dr *dockerRunnerImpl) save(tgt string, refs ...string) error {
args := append([]string{"image", "save", "-o", tgt}, refs...)
return dr.command(nil, nil, args...)
}
func (dr *dockerRunnerImpl) load(src io.Reader) error {
args := []string{"image", "load"}
dr.ctx = &readerCtx{
reader: src,
}
return dr.command(nil, nil, args...)
}
func (dr *dockerRunnerImpl) setBuildCtx(ctx buildContext) {
dr.ctx = ctx
}
View File
@ -281,6 +281,11 @@ func (p Pkg) TrustEnabled() bool {
return p.trust
}
// Arches which arches this can be built for
func (p Pkg) Arches() []string {
return p.arches
}
func (p Pkg) archSupported(want string) bool {
for _, supp := range p.arches {
if supp == want {
View File
@ -0,0 +1,18 @@
package pkglib
import (
"io"
)
type readerCtx struct {
reader io.Reader
}
// Copy copies from the reader to the writer, then closes the writer
func (c *readerCtx) Copy(w io.WriteCloser) error {
_, err := io.Copy(w, c.reader)
if err != nil {
return err
}
return w.Close()
}
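
A minimal usage sketch of this build context, assuming an in-memory reader and a throwaway write-closer; it only demonstrates the copy-then-close behaviour:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

type readerCtx struct {
	reader io.Reader
}

// Copy copies from the reader to the writer, then closes the writer.
func (c *readerCtx) Copy(w io.WriteCloser) error {
	if _, err := io.Copy(w, c.reader); err != nil {
		return err
	}
	return w.Close()
}

// bufCloser wraps a bytes.Buffer so it satisfies io.WriteCloser.
type bufCloser struct{ *bytes.Buffer }

func (bufCloser) Close() error { return nil }

func main() {
	ctx := &readerCtx{reader: strings.NewReader("image tar stream...")}
	var buf bytes.Buffer
	if err := ctx.Copy(bufCloser{&buf}); err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Println(buf.String()) // image tar stream...
}
```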
View File
@ -0,0 +1,19 @@
package spec
import (
"io"
"github.com/containerd/containerd/reference"
"github.com/google/go-containerregistry/pkg/v1"
)
// CacheProvider is the interface for a provider of an image cache.
type CacheProvider interface {
FindDescriptor(name string) (*v1.Descriptor, error)
ImagePull(ref *reference.Spec, trustedRef, architecture string) (ImageSource, error)
IndexWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (ImageSource, error)
ImageLoad(ref *reference.Spec, architecture string, r io.Reader) (ImageSource, error)
DescriptorWrite(ref *reference.Spec, descriptors ...v1.Descriptor) (ImageSource, error)
Push(name string) error
NewSource(ref *reference.Spec, architecture string, descriptor *v1.Descriptor) ImageSource
}
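
The sketch below shows one way a caller might drive this interface: pull an image for a single architecture into the cache, record it in a multi-arch index, and push it. It is a compile-only illustration; the `publish` helper is hypothetical, and the import paths are the ones used elsewhere in this change:

```go
package example

import (
	"fmt"

	"github.com/containerd/containerd/reference"
	lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
)

// publish is a hypothetical helper: pull one architecture of an image into the
// cache, add its descriptor to the index for that name, then push the result.
func publish(c lktspec.CacheProvider, image, arch string) error {
	ref, err := reference.Parse(image)
	if err != nil {
		return fmt.Errorf("could not resolve references for image %s: %v", image, err)
	}
	src, err := c.ImagePull(&ref, "", arch)
	if err != nil {
		return fmt.Errorf("pull of %s for %s failed: %v", image, arch, err)
	}
	if _, err := c.IndexWrite(&ref, *src.Descriptor()); err != nil {
		return fmt.Errorf("index write for %s failed: %v", image, err)
	}
	return c.Push(image)
}
```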
View File
@ -0,0 +1,16 @@
package spec
import (
"io"
"github.com/google/go-containerregistry/pkg/v1"
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)
// ImageSource is the interface to an image. Its config can be read, and its contents
// can be read via an io.ReadCloser tar stream.
type ImageSource interface {
Config() (imagespec.ImageConfig, error)
TarReader() (io.ReadCloser, error)
Descriptor() *v1.Descriptor
}
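
Finally, a hedged sketch of consuming an ImageSource. The in-memory `memSource` type is invented purely for this example, to show the interface being satisfied and read:

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"strings"

	"github.com/google/go-containerregistry/pkg/v1"
	lktspec "github.com/linuxkit/linuxkit/src/cmd/linuxkit/spec"
	imagespec "github.com/opencontainers/image-spec/specs-go/v1"
)

// memSource is a throwaway in-memory ImageSource used only for this sketch.
type memSource struct {
	config imagespec.ImageConfig
	tar    string
	desc   v1.Descriptor
}

func (m memSource) Config() (imagespec.ImageConfig, error) { return m.config, nil }
func (m memSource) TarReader() (io.ReadCloser, error) {
	return ioutil.NopCloser(strings.NewReader(m.tar)), nil
}
func (m memSource) Descriptor() *v1.Descriptor { return &m.desc }

func main() {
	var src lktspec.ImageSource = memSource{
		config: imagespec.ImageConfig{Entrypoint: []string{"/init"}},
		tar:    "fake tar bytes",
	}
	cfg, err := src.Config()
	if err != nil {
		panic(err)
	}
	rc, err := src.TarReader()
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	n, _ := io.Copy(ioutil.Discard, rc)
	fmt.Printf("entrypoint %v, %d bytes of filesystem\n", cfg.Entrypoint, n)
}
```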