mirror of
https://github.com/linuxkit/linuxkit.git
synced 2025-07-19 09:16:29 +00:00
Merge moby/tool into LinuxKit
Note these ended up with unrelated histories in the export process. Signed-off-by: Justin Cormack <justin@specialbusservice.com>
This commit is contained in:
commit
021b5718f8
13
docs/privateimages.md
Normal file
13
docs/privateimages.md
Normal file
@ -0,0 +1,13 @@
|
||||
## Private Images
|
||||
When building, `moby` downloads, and optionally checks the notary signature, on any OCI images referenced in any section.
|
||||
|
||||
As of this writing, `moby` does **not** have the ability to download these images from registries that require credentials to access. This is equally true for private images on public registries, like https://hub.docker.com, as for private registries.
|
||||
|
||||
We are working on enabling private images with credentials. Until such time as that feature is added, you can follow these steps to build a moby image using OCI images
|
||||
that require credentials to access:
|
||||
|
||||
1. `docker login` as relevant to authenticate against the desired registry.
|
||||
2. `docker pull` to download the images to your local machine where you will run `moby build`.
|
||||
3. Run `moby build` (or `linuxkit build`).
|
||||
|
||||
Additionally, ensure that you do **not** have trust enabled for those images. See the section on [trust](#trust) in this document. Alternately, you can run `moby build` or `linuxkit build` with `--disable-trust`.
|
273
docs/yaml.md
Normal file
273
docs/yaml.md
Normal file
@ -0,0 +1,273 @@
|
||||
# Configuration Reference
|
||||
|
||||
The `moby` tool assembles a set of containerised components into in image. The simplest
|
||||
type of image is just a `tar` file of the contents (useful for debugging) but more useful
|
||||
outputs add a `Dockerfile` to build a container, or build a full disk image that can be
|
||||
booted as a linuxKit VM. The main use case is to build an assembly that includes
|
||||
`containerd` to run a set of containers, but the tooling is very generic.
|
||||
|
||||
The yaml configuration specifies the components used to build up an image . All components
|
||||
are downloaded at build time to create an image. The image is self-contained and immutable,
|
||||
so it can be tested reliably for continuous delivery.
|
||||
|
||||
The configuration file is processed in the order `kernel`, `init`, `onboot`, `onshutdown`,
|
||||
`services`, `files`. Each section adds files to the root file system. Sections may be omitted.
|
||||
|
||||
Each container that is specified is allocated a unique `uid` and `gid` that it may use if it
|
||||
wishes to run as an isolated user (or user namespace). Anywhere you specify a `uid` or `gid`
|
||||
field you specify either the numeric id, or if you use a name it will refer to the id allocated
|
||||
to the container with that name.
|
||||
|
||||
```
|
||||
services:
|
||||
- name: redis
|
||||
image: redis:latest
|
||||
uid: redis
|
||||
gid: redis
|
||||
binds:
|
||||
- /etc/redis:/etc/redis
|
||||
files:
|
||||
- path: /etc/redis/redis.conf
|
||||
contents: "..."
|
||||
uid: redis
|
||||
gid: redis
|
||||
mode: "0600"
|
||||
```
|
||||
|
||||
## `kernel`
|
||||
|
||||
The `kernel` section is only required if booting a VM. The files will be put into the `boot/`
|
||||
directory, where they are used to build bootable images.
|
||||
|
||||
The `kernel` section defines the kernel configuration. The `image` field specifies the Docker image,
|
||||
which should contain a `kernel` file that will be booted (eg a `bzImage` for `amd64`) and a file
|
||||
called `kernel.tar` which is a tarball that is unpacked into the root, which should usually
|
||||
contain a kernel modules directory. `cmdline` specifies the kernel command line options if required.
|
||||
|
||||
To override the names, you can specify the kernel image name with `binary: bzImage` and the tar image
|
||||
with `tar: kernel.tar` or the empty string or `none` if you do not want to use a tarball at all.
|
||||
|
||||
Kernel packages may also contain a cpio archive containing CPU microcode which needs prepending to
|
||||
the initrd. To select this option, recommended when booting on bare metal, add `ucode: intel-ucode.cpio`
|
||||
to the kernel section.
|
||||
|
||||
## `init`
|
||||
|
||||
The `init` section is a list of images that are used for the `init` system and are unpacked directly
|
||||
into the root filesystem. This should bring up `containerd`, start the system and daemon containers,
|
||||
and set up basic filesystem mounts. in the case of a LinuxKit system. For ease of
|
||||
modification `runc` and `containerd` images, which just contain these programs are added here
|
||||
rather than bundled into the `init` container.
|
||||
|
||||
## `onboot`
|
||||
|
||||
The `onboot` section is a list of images. These images are run before any other
|
||||
images. They are run sequentially and each must exit before the next one is run.
|
||||
These images can be used to configure one shot settings. See [Image
|
||||
specification](#image-specification) for a list of supported fields.
|
||||
|
||||
## `onshutdown`
|
||||
|
||||
This is a list of images to run on a clean shutdown. Note that you must not rely on these
|
||||
being run at all, as machines may be be powered off or shut down without having time to run
|
||||
these scripts. If you add anything here you should test both in the case where they are
|
||||
run and when they are not. Most systems are likely to be "crash only" and not have any setup here,
|
||||
but you can attempt to deregister cleanly from a network service here, rather than relying
|
||||
on timeouts, for example.
|
||||
|
||||
## `services`
|
||||
|
||||
The `services` section is a list of images for long running services which are
|
||||
run with `containerd`. Startup order is undefined, so containers should wait
|
||||
on any resources, such as networking, that they need. See [Image
|
||||
specification](#image-specification) for a list of supported fields.
|
||||
|
||||
## `files`
|
||||
|
||||
The files section can be used to add files inline in the config, or from an external file.
|
||||
|
||||
```
|
||||
files:
|
||||
- path: dir
|
||||
directory: true
|
||||
mode: "0777"
|
||||
- path: dir/name1
|
||||
source: "/some/path/on/local/filesystem"
|
||||
mode: "0666"
|
||||
- path: dir/name2
|
||||
source: "/some/path/that/it/is/ok/to/omit"
|
||||
optional: true
|
||||
mode: "0666"
|
||||
- path: dir/name3
|
||||
contents: "orange"
|
||||
mode: "0644"
|
||||
uid: 100
|
||||
gid: 100
|
||||
```
|
||||
|
||||
Specifying the `mode` is optional, and will default to `0600`. Leading directories will be
|
||||
created if not specified. You can use `~/path` in `source` to specify a path in the build
|
||||
user's home directory.
|
||||
|
||||
In addition there is a `metadata` option that will generate the file. Currently the only value
|
||||
supported here is `"yaml"` which will output the yaml used to generate the image into the specified
|
||||
file:
|
||||
```
|
||||
- path: etc/linuxkit.yml
|
||||
metadata: yaml
|
||||
```
|
||||
|
||||
Because a `tmpfs` is mounted onto `/var`, `/run`, and `/tmp` by default, the `tmpfs` mounts will shadow anything specified in `files` section for those directories.
|
||||
|
||||
## `trust`
|
||||
|
||||
The `trust` section specifies which build components are to be cryptographically verified with
|
||||
[Docker Content Trust](https://docs.docker.com/engine/security/trust/content_trust/) prior to pulling.
|
||||
Trust is a central concern in any build system, and LinuxKit's is no exception: Docker Content Trust provides authenticity,
|
||||
integrity, and freshness guarantees for the components it verifies. The LinuxKit maintainers are responsible for signing
|
||||
`linuxkit` components, though collaborators can sign their own images with Docker Content Trust or [Notary](https://github.com/docker/notary).
|
||||
|
||||
- `image` lists which individual images to enforce pulling with Docker Content Trust.
|
||||
The image name may include tag or digest, but the matching also succeeds if the base image name is the same.
|
||||
- `org` lists which organizations for which Docker Content Trust is to be enforced across all images,
|
||||
for example `linuxkit` is the org for `linuxkit/kernel`
|
||||
|
||||
## Image specification
|
||||
|
||||
Entries in the `onboot` and `services` sections specify an OCI image and
|
||||
options. Default values may be specified using the `org.mobyproject.config` image label.
|
||||
For more details see the [OCI specification](https://github.com/opencontainers/runtime-spec/blob/master/spec.md).
|
||||
|
||||
If the `org.mobylinux.config` label is set in the image, that specifies default values for these fields if they
|
||||
are not set in the yaml file. You can override the label by setting the value, or setting it to be empty to remove
|
||||
the specification for that value in the label.
|
||||
|
||||
If you need an OCI option that is not specified here please open an issue or pull request as the list is not yet
|
||||
complete.
|
||||
|
||||
By default the containers will be run in the host `net`, `ipc` and `uts` namespaces, as that is the usual requirement;
|
||||
in many ways they behave like pods in Kubernetes. Mount points must already exist, as must a file or directory being
|
||||
bind mounted into a container.
|
||||
|
||||
- `name` a unique name for the program being executed, used as the `containerd` id.
|
||||
- `image` the Docker image to use for the root filesystem. The default command, path and environment are
|
||||
extracted from this so they need not be filled in.
|
||||
- `capabilities` the Linux capabilities required, for example `CAP_SYS_ADMIN`. If there is a single
|
||||
capability `all` then all capabilities are added.
|
||||
- `ambient` the Linux ambient capabilities (capabilities passed to non root users) that are required.
|
||||
- `mounts` is the full form for specifying a mount, which requires `type`, `source`, `destination`
|
||||
and a list of `options`. If any fields are omitted, sensible defaults are used if possible, for example
|
||||
if the `type` is `dev` it is assumed you want to mount at `/dev`. The default mounts and their options
|
||||
can be replaced by specifying a mount with new options here at the same mount point.
|
||||
- `binds` is a simpler interface to specify bind mounts, accepting a string like `/src:/dest:opt1,opt2`
|
||||
similar to the `-v` option for bind mounts in Docker.
|
||||
- `tmpfs` is a simpler interface to mount a `tmpfs`, like `--tmpfs` in Docker, taking `/dest:opt1,opt2`.
|
||||
- `command` will override the command and entrypoint in the image with a new list of commands.
|
||||
- `env` will override the environment in the image with a new environment list. Specify variables as `VAR=value`.
|
||||
- `cwd` will set the working directory, defaults to `/`.
|
||||
- `net` sets the network namespace, either to a path, or if `none` or `new` is specified it will use a new namespace.
|
||||
- `ipc` sets the ipc namespace, either to a path, or if `new` is specified it will use a new namespace.
|
||||
- `uts` sets the uts namespace, either to a path, or if `new` is specified it will use a new namespace.
|
||||
- `pid` sets the pid namespace, either to a path, or if `host` is specified it will use the host namespace.
|
||||
- `readonly` sets the root filesystem to read only, and changes the other default filesystems to read only.
|
||||
- `maskedPaths` sets paths which should be hidden.
|
||||
- `readonlyPaths` sets paths to read only.
|
||||
- `uid` sets the user id of the process.
|
||||
- `gid` sets the group id of the process.
|
||||
- `additionalGids` sets a list of additional groups for the process.
|
||||
- `noNewPrivileges` is `true` means no additional capabilities can be acquired and `suid` binaries do not work.
|
||||
- `hostname` sets the hostname inside the image.
|
||||
- `oomScoreAdj` changes the OOM score.
|
||||
- `rootfsPropagation` sets the rootfs propagation, eg `shared`, `slave` or (default) `private`.
|
||||
- `cgroupsPath` sets the path for cgroups.
|
||||
- `resources` sets cgroup resource limits as per the OCI spec.
|
||||
- `sysctl` sets a map of `sysctl` key value pairs that are set inside the container namespace.
|
||||
- `rmlimits` sets a list of `rlimit` values in the form `name,soft,hard`, eg `nofile,100,200`. You can use `unlimited` as a value too.
|
||||
- `annotations` sets a map of key value pairs as OCI metadata.
|
||||
|
||||
There are experimental `userns`, `uidMappings` and `gidMappings` options for user namespaces but these are not yet supported, and may have
|
||||
permissions issues in use.
|
||||
|
||||
In addition to the parts of the specification above used to generate the OCI spec, there is a `runtime` section in the image specification
|
||||
which specifies some actions to take place when the container is being started.
|
||||
- `cgroups` takes a list of cgroups that will be created before the container is run.
|
||||
- `mounts` takes a list of mount specifications (`source`, `destination`, `type`, `options`) and mounts them in the root namespace before the container is created. It will
|
||||
try to make any missing destination directories.
|
||||
- `mkdir` takes a list of directories to create at runtime, in the root mount namespace. These are created before the container is started, so they can be used to create
|
||||
directories for bind mounts, for example in `/tmp` or `/run` which would otherwise be empty.
|
||||
- `interface` defines a list of actions to perform on a network interface:
|
||||
- `name` specifies the name of an interface. An existing interface with this name will be moved into the container's network namespace.
|
||||
- `add` specifies a type of interface to be created in the containers namespace, with the specified name.
|
||||
- `createInRoot` is a boolean which specifes that the interface being `add`ed should be created in the root namespace first, then moved. This is needed for `wireguard` interfaces.
|
||||
- `peer` specifies the name of the other end when creating a `veth` interface. This end will remain in the root namespace, where it can be attached to a bridge. Specifying this implies `add: veth`.
|
||||
- `bindNS` specifies a namespace type and a path where the namespace from the container being created will be bound. This allows a namespace to be set up in an `onboot` container, and then
|
||||
using `net: path` for a `service` container to use that network namespace later.
|
||||
- `namespace` overrides the LinuxKit default containerd namespace to put the container in; only applicable to services.
|
||||
|
||||
An example of using the `runtime` config to configure a network namespace with `wireguard` and then run `nginx` in that namespace is shown below:
|
||||
```
|
||||
onboot:
|
||||
- name: dhcpcd
|
||||
image: linuxkit/dhcpcd:<hash>
|
||||
command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
|
||||
- name: wg
|
||||
image: linuxkit/ip:<hash>
|
||||
net: new
|
||||
binds:
|
||||
- /etc/wireguard:/etc/wireguard
|
||||
command: ["sh", "-c", "ip link set dev wg0 up; ip address add dev wg0 192.168.2.1 peer 192.168.2.2; wg setconf wg0 /etc/wireguard/wg0.conf; wg show wg0"]
|
||||
runtime:
|
||||
interfaces:
|
||||
- name: wg0
|
||||
add: wireguard
|
||||
createInRoot: true
|
||||
bindNS:
|
||||
net: /run/netns/wg
|
||||
services:
|
||||
- name: nginx
|
||||
image: nginx:alpine
|
||||
net: /run/netns/wg
|
||||
capabilities:
|
||||
- CAP_NET_BIND_SERVICE
|
||||
- CAP_CHOWN
|
||||
- CAP_SETUID
|
||||
- CAP_SETGID
|
||||
- CAP_DAC_OVERRIDE
|
||||
```
|
||||
|
||||
|
||||
### Mount Options
|
||||
When mounting filesystem paths into a container - whether as part of `onboot` or `services` - there are several options of which you need to be aware. Using them properly is necessary for your containers to function properly.
|
||||
|
||||
For most containers - e.g. nginx or even docker - these options are not needed. Simply doing the following will work fine:
|
||||
|
||||
```yml
|
||||
binds:
|
||||
- /var:/some/var/path
|
||||
```
|
||||
|
||||
Please note that `binds` doesn't **add** the mount points, but **replaces** them.
|
||||
You can examine the `Dockerfile` of the component (in particular, `binds` value of
|
||||
`org.mobyproject.config` label) to get the list of the existing binds.
|
||||
|
||||
However, in some circumstances you will need additional options. These options are used primarily if you intend to make changes to mount points _from within your container_ that should be visible from outside the container, e.g., if you intend to mount an external disk from inside the container but have it be visible outside.
|
||||
|
||||
In order for new mounts from within a container to be propagated, you must set the following on the container:
|
||||
|
||||
1. `rootfsPropagation: shared`
|
||||
2. The mount point into the container below which new mounts are to occur must be `rshared,rbind`. In practice, this is `/var` (or some subdir of `/var`), since that is the only true read-write area of the filesystem where you will mount things.
|
||||
|
||||
Thus, if you have a regular container that is only reading and writing, go ahead and do:
|
||||
|
||||
```yml
|
||||
binds:
|
||||
- /var:/some/var/path
|
||||
```
|
||||
|
||||
On the other hand, if you have a container that will make new mounts that you wish to be visible outside the container, do:
|
||||
|
||||
```yml
|
||||
binds:
|
||||
- /var:/var:rshared,rbind
|
||||
rootfsPropagation: shared
|
||||
```
|
196
src/initrd/initrd.go
Normal file
196
src/initrd/initrd.go
Normal file
@ -0,0 +1,196 @@
|
||||
package initrd
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"compress/gzip"
|
||||
"errors"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/moby/tool/src/pad4"
|
||||
"github.com/surma/gocpio"
|
||||
)
|
||||
|
||||
// Writer is an io.WriteCloser that writes to an initrd
|
||||
// This is a compressed cpio archive, zero padded to 4 bytes
|
||||
type Writer struct {
|
||||
pw *pad4.Writer
|
||||
gw *gzip.Writer
|
||||
cw *cpio.Writer
|
||||
}
|
||||
|
||||
func typeconv(thdr *tar.Header) int64 {
|
||||
switch thdr.Typeflag {
|
||||
case tar.TypeReg:
|
||||
return cpio.TYPE_REG
|
||||
case tar.TypeRegA:
|
||||
return cpio.TYPE_REG
|
||||
// Currently hard links not supported very well :)
|
||||
// Convert to relative symlink as absolute will not work in container
|
||||
// cpio does support hardlinks but file contents still duplicated, so rely
|
||||
// on compression to fix that which is fairly ugly. Symlink has not caused issues.
|
||||
case tar.TypeLink:
|
||||
dir := filepath.Dir(thdr.Name)
|
||||
rel, err := filepath.Rel(dir, thdr.Linkname)
|
||||
if err != nil {
|
||||
// should never happen, but leave as full abs path
|
||||
rel = "/" + thdr.Linkname
|
||||
}
|
||||
thdr.Linkname = rel
|
||||
return cpio.TYPE_SYMLINK
|
||||
case tar.TypeSymlink:
|
||||
return cpio.TYPE_SYMLINK
|
||||
case tar.TypeChar:
|
||||
return cpio.TYPE_CHAR
|
||||
case tar.TypeBlock:
|
||||
return cpio.TYPE_BLK
|
||||
case tar.TypeDir:
|
||||
return cpio.TYPE_DIR
|
||||
case tar.TypeFifo:
|
||||
return cpio.TYPE_FIFO
|
||||
default:
|
||||
return -1
|
||||
}
|
||||
}
|
||||
|
||||
func copyTarEntry(w *Writer, thdr *tar.Header, r io.Reader) (written int64, err error) {
|
||||
tp := typeconv(thdr)
|
||||
if tp == -1 {
|
||||
return written, errors.New("cannot convert tar file")
|
||||
}
|
||||
size := thdr.Size
|
||||
if tp == cpio.TYPE_SYMLINK {
|
||||
size = int64(len(thdr.Linkname))
|
||||
}
|
||||
chdr := cpio.Header{
|
||||
Mode: thdr.Mode,
|
||||
Uid: thdr.Uid,
|
||||
Gid: thdr.Gid,
|
||||
Mtime: thdr.ModTime.Unix(),
|
||||
Size: size,
|
||||
Devmajor: thdr.Devmajor,
|
||||
Devminor: thdr.Devminor,
|
||||
Type: tp,
|
||||
Name: thdr.Name,
|
||||
}
|
||||
err = w.WriteHeader(&chdr)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
var n int64
|
||||
switch tp {
|
||||
case cpio.TYPE_SYMLINK:
|
||||
buffer := bytes.NewBufferString(thdr.Linkname)
|
||||
n, err = io.Copy(w, buffer)
|
||||
case cpio.TYPE_REG:
|
||||
n, err = io.Copy(w, r)
|
||||
}
|
||||
written += n
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// CopyTar copies a tar stream into an initrd
|
||||
func CopyTar(w *Writer, r *tar.Reader) (written int64, err error) {
|
||||
for {
|
||||
var thdr *tar.Header
|
||||
thdr, err = r.Next()
|
||||
if err == io.EOF {
|
||||
return written, nil
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
written, err = copyTarEntry(w, thdr, r)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// CopySplitTar copies a tar stream into an initrd, but splits out kernel, cmdline, and ucode
|
||||
func CopySplitTar(w *Writer, r *tar.Reader) (kernel []byte, cmdline string, ucode []byte, err error) {
|
||||
for {
|
||||
var thdr *tar.Header
|
||||
thdr, err = r.Next()
|
||||
if err == io.EOF {
|
||||
return kernel, cmdline, ucode, nil
|
||||
}
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
switch {
|
||||
case thdr.Name == "boot/kernel":
|
||||
kernel, err = ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
case thdr.Name == "boot/cmdline":
|
||||
var buf []byte
|
||||
buf, err = ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
cmdline = string(buf)
|
||||
case thdr.Name == "boot/ucode.cpio":
|
||||
ucode, err = ioutil.ReadAll(r)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
case strings.HasPrefix(thdr.Name, "boot/"):
|
||||
// skip the rest of ./boot
|
||||
default:
|
||||
_, err = copyTarEntry(w, thdr, r)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// NewWriter creates a writer that will output an initrd stream
|
||||
func NewWriter(w io.Writer) *Writer {
|
||||
initrd := new(Writer)
|
||||
initrd.pw = pad4.NewWriter(w)
|
||||
initrd.gw = gzip.NewWriter(initrd.pw)
|
||||
initrd.cw = cpio.NewWriter(initrd.gw)
|
||||
|
||||
return initrd
|
||||
}
|
||||
|
||||
// WriteHeader writes a cpio header into an initrd
|
||||
func (w *Writer) WriteHeader(hdr *cpio.Header) error {
|
||||
return w.cw.WriteHeader(hdr)
|
||||
}
|
||||
|
||||
// Write writes a cpio file into an initrd
|
||||
func (w *Writer) Write(b []byte) (n int, e error) {
|
||||
return w.cw.Write(b)
|
||||
}
|
||||
|
||||
// Close closes the writer
|
||||
func (w *Writer) Close() error {
|
||||
err1 := w.cw.Close()
|
||||
err2 := w.gw.Close()
|
||||
err3 := w.pw.Close()
|
||||
if err1 != nil {
|
||||
return err1
|
||||
}
|
||||
if err2 != nil {
|
||||
return err2
|
||||
}
|
||||
if err3 != nil {
|
||||
return err3
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Copy reads a tarball in a stream and outputs a compressed init ram disk
|
||||
func Copy(w *Writer, r io.Reader) (int64, error) {
|
||||
tr := tar.NewReader(r)
|
||||
|
||||
return CopyTar(w, tr)
|
||||
}
|
599
src/moby/build.go
Normal file
599
src/moby/build.go
Normal file
@ -0,0 +1,599 @@
|
||||
package moby
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
log "github.com/sirupsen/logrus"
|
||||
"gopkg.in/yaml.v2"
|
||||
)
|
||||
|
||||
var streamable = map[string]bool{
|
||||
"docker": true,
|
||||
"tar": true,
|
||||
}
|
||||
|
||||
// Streamable returns true if an output can be streamed
|
||||
func Streamable(t string) bool {
|
||||
return streamable[t]
|
||||
}
|
||||
|
||||
type addFun func(*tar.Writer) error
|
||||
|
||||
const dockerfile = `
|
||||
FROM scratch
|
||||
|
||||
COPY . ./
|
||||
|
||||
ENTRYPOINT ["/bin/rc.init"]
|
||||
`
|
||||
|
||||
// For now this is a constant that we use in init section only to make
|
||||
// resolv.conf point at somewhere writeable. In future whe we are not using
|
||||
// Docker to extract images we can read this directly from image, but now Docker
|
||||
// will overwrite anything we put in the image.
|
||||
const resolvconfSymlink = "/run/resolvconf/resolv.conf"
|
||||
|
||||
var additions = map[string]addFun{
|
||||
"docker": func(tw *tar.Writer) error {
|
||||
log.Infof(" Adding Dockerfile")
|
||||
hdr := &tar.Header{
|
||||
Name: "Dockerfile",
|
||||
Mode: 0644,
|
||||
Size: int64(len(dockerfile)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tw.Write([]byte(dockerfile)); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
// OutputTypes returns a list of the valid output types
|
||||
func OutputTypes() []string {
|
||||
ts := []string{}
|
||||
for k := range streamable {
|
||||
ts = append(ts, k)
|
||||
}
|
||||
for k := range outFuns {
|
||||
ts = append(ts, k)
|
||||
}
|
||||
sort.Strings(ts)
|
||||
|
||||
return ts
|
||||
}
|
||||
|
||||
func enforceContentTrust(fullImageName string, config *TrustConfig) bool {
|
||||
for _, img := range config.Image {
|
||||
// First check for an exact name match
|
||||
if img == fullImageName {
|
||||
return true
|
||||
}
|
||||
// Also check for an image name only match
|
||||
// by removing a possible tag (with possibly added digest):
|
||||
imgAndTag := strings.Split(fullImageName, ":")
|
||||
if len(imgAndTag) >= 2 && img == imgAndTag[0] {
|
||||
return true
|
||||
}
|
||||
// and by removing a possible digest:
|
||||
imgAndDigest := strings.Split(fullImageName, "@sha256:")
|
||||
if len(imgAndDigest) >= 2 && img == imgAndDigest[0] {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
for _, org := range config.Org {
|
||||
var imgOrg string
|
||||
splitName := strings.Split(fullImageName, "/")
|
||||
switch len(splitName) {
|
||||
case 0:
|
||||
// if the image is empty, return false
|
||||
return false
|
||||
case 1:
|
||||
// for single names like nginx, use library
|
||||
imgOrg = "library"
|
||||
case 2:
|
||||
// for names that assume docker hub, like linxukit/alpine, take the first split
|
||||
imgOrg = splitName[0]
|
||||
default:
|
||||
// for names that include the registry, the second piece is the org, ex: docker.io/library/alpine
|
||||
imgOrg = splitName[1]
|
||||
}
|
||||
if imgOrg == org {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func outputImage(image *Image, section string, prefix string, m Moby, idMap map[string]uint32, dupMap map[string]string, pull bool, iw *tar.Writer) error {
|
||||
log.Infof(" Create OCI config for %s", image.Image)
|
||||
useTrust := enforceContentTrust(image.Image, &m.Trust)
|
||||
oci, runtime, err := ConfigToOCI(image, useTrust, idMap)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to create OCI spec for %s: %v", image.Image, err)
|
||||
}
|
||||
config, err := json.MarshalIndent(oci, "", " ")
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to create config for %s: %v", image.Image, err)
|
||||
}
|
||||
path := path.Join("containers", section, prefix+image.Name)
|
||||
readonly := oci.Root.Readonly
|
||||
err = ImageBundle(path, image.ref, config, runtime, iw, useTrust, pull, readonly, dupMap)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to extract root filesystem for %s: %v", image.Image, err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Build performs the actual build process
|
||||
func Build(m Moby, w io.Writer, pull bool, tp string) error {
|
||||
if MobyDir == "" {
|
||||
MobyDir = defaultMobyConfigDir()
|
||||
}
|
||||
|
||||
// create tmp dir in case needed
|
||||
if err := os.MkdirAll(filepath.Join(MobyDir, "tmp"), 0755); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
iw := tar.NewWriter(w)
|
||||
|
||||
// add additions
|
||||
addition := additions[tp]
|
||||
|
||||
// allocate each container a uid, gid that can be referenced by name
|
||||
idMap := map[string]uint32{}
|
||||
id := uint32(100)
|
||||
for _, image := range m.Onboot {
|
||||
idMap[image.Name] = id
|
||||
id++
|
||||
}
|
||||
for _, image := range m.Onshutdown {
|
||||
idMap[image.Name] = id
|
||||
id++
|
||||
}
|
||||
for _, image := range m.Services {
|
||||
idMap[image.Name] = id
|
||||
id++
|
||||
}
|
||||
|
||||
// deduplicate containers with the same image
|
||||
dupMap := map[string]string{}
|
||||
|
||||
if m.Kernel.ref != nil {
|
||||
// get kernel and initrd tarball and ucode cpio archive from container
|
||||
log.Infof("Extract kernel image: %s", m.Kernel.ref)
|
||||
kf := newKernelFilter(iw, m.Kernel.Cmdline, m.Kernel.Binary, m.Kernel.Tar, m.Kernel.UCode)
|
||||
err := ImageTar(m.Kernel.ref, "", kf, enforceContentTrust(m.Kernel.ref.String(), &m.Trust), pull, "")
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to extract kernel image and tarball: %v", err)
|
||||
}
|
||||
err = kf.Close()
|
||||
if err != nil {
|
||||
return fmt.Errorf("Close error: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// convert init images to tarballs
|
||||
if len(m.Init) != 0 {
|
||||
log.Infof("Add init containers:")
|
||||
}
|
||||
for _, ii := range m.initRefs {
|
||||
log.Infof("Process init image: %s", ii)
|
||||
err := ImageTar(ii, "", iw, enforceContentTrust(ii.String(), &m.Trust), pull, resolvconfSymlink)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to build init tarball from %s: %v", ii, err)
|
||||
}
|
||||
}
|
||||
|
||||
if len(m.Onboot) != 0 {
|
||||
log.Infof("Add onboot containers:")
|
||||
}
|
||||
for i, image := range m.Onboot {
|
||||
so := fmt.Sprintf("%03d", i)
|
||||
if err := outputImage(image, "onboot", so+"-", m, idMap, dupMap, pull, iw); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(m.Onshutdown) != 0 {
|
||||
log.Infof("Add onshutdown containers:")
|
||||
}
|
||||
for i, image := range m.Onshutdown {
|
||||
so := fmt.Sprintf("%03d", i)
|
||||
if err := outputImage(image, "onshutdown", so+"-", m, idMap, dupMap, pull, iw); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(m.Services) != 0 {
|
||||
log.Infof("Add service containers:")
|
||||
}
|
||||
for _, image := range m.Services {
|
||||
if err := outputImage(image, "services", "", m, idMap, dupMap, pull, iw); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// add files
|
||||
err := filesystem(m, iw, idMap)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to add filesystem parts: %v", err)
|
||||
}
|
||||
|
||||
// add anything additional for this output type
|
||||
if addition != nil {
|
||||
err = addition(iw)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to add additional files: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
err = iw.Close()
|
||||
if err != nil {
|
||||
return fmt.Errorf("initrd close error: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// kernelFilter is a tar.Writer that transforms a kernel image into the output we want on underlying tar writer
|
||||
type kernelFilter struct {
|
||||
tw *tar.Writer
|
||||
buffer *bytes.Buffer
|
||||
cmdline string
|
||||
kernel string
|
||||
tar string
|
||||
ucode string
|
||||
discard bool
|
||||
foundKernel bool
|
||||
foundKTar bool
|
||||
foundUCode bool
|
||||
}
|
||||
|
||||
func newKernelFilter(tw *tar.Writer, cmdline string, kernel string, tar, ucode *string) *kernelFilter {
|
||||
tarName, kernelName, ucodeName := "kernel.tar", "kernel", ""
|
||||
if tar != nil {
|
||||
tarName = *tar
|
||||
if tarName == "none" {
|
||||
tarName = ""
|
||||
}
|
||||
}
|
||||
if kernel != "" {
|
||||
kernelName = kernel
|
||||
}
|
||||
if ucode != nil {
|
||||
ucodeName = *ucode
|
||||
}
|
||||
return &kernelFilter{tw: tw, cmdline: cmdline, kernel: kernelName, tar: tarName, ucode: ucodeName}
|
||||
}
|
||||
|
||||
func (k *kernelFilter) finishTar() error {
|
||||
if k.buffer == nil {
|
||||
return nil
|
||||
}
|
||||
tr := tar.NewReader(k.buffer)
|
||||
err := tarAppend(k.tw, tr)
|
||||
k.buffer = nil
|
||||
return err
|
||||
}
|
||||
|
||||
func (k *kernelFilter) Close() error {
|
||||
if !k.foundKernel {
|
||||
return errors.New("did not find kernel in kernel image")
|
||||
}
|
||||
if !k.foundKTar && k.tar != "" {
|
||||
return errors.New("did not find kernel tar in kernel image")
|
||||
}
|
||||
return k.finishTar()
|
||||
}
|
||||
|
||||
func (k *kernelFilter) Flush() error {
|
||||
err := k.finishTar()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return k.tw.Flush()
|
||||
}
|
||||
|
||||
func (k *kernelFilter) Write(b []byte) (n int, err error) {
|
||||
if k.discard {
|
||||
return len(b), nil
|
||||
}
|
||||
if k.buffer != nil {
|
||||
return k.buffer.Write(b)
|
||||
}
|
||||
return k.tw.Write(b)
|
||||
}
|
||||
|
||||
func (k *kernelFilter) WriteHeader(hdr *tar.Header) error {
|
||||
err := k.finishTar()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
tw := k.tw
|
||||
switch hdr.Name {
|
||||
case k.kernel:
|
||||
if k.foundKernel {
|
||||
return errors.New("found more than one possible kernel image")
|
||||
}
|
||||
k.foundKernel = true
|
||||
k.discard = false
|
||||
// If we handled the ucode, /boot already exist.
|
||||
if !k.foundUCode {
|
||||
whdr := &tar.Header{
|
||||
Name: "boot",
|
||||
Mode: 0755,
|
||||
Typeflag: tar.TypeDir,
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(whdr); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// add the cmdline in /boot/cmdline
|
||||
whdr := &tar.Header{
|
||||
Name: "boot/cmdline",
|
||||
Mode: 0644,
|
||||
Size: int64(len(k.cmdline)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(whdr); err != nil {
|
||||
return err
|
||||
}
|
||||
buf := bytes.NewBufferString(k.cmdline)
|
||||
_, err = io.Copy(tw, buf)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
whdr = &tar.Header{
|
||||
Name: "boot/kernel",
|
||||
Mode: hdr.Mode,
|
||||
Size: hdr.Size,
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(whdr); err != nil {
|
||||
return err
|
||||
}
|
||||
case k.tar:
|
||||
k.foundKTar = true
|
||||
k.discard = false
|
||||
k.buffer = new(bytes.Buffer)
|
||||
case k.ucode:
|
||||
k.foundUCode = true
|
||||
k.discard = false
|
||||
// If we handled the kernel, /boot already exist.
|
||||
if !k.foundKernel {
|
||||
whdr := &tar.Header{
|
||||
Name: "boot",
|
||||
Mode: 0755,
|
||||
Typeflag: tar.TypeDir,
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(whdr); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
whdr := &tar.Header{
|
||||
Name: "boot/ucode.cpio",
|
||||
Mode: hdr.Mode,
|
||||
Size: hdr.Size,
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(whdr); err != nil {
|
||||
return err
|
||||
}
|
||||
default:
|
||||
k.discard = true
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func tarAppend(iw *tar.Writer, tr *tar.Reader) error {
|
||||
for {
|
||||
hdr, err := tr.Next()
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = iw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = io.Copy(iw, tr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// this allows inserting metadata into a file in the image
|
||||
func metadata(m Moby, md string) ([]byte, error) {
|
||||
// Make sure the Image strings are update to date with the refs
|
||||
updateImages(&m)
|
||||
switch md {
|
||||
case "json":
|
||||
return json.MarshalIndent(m, "", " ")
|
||||
case "yaml":
|
||||
return yaml.Marshal(m)
|
||||
default:
|
||||
return []byte{}, fmt.Errorf("Unsupported metadata type: %s", md)
|
||||
}
|
||||
}
|
||||
|
||||
func filesystem(m Moby, tw *tar.Writer, idMap map[string]uint32) error {
|
||||
// TODO also include the files added in other parts of the build
|
||||
var addedFiles = map[string]bool{}
|
||||
|
||||
if len(m.Files) != 0 {
|
||||
log.Infof("Add files:")
|
||||
}
|
||||
for _, f := range m.Files {
|
||||
log.Infof(" %s", f.Path)
|
||||
if f.Path == "" {
|
||||
return errors.New("Did not specify path for file")
|
||||
}
|
||||
// tar archives should not have absolute paths
|
||||
if f.Path[0] == os.PathSeparator {
|
||||
f.Path = f.Path[1:]
|
||||
}
|
||||
mode := int64(0600)
|
||||
if f.Directory {
|
||||
mode = 0700
|
||||
}
|
||||
if f.Mode != "" {
|
||||
var err error
|
||||
mode, err = strconv.ParseInt(f.Mode, 8, 32)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Cannot parse file mode as octal value: %v", err)
|
||||
}
|
||||
}
|
||||
dirMode := mode
|
||||
if dirMode&0700 != 0 {
|
||||
dirMode |= 0100
|
||||
}
|
||||
if dirMode&0070 != 0 {
|
||||
dirMode |= 0010
|
||||
}
|
||||
if dirMode&0007 != 0 {
|
||||
dirMode |= 0001
|
||||
}
|
||||
|
||||
uid, err := idNumeric(f.UID, idMap)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
gid, err := idNumeric(f.GID, idMap)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var contents []byte
|
||||
if f.Contents != nil {
|
||||
contents = []byte(*f.Contents)
|
||||
}
|
||||
if !f.Directory && f.Symlink == "" && f.Contents == nil {
|
||||
if f.Source == "" && f.Metadata == "" {
|
||||
return fmt.Errorf("Contents of file (%s) not specified", f.Path)
|
||||
}
|
||||
if f.Source != "" && f.Metadata != "" {
|
||||
return fmt.Errorf("Specified Source and Metadata for file: %s", f.Path)
|
||||
}
|
||||
if f.Source != "" {
|
||||
source := f.Source
|
||||
if len(source) > 2 && source[:2] == "~/" {
|
||||
source = homeDir() + source[1:]
|
||||
}
|
||||
if f.Optional {
|
||||
_, err := os.Stat(source)
|
||||
if err != nil {
|
||||
// skip if not found or readable
|
||||
log.Debugf("Skipping file [%s] as not readable and marked optional", source)
|
||||
continue
|
||||
}
|
||||
}
|
||||
var err error
|
||||
contents, err = ioutil.ReadFile(source)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
contents, err = metadata(m, f.Metadata)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if f.Metadata != "" {
|
||||
return fmt.Errorf("Specified Contents and Metadata for file: %s", f.Path)
|
||||
}
|
||||
if f.Source != "" {
|
||||
return fmt.Errorf("Specified Contents and Source for file: %s", f.Path)
|
||||
}
|
||||
}
|
||||
// we need all the leading directories
|
||||
parts := strings.Split(path.Dir(f.Path), "/")
|
||||
root := ""
|
||||
for _, p := range parts {
|
||||
if p == "." || p == "/" {
|
||||
continue
|
||||
}
|
||||
if root == "" {
|
||||
root = p
|
||||
} else {
|
||||
root = root + "/" + p
|
||||
}
|
||||
if !addedFiles[root] {
|
||||
hdr := &tar.Header{
|
||||
Name: root,
|
||||
Typeflag: tar.TypeDir,
|
||||
Mode: dirMode,
|
||||
Uid: int(uid),
|
||||
Gid: int(gid),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
err := tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
addedFiles[root] = true
|
||||
}
|
||||
}
|
||||
addedFiles[f.Path] = true
|
||||
hdr := &tar.Header{
|
||||
Name: f.Path,
|
||||
Mode: mode,
|
||||
Uid: int(uid),
|
||||
Gid: int(gid),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if f.Directory {
|
||||
if f.Contents != nil {
|
||||
return errors.New("Directory with contents not allowed")
|
||||
}
|
||||
hdr.Typeflag = tar.TypeDir
|
||||
err := tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else if f.Symlink != "" {
|
||||
hdr.Typeflag = tar.TypeSymlink
|
||||
hdr.Linkname = f.Symlink
|
||||
err := tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
hdr.Size = int64(len(contents))
|
||||
err := tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = tw.Write(contents)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
1058
src/moby/config.go
Normal file
1058
src/moby/config.go
Normal file
File diff suppressed because it is too large
Load Diff
113
src/moby/config_test.go
Normal file
113
src/moby/config_test.go
Normal file
@ -0,0 +1,113 @@
|
||||
package moby
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"reflect"
|
||||
"testing"
|
||||
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/container"
|
||||
)
|
||||
|
||||
func setupInspect(t *testing.T, label ImageConfig) types.ImageInspect {
|
||||
var inspect types.ImageInspect
|
||||
var config container.Config
|
||||
|
||||
labelJSON, err := json.Marshal(label)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
config.Labels = map[string]string{"org.mobyproject.config": string(labelJSON)}
|
||||
|
||||
inspect.Config = &config
|
||||
|
||||
return inspect
|
||||
}
|
||||
|
||||
func TestOverrides(t *testing.T) {
|
||||
idMap := map[string]uint32{}
|
||||
|
||||
var yamlCaps = []string{"CAP_SYS_ADMIN"}
|
||||
|
||||
var yaml = Image{
|
||||
Name: "test",
|
||||
Image: "testimage",
|
||||
ImageConfig: ImageConfig{
|
||||
Capabilities: &yamlCaps,
|
||||
},
|
||||
}
|
||||
|
||||
var labelCaps = []string{"CAP_SYS_CHROOT"}
|
||||
|
||||
var label = ImageConfig{
|
||||
Capabilities: &labelCaps,
|
||||
Cwd: "/label/directory",
|
||||
}
|
||||
|
||||
inspect := setupInspect(t, label)
|
||||
|
||||
oci, _, err := ConfigInspectToOCI(&yaml, inspect, idMap)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(oci.Process.Capabilities.Bounding, yamlCaps) {
|
||||
t.Error("Expected yaml capabilities to override but got", oci.Process.Capabilities.Bounding)
|
||||
}
|
||||
if oci.Process.Cwd != label.Cwd {
|
||||
t.Error("Expected label Cwd to be applied, got", oci.Process.Cwd)
|
||||
}
|
||||
}
|
||||
|
||||
func TestInvalidCap(t *testing.T) {
|
||||
idMap := map[string]uint32{}
|
||||
|
||||
yaml := Image{
|
||||
Name: "test",
|
||||
Image: "testimage",
|
||||
}
|
||||
|
||||
labelCaps := []string{"NOT_A_CAP"}
|
||||
var label = ImageConfig{
|
||||
Capabilities: &labelCaps,
|
||||
}
|
||||
|
||||
inspect := setupInspect(t, label)
|
||||
|
||||
_, _, err := ConfigInspectToOCI(&yaml, inspect, idMap)
|
||||
if err == nil {
|
||||
t.Error("expected error, got valid OCI config")
|
||||
}
|
||||
}
|
||||
|
||||
func TestIdMap(t *testing.T) {
|
||||
idMap := map[string]uint32{"test": 199}
|
||||
|
||||
var uid interface{} = "test"
|
||||
var gid interface{} = 76
|
||||
|
||||
yaml := Image{
|
||||
Name: "test",
|
||||
Image: "testimage",
|
||||
ImageConfig: ImageConfig{
|
||||
UID: &uid,
|
||||
GID: &gid,
|
||||
},
|
||||
}
|
||||
|
||||
var label = ImageConfig{}
|
||||
|
||||
inspect := setupInspect(t, label)
|
||||
|
||||
oci, _, err := ConfigInspectToOCI(&yaml, inspect, idMap)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if oci.Process.User.UID != 199 {
|
||||
t.Error("Expected named uid to work")
|
||||
}
|
||||
if oci.Process.User.GID != 76 {
|
||||
t.Error("Expected numerical gid to work")
|
||||
}
|
||||
}
|
193
src/moby/docker.go
Normal file
193
src/moby/docker.go
Normal file
@ -0,0 +1,193 @@
|
||||
package moby
|
||||
|
||||
// We want to replace much of this with use of containerd tools
|
||||
// and also using the Docker API not shelling out
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"os/exec"
|
||||
"strings"
|
||||
|
||||
"github.com/containerd/containerd/reference"
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/container"
|
||||
"github.com/docker/docker/api/types/filters"
|
||||
"github.com/docker/docker/client"
|
||||
log "github.com/sirupsen/logrus"
|
||||
"golang.org/x/net/context"
|
||||
)
|
||||
|
||||
func dockerRun(input io.Reader, output io.Writer, trust bool, img string, args ...string) error {
|
||||
log.Debugf("docker run %s (trust=%t) (input): %s", img, trust, strings.Join(args, " "))
|
||||
docker, err := exec.LookPath("docker")
|
||||
if err != nil {
|
||||
return errors.New("Docker does not seem to be installed")
|
||||
}
|
||||
|
||||
env := os.Environ()
|
||||
if trust {
|
||||
env = append(env, "DOCKER_CONTENT_TRUST=1")
|
||||
}
|
||||
|
||||
// Pull first to avoid https://github.com/docker/cli/issues/631
|
||||
pull := exec.Command(docker, "pull", img)
|
||||
pull.Env = env
|
||||
if err := pull.Run(); err != nil {
|
||||
if exitError, ok := err.(*exec.ExitError); ok {
|
||||
return fmt.Errorf("docker pull %s failed: %v output:\n%s", img, err, exitError.Stderr)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
args = append([]string{"run", "--network=none", "--rm", "-i", img}, args...)
|
||||
cmd := exec.Command(docker, args...)
|
||||
cmd.Stdin = input
|
||||
cmd.Stdout = output
|
||||
cmd.Env = env
|
||||
|
||||
if err := cmd.Run(); err != nil {
|
||||
if exitError, ok := err.(*exec.ExitError); ok {
|
||||
return fmt.Errorf("docker run %s failed: %v output:\n%s", img, err, exitError.Stderr)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
log.Debugf("docker run %s (input): %s...Done", img, strings.Join(args, " "))
|
||||
return nil
|
||||
}
|
||||
|
||||
func dockerCreate(image string) (string, error) {
|
||||
log.Debugf("docker create: %s", image)
|
||||
cli, err := dockerClient()
|
||||
if err != nil {
|
||||
return "", errors.New("could not initialize Docker API client")
|
||||
}
|
||||
// we do not ever run the container, so /dev/null is used as command
|
||||
config := &container.Config{
|
||||
Cmd: []string{"/dev/null"},
|
||||
Image: image,
|
||||
}
|
||||
respBody, err := cli.ContainerCreate(context.Background(), config, nil, nil, "")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
log.Debugf("docker create: %s...Done", image)
|
||||
return respBody.ID, nil
|
||||
}
|
||||
|
||||
func dockerExport(container string) (io.ReadCloser, error) {
|
||||
log.Debugf("docker export: %s", container)
|
||||
cli, err := dockerClient()
|
||||
if err != nil {
|
||||
return nil, errors.New("could not initialize Docker API client")
|
||||
}
|
||||
responseBody, err := cli.ContainerExport(context.Background(), container)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return responseBody, err
|
||||
}
|
||||
|
||||
func dockerRm(container string) error {
|
||||
log.Debugf("docker rm: %s", container)
|
||||
cli, err := dockerClient()
|
||||
if err != nil {
|
||||
return errors.New("could not initialize Docker API client")
|
||||
}
|
||||
if err = cli.ContainerRemove(context.Background(), container, types.ContainerRemoveOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Debugf("docker rm: %s...Done", container)
|
||||
return nil
|
||||
}
|
||||
|
||||
func dockerPull(ref *reference.Spec, forcePull, trustedPull bool) error {
|
||||
log.Debugf("docker pull: %s", ref)
|
||||
cli, err := dockerClient()
|
||||
if err != nil {
|
||||
return errors.New("could not initialize Docker API client")
|
||||
}
|
||||
|
||||
if trustedPull {
|
||||
log.Debugf("pulling %s with content trust", ref)
|
||||
trustedImg, err := TrustedReference(ref.String())
|
||||
if err != nil {
|
||||
return fmt.Errorf("Trusted pull for %s failed: %v", ref, err)
|
||||
}
|
||||
|
||||
// tag the image on a best-effort basis after pulling with content trust,
|
||||
// ensuring that docker picks up the tag and digest fom the canonical format
|
||||
defer func(src, dst string) {
|
||||
if err := cli.ImageTag(context.Background(), src, dst); err != nil {
|
||||
log.Debugf("could not tag trusted image %s to %s", src, dst)
|
||||
}
|
||||
}(trustedImg.String(), ref.String())
|
||||
|
||||
log.Debugf("successfully verified trusted reference %s from notary", trustedImg.String())
|
||||
trustedSpec, err := reference.Parse(trustedImg.String())
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to convert trusted img %s to Spec: %v", trustedImg, err)
|
||||
}
|
||||
ref.Locator = trustedSpec.Locator
|
||||
ref.Object = trustedSpec.Object
|
||||
|
||||
imageSearchArg := filters.NewArgs()
|
||||
imageSearchArg.Add("reference", trustedImg.String())
|
||||
if _, err := cli.ImageList(context.Background(), types.ImageListOptions{Filters: imageSearchArg}); err == nil && !forcePull {
|
||||
log.Debugf("docker pull: trusted image %s already cached...Done", trustedImg.String())
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
log.Infof("Pull image: %s", ref)
|
||||
r, err := cli.ImagePull(context.Background(), ref.String(), types.ImagePullOptions{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer r.Close()
|
||||
_, err = io.Copy(ioutil.Discard, r)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Debugf("docker pull: %s...Done", ref)
|
||||
return nil
|
||||
}
|
||||
|
||||
func dockerClient() (*client.Client, error) {
|
||||
// for maximum compatibility as we use nothing new
|
||||
err := os.Setenv("DOCKER_API_VERSION", "1.23")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return client.NewEnvClient()
|
||||
}
|
||||
|
||||
func dockerInspectImage(cli *client.Client, ref *reference.Spec, trustedPull bool) (types.ImageInspect, error) {
|
||||
log.Debugf("docker inspect image: %s", ref)
|
||||
|
||||
inspect, _, err := cli.ImageInspectWithRaw(context.Background(), ref.String())
|
||||
if err != nil {
|
||||
if client.IsErrNotFound(err) {
|
||||
pullErr := dockerPull(ref, true, trustedPull)
|
||||
if pullErr != nil {
|
||||
return types.ImageInspect{}, pullErr
|
||||
}
|
||||
inspect, _, err = cli.ImageInspectWithRaw(context.Background(), ref.String())
|
||||
if err != nil {
|
||||
return types.ImageInspect{}, err
|
||||
}
|
||||
} else {
|
||||
return types.ImageInspect{}, err
|
||||
}
|
||||
}
|
||||
|
||||
log.Debugf("docker inspect image: %s...Done", ref)
|
||||
|
||||
return inspect, nil
|
||||
}
|
310
src/moby/image.go
Normal file
310
src/moby/image.go
Normal file
@ -0,0 +1,310 @@
|
||||
package moby
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"path"
|
||||
"strings"
|
||||
|
||||
"github.com/containerd/containerd/reference"
|
||||
"github.com/opencontainers/runtime-spec/specs-go"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
type tarWriter interface {
|
||||
Close() error
|
||||
Flush() error
|
||||
Write(b []byte) (n int, err error)
|
||||
WriteHeader(hdr *tar.Header) error
|
||||
}
|
||||
|
||||
// This uses Docker to convert a Docker image into a tarball. It would be an improvement if we
|
||||
// used the containerd libraries to do this instead locally direct from a local image
|
||||
// cache as it would be much simpler.
|
||||
|
||||
// Unfortunately there are some files that Docker always makes appear in a running image and
|
||||
// export shows them. In particular we have no way for a user to specify their own resolv.conf.
|
||||
// Even if we were not using docker export to get the image, users of docker build cannot override
|
||||
// the resolv.conf either, as it is not writeable and bind mounted in.
|
||||
|
||||
var exclude = map[string]bool{
|
||||
".dockerenv": true,
|
||||
"Dockerfile": true,
|
||||
"dev/console": true,
|
||||
"dev/pts": true,
|
||||
"dev/shm": true,
|
||||
"etc/hostname": true,
|
||||
}
|
||||
|
||||
var replace = map[string]string{
|
||||
"etc/hosts": `127.0.0.1 localhost
|
||||
::1 localhost ip6-localhost ip6-loopback
|
||||
fe00::0 ip6-localnet
|
||||
ff00::0 ip6-mcastprefix
|
||||
ff02::1 ip6-allnodes
|
||||
ff02::2 ip6-allrouters
|
||||
`,
|
||||
"etc/resolv.conf": `
|
||||
# no resolv.conf configured
|
||||
`,
|
||||
}
|
||||
|
||||
// tarPrefix creates the leading directories for a path
|
||||
func tarPrefix(path string, tw tarWriter) error {
|
||||
if path == "" {
|
||||
return nil
|
||||
}
|
||||
if path[len(path)-1] != byte('/') {
|
||||
return fmt.Errorf("path does not end with /: %s", path)
|
||||
}
|
||||
path = path[:len(path)-1]
|
||||
if path[0] == byte('/') {
|
||||
return fmt.Errorf("path should be relative: %s", path)
|
||||
}
|
||||
mkdir := ""
|
||||
for _, dir := range strings.Split(path, "/") {
|
||||
mkdir = mkdir + dir
|
||||
hdr := &tar.Header{
|
||||
Name: mkdir,
|
||||
Mode: 0755,
|
||||
Typeflag: tar.TypeDir,
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
mkdir = mkdir + "/"
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ImageTar takes a Docker image and outputs it to a tar stream
|
||||
func ImageTar(ref *reference.Spec, prefix string, tw tarWriter, trust bool, pull bool, resolv string) (e error) {
|
||||
log.Debugf("image tar: %s %s", ref, prefix)
|
||||
if prefix != "" && prefix[len(prefix)-1] != byte('/') {
|
||||
return fmt.Errorf("prefix does not end with /: %s", prefix)
|
||||
}
|
||||
|
||||
err := tarPrefix(prefix, tw)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if pull || trust {
|
||||
err := dockerPull(ref, pull, trust)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Could not pull image %s: %v", ref, err)
|
||||
}
|
||||
}
|
||||
container, err := dockerCreate(ref.String())
|
||||
if err != nil {
|
||||
// if the image wasn't found, pull it down. Bail on other errors.
|
||||
if strings.Contains(err.Error(), "No such image") {
|
||||
err := dockerPull(ref, true, trust)
|
||||
			if err != nil {
				return fmt.Errorf("Could not pull image %s: %v", ref, err)
			}
			container, err = dockerCreate(ref.String())
			if err != nil {
				return fmt.Errorf("Failed to docker create image %s: %v", ref, err)
			}
		} else {
			return fmt.Errorf("Failed to create docker image %s: %v", ref, err)
		}
	}
	contents, err := dockerExport(container)
	if err != nil {
		return fmt.Errorf("Failed to docker export container from container %s: %v", container, err)
	}
	defer func() {
		contents.Close()

		if err := dockerRm(container); e == nil && err != nil {
			e = fmt.Errorf("Failed to docker rm container %s: %v", container, err)
		}
	}()

	// now we need to filter out some files from the resulting tar archive

	tr := tar.NewReader(contents)

	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		if exclude[hdr.Name] {
			log.Debugf("image tar: %s %s exclude %s", ref, prefix, hdr.Name)
			_, err = io.Copy(ioutil.Discard, tr)
			if err != nil {
				return err
			}
		} else if replace[hdr.Name] != "" {
			if hdr.Name != "etc/resolv.conf" || resolv == "" {
				contents := replace[hdr.Name]
				hdr.Size = int64(len(contents))
				hdr.Name = prefix + hdr.Name
				log.Debugf("image tar: %s %s add %s", ref, prefix, hdr.Name)
				if err := tw.WriteHeader(hdr); err != nil {
					return err
				}
				buf := bytes.NewBufferString(contents)
				_, err = io.Copy(tw, buf)
				if err != nil {
					return err
				}
			} else {
				// replace resolv.conf with specified symlink
				hdr.Name = prefix + hdr.Name
				hdr.Size = 0
				hdr.Typeflag = tar.TypeSymlink
				hdr.Linkname = resolv
				log.Debugf("image tar: %s %s add resolv symlink /etc/resolv.conf -> %s", ref, prefix, resolv)
				if err := tw.WriteHeader(hdr); err != nil {
					return err
				}
			}
			_, err = io.Copy(ioutil.Discard, tr)
			if err != nil {
				return err
			}
		} else {
			log.Debugf("image tar: %s %s add %s", ref, prefix, hdr.Name)
			hdr.Name = prefix + hdr.Name
			if hdr.Typeflag == tar.TypeLink {
				// hard links are referenced by full path so need to be adjusted
				hdr.Linkname = prefix + hdr.Linkname
			}
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			_, err = io.Copy(tw, tr)
			if err != nil {
				return err
			}
		}
	}
	return nil
}

// ImageBundle produces an OCI bundle at the given path in a tarball, given an image and a config.json
func ImageBundle(prefix string, ref *reference.Spec, config []byte, runtime Runtime, tw tarWriter, trust bool, pull bool, readonly bool, dupMap map[string]string) error { // nolint: lll
	// if read only, just unpack in rootfs/ but otherwise set up for overlay
	rootExtract := "rootfs"
	if !readonly {
		rootExtract = "lower"
	}

	// See if we have extracted this image previously
	root := path.Join(prefix, rootExtract)
	var foundElsewhere = dupMap[ref.String()] != ""
	if !foundElsewhere {
		if err := ImageTar(ref, root+"/", tw, trust, pull, ""); err != nil {
			return err
		}
		dupMap[ref.String()] = root
	} else {
		if err := tarPrefix(prefix+"/", tw); err != nil {
			return err
		}
		root = dupMap[ref.String()]
	}

	hdr := &tar.Header{
		Name:   path.Join(prefix, "config.json"),
		Mode:   0644,
		Size:   int64(len(config)),
		Format: tar.FormatPAX,
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	buf := bytes.NewBuffer(config)
	if _, err := io.Copy(tw, buf); err != nil {
		return err
	}

	var rootfsMounts []specs.Mount
	if !readonly {
		// add a tmp directory to be used as a mount point for tmpfs for upper, work
		tmp := path.Join(prefix, "tmp")
		hdr = &tar.Header{
			Name:     tmp,
			Mode:     0755,
			Typeflag: tar.TypeDir,
			Format:   tar.FormatPAX,
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		// add rootfs as merged mount point
		hdr = &tar.Header{
			Name:     path.Join(prefix, "rootfs"),
			Mode:     0755,
			Typeflag: tar.TypeDir,
			Format:   tar.FormatPAX,
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		overlayOptions := []string{"lowerdir=/" + root, "upperdir=/" + path.Join(tmp, "upper"), "workdir=/" + path.Join(tmp, "work")}
		rootfsMounts = []specs.Mount{
			{Source: "tmpfs", Type: "tmpfs", Destination: "/" + tmp},
			// remount private as nothing else should see the temporary layers
			{Destination: "/" + tmp, Options: []string{"remount", "private"}},
			{Source: "overlay", Type: "overlay", Destination: "/" + path.Join(prefix, "rootfs"), Options: overlayOptions},
		}
	} else {
		if foundElsewhere {
			// we need to make the mountpoint at rootfs
			hdr = &tar.Header{
				Name:     path.Join(prefix, "rootfs"),
				Mode:     0755,
				Typeflag: tar.TypeDir,
				Format:   tar.FormatPAX,
			}
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
		}
		// either bind from another location, or bind from self to make sure it is a mountpoint as runc prefers this
		rootfsMounts = []specs.Mount{
			{Source: "/" + root, Destination: "/" + path.Join(prefix, "rootfs"), Options: []string{"bind"}},
		}
	}

	// Prepend the rootfs onto the user specified mounts.
	runtimeMounts := append(rootfsMounts, *runtime.Mounts...)
	runtime.Mounts = &runtimeMounts

	// write the runtime config
	runtimeConfig, err := json.MarshalIndent(runtime, "", " ")
	if err != nil {
		return fmt.Errorf("Failed to create runtime config for %s: %v", ref, err)
	}

	hdr = &tar.Header{
		Name:   path.Join(prefix, "runtime.json"),
		Mode:   0644,
		Size:   int64(len(runtimeConfig)),
		Format: tar.FormatPAX,
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	buf = bytes.NewBuffer(runtimeConfig)
	if _, err := io.Copy(tw, buf); err != nil {
		return err
	}

	log.Debugf("image bundle: %s %s cfg: %s runtime: %s", prefix, ref, string(config), string(runtimeConfig))

	return nil
}
142
src/moby/linuxkit.go
Normal file
@ -0,0 +1,142 @@
package moby

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"os/exec"
	"path/filepath"

	log "github.com/sirupsen/logrus"
)

var linuxkitYaml = map[string]string{"mkimage": `
kernel:
  image: linuxkit/kernel:4.9.39
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:00ab58c9681a0bf42b2e35134c1ccf1591ebb64d
  - linuxkit/runc:f5960b83a8766ae083efc744fa63dbf877450e4f
onboot:
  - name: mkimage
    image: linuxkit/mkimage:50bde8b00eb82e08f12dd9cc29f36c77f5638426
  - name: poweroff
    image: linuxkit/poweroff:3845c4d64d47a1ea367806be5547e44594b0fa91
trust:
  org:
    - linuxkit
`}

func imageFilename(name string) string {
	yaml := linuxkitYaml[name]
	hash := sha256.Sum256([]byte(yaml))
	return filepath.Join(MobyDir, "linuxkit", name+"-"+fmt.Sprintf("%x", hash))
}

func ensureLinuxkitImage(name string) error {
	filename := imageFilename(name)
	_, err1 := os.Stat(filename + "-kernel")
	_, err2 := os.Stat(filename + "-initrd.img")
	_, err3 := os.Stat(filename + "-cmdline")
	if err1 == nil && err2 == nil && err3 == nil {
		return nil
	}
	err := os.MkdirAll(filepath.Join(MobyDir, "linuxkit"), 0755)
	if err != nil {
		return err
	}
	// TODO clean up old files
	log.Infof("Building LinuxKit image %s to generate output formats", name)

	yaml := linuxkitYaml[name]

	m, err := NewConfig([]byte(yaml))
	if err != nil {
		return err
	}
	// TODO pass through --pull to here
	tf, err := ioutil.TempFile("", "")
	if err != nil {
		return err
	}
	defer os.Remove(tf.Name())
	Build(m, tf, false, "")
	if err := tf.Close(); err != nil {
		return err
	}

	image, err := os.Open(tf.Name())
	if err != nil {
		return err
	}
	defer image.Close()
	kernel, initrd, cmdline, _, err := tarToInitrd(image)
	if err != nil {
		return fmt.Errorf("Error converting to initrd: %v", err)
	}
	return writeKernelInitrd(filename, kernel, initrd, cmdline)
}

func writeKernelInitrd(filename string, kernel []byte, initrd []byte, cmdline string) error {
	err := ioutil.WriteFile(filename+"-kernel", kernel, 0600)
	if err != nil {
		return err
	}
	err = ioutil.WriteFile(filename+"-initrd.img", initrd, 0600)
	if err != nil {
		return err
	}
	return ioutil.WriteFile(filename+"-cmdline", []byte(cmdline), 0600)
}

func outputLinuxKit(format string, filename string, kernel []byte, initrd []byte, cmdline string, size int) error {
	log.Debugf("output linuxkit generated img: %s %s size %d", format, filename, size)

	tmp, err := ioutil.TempDir(filepath.Join(MobyDir, "tmp"), "moby")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tmp)

	buf, err := tarInitrdKernel(kernel, initrd, cmdline)
	if err != nil {
		return err
	}

	tardisk := filepath.Join(tmp, "tardisk")
	f, err := os.Create(tardisk)
	if err != nil {
		return err
	}
	_, err = io.Copy(f, buf)
	if err != nil {
		return err
	}
	err = f.Close()
	if err != nil {
		return err
	}

	sizeString := fmt.Sprintf("%dM", size)
	_ = os.Remove(filename)
	_, err = os.Stat(filename)
	if err == nil || !os.IsNotExist(err) {
		return fmt.Errorf("Cannot remove existing file [%s]", filename)
	}
	linuxkit, err := exec.LookPath("linuxkit")
	if err != nil {
		return fmt.Errorf("Cannot find linuxkit executable, needed to build %s output type: %v", format, err)
	}
	commandLine := []string{
		"-q", "run", "qemu",
		"-disk", fmt.Sprintf("%s,size=%s,format=%s", filename, sizeString, format),
		"-disk", fmt.Sprintf("%s,format=raw", tardisk),
		"-kernel", imageFilename("mkimage"),
	}
	log.Debugf("run %s: %v", linuxkit, commandLine)
	cmd := exec.Command(linuxkit, commandLine...)
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
500
src/moby/output.go
Normal file
@ -0,0 +1,500 @@
|
||||
package moby
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"github.com/moby/tool/src/initrd"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var (
|
||||
outputImages = map[string]string{
|
||||
"iso-bios": "linuxkit/mkimage-iso-bios:9a51dc64a461f1cc50ba05f30a38f73f5227ac03",
|
||||
"iso-efi": "linuxkit/mkimage-iso-efi:343cf1a8ac0aba7d8a1f13b7f45fa0b57ab897dc",
|
||||
"raw-bios": "linuxkit/mkimage-raw-bios:d90713b2dd610cf9a0f5f9d9095f8bf86f40d5c6",
|
||||
"raw-efi": "linuxkit/mkimage-raw-efi:8938ffb6014543e557b624a40cce1714f30ce4b6",
|
||||
"squashfs": "linuxkit/mkimage-squashfs:b44d00b0a336fd32c122ff32bd2b39c36a965135",
|
||||
"gcp": "linuxkit/mkimage-gcp:e6cdcf859ab06134c0c37a64ed5f886ec8dae1a1",
|
||||
"qcow2-efi": "linuxkit/mkimage-qcow2-efi:787b54906e14a56b9f1da35dcc8e46bd58435285",
|
||||
"vhd": "linuxkit/mkimage-vhd:3820219e5c350fe8ab2ec6a217272ae82f4b9242",
|
||||
"dynamic-vhd": "linuxkit/mkimage-dynamic-vhd:743ac9959fe6d3912ebd78b4fd490b117c53f1a6",
|
||||
"vmdk": "linuxkit/mkimage-vmdk:cee81a3ed9c44ae446ef7ebff8c42c1e77b3e1b5",
|
||||
"rpi3": "linuxkit/mkimage-rpi3:0f23c4f37cdca99281ca33ac6188e1942fa7a2b8",
|
||||
}
|
||||
)
|
||||
|
||||
// UpdateOutputImages overwrite the docker images used to build the outputs
|
||||
// 'update' is a map where the key is the output format and the value is a LinuxKit 'mkimage' image.
|
||||
func UpdateOutputImages(update map[string]string) error {
|
||||
for k, img := range update {
|
||||
if _, ok := outputImages[k]; !ok {
|
||||
return fmt.Errorf("Image format %s is not known", k)
|
||||
}
|
||||
outputImages[k] = img
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
var outFuns = map[string]func(string, io.Reader, int) error{
|
||||
"kernel+initrd": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, ucode, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputKernelInitrd(base, kernel, initrd, cmdline, ucode)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing kernel+initrd output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"tar-kernel-initrd": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, ucode, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
if err := outputKernelInitrdTarball(base, kernel, initrd, cmdline, ucode); err != nil {
|
||||
return fmt.Errorf("Error writing kernel+initrd tarball output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"iso-bios": func(base string, image io.Reader, size int) error {
|
||||
err := outputIso(outputImages["iso-bios"], base+".iso", image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing iso-bios output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"iso-efi": func(base string, image io.Reader, size int) error {
|
||||
err := outputIso(outputImages["iso-efi"], base+"-efi.iso", image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing iso-efi output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"raw-bios": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
// TODO: Handle ucode
|
||||
err = outputImg(outputImages["raw-bios"], base+"-bios.img", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing raw-bios output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"raw-efi": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["raw-efi"], base+"-efi.img", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing raw-efi output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"kernel+squashfs": func(base string, image io.Reader, size int) error {
|
||||
err := outputKernelSquashFS(outputImages["squashfs"], base, image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing kernel+squashfs output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"aws": func(base string, image io.Reader, size int) error {
|
||||
filename := base + ".raw"
|
||||
log.Infof(" %s", filename)
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputLinuxKit("raw", filename, kernel, initrd, cmdline, size)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing raw output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"gcp": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["gcp"], base+".img.tar.gz", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing gcp output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"qcow2-efi": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["qcow2-efi"], base+"-efi.qcow2", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing qcow2 EFI output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"qcow2-bios": func(base string, image io.Reader, size int) error {
|
||||
filename := base + ".qcow2"
|
||||
log.Infof(" %s", filename)
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
// TODO: Handle ucode
|
||||
err = outputLinuxKit("qcow2", filename, kernel, initrd, cmdline, size)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing qcow2 output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"vhd": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["vhd"], base+".vhd", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing vhd output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"dynamic-vhd": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["dynamic-vhd"], base+".vhd", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing vhd output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"vmdk": func(base string, image io.Reader, size int) error {
|
||||
kernel, initrd, cmdline, _, err := tarToInitrd(image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error converting to initrd: %v", err)
|
||||
}
|
||||
err = outputImg(outputImages["vmdk"], base+".vmdk", kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing vmdk output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
"rpi3": func(base string, image io.Reader, size int) error {
|
||||
if runtime.GOARCH != "arm64" {
|
||||
return fmt.Errorf("Raspberry Pi output currently only supported on arm64")
|
||||
}
|
||||
err := outputRPi3(outputImages["rpi3"], base+".tar", image)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error writing rpi3 output: %v", err)
|
||||
}
|
||||
return nil
|
||||
},
|
||||
}
|
||||
|
||||
var prereq = map[string]string{
|
||||
"aws": "mkimage",
|
||||
"qcow2-bios": "mkimage",
|
||||
}
|
||||
|
||||
func ensurePrereq(out string) error {
|
||||
var err error
|
||||
p := prereq[out]
|
||||
if p != "" {
|
||||
err = ensureLinuxkitImage(p)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// ValidateFormats checks if the format type is known
|
||||
func ValidateFormats(formats []string) error {
|
||||
log.Debugf("validating output: %v", formats)
|
||||
|
||||
for _, o := range formats {
|
||||
f := outFuns[o]
|
||||
if f == nil {
|
||||
return fmt.Errorf("Unknown format type %s", o)
|
||||
}
|
||||
err := ensurePrereq(o)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Failed to set up format type %s: %v", o, err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Formats generates all the specified output formats
|
||||
func Formats(base string, image string, formats []string, size int) error {
|
||||
log.Debugf("format: %v %s", formats, base)
|
||||
|
||||
err := ValidateFormats(formats)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, o := range formats {
|
||||
ir, err := os.Open(image)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer ir.Close()
|
||||
f := outFuns[o]
|
||||
if err := f(base, ir, size); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func tarToInitrd(r io.Reader) ([]byte, []byte, string, []byte, error) {
|
||||
w := new(bytes.Buffer)
|
||||
iw := initrd.NewWriter(w)
|
||||
tr := tar.NewReader(r)
|
||||
kernel, cmdline, ucode, err := initrd.CopySplitTar(iw, tr)
|
||||
if err != nil {
|
||||
return []byte{}, []byte{}, "", []byte{}, err
|
||||
}
|
||||
iw.Close()
|
||||
return kernel, w.Bytes(), cmdline, ucode, nil
|
||||
}
|
||||
|
||||
func tarInitrdKernel(kernel, initrd []byte, cmdline string) (*bytes.Buffer, error) {
|
||||
buf := new(bytes.Buffer)
|
||||
tw := tar.NewWriter(buf)
|
||||
hdr := &tar.Header{
|
||||
Name: "kernel",
|
||||
Mode: 0600,
|
||||
Size: int64(len(kernel)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
err := tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
_, err = tw.Write(kernel)
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
hdr = &tar.Header{
|
||||
Name: "initrd.img",
|
||||
Mode: 0600,
|
||||
Size: int64(len(initrd)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
err = tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
_, err = tw.Write(initrd)
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
hdr = &tar.Header{
|
||||
Name: "cmdline",
|
||||
Mode: 0600,
|
||||
Size: int64(len(cmdline)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
err = tw.WriteHeader(hdr)
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
_, err = tw.Write([]byte(cmdline))
|
||||
if err != nil {
|
||||
return buf, err
|
||||
}
|
||||
return buf, tw.Close()
|
||||
}
|
||||
|
||||
func outputImg(image, filename string, kernel []byte, initrd []byte, cmdline string) error {
|
||||
log.Debugf("output img: %s %s", image, filename)
|
||||
log.Infof(" %s", filename)
|
||||
buf, err := tarInitrdKernel(kernel, initrd, cmdline)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
output, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer output.Close()
|
||||
return dockerRun(buf, output, true, image, cmdline)
|
||||
}
|
||||
|
||||
func outputIso(image, filename string, filesystem io.Reader) error {
|
||||
log.Debugf("output ISO: %s %s", image, filename)
|
||||
log.Infof(" %s", filename)
|
||||
output, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer output.Close()
|
||||
return dockerRun(filesystem, output, true, image)
|
||||
}
|
||||
|
||||
func outputRPi3(image, filename string, filesystem io.Reader) error {
|
||||
log.Debugf("output RPi3: %s %s", image, filename)
|
||||
log.Infof(" %s", filename)
|
||||
output, err := os.Create(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer output.Close()
|
||||
return dockerRun(filesystem, output, true, image)
|
||||
}
|
||||
|
||||
func outputKernelInitrd(base string, kernel []byte, initrd []byte, cmdline string, ucode []byte) error {
|
||||
log.Debugf("output kernel/initrd: %s %s", base, cmdline)
|
||||
|
||||
if len(ucode) != 0 {
|
||||
log.Infof(" %s ucode+%s %s", base+"-kernel", base+"-initrd.img", base+"-cmdline")
|
||||
if err := ioutil.WriteFile(base+"-initrd.img", ucode, os.FileMode(0644)); err != nil {
|
||||
return err
|
||||
}
|
||||
f, err := os.OpenFile(base+"-initrd.img", os.O_APPEND|os.O_WRONLY, 0644)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
if _, err = f.Write(initrd); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
log.Infof(" %s %s %s", base+"-kernel", base+"-initrd.img", base+"-cmdline")
|
||||
if err := ioutil.WriteFile(base+"-initrd.img", initrd, os.FileMode(0644)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := ioutil.WriteFile(base+"-kernel", kernel, os.FileMode(0644)); err != nil {
|
||||
return err
|
||||
}
|
||||
return ioutil.WriteFile(base+"-cmdline", []byte(cmdline), os.FileMode(0644))
|
||||
}
|
||||
|
||||
func outputKernelInitrdTarball(base string, kernel []byte, initrd []byte, cmdline string, ucode []byte) error {
|
||||
log.Debugf("output kernel/initrd tarball: %s %s", base, cmdline)
|
||||
log.Infof(" %s", base+"-initrd.tar")
|
||||
f, err := os.Create(base + "-initrd.tar")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
tw := tar.NewWriter(f)
|
||||
hdr := &tar.Header{
|
||||
Name: "kernel",
|
||||
Mode: 0644,
|
||||
Size: int64(len(kernel)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tw.Write(kernel); err != nil {
|
||||
return err
|
||||
}
|
||||
hdr = &tar.Header{
|
||||
Name: "initrd.img",
|
||||
Mode: 0644,
|
||||
Size: int64(len(initrd)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tw.Write(initrd); err != nil {
|
||||
return err
|
||||
}
|
||||
hdr = &tar.Header{
|
||||
Name: "cmdline",
|
||||
Mode: 0644,
|
||||
Size: int64(len(cmdline)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tw.Write([]byte(cmdline)); err != nil {
|
||||
return err
|
||||
}
|
||||
if len(ucode) != 0 {
|
||||
hdr := &tar.Header{
|
||||
Name: "ucode.cpio",
|
||||
Mode: 0644,
|
||||
Size: int64(len(ucode)),
|
||||
Format: tar.FormatPAX,
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tw.Write(ucode); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return tw.Close()
|
||||
}
|
||||
|
||||
func outputKernelSquashFS(image, base string, filesystem io.Reader) error {
|
||||
log.Debugf("output kernel/squashfs: %s %s", image, base)
|
||||
log.Infof(" %s-squashfs.img", base)
|
||||
|
||||
tr := tar.NewReader(filesystem)
|
||||
buf := new(bytes.Buffer)
|
||||
rootfs := tar.NewWriter(buf)
|
||||
|
||||
for {
|
||||
var thdr *tar.Header
|
||||
thdr, err := tr.Next()
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
thdr.Format = tar.FormatPAX
|
||||
switch {
|
||||
case thdr.Name == "boot/kernel":
|
||||
kernel, err := ioutil.ReadAll(tr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := ioutil.WriteFile(base+"-kernel", kernel, os.FileMode(0644)); err != nil {
|
||||
return err
|
||||
}
|
||||
case thdr.Name == "boot/cmdline":
|
||||
cmdline, err := ioutil.ReadAll(tr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := ioutil.WriteFile(base+"-cmdline", cmdline, os.FileMode(0644)); err != nil {
|
||||
return err
|
||||
}
|
||||
case strings.HasPrefix(thdr.Name, "boot/"):
|
||||
// skip the rest of boot/
|
||||
default:
|
||||
rootfs.WriteHeader(thdr)
|
||||
if _, err := io.Copy(rootfs, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
rootfs.Close()
|
||||
|
||||
output, err := os.Create(base + "-squashfs.img")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer output.Close()
|
||||
|
||||
return dockerRun(buf, output, true, image)
|
||||
}
|
313
src/moby/schema.go
Normal file
@ -0,0 +1,313 @@
|
||||
package moby
|
||||
|
||||
var schema = string(`
|
||||
{
|
||||
"$schema": "http://json-schema.org/draft-04/schema#",
|
||||
"title": "Moby Config",
|
||||
"additionalProperties": false,
|
||||
"definitions": {
|
||||
"kernel": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"image": {"type": "string"},
|
||||
"cmdline": {"type": "string"},
|
||||
"binary": {"type": "string"},
|
||||
"tar": {"type": "string"},
|
||||
"ucode": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"file": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"path": {"type": "string"},
|
||||
"directory": {"type": "boolean"},
|
||||
"symlink": {"type": "string"},
|
||||
"contents": {"type": "string"},
|
||||
"source": {"type": "string"},
|
||||
"metadata": {"type": "string"},
|
||||
"optional": {"type": "boolean"},
|
||||
"mode": {"type": "string"},
|
||||
"uid": {"anyOf": [{"type": "string"}, {"type": "integer"}]},
|
||||
"gid": {"anyOf": [{"type": "string"}, {"type": "integer"}]}
|
||||
}
|
||||
},
|
||||
"files": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/file" }
|
||||
},
|
||||
"trust": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"image": { "$ref": "#/definitions/strings" },
|
||||
"org": { "$ref": "#/definitions/strings" }
|
||||
}
|
||||
},
|
||||
"strings": {
|
||||
"type": "array",
|
||||
"items": {"type": "string"}
|
||||
},
|
||||
"mapstring": {
|
||||
"type": "object",
|
||||
"additionalProperties": {"type": "string"}
|
||||
},
|
||||
"mount": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"destination": { "type": "string" },
|
||||
"type": { "type": "string" },
|
||||
"source": { "type": "string" },
|
||||
"options": { "$ref": "#/definitions/strings" }
|
||||
}
|
||||
},
|
||||
"mounts": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/mount" }
|
||||
},
|
||||
"idmapping": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"hostID": { "type": "integer" },
|
||||
"containerID": { "type": "integer" },
|
||||
"size": { "type": "integer" }
|
||||
}
|
||||
},
|
||||
"idmappings": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/idmapping" }
|
||||
},
|
||||
"devicecgroups": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/devicecgroup" }
|
||||
},
|
||||
"devicecgroup": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"allow": {"type": "boolean"},
|
||||
"type": {"type": "string"},
|
||||
"major": {"type": "integer"},
|
||||
"minor": {"type": "integer"},
|
||||
"access": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"memory": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"limit": {"type": "integer"},
|
||||
"reservation": {"type": "integer"},
|
||||
"swap": {"type": "integer"},
|
||||
"kernel": {"type": "integer"},
|
||||
"kernelTCP": {"type": "integer"},
|
||||
"swappiness": {"type": "integer"},
|
||||
"disableOOMKiller": {"type": "boolean"}
|
||||
}
|
||||
},
|
||||
"cpu": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"shares": {"type": "integer"},
|
||||
"quota": {"type": "integer"},
|
||||
"period": {"type": "integer"},
|
||||
"realtimeRuntime": {"type": "integer"},
|
||||
"realtimePeriod": {"type": "integer"},
|
||||
"cpus": {"type": "string"},
|
||||
"mems": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"pids": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"limit": {"type": "integer"}
|
||||
}
|
||||
},
|
||||
"weightdevices": {
|
||||
"type": "array",
|
||||
"items": {"$ref": "#/definitions/weightdevice"}
|
||||
},
|
||||
"weightdevice": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"major": {"type": "integer"},
|
||||
"minor": {"type": "integer"},
|
||||
"weight": {"type": "integer"},
|
||||
"leafWeight": {"type": "integer"}
|
||||
}
|
||||
},
|
||||
"throttledevices": {
|
||||
"type": "array",
|
||||
"items": {"$ref": "#/definitions/throttledevice"}
|
||||
},
|
||||
"throttledevice": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"major": {"type": "integer"},
|
||||
"minor": {"type": "integer"},
|
||||
"rate": {"type": "integer"}
|
||||
}
|
||||
},
|
||||
"blockio": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"weight": {"type": "integer"},
|
||||
"leafWeight": {"type": "integer"},
|
||||
"weightDevice": {"$ref": "#/definitions/weightdevices"},
|
||||
"throttleReadBpsDevice": {"$ref": "#/definitions/throttledevices"},
|
||||
"throttleWriteBpsDevice": {"$ref": "#/definitions/throttledevices"},
|
||||
"throttleReadIOPSDevice": {"$ref": "#/definitions/throttledevices"},
|
||||
"throttleWriteIOPSDevice": {"$ref": "#/definitions/throttledevices"}
|
||||
}
|
||||
},
|
||||
"hugepagelimits": {
|
||||
"type": "array",
|
||||
"items": {"$ref": "#/definitions/hugepagelimit"}
|
||||
},
|
||||
"hugepagelimit": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"pageSize": {"type": "integer"},
|
||||
"limit": {"type": "integer"}
|
||||
}
|
||||
},
|
||||
"interfacepriorities": {
|
||||
"type": "array",
|
||||
"items": {"$ref": "#/definitions/interfacepriority"}
|
||||
},
|
||||
"interfacepriority": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"priority": {"type": "integer"}
|
||||
}
|
||||
},
|
||||
"network": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"classID": {"type": "integer"},
|
||||
"priorities": {"$ref": "#/definitions/interfacepriorities"}
|
||||
}
|
||||
},
|
||||
"resources": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"devices": {"$ref": "#/definitions/devicecgroups"},
|
||||
"memory": {"$ref": "#/definitions/memory"},
|
||||
"cpu": {"$ref": "#/definitions/cpu"},
|
||||
"pids": {"$ref": "#/definitions/pids"},
|
||||
"blockio": {"$ref": "#/definitions/blockio"},
|
||||
"hugepageLimits": {"$ref": "#/definitions/hugepagelimits"},
|
||||
"network": {"$ref": "#/definitions/network"}
|
||||
}
|
||||
},
|
||||
"interfaces": {
|
||||
"type": "array",
|
||||
"items": {"$ref": "#/definitions/interface"}
|
||||
},
|
||||
"interface": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"add": {"type": "string"},
|
||||
"peer": {"type": "string"},
|
||||
"createInRoot": {"type": "boolean"}
|
||||
}
|
||||
},
|
||||
"namespaces": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"cgroup": {"type": "string"},
|
||||
"ipc": {"type": "string"},
|
||||
"mnt": {"type": "string"},
|
||||
"net": {"type": "string"},
|
||||
"pid": {"type": "string"},
|
||||
"user": {"type": "string"},
|
||||
"uts": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"runtime": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"cgroups": {"$ref": "#/definitions/strings"},
|
||||
"mounts": {"$ref": "#/definitions/mounts"},
|
||||
"mkdir": {"$ref": "#/definitions/strings"},
|
||||
"interfaces": {"$ref": "#/definitions/interfaces"},
|
||||
"bindNS": {"$ref": "#/definitions/namespaces"},
|
||||
"namespace": {"type": "string"}
|
||||
}
|
||||
},
|
||||
"image": {
|
||||
"type": "object",
|
||||
"additionalProperties": false,
|
||||
"required": ["name", "image"],
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"image": {"type": "string"},
|
||||
"capabilities": { "$ref": "#/definitions/strings" },
|
||||
"ambient": { "$ref": "#/definitions/strings" },
|
||||
"mounts": { "$ref": "#/definitions/mounts" },
|
||||
"binds": { "$ref": "#/definitions/strings" },
|
||||
"tmpfs": { "$ref": "#/definitions/strings" },
|
||||
"command": { "$ref": "#/definitions/strings" },
|
||||
"env": { "$ref": "#/definitions/strings" },
|
||||
"cwd": { "type": "string"},
|
||||
"net": { "type": "string"},
|
||||
"pid": { "type": "string"},
|
||||
"ipc": { "type": "string"},
|
||||
"uts": { "type": "string"},
|
||||
"userns": { "type": "string"},
|
||||
"readonly": { "type": "boolean"},
|
||||
"maskedPaths": { "$ref": "#/definitions/strings" },
|
||||
"readonlyPaths": { "$ref": "#/definitions/strings" },
|
||||
"uid": {"anyOf": [{"type": "string"}, {"type": "integer"}]},
|
||||
"gid": {"anyOf": [{"type": "string"}, {"type": "integer"}]},
|
||||
"additionalGids": {
|
||||
"type": "array",
|
||||
"items": {"anyOf": [{"type": "string"}, {"type": "integer"}]}
|
||||
},
|
||||
"noNewPrivileges": {"type": "boolean"},
|
||||
"hostname": {"type": "string"},
|
||||
"oomScoreAdj": {"type": "integer"},
|
||||
"rootfsPropagation": {"type": "string"},
|
||||
"cgroupsPath": {"type": "string"},
|
||||
"resources": {"$ref": "#/definitions/resources"},
|
||||
"sysctl": { "$ref": "#/definitions/mapstring" },
|
||||
"rlimits": { "$ref": "#/definitions/strings" },
|
||||
"uidMappings": { "$ref": "#/definitions/idmappings" },
|
||||
"gidMappings": { "$ref": "#/definitions/idmappings" },
|
||||
"annotations": { "$ref": "#/definitions/mapstring" },
|
||||
"runtime": {"$ref": "#/definitions/runtime"}
|
||||
}
|
||||
},
|
||||
"images": {
|
||||
"type": "array",
|
||||
"items": { "$ref": "#/definitions/image" }
|
||||
}
|
||||
},
|
||||
"properties": {
|
||||
"kernel": { "$ref": "#/definitions/kernel" },
|
||||
"init": { "$ref": "#/definitions/strings" },
|
||||
"onboot": { "$ref": "#/definitions/images" },
|
||||
"onshutdown": { "$ref": "#/definitions/images" },
|
||||
"services": { "$ref": "#/definitions/images" },
|
||||
"trust": { "$ref": "#/definitions/trust" },
|
||||
"files": { "$ref": "#/definitions/files" }
|
||||
}
|
||||
}
|
||||
`)
|
207
src/moby/trust.go
Normal file
@ -0,0 +1,207 @@
|
||||
package moby
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/docker/distribution/reference"
|
||||
"github.com/docker/distribution/registry/client/auth"
|
||||
"github.com/docker/distribution/registry/client/auth/challenge"
|
||||
"github.com/docker/distribution/registry/client/transport"
|
||||
"github.com/opencontainers/go-digest"
|
||||
log "github.com/sirupsen/logrus"
|
||||
notaryClient "github.com/theupdateframework/notary/client"
|
||||
"github.com/theupdateframework/notary/trustpinning"
|
||||
"github.com/theupdateframework/notary/tuf/data"
|
||||
)
|
||||
|
||||
var (
|
||||
// ReleasesRole is the role named "releases"
|
||||
ReleasesRole = data.RoleName(path.Join(data.CanonicalTargetsRole.String(), "releases"))
|
||||
)
|
||||
|
||||
// TrustedReference parses an image string, and does a notary lookup to verify and retrieve the signed digest reference
|
||||
func TrustedReference(image string) (reference.Reference, error) {
|
||||
ref, err := reference.ParseAnyReference(image)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// to mimic docker pull: if we have a digest already, it's implicitly trusted
|
||||
if digestRef, ok := ref.(reference.Digested); ok {
|
||||
return digestRef, nil
|
||||
}
|
||||
// to mimic docker pull: if we have a digest already, it's implicitly trusted
|
||||
if canonicalRef, ok := ref.(reference.Canonical); ok {
|
||||
return canonicalRef, nil
|
||||
}
|
||||
|
||||
namedRef, ok := ref.(reference.Named)
|
||||
if !ok {
|
||||
return nil, errors.New("failed to resolve image digest using content trust: reference is not named")
|
||||
}
|
||||
taggedRef, ok := namedRef.(reference.NamedTagged)
|
||||
if !ok {
|
||||
return nil, errors.New("failed to resolve image digest using content trust: reference is not tagged")
|
||||
}
|
||||
|
||||
gun := taggedRef.Name()
|
||||
targetName := taggedRef.Tag()
|
||||
server, err := getTrustServer(gun)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
rt, err := GetReadOnlyAuthTransport(server, []string{gun}, "", "", "")
|
||||
if err != nil {
|
||||
log.Debugf("failed to reach %s notary server for repo: %s, falling back to cache: %v", server, gun, err)
|
||||
rt = nil
|
||||
}
|
||||
|
||||
nRepo, err := notaryClient.NewFileCachedRepository(
|
||||
trustDirectory(),
|
||||
data.GUN(gun),
|
||||
server,
|
||||
rt,
|
||||
nil,
|
||||
trustpinning.TrustPinConfig{},
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
target, err := nRepo.GetTargetByName(targetName, ReleasesRole, data.CanonicalTargetsRole)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Only get the tag if it's in the top level targets role or the releases delegation role
|
||||
// ignore it if it's in any other delegation roles
|
||||
if target.Role != ReleasesRole && target.Role != data.CanonicalTargetsRole {
|
||||
return nil, errors.New("not signed in valid role")
|
||||
}
|
||||
|
||||
h, ok := target.Hashes["sha256"]
|
||||
if !ok {
|
||||
return nil, errors.New("no valid hash, expecting sha256")
|
||||
}
|
||||
|
||||
dgst := digest.NewDigestFromHex("sha256", hex.EncodeToString(h))
|
||||
|
||||
// Allow returning canonical reference with tag and digest
|
||||
return reference.WithDigest(taggedRef, dgst)
|
||||
}
|
||||
|
||||
func getTrustServer(gun string) (string, error) {
|
||||
if strings.HasPrefix(gun, "docker.io/") {
|
||||
return "https://notary.docker.io", nil
|
||||
}
|
||||
return "", errors.New("non-hub images not yet supported")
|
||||
}
|
||||
|
||||
func trustDirectory() string {
|
||||
return filepath.Join(MobyDir, "trust")
|
||||
}
|
||||
|
||||
type credentialStore struct {
|
||||
username string
|
||||
password string
|
||||
refreshTokens map[string]string
|
||||
}
|
||||
|
||||
func (tcs *credentialStore) Basic(url *url.URL) (string, string) {
|
||||
return tcs.username, tcs.password
|
||||
}
|
||||
|
||||
// refresh tokens are the long lived tokens that can be used instead of a password
|
||||
func (tcs *credentialStore) RefreshToken(u *url.URL, service string) string {
|
||||
return tcs.refreshTokens[service]
|
||||
}
|
||||
|
||||
func (tcs *credentialStore) SetRefreshToken(u *url.URL, service string, token string) {
|
||||
if tcs.refreshTokens != nil {
|
||||
tcs.refreshTokens[service] = token
|
||||
}
|
||||
}
|
||||
|
||||
// GetReadOnlyAuthTransport gets the Auth Transport used to communicate with notary
|
||||
func GetReadOnlyAuthTransport(server string, scopes []string, username, password, rootCAPath string) (http.RoundTripper, error) {
|
||||
httpsTransport, err := httpsTransport(rootCAPath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("GET", fmt.Sprintf("%s/v2/", server), nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pingClient := &http.Client{
|
||||
Transport: httpsTransport,
|
||||
Timeout: 5 * time.Second,
|
||||
}
|
||||
resp, err := pingClient.Do(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
challengeManager := challenge.NewSimpleManager()
|
||||
if err := challengeManager.AddResponse(resp); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
creds := credentialStore{
|
||||
username: username,
|
||||
password: password,
|
||||
refreshTokens: make(map[string]string),
|
||||
}
|
||||
|
||||
var scopeObjs []auth.Scope
|
||||
for _, scopeName := range scopes {
|
||||
scopeObjs = append(scopeObjs, auth.RepositoryScope{
|
||||
Repository: scopeName,
|
||||
Actions: []string{"pull"},
|
||||
})
|
||||
}
|
||||
|
||||
// allow setting multiple scopes so we don't have to reauth
|
||||
tokenHandler := auth.NewTokenHandlerWithOptions(auth.TokenHandlerOptions{
|
||||
Transport: httpsTransport,
|
||||
Credentials: &creds,
|
||||
Scopes: scopeObjs,
|
||||
})
|
||||
|
||||
authedTransport := transport.NewTransport(httpsTransport, auth.NewAuthorizer(challengeManager, tokenHandler))
|
||||
return authedTransport, nil
|
||||
}
|
||||
|
||||
func httpsTransport(caFile string) (*http.Transport, error) {
|
||||
tlsConfig := &tls.Config{}
|
||||
transport := http.Transport{
|
||||
Proxy: http.ProxyFromEnvironment,
|
||||
Dial: (&net.Dialer{
|
||||
Timeout: 30 * time.Second,
|
||||
KeepAlive: 30 * time.Second,
|
||||
}).Dial,
|
||||
TLSHandshakeTimeout: 10 * time.Second,
|
||||
TLSClientConfig: tlsConfig,
|
||||
}
|
||||
// Override with the system cert pool if the caFile was empty
|
||||
if caFile != "" {
|
||||
certPool := x509.NewCertPool()
|
||||
pems, err := ioutil.ReadFile(caFile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
certPool.AppendCertsFromPEM(pems)
|
||||
transport.TLSClientConfig.RootCAs = certPool
|
||||
}
|
||||
return &transport, nil
|
||||
}
|
58
src/moby/trust_test.go
Normal file
@ -0,0 +1,58 @@
package moby

import "testing"

func TestEnforceContentTrust(t *testing.T) {
	type enforceContentTrustCase struct {
		result      bool
		imageName   string
		trustConfig *TrustConfig
	}
	testCases := []enforceContentTrustCase{
		// Simple positive and negative cases for Image subkey
		{true, "image", &TrustConfig{Image: []string{"image"}}},
		{true, "image", &TrustConfig{Image: []string{"more", "than", "one", "image"}}},
		{true, "image", &TrustConfig{Image: []string{"more", "than", "one", "image"}, Org: []string{"random", "orgs"}}},
		{false, "image", &TrustConfig{}},
		{false, "image", &TrustConfig{Image: []string{"not", "in", "here!"}}},
		{false, "image", &TrustConfig{Image: []string{"not", "in", "here!"}, Org: []string{""}}},

		// Tests for Image subkey with tags
		{true, "image:tag", &TrustConfig{Image: []string{"image:tag"}}},
		{true, "image:tag", &TrustConfig{Image: []string{"image"}}},
		{false, "image:tag", &TrustConfig{Image: []string{"image:otherTag"}}},
		{false, "image:tag", &TrustConfig{Image: []string{"image@sha256:abc123"}}},

		// Tests for Image subkey with digests
		{true, "image@sha256:abc123", &TrustConfig{Image: []string{"image@sha256:abc123"}}},
		{true, "image@sha256:abc123", &TrustConfig{Image: []string{"image"}}},
		{false, "image@sha256:abc123", &TrustConfig{Image: []string{"image:Tag"}}},
		{false, "image@sha256:abc123", &TrustConfig{Image: []string{"image@sha256:def456"}}},

		// Tests for Image subkey with digests
		{true, "image@sha256:abc123", &TrustConfig{Image: []string{"image@sha256:abc123"}}},
		{true, "image@sha256:abc123", &TrustConfig{Image: []string{"image"}}},
		{false, "image@sha256:abc123", &TrustConfig{Image: []string{"image:Tag"}}},
		{false, "image@sha256:abc123", &TrustConfig{Image: []string{"image@sha256:def456"}}},

		// Tests for Org subkey
		{true, "linuxkit/image", &TrustConfig{Image: []string{"notImage"}, Org: []string{"linuxkit"}}},
		{true, "linuxkit/differentImage", &TrustConfig{Image: []string{}, Org: []string{"linuxkit"}}},
		{true, "linuxkit/differentImage:tag", &TrustConfig{Image: []string{}, Org: []string{"linuxkit"}}},
		{true, "linuxkit/differentImage@sha256:abc123", &TrustConfig{Image: []string{}, Org: []string{"linuxkit"}}},
		{false, "linuxkit/differentImage", &TrustConfig{Image: []string{}, Org: []string{"notlinuxkit"}}},
		{false, "linuxkit/differentImage:tag", &TrustConfig{Image: []string{}, Org: []string{"notlinuxkit"}}},
		{false, "linuxkit/differentImage@sha256:abc123", &TrustConfig{Image: []string{}, Org: []string{"notlinuxkit"}}},

		// Tests for Org with library organization
		{true, "nginx", &TrustConfig{Image: []string{}, Org: []string{"library"}}},
		{true, "nginx:alpine", &TrustConfig{Image: []string{}, Org: []string{"library"}}},
		{true, "library/nginx:alpine", &TrustConfig{Image: []string{}, Org: []string{"library"}}},
		{false, "nginx", &TrustConfig{Image: []string{}, Org: []string{"notLibrary"}}},
	}
	for _, testCase := range testCases {
		if enforceContentTrust(testCase.imageName, testCase.trustConfig) != testCase.result {
			t.Errorf("incorrect trust enforcement result for %s against configuration %v, expected: %v", testCase.imageName, testCase.trustConfig, testCase.result)
		}
	}
}
16
src/moby/util.go
Normal file
@ -0,0 +1,16 @@
package moby

import (
	"path/filepath"
)

var (
	// MobyDir is the location of the cache directory, defaults to ~/.moby
	MobyDir string
)

func defaultMobyConfigDir() string {
	mobyDefaultDir := ".moby"
	home := homeDir()
	return filepath.Join(home, mobyDefaultDir)
}
11
src/moby/util_unix.go
Normal file
@ -0,0 +1,11 @@
// +build !windows

package moby

import (
	"os"
)

func homeDir() string {
	return os.Getenv("HOME")
}
9
src/moby/util_windows.go
Normal file
@ -0,0 +1,9 @@
package moby

import (
	"os"
)

func homeDir() string {
	return os.Getenv("USERPROFILE")
}
46
src/pad4/pad4.go
Normal file
@ -0,0 +1,46 @@
package pad4

import (
	"bytes"
	"io"
)

// A Writer is an io.WriteCloser. Writes are padded with zeros to 4 byte boundary
type Writer struct {
	w     io.Writer
	count int
}

// Write writes output, using a pointer receiver so the byte count persists for Close
func (pad *Writer) Write(p []byte) (int, error) {
	n, err := pad.w.Write(p)
	if err != nil {
		return 0, err
	}
	pad.count += n
	return n, nil
}

// Close adds the padding
func (pad *Writer) Close() error {
	mod4 := pad.count & 3
	if mod4 == 0 {
		return nil
	}
	zero := make([]byte, 4-mod4)
	buf := bytes.NewBuffer(zero)
	n, err := io.Copy(pad.w, buf)
	if err != nil {
		return err
	}
	pad.count += int(n)
	return nil
}

// NewWriter provides a new io.WriteCloser that zero pads the
// output to a multiple of four bytes
func NewWriter(w io.Writer) *Writer {
	pad := new(Writer)
	pad.w = w
	return pad
}