LoadConfigFiles() was only called inside the container-inspect block,
so filesToLoadIntoContainer was never populated when no builder
container existed yet. The subsequent copyFilesToContainer() call
received a nil map, sending an empty tar archive and leaving
/etc/buildkit/ empty inside the newly created container.
Move the LoadConfigFiles() call before the inspect check so the config
and certificate data is always available when creating a fresh builder.
Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Paul Gaiduk <paulg@zededa.com>
Introduce environment variables for key CI/CD flags so that self-hosted
runners (e.g. GitHub Actions) can configure registry mirrors and push
targets without modifying calling Makefiles:
- LINUXKIT_MIRROR - equivalent to --mirror (space/comma-separated);
CLI flags take precedence (last SetProxy wins)
- LINUXKIT_PKG_ORG - equivalent to --org for all pkg subcommands
- LINUXKIT_BUILDER_IMAGE - equivalent to --builder-image
- LINUXKIT_BUILDER_CONFIG - equivalent to --builder-config
All env var constants are consolidated in pkg_build.go alongside the
existing LINUXKIT_CACHE, LINUXKIT_BUILDER_NAME, LINUXKIT_BUILDERS.
Priority for all: CLI flag > env var > built-in default
Adds a new Environment Variables section to docs/packages.md with a
reference table covering all LINUXKIT_* vars and a note explaining the
two-layer mirror configuration required in CI (linuxkit pulls vs
buildkit Dockerfile pulls).
Signed-off-by: Roman Shaposhnik <rucoder@gmail.com>
Signed-off-by: Mikhail Malyshev <mike.malyshev@gmail.com>
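The `LINUXKIT_MIRROR` splitting described above can be sketched in plain shell (values are illustrative; linuxkit's actual Go parsing is not shown):

```shell
# Hypothetical mirror list; LINUXKIT_MIRROR accepts space- or comma-separated entries.
LINUXKIT_MIRROR="https://mirror1.example.com,https://mirror2.example.com"
# Normalize commas to spaces, then iterate over the resulting word list.
for m in $(printf '%s' "$LINUXKIT_MIRROR" | tr ',' ' '); do
  echo "$m"
done
```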
The moby/buildkit image declares VOLUME /var/lib/buildkit, which causes
Docker to create an anonymous volume when no explicit mount is given.
These anonymous volumes are orphaned every time the builder container is
recreated (--builder-restart, config change, privilege fix), leaking
disk space.
Switch to a named volume (<builder-name>-state) that is explicitly
mounted on container creation. This:
- Preserves build cache across container restarts, config changes, and
privilege fixes, making rebuilds faster.
- Eliminates anonymous volume leaks.
- Removes the state volume when the builder image version changes, since
buildkit state compatibility across versions is not guaranteed.
Signed-off-by: Mikhail Malyshev <mike.malyshev@gmail.com>
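The naming convention can be sketched as follows (a sketch of the documented `<builder-name>-state` scheme, using the default builder name):

```shell
builder_name="linuxkit-builder"        # default builder container name
state_volume="${builder_name}-state"   # named volume mounted at /var/lib/buildkit
echo "$state_volume"
```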
On shared servers where multiple users build packages against the same
Docker daemon, all users fight over a single hardcoded builder container
named "linuxkit-builder". One user's build can destroy another's
in-flight build when builder lifecycle management detects mismatches.
Make the builder container name configurable:
1. --builder-name CLI flag (highest priority)
2. LINUXKIT_BUILDER_NAME environment variable
3. "linuxkit-builder" default (original behavior, unchanged)
The flag is available on both "linuxkit pkg build" and
"linuxkit pkg builder" (du/prune) commands. Users on shared servers
can set LINUXKIT_BUILDER_NAME or pass --builder-name to get per-user
isolation (e.g. LINUXKIT_BUILDER_NAME=linuxkit-builder-$USER).
Signed-off-by: Mikhail Malyshev <mike.malyshev@gmail.com>
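The three-level precedence reads, in plain-shell terms (a sketch; `cli_builder_name` is an illustrative stand-in for the parsed `--builder-name` flag, not linuxkit's actual variable):

```shell
cli_builder_name=""          # empty means --builder-name was not passed
unset LINUXKIT_BUILDER_NAME  # assume the env var is unset for this sketch
# CLI flag > env var > built-in default:
builder="${cli_builder_name:-${LINUXKIT_BUILDER_NAME:-linuxkit-builder}}"
echo "$builder"
```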
Group the four builder-related fields (name, image, config path, restart)
that always travel together into a BuilderConfig struct. This simplifies:
- DockerRunner interface (Build() and Builder() lose 3 params each)
- buildOpts struct (4 fields -> 1)
- buildArch() function signature (3 fewer params)
- DiskUsage() / PruneBuilder() / getClientForPlatform() signatures
- 4 WithBuildBuilder*() option functions -> 1 WithBuildBuilderConfig()
Also rename the confusingly-named "builderName" local variables in
buildArch() and getClientForPlatform() to "dockerContext", which better
reflects their actual purpose (they hold a Docker context name, not the
builder container name).
No behavioral changes.
Signed-off-by: Mikhail Malyshev <mike.malyshev@gmail.com>
* separate kernel series hashing
Signed-off-by: Chris Irrgang <chris.irrgang@gmx.de>
* fix issues with the update component sha script
- add bsd/gnu cross compatibility for sed
- also replace in */test.sh files
- replace potentially problematic xargs
- remove potentially problematic word boundary \b
Signed-off-by: Chris Irrgang <chris.irrgang@gmx.de>
* Move common kernel files to dedicated folder
Signed-off-by: Chris Irrgang <chris.irrgang@gmx.de>
* run update-kernel-yamls
Signed-off-by: Chris Irrgang <chris.irrgang@gmx.de>
---------
Signed-off-by: Chris Irrgang <chris.irrgang@gmx.de>
Applied gofmt -s -w to fix formatting issues in pkglib/build.go that
were causing the make local-check target to fail during the gofmt step.
Signed-off-by: Paul Gaiduk <paulg@zededa.com>
The cache import command was not properly handling stdin input,
treating "-" as a filename, causing failures when piping data.
This commit fixes the stdin handling logic.
Signed-off-by: Paul Gaiduk <paulg@zededa.com>
This makes it possible for a user of this API to
build their own DryRunner.
Also make newDockerRunner public, to be consistent.
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
There is already a public method "WithBuildDocker",
so it makes sense for the parameter definition to be public as well,
so that a user of this method can actually use it.
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
* bump alpine to 3.22; include erofs-utils
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* tools/alpine: Update to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* tools: Update to the latest linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update use of tools to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update use of test packages to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* pkgs: Update packages to the latest linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update package tags
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* fix scaleway error
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* simplify sharding in package tests for CI; increase to 12 shards
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* for CI setup-go action, determine it based on go.mod file
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* centralize all writing of the index.json to one place
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* create filelock utility
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* leverage file locks for cache index.json
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
if `pkglib.NewFromConfig` is used in parallel, it calls
```
git -C /some/directory update-index -q --refresh
```
in parallel.
But `git` does not like this and exits with 128.
This can be easily tried with:
```
git -C /some/dir update-index -q --refresh & \
git -C /some/dir update-index -q --refresh
```
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
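One conventional way to avoid this race is to serialize the refresh behind a file lock; a sketch assuming util-linux `flock(1)` (the function name is illustrative and not the actual fix):

```shell
# Queue parallel callers on a per-repo lock file instead of letting git
# race on the index and exit with status 128.
refresh_index() {
  flock "$1/.git/linuxkit-refresh.lock" git -C "$1" update-index -q --refresh
}
```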
* bump buildkit to v0.23.1
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump buldkit library and deps to v0.23.1
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* containerd to semver v2.0.3
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* containerd v2.0.3 plus commits to fix blkdiscard
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update containerd-dev dependencies
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* updated pkg/init and pkg/containerd deps
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump containerd-dev to 2.0.2
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update pkg/init libs to containerd-20
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump linuxkit CLI containerd deps to 20
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update test/pkg/containerd to work with containerd v2.x tests
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update containerd-dev deps
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update pkg/init and pkg/containerd dependencies
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update test/pkg/containerd deps
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump buildkit version to 0.20.0
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update library dependency of buildkit to v0.20.0
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* include riscv64 in target architectures
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* add riscv64 to explicit packages
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* cadvisor update to v0.51.0 and support for riscv64
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update tools based on latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* updated example dependencies of tools
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump all test cases and example alpine:3.19 to alpine:3.21
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* add riscv64 kernels to kernel/Makefile and kernel/Dockerfile.*, riscv64 kernel config, bump alpine version for kernel builds
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update bcc to v0.32.0 to include needed fixes
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* bump kernel builder alpine base to version including llvm19
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* in kernel-bcc, automatically determine python path
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* in kernel-perf, suppress newer gcc errors
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* riscv path in kernel build was incorrect
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* remove bcc compilation from kernel
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update usages of kernel/6.6.13 to kernel/6.6.71
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* next run of updating kernel config
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update test dependencies on kernel hash version
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* tools/alpine: Update to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* tools: Update to the latest linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update use of tools to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* tests: Update packages to the latest linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update use of test packages to latest
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* pkgs: Update packages to the latest linuxkit/alpine
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* Update package tags
Signed-off-by: Avi Deitcher <avi@deitcher.net>
---------
Signed-off-by: Avi Deitcher <avi@deitcher.net>
In certain cases the container image is already in the local docker
registry, but with the wrong architecture; in this case just pretend
it is not there and let the caller decide if they want to build it
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
according to the documentation the following command is valid:
`linuxkit build equinixmetal.yml equinixmetal.arm64.yml`
(docs/platform-equinixmetal.md)
So, make it valid.
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
Update `ReferenceExpand` to support image references from remote
registries. This fixes local image lookup and pulling with newer
versions of Docker.
fixes #4045
Signed-off-by: Jameel Al-Aziz <jameel@bastion.io>
cgroups v2 has been out since 2015. Not having to set a kernel
parameter improves the user experience when services in a build
require it. Making this the default was discussed back in 2021.
Signed-off-by: Jacob Weinstock <jakobweinstock@gmail.com>
Before, a command like
linuxkit cache pull 127.0.0.1:5000/pkgalpine
would result in trying to pull the following image:
docker.io/127.0.0.1:5000/pkgalpine
which is wrong.
Signed-off-by: Christoph Ostarek <christoph@zededa.com>
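The underlying rule is docker-style short-name expansion: the part before the first `/` is treated as a registry host only if it looks like one. A plain-shell sketch (`expand_ref` is an illustrative stand-in, not linuxkit's actual `ReferenceExpand`):

```shell
expand_ref() {
  first="${1%%/*}"   # component before the first slash
  case "$first" in
    *.*|*:*|localhost) echo "$1" ;;          # registry host: leave as-is
    *) case "$1" in
         */*) echo "docker.io/$1" ;;         # org/name -> default registry
         *)   echo "docker.io/library/$1" ;; # bare name -> default org too
       esac ;;
  esac
}
```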
Arguably the long term fix is to introduce a check for links in the
documentation with tools like markdown-link-check.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
* Use latest kernel in linuxkit
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Parallelize kernel source compression
This surprisingly saves a lot of time:
M1: from 340 to 90 seconds
Intel: from 527 to 222 seconds (2 cores 4 threads)
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Add buildx target
buildx can use remote builders and automatically generate the multiarch manifest.
A properly configured builder is required:
First create a docker context for the remote builders:
$ docker context create node-<arch> --docker "host=ssh://<user>@<host>"
Then create a buildx configuration using the remote builders:
$ docker buildx create --name kernel_builder --platform linux/amd64
$ docker buildx create --name kernel_builder --node node-arm64 --platform linux/arm64 --append
$ docker buildx use kernel_builder
$ docker buildx ls
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Add a PLATFORMS variable to declare platforms needed for buildx
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Make image name customizable
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Do not use the architecture suffix to tag images built with buildx
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
* Add make kconfigx to upgrade configs using buildx
To update the configuration for 5.10 kernels, use:
make -C kernel KERNEL_VERSIONS=5.10.104 kconfigx
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
---------
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
This allows SBOM tools to look at /lib/apk/db/installed to determine
which package versions are included in the container. This should
probably be applied across all of the linuxkit containers.
Signed-off-by: eriknordmark <erik@zededa.com>
Enables support for the C version of virtiofs.
A qemu option allows specifying the virtiofsd path.
config.StatePath is used for storing the virtiofs sockets.
Note that virtiofsd requires running as root.
Signed-off-by: Frédéric Dalleau <frederic.dalleau@docker.com>
bpfilter is not meant to be used at all at this point. Only the module's
boilerplate is available on upstream kernels.
Signed-off-by: Quentin Deslandes <qde@naccy.de>
* simplify test/pkg/Makefile
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* ensure pkg and test/pkg built before downstream workflows in CI
Signed-off-by: Avi Deitcher <avi@deitcher.net>
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* sync logwrite with memlogd
Signed-off-by: Avi Deitcher <avi@deitcher.net>
* update linuxkit/logwrite and linuxkit/memlogd dependencies
Signed-off-by: Avi Deitcher <avi@deitcher.net>
Signed-off-by: Avi Deitcher <avi@deitcher.net>
It seems buildkit breaks API compatibility with the previous OCI
implementation in the new RC release; let's update it.
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
* Fix return code of rungetty.sh
When INITGETTY is defined, we would return exit code 1, which is not
expected.
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
* Update getty sha
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
* restore package cache in LinuxKit Build Tests
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
* Update buildkit to the latest version
Commit contains the version of buildkit from output of
`go list -m -json github.com/moby/buildkit@c0ac5e8b9b51603c5a93795fcf1373d6d44d3a85`:
go get -u github.com/moby/buildkit@v0.11.0-rc1.0.20221213132957-c0ac5e8b9b51
go mod tidy
go mod vendor
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
* Fix handling of platform flag
When 'FROM --platform' is defined, I can see 'ERROR: no match for
platform in manifest: not found'. The problem was fixed on the buildkit
side.
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
It seems we should not use our own credential extraction logic, as it
should be aligned with the resolver internally to select the correct
information for the host we want to push the manifest to. I.e. we may
want to push a manifest to ghcr.io, and in that case we will hit errors,
as we will extract credentials for docker.io instead.
Signed-off-by: Petr Fedchenkov <giggsoff@gmail.com>
Once you have built the tool, use
```
linuxkit build linuxkit.yml
```
to build the example configuration. You can also specify different output formats, e.g. `linuxkit build --format raw-bios linuxkit.yml` to
output a raw BIOS bootable disk image, or `linuxkit build --format iso-efi linuxkit.yml` to output an EFI bootable ISO image. See `linuxkit build --help` for more information.
### Booting and Testing
The kernel command-line is a string of text that the kernel parses as it is starting up. It is passed by the boot loader
to the kernel and specifies parameters that the kernel uses to configure the system. The command-line is a list of command-line
options separated by spaces. The options are parsed by the kernel and can be used to enable or disable certain features.
LinuxKit passes all command-line options to the kernel, which uses them in the usual way.
There are several options that can be used to control the behaviour of linuxkit itself, or of specific packages
within linuxkit. Unless a standard Linux option already exists, these are all prefixed with `linuxkit.`.
| Option | Description |
|---|---|
| `linuxkit.unified_cgroup_hierarchy=0` | Start up with cgroups v1. If not present or set to 1, default to cgroups v2. |
| `linuxkit.runc_debug=1` | Start runc for `onboot` and `onshutdown` containers to run with `--debug`, and add extra logging messages for each stage of starting those containers. If not present or set to 0, default to usual mode. |
| `linuxkit.runc_console=1` | Send logs for runc for `onboot` and `onshutdown` containers, as well as the output of the containers themselves, to the console, instead of the normal output to logfiles. If not present or set to 0, default to usual mode. |
It often is useful to combine both of the `linuxkit.runc_debug` and `linuxkit.runc_console` options to get the most
information about what is happening with `onboot` containers.
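For example, both options can be combined on the kernel command line (the console device here is an illustrative assumption):

```
console=ttyS0 linuxkit.runc_debug=1 linuxkit.runc_console=1
```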
is provided, it always will pull, independent of what is in the cache.
The read process is smart enough to check each blob in the local cache before downloading
it from a registry.
## Imports from local Docker instance
To import an image from your local Docker daemon into LinuxKit, you’ll need to ensure the image is exported in the [OCI image format](https://docs.docker.com/build/exporters/oci-docker/), which LinuxKit understands.
This requires using a `docker-container` [buildx driver](https://docs.docker.com/build/builders/drivers/docker-container/), rather than the default.
Note that this process, as described, will only produce images for the platform/architecture you're currently on. To produce multi-platform images requires extra docker build flags and external builder or QEMU support - see [here](https://docs.docker.com/build/building/multi-platform/).
This workaround is only necessary when working with the local Docker daemon. If you’re pulling from Docker Hub or another registry, you don’t need to do any of this.
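A minimal transcript of the steps above might look like the following (the builder name and output file are illustrative; `--output type=oci` writes an OCI archive that LinuxKit can then read):

```
$ docker buildx create --name oci-builder --driver docker-container --use
$ docker buildx build --output type=oci,dest=myimage.oci .
```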
The LinuxKit kernels are based on the latest stable releases and are
updated frequently to include bug and security fixes. For some
kernels we do carry additional patches, which are mostly back-ported
fixes from newer kernels. The full kernel source with patches can be
found on [github](https://github.com/linuxkit/linux).
## Kernel Image Naming and Tags
We publish the following kernel images:
* primary kernel
* debug kernel
* tools for the specific kernel build - bcc and perf
* builder image for the specific kernel build, useful for compiling compatible kernel modules
### Primary Kernel Images
Each kernel image is tagged with:
* the full kernel version, e.g. `linuxkit/kernel:6.6.13`. This is a multi-arch index, and should be used whenever possible.
* the full kernel version plus hash of the files it was created from (git tree hash of the `./kernel` directory), e.g. `6.6.13-c0d96951e9892a7447a8e7965d2d6bd7e621c3fd`. This is a multi-arch index.
* the full kernel version plus architecture, e.g. `linuxkit/kernel:6.6.13-amd64` or `linuxkit/kernel:6.6.13-arm64`. Each of these is architecture specific.
* the full kernel version plus hash of the files it was created from (git tree hash of the `./kernel` directory) plus architecture, e.g. `6.6.13-c0d96951e9892a7447a8e7965d2d6bd7e621c3fd-arm64`.
### Debug Kernel Images
With each kernel image, we also publish kernels with additional debugging enabled.
These have the same image name and the same tags as the primary kernel, with the `-dbg`
suffix added to the tag.
In addition to the official images, there are also some
[scripts](../contrib/foreign-kernels) which repackage kernel packages
use cases for the promising IoT scenarios. All -rt patches are grabbed from
https://www.kernel.org/pub/linux/kernel/projects/rt/. But so far we just
enable it over 4.14.x.
## Loading kernel modules
Most kernel modules are autoloaded with `mdev` but if you need to `modprobe` a module manually you can use the `modprobe` package in the `onboot` section like this:
For example:
* `linuxkit/kernel:5.15.15` has builder `linuxkit/kernel:5.15.15-builder`
With the above in hand, you can create a multi-stage `Dockerfile` build to compile your modules.
There is an [example](../test/cases/020_kernel/113_kmod_5.10.x), but
basically one can use a multi-stage build to compile the kernel
modules:
To use the kernel module, we recommend adding a final stage to the
Dockerfile above, which copies the kernel module from the `build`
stage and performs an `insmod` as the entry point. You can add this
This section describes how to build kernels, and how to modify existing ones.
Throughout the document, the terms used are:
* kernel version: actual semver version of a kernel, e.g. `6.6.13` or `5.15.27`
* kernel series: major.minor version of a kernel, e.g. `6.6.x` or `5.15.x`
Throughout this document, the architecture used is the kernel-recognized one, available
on most systems as `uname -m`, e.g. `aarch64` or `x86_64`. You may be familiar with the alpine
or golang one, e.g. `amd64` or `arm64`, which are not used here.
**Note:** After changing _and committing any changes_ to the kernel directory or any
subdirectories, you must update tests, examples and other dependencies. This is done
via:
```bash
make update-kernel-yamls
```
Each series of kernels has a dedicated directory in [../kernel/](../kernel),
e.g. [6.6.x](../kernel/6.6.x) or [5.15.x](../kernel/5.15.x).
Variants, like rt kernels, have their own directory as well, e.g. [5.11.x-rt](../kernel/5.11.x-rt).
However, for variants, the patches from _both_ the common kernel, e.g. [5.11.x](../kernel/5.11.x),
and the variant, e.g. [5.11.x-rt](../kernel/5.11.x-rt), are applied, and the configs from _both_ are combined.
Within the series-dedicated directory, there are:
* kernel config file for each architecture named `config-<arch>`, e.g. [6.6.13/config-x86_64](../kernel/6.6.13/config-x86_64), one per target architecture.
* optional patches directory, e.g. [6.6.13/patches](../kernel/6.6.13/patches), which contains patches to apply to the kernel source
The config file and patches are applied during the kernel build process.
**Note**: We try to keep the differences between kernel versions and
architectures to a minimum, so if you make changes to one
configuration also try to apply it to the others. The script [kconfig-split.py](../scripts/kconfig-split.py) can be used to compare kernel config files. For example:
creates a file with the common and the x86_64 and arm64 specific
config options for the 5.15.x kernel series.
**Note**: The CI pipeline does *not* push out kernel images.
Anyone modifying a kernel should:
1. Follow the steps below for the desired changes and commit them.
1. Run appropriate `make build` or variants to ensure that it works.
1. Open a PR with the changes. This may fail, as the CI pipeline may not have access to the modified kernels.
1. A maintainer should run `make push` to push out the images.
1. Run (or rerun) the tests.
#### Build options
The targets and variants for building are as follows:
* `make build` - make all kernels in the version list and their variants
* `make build-<version>` - make all variants of a specific kernel version
* `make buildkernel-<version>` - make all variants of a specific kernel version
* `make buildplainkernel-<version>` - make just the provided version's kernel
* `make builddebugkernel-<version>` - make just the provided version's debug kernel
* `make buildtools-<version>` - make just the provided version's tools
To push:
* `make push` - push all kernels in the version list and their variants
* `make push-<version>` - push all variants of a specific kernel version
Finally, for convenience:
* `make list` - list all kernels in the version list
By default, it builds for all supported architectures. To build just for a specific
architecture:
```sh
make build ARCH=amd64
```
The variable `ARCH` should use the golang variants only, i.e. `amd64` and `arm64`.
To build for multiple architectures, call it multiple times:
```sh
make build ARCH=amd64
make build ARCH=arm64
```
When building for a specific architecture, the build process will use your local
Docker, passing it `--platform` for the architecture. If you have a builder on a different
architecture, e.g. you are running on an Apple Silicon Mac (arm64) and want to build for
`x86_64` without emulating (which can be very slow), you can use the `BUILDER` variable:
```sh
make build ARCH=x86_64 BUILDER=remote-amd64-builder
```
The `BUILDER` variable also supports a pattern: if `BUILDER` contains the string `{{.Arch}}`,
it will be replaced with the architecture being built.
For example:
```sh
make build ARCH=x86_64 BUILDER=remote-{{.Arch}}-builder
make build ARCH=aarch64 BUILDER=remote-{{.Arch}}-builder
```
will build `x86_64` on `remote-amd64-builder` and `aarch64` on `remote-arm64-builder`.
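The substitution itself is simple templating; in shell terms (a behavioral sketch, not the Makefile's actual implementation):

```shell
BUILDER='remote-{{.Arch}}-builder'
arch=amd64
# Replace the {{.Arch}} placeholder with the architecture being built.
builder=$(printf '%s' "$BUILDER" | sed "s/{{\.Arch}}/$arch/")
echo "$builder"
```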
Finally, if no `BUILDER` is specified, the build will look for a builder named
`linuxkit-linux-{{.Arch}}-builder`, e.g. `linuxkit-linux-amd64-builder` or
`linuxkit-linux-arm64-builder`. If that builder does not exist, it will fall back to
your local Docker setup.
### Modifying the kernel config
The process of modifying the kernel configuration is as follows:
1. Create a `linuxkit/kconfig` container image: `make kconfig`. This is not pushed out. By default, this will be for your local architecture, but you can override it with `make kconfig ARCH=${ARCH}`, e.g. `make kconfig ARCH=arm64`. The image is tagged with the architecture, e.g. `linuxkit/kconfig:arm64`.
1. Run a container based on `linuxkit/kconfig`.
1. In the container, modify the config to suit your needs using normal kernel tools like `make defconfig` or `make menuconfig`.
1. Save the config from the image.
The `linuxkit/kconfig` image contains the patched sources
for all supported kernels and architectures in `/linux-<major>.<minor>.<rev>`.
The kernel source also has the kernel config copied to the default kernel config location,
so that `make menuconfig` and `make defconfig` work correctly.
Run the container as follows:
```sh
docker run --rm -ti -v $(pwd):/src linuxkit/kconfig:aarch64
# or
docker run --rm -ti -v $(pwd):/src linuxkit/kconfig:x86_64
# or
docker run --rm -ti -v $(pwd):/src linuxkit/kconfig:riscv64
```
This will give you an interactive shell where you can modify the kernel
configuration you want, while mounting the directory, so that you can save the
modified config.
To create or modify the config, you must cd to the correct directory,
e.g.
```sh
cd /linux-6.6.13
# or
cd /linux-5.15.27
```
Now you can build the config.
When `make defconfig` or `make menuconfig` is done,
the modified config file will be in `.config`; save the file back to `/src`,
e.g.
```sh
cp .config /src/6.6.x/config-x86_64
```
You can also configure architectures other than the native
one. For example, to configure the arm64 kernel on x86_64, use:
```sh
make ARCH=arm64 defconfig
make ARCH=arm64 oldconfig # or menuconfig
```
It is important to note that sometimes the configuration can be subtly different
when running `make defconfig` across architectures. Of note is that `make ARCH=riscv` on
x86_64 or aarch64 comes out slightly differently than when run natively on riscv64.
Feel free to try it cross, but do not be surprised if it generates outputs that are not the same.
Note that the generated file **must** be final. When you actually build the kernel,
it will check that running `make defconfig` will have no changes. If there are changes,
the build will fail.
The easiest way to check it is to rerun `make defconfig` inside the kconfig container.
1. Finish your creation of the config file, as above.
1. Copy the `.config` file to the target location, as above.
1. Copy the `.config` file to the source location for defconfig, e.g. `cp .config arch/x86/configs/x86_64_config` or `cp .config /linux/arch/arm64/configs/defconfig`
1. Run `make defconfig` again, and check that there are no changes, e.g. `diff .config arch/x86/configs/x86_64_config` or `diff .config /linux/arch/arm64/configs/defconfig`
If there are no differences, then you can commit the new config file.
Finally, test that you can build the kernel with that config as `make build-<version>`, e.g. `make build-5.15.148`.
## Adding a new kernel version
If you want to add a new kernel version within an existing series, e.g. `5.15.27` already exists
and you want to add (or replace it with) `5.15.148`, apply the following process.
1. Determine the series, i.e. the kernel major.minor version, followed by `x`. E.g. for `5.15.148`, the series is `5.15.x`.
1. Modify the `KERNEL_VERSION` in the `build-args` file in the series directory to the new version. E.g. `5.15.x/build-args`.
1. Create a new `linuxkit/kconfig` container image: `make kconfig`. This is not pushed out.
1. Run a container based on `linuxkit/kconfig`.
```sh
docker run --rm -ti -v $(pwd):/src linuxkit/kconfig
```
1. In the container, change directory to the kernel source directory for the new version, e.g. `cd /linux-5.15.148`.
1. Run `make defconfig` to create the default config file.
1. If the config file has changed, copy it out of the container and check it in, e.g. `cp .config /src/5.15.x/config-x86_64`.
1. Repeat for other architectures.
1. Commit the changed config files.
1. Test that you can build the kernel with that config as `make build-<version>`, e.g. `make build-5.15.148`.
## Adding a new kernel series
To add a new kernel series, you need to:
1. Create new directory for the series, e.g. `6.7.x`
1. Create config files for each architecture in that directory
1. Optionally, create a `patches/` subdirectory in that directory with any patches to add
1. Create a `build-args` file in that directory with at least the following settings:
```bash
KERNEL_VERSION=<version>
KERNEL_SERIES=<series>
BUILD_IMAGE=linuxkit/alpine:<builder>
```
Since the last major series likely is the best basis for the new one, subject to additional modifications, you can use
the previous one as a starting point.
1. Make the directory for the new series, e.g. `mkdir 7.0.x`
1. Create a new `linuxkit/kconfig` container image: `make kconfig`. This is not pushed out.
1. Run a container based on `linuxkit/kconfig`.
```sh
docker run --rm -ti -v $(pwd):/src linuxkit/kconfig
```
1. In the container, change directory to the kernel source directory for the new version, e.g. `cd /linux-7.0.5`.
1. Copy the existing config file for the previous series, e.g. `cp /src/6.6.x/config-x86_64 .config`.
1. Run `make oldconfig` to create the config file for the new series from the old one. Answer any questions.
1. Save the newly generated config file `.config` to the source directory, e.g. `cp .config /src/7.0.x/config-x86_64`.
1. Repeat for other architectures.
1. Commit the new config files.
1. Test that you can build the kernel with that config as `make build-<version>`, e.g. `make build-7.0.5`.
In addition, there are tests that are applied to a specific kernel version, notably the tests in
[020_kernel](../test/cases/020_kernel/). You will need to add a new test case for the new series,
copying an existing one and modifying it as needed.
## Building and using custom kernels
Alpine `zfs` utilities are available in `linuxkit/alpine` and the
version of the kernel module should match the version of the
tools. The container where you run the `zfs` tools might also need
`CAP_SYS_MODULE` to be able to load the kernel modules.
## Kernels in examples and tests
All of the linuxkit `.yml` files use the images from `linuxkit/kernel:<tag>`.
When updating the kernel, you run commands to update the tests. The updates to any file that contains
references to `linuxkit/kernel` in this repository work as follows:
- Semver tags are replaced by the most recent kernel version. For example, `linuxkit/kernel:5.10.104` will become `linuxkit/kernel:6.6.13` when available, and then `6.6.15`, and then `7.0.1`, etc. The highest semver always is used.
- Semver+hash tags are replaced by the most recent hash and patch version for that series. For example, `linuxkit/kernel:5.10.104-abcdef1234` will become `5.10.104-aaaa54232` (same semver, newer hash), and then `5.10.105-bbbb12345` (newer semver, newer hash), etc. The highest semver+hash always is used.
This is not an inherent characteristic of the `linuxkit` tool, which **never** will change your `.yml` files. It is part of
the update process for yml files _in this repository_.
The net of the above is the following rule:
* If you want a reference to a specific kernel series, e.g. a test or example that works only with `5.10.x`, then use a specific hash, e.g. `linuxkit/kernel:5.10.104-abcdef1234`. The hash and patch version will update, but not more. The most common use case for this is kernel version-specific tests.
* If you want a reference to the most recent kernel, whatever version it is, then use a semver tag, e.g. `linuxkit/kernel:6.6.13`. The most common use case for this is examples that work with any kernel version, which is the vast majority of cases.
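The "highest semver wins" behaviour described above can be approximated in your own update scripts with GNU `sort -V` (a sketch under the assumption that GNU version sort is available; this is not part of linuxkit itself):

```shell
# Pick the highest kernel version from a list of candidate tags.
# sort -V (GNU "version sort") orders semver-like strings numerically.
printf '5.10.104\n6.6.13\n6.6.15\n' | sort -V | tail -1   # prints 6.6.15
```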
You can get the current hash by executing the following:
A package source consists of a directory containing at least two files:
- `image` _(string)_: *(mandatory)* The name of the image to build
- `org` _(string)_: The hub/registry organisation to which this package belongs
- `tag` _(string)_: The tag to use for the image, can be fixed string or template (default: `{{.Hash}}`)
- `dockerfile` _(string)_: The dockerfile to use to build this package, must be in this directory or below (default: `Dockerfile`)
- `arches` _(list of string)_: The architectures which this package should be built for (valid entries are `GOARCH` names)
- `extra-sources` _(list of strings)_: Additional sources for the package outside the package directory. The format is `src:dst`, where `src` can be relative to the package directory and `dst` is the destination in the build context. This is useful for sharing files, such as vendored go code, between packages.
- `gitrepo` _(string)_: The git repository where the package source is kept.
- `network` _(bool)_: Allow network access during the package build (default: no)
- `disable-cache` _(bool)_: Disable build cache for this package (default: no)
- `buildArgs` will forward a list of build arguments down to docker, as if `--build-arg` was specified during `docker build`. See [BuildArgs][BuildArgs] for more information.
- `config`: _(struct `github.com/moby/tool/src/moby.ImageConfig`)_: Image configuration, marshalled to JSON and added as `org.mobyproject.config` label on image (default: no label)
- `depends`: Contains information on prerequisites which must be satisfied in order to build the package. Has subfields:
- `docker-images`: Docker images to be made available (as `tar` files via `docker image save`) within the package build context. Contains the following nested fields:
linuxkit will try to build for `linux/arm64` using the context `my-remote-arm64`. Since that context does not exist, you will get an error.
##### Preset build arguments
When building packages, the following build-args automatically are set for you:
* `SOURCE` - the source repository of the package
* `REVISION` - the git commit that was used for the build
* `GOPKGVERSION` - the go package version or pseudo-version per https://go.dev/ref/mod#glos-pseudo-version
* `PKG_HASH` - the git tree hash of the package directory, e.g. `45a1ad5919f0b6acf0f0cf730e9434abfae11fe6`; tag part of `linuxkit pkg show-tag`
* `PKG_IMAGE` - the name of the image that is being built, e.g. `linuxkit/init`; image name part of `linuxkit pkg show-tag`. Combine with `PKG_HASH` for the full tag.
Note that the above are set **only** if you do not set them in `build.yaml`. Your settings _always_
override these built-in ones.
To use them, simply address them in your `Dockerfile`:
```dockerfile
ARG SOURCE
```
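Since `PKG_HASH` is the tag part and `PKG_IMAGE` the image name part of `linuxkit pkg show-tag`, the full tag is simply their combination (a sketch using the example values from above):

```shell
# Combine PKG_IMAGE and PKG_HASH into the full image tag.
PKG_IMAGE="linuxkit/init"
PKG_HASH="45a1ad5919f0b6acf0f0cf730e9434abfae11fe6"
full_tag="${PKG_IMAGE}:${PKG_HASH}"
echo "$full_tag"   # prints linuxkit/init:45a1ad5919f0b6acf0f0cf730e9434abfae11fe6
```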
### Build packages as a maintainer
All official LinuxKit packages are multi-arch manifests and most of
LinuxKit does not judge between lower-cased or upper-cased variants of these options, e.g. `http_proxy` vs `HTTP_PROXY`,
as `docker build` does not either. It just passes them through "as-is".
### Environment Variables
The following environment variables can be used to configure `linuxkit` without
modifying command-line invocations — useful for CI/CD runners and shared build
scripts. CLI flags always take precedence over env vars, which take precedence
over built-in defaults.
| Variable | Equivalent flag | Scope | Description |
|---|---|---|---|
| `LINUXKIT_MIRROR` | `--mirror` | All commands | Space- or comma-separated list of mirror specs, each in `[<registry>=]<url>` format (same as `--mirror`). E.g. `LINUXKIT_MIRROR=docker.io=http://mymirror.local` |
| `LINUXKIT_PKG_ORG` | `--org` | `pkg` subcommands | Override the registry organisation used when tagging and pushing packages. E.g. `LINUXKIT_PKG_ORG=myorg/lfedge` |
| `LINUXKIT_BUILDER_IMAGE` | `--builder-image` | `pkg build` | buildkit container image to use. Useful when the builder image must come from an internal mirror. |
| `LINUXKIT_BUILDER_CONFIG` | `--builder-config` | `pkg build` | Path to a buildkit `config.toml` file. The primary way to configure buildkit's own registry mirrors for `FROM` pulls inside Dockerfiles. |
| `LINUXKIT_BUILDER_NAME` | `--builder-name` | `pkg build` | Name of the buildkit builder container. |
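The precedence rule (CLI flag > env var > built-in default) can be illustrated with a small shell sketch; `resolve_org` is a hypothetical helper that mirrors the documented behaviour, not a real linuxkit function, and the `linuxkit` default org is assumed for illustration:

```shell
# Precedence: CLI flag > env var > built-in default.
resolve_org() {
    cli_flag="$1"                          # value of --org; may be empty
    if [ -n "$cli_flag" ]; then
        echo "$cli_flag"                   # CLI flag wins
    elif [ -n "$LINUXKIT_PKG_ORG" ]; then
        echo "$LINUXKIT_PKG_ORG"           # env var used when no flag given
    else
        echo "linuxkit"                    # assumed built-in default org
    fi
}

LINUXKIT_PKG_ORG=myorg
resolve_org ""         # prints myorg    (env var)
resolve_org otherorg   # prints otherorg (flag wins)
```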
`linuxkit` does not support passing arbitrary CLI flags for build arguments when building packages.
This is in line with its philosophy of keeping builds as reproducible as possible, which requires
everything to be available on disk and in the repository.
It is possible to bypass this, but this is not recommended.
As described in [Preset build arguments][Preset build arguments], linuxkit automatically sets some build arguments
when building packages. However, you can also set your own build arguments, which will be passed to the
`docker build` command.
You can include your own build args in several ways.
* `build.yml` - you can add a `buildArgs` field to the `build.yml` file, which will be passed as `--build-arg` to `docker build`.
* `linuxkit pkg build` - you can pass the `--build-arg-file <file>` flag, with one `<key>=<value>` pair per line, which will be passed as `--build-arg` to `docker build`.
When parsing for build args, whether from `build.yml`'s `buildArgs` field or from the `--build-arg-file`,
linuxkit has support for certain calculated build args for the value of the arg. You can set these using the following syntax.
All calculated build args are prefixed with `@lkt:`.
* `VAR=@lkt:pkg:<path>` - the linuxkit package hash of the path, as determined by `linuxkit pkg show-tag <path>`. The `<path>` can be absolute, or if provided as a relative path, it is relative to the directory of the file in which it appears. For example, if provided in the `buildArgs` section of `build.yml`, it is relative to the package directory; if provided in `--build-arg-file <file>`, it is relative to the directory in which `<file>` exists.
For example:
```yaml
buildArgs:
- DEP_HASH=@lkt:pkg:/usr/local/foo # will be replaced with the value of `linuxkit pkg show-tag /usr/local/foo`
- REL_HASH=@lkt:pkg:foo # will be replaced with the value of `linuxkit pkg show-tag foo` relative to this build.yml file
```
* `VAR_%=@lkt:pkgs:<paths>` - (note `pkgs` plural) the linuxkit package hashes of the multiple packages satisfied by `<paths>`. linuxkit will get the linuxkit package hash of each path in `<paths>`, as determined by `linuxkit pkg show-tag <path>`. The `<paths>` can be absolute, or if provided as a relative path, it is relative to the working directory of the file which contains the build arg, whether `build.yml` in a package or the build arg
file provided to `--build-arg-file <file>`. The `<paths>` supports basic shell globbing, such as `./foo/*` or `/var/foo{1,2,3}`. Globs that start with `.` will be ignored, e.g. `foo/*` will match `foo/one` and `foo/two` but not `foo/.git` and `foo/.bar`. For each package in `<paths>`, it will create a build arg with the name `VAR_<package-name>` and the value of the package hash, where the `%` is replaced with the name of the package; all `/` and `-` characters are replaced with `_`; and all characters are upper-cased.
There _must_ be at least one valid environment variable character before the `%` character.
For example:
```yaml
buildArgs:
- DEP_HASH_%=@lkt:pkgs:/usr/local/foo/*
```
If there are packages in `/usr/local/foo/` named `bar`, `baz`, and `qux`, and each of them has a tag as shown
by `linuxkit pkg show-tag` of `linuxkit/bar:123abc`, `linuxkit/baz:aabb666`, and `linuxkit/qux:bbcc777`, this will create the following build args:
```
DEP_HASH_LINUXKIT_BAR=linuxkit/bar:123abc
DEP_HASH_LINUXKIT_BAZ=linuxkit/baz:aabb666
DEP_HASH_LINUXKIT_QUX=linuxkit/qux:bbcc777
```
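The name mangling described above (`/` and `-` become `_`, everything upper-cased) can be sketched in shell; `mangle` is an illustrative helper, not a linuxkit command:

```shell
# Turn a package name into the build-arg suffix substituted for %.
mangle() {
    printf '%s' "$1" | tr '/-' '__' | tr '[:lower:]' '[:upper:]'
}

mangle linuxkit/bar   # prints LINUXKIT_BAR
mangle foo-baz/qux    # prints FOO_BAZ_QUX
```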
## Releases
Normally, whenever a package is updated, CI will build and push the package to Docker Hub by calling `linuxkit pkg push`.
This automatically creates a tag based on the git tree hash of the package's directory.
For example, the package in `./pkg/init` is tagged as `linuxkit/init:45a1ad5919f0b6acf0f0cf730e9434abfae11fe6`.
In addition, you can release semver tags for packages by adding a tag to the git repository that begins with `pkg-` and is
followed by a valid semver tag. For example, `pkg-v1.0.0`. This will cause CI to build and push the package to Docker Hub
with the tag `v1.0.0`.
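The mapping from git tag to image tag is a simple prefix strip; a sketch of what CI effectively does (illustrative, not the actual CI code):

```shell
# A git tag pkg-v1.0.0 releases the package with image tag v1.0.0.
git_tag="pkg-v1.0.0"
pkg_tag="${git_tag#pkg-}"   # strip the leading "pkg-"
echo "$pkg_tag"             # prints v1.0.0
```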
Pure semver tags, like `v1.0.0`, are not used for package releases. They are used for the linuxkit project itself and to
specified with `-arch` and currently accepts `x86_64`, `aarch64`, and `s390x`.
`linuxkit run qemu` can boot in different types of images:
- `kernel+initrd`: This is the default mode of `linuxkit run qemu` [`x86_64`, `arm64`, `s390x`]
- `kernel+squashfs`: `linuxkit run qemu --squashfs <path to directory>`. This expects a kernel and a squashfs image. [`x86_64`, `arm64`, `s390x`]
- `iso-bios`: `linuxkit run qemu --iso <path to iso>` [`x86_64`]
- `iso-efi`: `linuxkit run qemu --iso --uefi <path to iso>`. This looks in `/usr/share/ovmf/bios.bin` for the EFI firmware by default. Can be overwritten with `-fw`. [`x86_64`, `arm64`]
- `qcow-bios`: `linuxkit run qemu disk.qcow2` [`x86_64`]
- `raw-bios`: `linuxkit run qemu disk.img` [`x86_64`]
- `aws`: `linuxkit run qemu disk.img` boots a raw AWS disk image. [`x86_64`]
LinuxKit bootable images are composed of existing OCI images.
OCI images, when built, often are scanned to create a
software bill-of-materials (SBoM). The buildkit builder
system itself contains the [ability to integrate SBoM scanning and generation into the build process](https://docs.docker.com/build/attestations/sbom/).
When LinuxKit composes an operating system image using `linuxkit build`,
it will, by default, combine the SBoMs of all the OCI images used to create
This document contains a list of known issues related to using, building or testing linuxkit.
## Images
## Packages
### Invalid MediaType
**Problem**
```
Error: error building and pushing "linuxkit/mkimage-iso-efi-initrd:0e66171ffde9bb735b0e014f811f9626fc8b9bc9": PUT https://index.docker.io/v2/linuxkit/mkimage-iso-efi-initrd/manifests/0e66171ffde9bb735b0e014f811f9626fc8b9bc9: MANIFEST_INVALID: manifest invalid; if present, mediaType in image index should be 'application/vnd.oci.image.index.v1+json' not 'application/vnd.docker.distribution.manifest.list.v2+json'
```
The above message is caused by registries, notably Docker Hub, refusing to accept indexes with the
docker media type of `application/vnd.docker.distribution.manifest.list.v2+json`, rather than the OCI
one `application/vnd.oci.image.index.v1+json`.
Linuxkit _does_ use the OCI media type; however, if the image _already_ exists in the registry, linuxkit will
pull the index down, update it, and push it back up. The above error occurs because the index that exists in
the hub, the one that is pulled down, has the older media type, from when the registry accepted it.
**Solution**
The solution is to force an entirely new build, which will generate the images and index with the correct media type.
The `linuxkit build` command assembles a set of containerised components into an image. The simplest
type of image is just a `tar` file of the contents (useful for debugging) but more useful
outputs add a `Dockerfile` to build a container, or build a full disk image that can be
booted as a linuxkit VM. The main use case is to build an assembly that includes
`containerd` to run a set of containers, but the tooling is very generic.
The yaml configuration specifies the components used to build up an image. All components
The Docker images are optionally verified with Docker Content Trust.
For private registries or private repositories on a registry credentials provided via
`docker login` are re-used.
## Sections
The configuration file is processed in the order:
1. `kernel`
1. `init`
1. `volumes`
1. `onboot`
1. `onshutdown`
1. `services`
1. `files`
Each section adds files to the root file system. Sections may be omitted.
Each container that is specified is allocated a unique `uid` and `gid` that it may use if it
wishes to run as an isolated user (or user namespace). Anywhere you specify a `uid` or `gid`
files:
mode: "0600"
```
### `kernel`
The `kernel` section is only required if booting a VM. The files will be put into the `boot/`
directory, where they are used to build bootable images.
which should contain a `kernel` file that will be booted (eg a `bzImage` for `am
called `kernel.tar` which is a tarball that is unpacked into the root, which should usually
contain a kernel modules directory. `cmdline` specifies the kernel command line options if required.
The contents of `cmdline` are passed to the kernel as-is. There are several special values that are
used to control the behaviour of linuxkit packages. See [kernel command line options](../docs/cmdline.md).
To override the names, you can specify the kernel image name with `binary: bzImage` and the tar image
with `tar: kernel.tar` or the empty string or `none` if you do not want to use a tarball at all.
Kernel packages may also contain a cpio archive containing CPU microcode which needs to be prepended to
the initrd. To select this option, recommended when booting on bare metal, add `ucode: intel-ucode.cpio`
to the kernel section.
### `init`
The `init` section is a list of images that are used for the `init` system and are unpacked directly
into the root filesystem. This should bring up `containerd`, start the system and daemon containers,
and set up basic filesystem mounts in the case of a LinuxKit system. For ease of
modification, `runc` and `containerd` images, which just contain these programs, are added here
rather than bundled into the `init` container.
### `onboot`
The `onboot` section is a list of images. These images are run before any other
images. They are run sequentially and each must exit before the next one is run.
These images can be used to configure one shot settings. See [Image
specification](#image-specification) for a list of supported fields.
### `onshutdown`
This is a list of images to run on a clean shutdown. Note that you must not rely on these
being run at all, as machines may be powered off or shut down without having time to run
run and when they are not. Most systems are likely to be "crash only" and not have these,
but you can attempt to deregister cleanly from a network service here, rather than relying
on timeouts, for example.
### `services`
The `services` section is a list of images for long running services which are
run with `containerd`. Startup order is undefined, so containers should wait
on any resources, such as networking, that they need. See [Image
specification](#image-specification) for a list of supported fields.
### `volumes`
The `volumes` section is a list of named volumes that can be used by other containers,
including those in `services`, `onboot` and `onshutdown`. The volumes are created in a directory
chosen by linuxkit at build-time. The volumes then can be referenced by other containers and
mounted into them.
Volumes can be in one of several formats:
* Blank directory: This is the default, and is an empty directory that is created at build-time. It is an overlayfs mount, and can be shared among multiple containers.
* Image laid out as filesystem: The contents of the image are used to populate the volume. Default format when an image is provided.
* Image as OCI v1-layout: The image is used as an [OCI v1-layout](https://github.com/opencontainers/image-spec/blob/main/image-layout.md). Indicated by `format: oci`.
Examples of each are given later in this section.
The `volumes` section can declare a volume to be read-write or read-only. A read-write volume
can be mounted into a container either read-only or read-write. A read-only volume
can be mounted into a container only read-only; attempting to mount it read-write will generate a build-time error.
By default, volumes are created read-write, and are mounted read-write.
Volume names **must** be unique, and must contain only lower-case alphanumeric characters, hyphens, and
underscores.
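The naming rule can be expressed as a simple pattern check; `valid_volume_name` is a hypothetical validator matching the rule as stated above, not linuxkit's actual implementation:

```shell
# Volume names: lower-case alphanumerics, hyphens, and underscores only.
valid_volume_name() {
    printf '%s' "$1" | grep -Eq '^[a-z0-9_-]+$'
}

valid_volume_name my-vol_1 && echo accepted   # prints accepted
valid_volume_name BadName  || echo rejected   # prints rejected
```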
#### Samples of `volumes`
##### Empty directory
Yaml showing both read-only and read-write:
```yml
volumes:
  - name: dira
    readonly: true
  - name: dirb
    readonly: false
```
Contents:
```sh
$ cd dir && ls -la
drwxr-xr-x 19 root wheel 608 Sep 30 15:03 .
drwxrwxrwt 130 root wheel 4160 Sep 30 15:03 ..
```
In the above example:
* `dira` is empty and is read-only.
* `dirb` is empty and is read-write.
##### Image directory
Yaml showing both read-only and read-write:
```yml
volumes:
  - name: vola
    image: alpine:latest
    readonly: true
  - name: volb
    image: alpine:latest
    format: filesystem # optional, as this is the default format
    readonly: false
```
In the above example:
* `vola` is populated by the contents of `alpine:latest` and is read-only.
* `volb` is populated by the contents of `alpine:latest` and is read-write.
Contents:
```sh
$ cd dir && ls -la
drwxr-xr-x 19 root wheel 608 Sep 30 15:03 .
drwxrwxrwt 130 root wheel 4160 Sep 30 15:03 ..
drwxr-xr-x 84 root wheel 2688 Sep 6 14:34 bin
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 dev
drwxr-xr-x 37 root wheel 1184 Sep 6 14:34 etc
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 home
drwxr-xr-x 13 root wheel 416 Sep 6 14:34 lib
drwxr-xr-x 5 root wheel 160 Sep 6 14:34 media
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 mnt
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 opt
dr-xr-xr-x 2 root wheel 64 Sep 6 14:34 proc
drwx------ 2 root wheel 64 Sep 6 14:34 root
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 run
drwxr-xr-x 63 root wheel 2016 Sep 6 14:34 sbin
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 srv
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 sys
drwxr-xr-x 2 root wheel 64 Sep 6 14:34 tmp
drwxr-xr-x 7 root wheel 224 Sep 6 14:34 usr
drwxr-xr-x 13 root wheel 416 Sep 6 14:34 var
```
##### Image OCI Layout
Yaml showing both read-only and read-write, and both all architectures and a limited subset:
```yml
volumes:
  - name: volo
    image: alpine:latest
    format: oci
    readonly: true
  - name: volp
    image: alpine:latest
    readonly: false
    format: oci
    platforms:
      - linux/amd64
```
In the above example:
* `volo` is populated by the contents of `alpine:latest` as an OCI v1-layout for all architectures and is read-only.
* `volp` is populated by the contents of `alpine:latest` as an OCI v1-layout just for linux/amd64 and is read-write.
##### Volumes in `services`
Sample usage of volumes in `services` section:
```yml
services:
  - name: myservice
    image: alpine:latest
    binds:
      - vola:/mnt/vola:ro
      - volb:/mnt/volb
```
### `files`
The files section can be used to add files inline in the config, or from an external file.
```yml
files:
  - path: dir
    directory: true
user's home directory.
In addition there is a `metadata` option that will generate the file. Currently the only value
supported here is `"yaml"` which will output the yaml used to generate the image into the specified
file:
```yml
- path: etc/linuxkit.yml
  metadata: yaml
```
Note that if you use templates in the yaml, the final resolved version will be included in the image,
and not the original input template.
Because a `tmpfs` is mounted onto `/var`, `/run`, and `/tmp` by default, the `tmpfs` mounts will shadow anything specified in `files` section for those directories.
## Image specification
Entries in the `onboot`, `onshutdown`, `volumes` and `services` sections specify an OCI image and
options. Default values may be specified using the `org.mobyproject.config` image label.
For more details see the [OCI specification](https://github.com/opencontainers/runtime-spec/blob/master/spec.md).
@@ -202,7 +351,8 @@ which specifies some actions to take place when the container is being started.
- `namespace` overrides the LinuxKit default containerd namespace to put the container in; only applicable to services.
An example of using the `runtime` config to configure a network namespace with `wireguard` and then run `nginx` in that namespace is shown below:
```yml
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<hash>
binds:
  - /var:/var:rshared,rbind
rootfsPropagation: shared
```
## Templates
The `yaml` file supports templates for the names of images. Anywhere an image is used in the file and begins
with the character `@`, it is treated not as an actual name, but as a template. The first word after
the `@` indicates the type of template, and the rest of the line is the argument to the template. The
templates currently supported are:
* `@pkg:` - the argument is the path to a linuxkit package. For example, `@pkg:./pkg/init`.
For `pkg`, linuxkit will resolve the path to the package, and then run the equivalent of `linuxkit pkg show-tag <dir>`.
For example:
```yaml
init:
- "@pkg:../pkg/init"
```
Will cause linuxkit to resolve `../pkg/init` to a package, and then run `linuxkit pkg show-tag ../pkg/init`.
The paths are relative to the directory of the yaml file.
You can specify absolute paths, although it is not recommended, as that can make the yaml file less portable.
The `@pkg:` templating is supported **only** when the yaml file is being read from a local filesystem. It is not
supported when reading from stdin, e.g. `cat linuxkit.yml | linuxkit build -`, or from URLs, e.g. `linuxkit build https://example.com/foo.yml`.
The `@pkg:` template currently supports only default `linuxkit pkg` options, i.e. `build.yml` and `tag` options. There
are no command-line options to override them.
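What the template expansion amounts to can be sketched as follows; this is an illustration of the documented behaviour, not linuxkit's code:

```shell
# An image entry "@pkg:../pkg/init" means: strip the @pkg: prefix,
# resolve the path relative to the yaml file, then substitute the
# output of `linuxkit pkg show-tag` for that directory.
image="@pkg:../pkg/init"
pkg_path="${image#@pkg:}"
echo "$pkg_path"   # prints ../pkg/init
# linuxkit then runs the equivalent of: linuxkit pkg show-tag "$pkg_path"
```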
**Note:** The character `@` is reserved in yaml. To use it in the beginning of a string, you must put the entire string in
quotes.
If you use the template, the actual derived value, and not the initial template, is what will be stored in the final image.