Making the LoggingConfiguration part of the versioned component-base/config API
had the theoretical advantage that components could have offered different
configuration APIs with experimental features limited to alpha versions (for
example, sanitization offered only in a v1alpha1.KubeletConfiguration). Some
components could have decided to only use stable logging options.
In practice, this wasn't done. Furthermore, we don't want different components
to make different choices regarding which logging features they offer to
users. It should always be the same everywhere, for the sake of consistency.
This can be achieved with a saner Go API by dropping the distinction between
internal and external LoggingConfiguration types. Different stability levels of
individual fields have to be covered by documentation (done) and potentially
feature gates (not currently done).
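As a rough illustration of what the unified API means for a component, the
minimal sketch below embeds the one shared LoggingConfiguration directly and
applies it at startup. It assumes the config and flag helpers live in an "api"
package under component-base/logs (shown here as k8s.io/component-base/logs/api/v1)
with roughly these names; treat it as a sketch, not the definitive interface.

```go
// Minimal sketch: one shared LoggingConfiguration, no internal/external split.
// Package path and helper names are assumptions about the shared API.
package main

import (
	"github.com/spf13/pflag"

	"k8s.io/component-base/featuregate"
	logsapi "k8s.io/component-base/logs/api/v1"
)

func main() {
	// Every component uses the same config type with the same defaults.
	c := logsapi.NewLoggingConfiguration()

	// Alpha/beta fields are guarded by feature gates, identically everywhere.
	featureGate := featuregate.NewFeatureGate()
	if err := logsapi.AddFeatureGates(featureGate); err != nil {
		panic(err)
	}

	// Command line flags are derived from the same configuration.
	fs := pflag.NewFlagSet("component", pflag.ExitOnError)
	logsapi.AddFlags(c, fs)
	_ = fs.Parse([]string{"--logging-format=json", "-v=2"})

	// Validate and activate the chosen logging configuration.
	if err := logsapi.ValidateAndApply(c, featureGate); err != nil {
		panic(err)
	}
}
```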
Advantages:
- everything related to logging is under component-base/logs;
previously this was scattered across different packages and
different files under "logs" (why some code was in logs/config.go
vs. logs/options.go vs. logs/logs.go repeatedly confused me
when coming back to the code):
  - long-term config and command line API are clearly separated
    into the "api" package underneath that
  - logs/logs.go itself only deals with legacy global flags and
    logging configuration
- removal of separate Go APIs like logs.BindLoggingFlags and
logs.Options
- LogRegistry becomes an implementation detail, with less code
and less exported functionality (only registration needs to
be exported, querying is internal)
External Repository Staging Area
This directory is the staging area for packages that have been split to their own repository. The content here will be periodically published to respective top-level k8s.io repositories.
Repositories currently staged here:
- k8s.io/api
- k8s.io/apiextensions-apiserver
- k8s.io/apimachinery
- k8s.io/apiserver
- k8s.io/cli-runtime
- k8s.io/client-go
- k8s.io/cloud-provider
- k8s.io/cluster-bootstrap
- k8s.io/code-generator
- k8s.io/component-base
- k8s.io/component-helpers
- k8s.io/controller-manager
- k8s.io/cri-api
- k8s.io/csi-translation-lib
- k8s.io/kube-aggregator
- k8s.io/kube-controller-manager
- k8s.io/kube-proxy
- k8s.io/kube-scheduler
- k8s.io/kubectl
- k8s.io/kubelet
- k8s.io/legacy-cloud-providers
- k8s.io/metrics
- k8s.io/mount-utils
- k8s.io/pod-security-admission
- k8s.io/sample-apiserver
- k8s.io/sample-cli-plugin
- k8s.io/sample-controller
The code in the staging/ directory is authoritative, i.e. the only copy of the code. You can directly modify such code.
Using staged repositories from Kubernetes code
Kubernetes code uses the repositories in this directory via symlinks in the
vendor/k8s.io directory into this staging area. For example, when
Kubernetes code imports a package from the k8s.io/client-go repository, that
import is resolved to staging/src/k8s.io/client-go relative to the project
root:
```go
// pkg/example/some_code.go
package example

import (
	"k8s.io/client-go/dynamic" // resolves to staging/src/k8s.io/client-go/dynamic
)
```
Once the change-over to external repositories is complete, these repositories
will actually be vendored from k8s.io/<package-name>.
Creating a new repository in staging
Adding the staging repository in kubernetes/kubernetes:
1. Send an email to the SIG Architecture mailing list and the mailing list of
   the SIG which would own the repo requesting approval for creating the
   staging repository.
2. Once approval has been granted, create the new staging repository.
3. Add a symlink to the staging repo in vendor/k8s.io.
4. Update import-restrictions.yaml to add the list of other staging repos that
   this new repo can import.
5. Add all mandatory template files to the staging repo as mentioned in
   https://github.com/kubernetes/kubernetes-template-project.
6. Make sure that the .github/PULL_REQUEST_TEMPLATE.md and CONTRIBUTING.md
   files mention that PRs are not directly accepted to the repo.
7. Ensure that a docs.go file is added; see the sketch after this list.
   Refer to kubernetes/kubernetes#91354 for reference.
8. NOTE: Do not edit go.mod or go.sum in the new repo
   (staging/src/k8s.io/<package-name>/) manually. Run the following instead:

   ./hack/update-vendor.sh
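For illustration only, such a docs.go can be as small as a package declaration
with a doc comment. The package and module names below (samplestaging,
k8s.io/sample-staging) are hypothetical placeholders, not an actual repo.

```go
// docs.go (illustrative sketch): gives the staging repo a documented root
// package. The names "samplestaging" and "k8s.io/sample-staging" are
// hypothetical placeholders.

// Package samplestaging contains the code published to the
// k8s.io/sample-staging repository.
package samplestaging // import "k8s.io/sample-staging"
```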
Creating the published repository
1. Create an issue in the kubernetes/org repo to request creation of the
   respective published repository in the Kubernetes org. The published
   repository must have an initial empty commit. It also needs specific access
   rules and branch settings. See kubernetes/org#58 for an example.
2. Set up branch protection and enable access to the stage-bots team by adding
   the repo in prow/config.yaml. See kubernetes/test-infra#9292 for an example.
3. Once the repository has been created in the Kubernetes org, update the
   publishing-bot to publish the staging repository by updating:
   - rules.yaml: Make sure that the list of dependencies reflects the staging
     repos in the Godeps.json file.
   - fetch-all-latest-and-push.sh: Add the staging repo in the list of repos
     to be published.
4. Add the staging and published repositories as a subproject for the SIG that
   owns the repos in sigs.yaml.
5. Add the repo to the list of staging repos in this README.md file.