This commit corrects the field names in the godoc comments of various core
storage structs.
Additional ref: #105963 (comment)
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
The API was deprecated in 1.23 when output/v1alpha2 was
added. v1alpha1 is problematic since it embeds kubeadm/v1beta2
BootstrapToken-related types directly. v1alpha2 instead imports
a new group dedicated to bootstrap tokens, apis/bootstraptoken.
We don't need to worry about data loss once the data has been written to an
output stream. Calling fsync unnecessarily has caused performance issues in
the past.
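A minimal sketch of the idea (not the actual component-base code, the
wrapper name is made up): the output stream gets wrapped so that Sync
becomes a no-op, since the data is considered safe once it reaches the
stream.

```go
package main

import (
	"io"
	"os"
)

// noSyncWriter is a hypothetical wrapper: Write passes data through to the
// underlying stream, Sync deliberately does nothing because reaching the
// stream is considered good enough.
type noSyncWriter struct {
	io.Writer
}

func (w noSyncWriter) Sync() error { return nil }

func main() {
	w := noSyncWriter{Writer: os.Stderr}
	w.Write([]byte("written to the stream, never fsynced\n"))
	_ = w.Sync() // no-op by design
}
```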
The recent regression https://github.com/kubernetes/kubernetes/issues/107033
shows that we need a way to automatically measure different logging
configurations (structured text, JSON with and without split streams) under
realistic conditions (time stamping, caller identification).
System calls may affect performance, so writing into actual files is useful.
A temp dir under /tmp (usually a tmpfs) is used, so the actual IO bandwidth
shouldn't affect the outcome. The "normal" json.Factory code is used to
construct the JSON logger when actual files are available and can be set as
os.Stderr and os.Stdout, which makes this as realistic as possible.
When discarding the output instead of writing it, the focus is more on the rest
of the pipeline and changes there can be investigated more reliably.
The benchmarks automatically gather "log entries per second" and "bytes per
second", which is useful to know when considering requirements like the ones
from https://github.com/kubernetes/kubernetes/issues/107029.
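A hedged sketch of how such throughput numbers can be reported with
testing.B.ReportMetric; the counting writer and the names below are
assumptions, not the actual benchmark code in k8s.io/component-base/logs:

```go
package logbench

import (
	"encoding/json"
	"io"
	"testing"
	"time"
)

// countingWriter tracks how many bytes the logger actually emitted.
type countingWriter struct {
	out   io.Writer
	bytes int64
}

func (w *countingWriter) Write(p []byte) (int, error) {
	n, err := w.out.Write(p)
	w.bytes += int64(n)
	return n, err
}

func BenchmarkJSONOutput(b *testing.B) {
	// io.Discard focuses on the encoding pipeline; a file under /tmp
	// would make the system-call overhead visible as well.
	w := &countingWriter{out: io.Discard}
	enc := json.NewEncoder(w)
	entry := map[string]interface{}{"msg": "example", "count": 42}

	start := time.Now()
	for i := 0; i < b.N; i++ {
		_ = enc.Encode(entry)
	}
	elapsed := time.Since(start).Seconds()

	// Report throughput in addition to the default ns/op.
	b.ReportMetric(float64(b.N)/elapsed, "entries/sec")
	b.ReportMetric(float64(w.bytes)/elapsed, "bytes/sec")
}
```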
logcheck complains:
Additional arguments to ErrorS should always be Key Value pairs. Please check if there is any key or value missing.
That check is intentional, but not applicable here. The check can be worked
around by calling the functions through variables.
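For illustration, a minimal sketch of that workaround (the variable name is
made up): logcheck only recognizes direct calls to klog.ErrorS, so routing
the call through a function variable hides it from the static check. This
should only be done when the check is known not to apply, as in a benchmark
that passes pre-built argument lists.

```go
package main

import (
	"errors"

	"k8s.io/klog/v2"
)

// errorS is a function variable; calls through it are not flagged by
// logcheck as klog.ErrorS calls.
var errorS = klog.ErrorS

func main() {
	// Arguments loaded at runtime cannot be verified statically.
	args := []interface{}{"some", "pre-built", "argument", "list"}

	// A direct klog.ErrorS(err, "msg", args...) call would be flagged here.
	errorS(errors.New("example"), "benchmark message", args...)
	klog.Flush()
}
```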
The benchmark depends on k8s.io/api (for v1.Container). Such a dependency is
not desirable for k8s.io/component-base/logs, even if it's just for
testing. The solution is to create a separate directory where such a dependency
isn't a problem.
The alternative, a separate package with its own go.mod file under
k8s.io/component-base/logs, would have been more complicated to maintain (yet
another go.mod file and different whitelisted dependencies).
The benchmark reads a JSON log file and measures how long it takes to re-encode
it. The focus is on the encoding of message and values, therefore additional
work (time stamping, caller, writing to file) gets avoided.
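A hedged sketch of that approach (the input path and helper name are made
up): the entries get decoded once outside the timed loop, and only the
re-encoding is measured.

```go
package logbench

import (
	"bufio"
	"encoding/json"
	"io"
	"os"
	"testing"
)

// loadEntries decodes one JSON log entry per line from the given file.
func loadEntries(tb testing.TB, path string) []map[string]interface{} {
	f, err := os.Open(path)
	if err != nil {
		tb.Fatal(err)
	}
	defer f.Close()

	var entries []map[string]interface{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var entry map[string]interface{}
		if err := json.Unmarshal(scanner.Bytes(), &entry); err != nil {
			tb.Fatal(err)
		}
		entries = append(entries, entry)
	}
	return entries
}

func BenchmarkReencode(b *testing.B) {
	entries := loadEntries(b, "testdata/example.log") // hypothetical input file
	enc := json.NewEncoder(io.Discard)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, entry := range entries {
			_ = enc.Encode(entry)
		}
	}
}
```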
The encoder configuration can now be chosen by the caller. This will be used by
a benchmark to write messages without caller and time stamp.
While at it, some places where the logger was unnecessarily tested with split
output streams writing into the same actual stream were replaced with writing
to a single stream. That was a leftover from a previous incarnation of the
split output stream patch, where identical streams were used instead of nil
for the error stream to indicate "single stream".
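A minimal sketch of a caller-chosen encoder configuration, written directly
against zap rather than the actual json.Factory plumbing (the function name
below is made up): leaving TimeKey and CallerKey empty makes the JSON
encoder omit time stamps and caller information, which is what the benchmark
wants.

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// newJSONLogger builds a logger from whatever encoder configuration the
// caller passes in.
func newJSONLogger(encoderConfig zapcore.EncoderConfig) *zap.Logger {
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(encoderConfig),
		zapcore.AddSync(os.Stderr),
		zapcore.InfoLevel,
	)
	return zap.New(core)
}

func main() {
	// Benchmark configuration: TimeKey and CallerKey are left empty on
	// purpose, so neither time stamp nor caller get encoded.
	cfg := zapcore.EncoderConfig{
		MessageKey: "msg",
	}
	logger := newJSONLogger(cfg)
	logger.Info("message without time stamp and caller")
}
```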
This currently covers two cases:
- "kubectl list" (the regression from https://github.com/kubernetes/kubernetes/issues/107012)
- "kubectl get pods/no-such-pod" (no particular reason except that the output
should be deterministic)
In contrast to some other tests that check for strings inside the
output (run_deprecated_api_tests) or compare after
sorting (run_kubectl_version_tests), stdout, stderr and the return code must
match exactly.
This ensures that there is no extra, unexpected output and that the right
output stream is used.
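To illustrate the difference (in Go here; the actual test is part of the
shell-based test/cmd suite, and the expected strings below are only an
example): exact comparison catches extra output and output sent to the wrong
stream, which a substring check would miss.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "get", "pods/no-such-pod")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()

	exitCode := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		exitCode = exitErr.ExitCode()
	}

	// Exact match of stdout, stderr and the return code: any unexpected
	// extra line or a message on the wrong stream makes this fail.
	wantStdout := ""
	wantStderr := "Error from server (NotFound): pods \"no-such-pod\" not found\n"
	if stdout.String() != wantStdout || stderr.String() != wantStderr || exitCode != 1 {
		fmt.Println("output mismatch")
	}
}
```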
cli.Run was an attempt to eliminate error handling in Kubernetes
commands. However, it had to rely on heuristics that are not necessarily right
for all commands.
kubectl is one example: it has its own error printing code that should be
used in all cases after a command failure. That code now also gets used for
`--warnings-as-errors`. Previously, that flag caused the following message to
be logged at the end:
E0110 16:56:01.987555 202060 run.go:120] "command failed" err="1 warning received"
Now it ends with:
error: 1 warning received
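A hedged sketch of the resulting flow (not the actual kubectl wiring;
checkErr stands in for kubectl's own error printing helper): instead of a
generic runner logging "command failed" via klog, the command surfaces the
error itself, prints "error: <message>" to stderr and sets the exit code.

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// checkErr stands in for kubectl's own error printing.
func checkErr(err error) {
	if err == nil {
		return
	}
	fmt.Fprintf(os.Stderr, "error: %v\n", err)
	os.Exit(1)
}

func main() {
	cmd := &cobra.Command{
		Use:           "example",
		SilenceUsage:  true,
		SilenceErrors: true, // the command handles its own error output
		RunE: func(cmd *cobra.Command, args []string) error {
			return errors.New("1 warning received")
		},
	}
	checkErr(cmd.Execute()) // prints "error: 1 warning received"
}
```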
With the removal of the kubelet AppArmor profile validation in
https://github.com/kubernetes/kubernetes/pull/97966 we passed the
responsibility for the desired behavior to the container runtime.
Therefore we have to change the e2e test, which silently broke after the
PR was merged.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>