This replaces the experimental logr v0.4 with the stable v1.1.0
release. This is a breaking API change for some users because:
- Comparing logr.Logger against nil is not possible anymore:
  it's now a struct instead of an interface. Code which
  allows a nil logger should switch to *logr.Logger as the type
  (see the sketch after this list).
- Logger implementations must be updated in lockstep.
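A minimal sketch of that migration (the `worker` type here is hypothetical, not from this change):
```go
package main

import "github.com/go-logr/logr"

// worker is a hypothetical consumer that treats its logger as optional.
type worker struct {
	logger *logr.Logger // nil means "do not log"
}

func (w *worker) run() {
	// With logr v1, Logger is a struct, so an interface nil check no
	// longer compiles; checking the pointer takes its place.
	if w.logger != nil {
		w.logger.Info("starting work")
	}
}
```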
Instead of updating the forked zapr code in json.go, directly using
the original go-logr/zapr is simpler and avoids duplication of effort.
The updated zapr supports logging of numeric verbosity. Error messages
don't have a verbosity (they always get logged), so "v" is no longer
added to them.
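For illustration only (not from this diff), a minimal program showing that difference, assuming zapr is wired up the usual way:
```go
package main

import (
	"errors"

	"github.com/go-logr/zapr"
	"go.uber.org/zap"
)

func main() {
	// zapr.NewLogger wraps a *zap.Logger as a logr.Logger.
	log := zapr.NewLogger(zap.NewExample())

	// Info calls carry their numeric verbosity as a "v" field
	// (and are emitted only if the zap level allows them).
	log.V(1).Info("verbose message")

	// Error calls have no verbosity: they are always logged,
	// so no "v" field is attached.
	log.Error(errors.New("boom"), "something failed")
}
```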
Source code logging for panic messages was fixed so that it references
the code containing the invalid log call, not the json.go implementation.
Finally, zapr includes additional information in its panic
messages ("zap field", "ignored key", "invalid key").
This change fixes the order in which the PodSecurity and
PodSecurityPolicy admission plugins are run. The old code intended
for PSA to run before PSP, but attempted to enforce that via
registration order (which is irrelevant). Now PSA is correctly
executed before PSP to allow for audit and warning modes to be
exercised even in the presence of a deny PSP policy.
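The shape of the fix can be sketched like this (illustrative Go, not the actual kube-apiserver source):
```go
package admission

// Execution order comes from an explicit ordered list of plugin
// names, not from the order of registration calls.
var orderedPlugins = []string{
	// ...
	"PodSecurity",       // runs first, so audit/warn modes always fire
	"PodSecurityPolicy", // a deny policy here can no longer mask them
	// ...
}
```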
Signed-off-by: Monis Khan <mok@vmware.com>
15m is enough for Cluster Autoscaler to remove empty nodes, so we need
to break them sooner than that. Instead, wait 15m after breaking them to
ensure Cluster Autoscaler will consider them unready rather than still
starting.
Prior to 1.22 a user could change NodePort values within a service
during an update, and the apiserver would allocate values for any that
were not specified.
Consider a YAML like:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    port: 80
  - name: q
    port: 81
  selector:
    app: foo
```
When this is created, nodePort values will be allocated for each port.
Something like:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  clusterIP: 10.0.149.11
  type: NodePort
  ports:
  - name: p
    nodePort: 30872
    port: 80
    protocol: TCP
    targetPort: 9376
  - name: q
    nodePort: 31310
    port: 81
    protocol: TCP
    targetPort: 81
  selector:
    app: foo
```
If the user PUTs (kubectl replace) the original YAML, we would see that
`.nodePort = 0`, and allocate new ports. This was ugly at best.
In 1.22 we fixed this to not allocate new values if we still had the old
values, but instead re-assign them. Net new ports would still be seen
as `.nodePort = 0` and so new allocations would be made.
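A rough sketch of that 1.22 reuse logic (hypothetical types and names, not the actual apiserver code):
```go
package main

// Simplified, hypothetical types for illustration.
type ServicePort struct {
	Name     string
	NodePort int32
}

type PortAllocator interface {
	AllocateNext() (int32, error)
}

// updateNodePorts mimics the described 1.22 behavior: a port with
// nodePort == 0 first reuses the value it held in the old object and
// only allocates a fresh value if it is net new.
func updateNodePorts(oldPorts, newPorts []ServicePort, alloc PortAllocator) error {
	oldByName := map[string]int32{}
	for _, p := range oldPorts {
		oldByName[p.Name] = p.NodePort
	}
	for i := range newPorts {
		if newPorts[i].NodePort != 0 {
			continue // user-specified value, keep it
		}
		if old, ok := oldByName[newPorts[i].Name]; ok && old != 0 {
			newPorts[i].NodePort = old // reuse the previous allocation
			continue
		}
		port, err := alloc.AllocateNext() // net-new port: allocate
		if err != nil {
			return err
		}
		newPorts[i].NodePort = port
	}
	return nil
}
```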
This broke a corner case as follows:
Prior to 1.22, the user could PUT this YAML:
```
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  type: NodePort
  ports:
  - name: p
    nodePort: 31310  # note this is the `q` value
    port: 80
  - name: q
    # note this nodePort is not specified
    port: 81
  selector:
    app: foo
```
The `p` port would take the `q` port's value. The `q` port would be
seen as `.nodePort = 0` and a new value allocated. In 1.22 this results
in an error (duplicate value in `p` and `q`).
This is VERY minor but it is an API regression, which we try to avoid,
and the fix is not too horrible.
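One plausible shape for such a fix, continuing the hypothetical sketch above: claim explicitly requested values in a first pass, so an old value is never reused after another port has taken it.
```go
// Two-pass variant (same hypothetical types as the earlier sketch).
func updateNodePortsFixed(oldPorts, newPorts []ServicePort, alloc PortAllocator) error {
	// Pass 1: explicitly requested values claim their numbers first.
	used := map[int32]bool{}
	for _, p := range newPorts {
		if p.NodePort != 0 {
			used[p.NodePort] = true
		}
	}
	oldByName := map[string]int32{}
	for _, p := range oldPorts {
		oldByName[p.Name] = p.NodePort
	}
	// Pass 2: fill in nodePort == 0 entries, reusing an old value
	// only if no other port claimed it in pass 1.
	for i := range newPorts {
		if newPorts[i].NodePort != 0 {
			continue
		}
		if old := oldByName[newPorts[i].Name]; old != 0 && !used[old] {
			newPorts[i].NodePort = old
			used[old] = true
			continue
		}
		port, err := alloc.AllocateNext() // assumed to skip values in use
		if err != nil {
			return err
		}
		newPorts[i].NodePort = port
		used[port] = true
	}
	return nil
}
```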
This commit adds more robust testing of this logic.
The Makefile scripts create a variable with all the Go files
that are part of the Kubernetes source tree, including staging.
As of today, this variable has a size of < 100kb:
```
$ wc .make/all_go_dirs.mk
2326 2326 98905 .make/all_go_dirs.mk
```
This variable is passed as an argument in the Makefiles, where it
is expanded. On Linux, there is a limit on the maximum size of a
single argument, MAX_ARG_STRLEN.
If an argument goes above 128k, you get a nice:
```
execvp: /usr/bin/env: Argument list too long
```
If you, for whatever reason, run `go mod vendor` inside the
hack/tools folder, these files will be added to the variable
and most probably you'll go above the limit and get that error.
Then, you'll learn a lot about Makefiles, shell expansion, strace,
execve, ARG_MAX and MAX_ARG_STRLEN, until you realize what the
real problem is :).