Adds the KUBE_BUILD_WINDOWS option to make release-images and
quick-release-images, which allows them to also build a Windows
kube-proxy image. That image can
then be used with Windows Host Process Containers to start the kube-proxy
service on Windows nodes.
Be smarter about finding the input packages for genclient et al. The
previous grep patterns were too generic. This caused code-generator, for
example, to pick up its own auto-generated packages. In this particular
case having a status field in the type adds a comment to the
autogenerated code like:
// Add a +genclient:noStatus comment above the type...
This, in turn, causes problems in some scenarios where the input (API)
and the target package for the auto-generated code reside in separate
Go modules.
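For context, here is a minimal, hypothetical illustration of the kind
of input the generators are meant to find: a real `+genclient` tag sits
directly above an API type in the input package, whereas generated code
only *mentions* the tag inside prose comments, which is what the overly
generic grep was matching (the package and type names below are
illustrative, not real Kubernetes types):

```go
// Package v1 is a hypothetical input (API) package that the generators
// should pick up. The marker comments use the real tag syntax; the
// Widget type itself is illustrative.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type Widget struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   WidgetSpec   `json:"spec,omitempty"`
	Status WidgetStatus `json:"status,omitempty"`
}

type WidgetSpec struct{}

type WidgetStatus struct{}
```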
The load balancer status has gained new fields in recent releases,
but the helper function used by the service load balancer controller
was not updated with all of the new fields, and for the new IPMode
field it did not take into account that the field is a pointer.
Instead of checking fields one by one, use the DeepEqual function,
which provides semantic equality for these types.
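A minimal sketch of that comparison, assuming the helper compares
v1.LoadBalancerStatus values (the function name here is illustrative,
not necessarily the controller's actual helper):

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// loadBalancerStatusEqual reports whether two statuses are semantically
// equal. Semantic deep equality covers every field, including newer
// ones such as the *v1.LoadBalancerIPMode pointer, without comparing
// fields one by one.
func loadBalancerStatusEqual(a, b *v1.LoadBalancerStatus) bool {
	return apiequality.Semantic.DeepEqual(a, b)
}
```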
The test previously assumed that pod-to-other-node-nodeIP traffic would
be unmasqueraded, but this is not the case for most network plugins. Use
a HostNetwork exec pod to avoid these problems (sketched below).
This also requires putting the client and endpoint on different nodes,
because with most network plugins, a node-to-same-node-pod connection
will end up using the internal "docker0" (or whatever) IP as the
source address rather than the node's public IP, and we don't know
what that IP is.
Also make it work with IPv6.
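The shape of that host-network exec pod, as a minimal sketch (the
helper name, image, and tag are illustrative, not the test's actual
code):

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostNetworkExecPod returns a pod that shares its node's network
// namespace, so connections it makes use the node IP as the source
// address and are not subject to the network plugin's pod-masquerade
// rules. This works the same way for IPv4 and IPv6 node addresses.
func hostNetworkExecPod(ns string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "hostexec-", Namespace: ns},
		Spec: v1.PodSpec{
			HostNetwork:   true, // use the node's network namespace
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "exec",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.47", // illustrative tag
				Args:  []string{"pause"},                              // block forever so we can exec into it
			}},
		},
	}
}
```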
The existing test had two problems:
- It only made connections from within the cluster, so for VIP-type
LBs, the connections would always be short-circuited, and this only
tested kube-proxy's LoadBalancerSourceRanges implementation, not the
cloud's.
- For non-VIP-type LBs, it would only work if pod-to-LB connections
were not masqueraded, which is not the case for most network
plugins.
Fix this by (a) testing connectivity from the test binary, so as to
test the filtering of external IPs and ensure we're testing the
cloud's behavior (sketched below); and (b) using both pod and node
IPs when testing the in-cluster case.
Also some general cleanup of the test case.
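A minimal sketch of the "connect from the test binary" check in (a):
dialing the LB address directly from outside the cluster exercises the
cloud's source-range filtering rather than kube-proxy's short-circuit
path (the function name is illustrative and uses only the standard
library, not the e2e framework's actual helpers):

```go
package example

import (
	"net"
	"time"
)

// lbReachable reports whether a TCP connection to the load balancer
// address (e.g. "203.0.113.10:80" or "[2001:db8::10]:80") succeeds
// from wherever the test binary runs, i.e. from outside the cluster.
func lbReachable(lbAddr string) bool {
	conn, err := net.DialTimeout("tcp", lbAddr, 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}
```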
* Add `Linux{Sandbox,Container}SecurityContext.SupplementalGroupsPolicy` and `ContainerStatus.user` in cri-api
* Add `PodSecurityContext.SupplementalGroupsPolicy`, `ContainerStatus.User`, and the corresponding feature gate (see the sketch after this list)
* Implement DropDisabledPodFields for PodSecurityContext.SupplementalGroupsPolicy and ContainerStatus.User fields
* Wire SecurityContext.SupplementalGroupsPolicy and ContainerStatus.User between the kubelet and cri-api
* Clarify that `SupplementalGroupsPolicy` is an OS-dependent field.
* Make `ContainerStatus.User` the user identity initially attached to the
first process in the container, because the process identity can change
dynamically if the initially attached identity has enough privileges to
call setuid/setgid/setgroups syscalls on Linux.
* Rewording suggestion applied
* Add TODO comment for updating SupplementalGroupsPolicy default value in v1.34
* Added validations for SupplementalGroupsPolicy and ContainerUser
* No feature gate check is needed in validation when adding a new field with no default value
* fix typo: identitiy -> identity
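A minimal sketch of the new fields from a client's point of view,
assuming the `SupplementalGroupsPolicyStrict` constant and the
`ContainerUser`/`LinuxContainerUser` types are exposed in
k8s.io/api/core/v1 under the names described above (the pod itself is
illustrative):

```go
package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func example() {
	// Strict: the container gets only the groups from the pod spec;
	// group memberships defined in the image's /etc/group are not merged.
	policy := v1.SupplementalGroupsPolicyStrict

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{
				SupplementalGroups:       []int64{1000},
				SupplementalGroupsPolicy: &policy,
			},
			Containers: []v1.Container{{Name: "app", Image: "example.test/app"}},
		},
	}

	// Once the kubelet reports status, ContainerStatus.User carries the
	// identity initially attached to the container's first process. It
	// can diverge later if that process calls setuid/setgid/setgroups.
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.User != nil && cs.User.Linux != nil {
			fmt.Println(cs.User.Linux.UID, cs.User.Linux.GID, cs.User.Linux.SupplementalGroups)
		}
	}
}
```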