When using GOTOOLCHAIN with make verify, the build results copied out of
the dockerized environment contain a Go toolchain folder that is
write-protected. To prevent failures during the cleanup step,
opt out of copying $GOPATH to the host.

When doing a kubelet health check on init/join, do not
hardcode the "localhost" address. Instead, use the
KubeletConfiguration HealthzBindAddress and HealthzPort
fields.
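
For illustration, a minimal sketch (not the kubeadm code) of building the
healthz URL from the configured address and port instead of a hardcoded
"localhost". The local struct only mirrors the two relevant
KubeletConfiguration fields and is illustrative:

    package main

    import (
        "fmt"
        "net"
    )

    // kubeletHealthzConfig mirrors the two KubeletConfiguration fields used here.
    type kubeletHealthzConfig struct {
        HealthzBindAddress string // e.g. "127.0.0.1" or "::1"
        HealthzPort        int32  // e.g. 10248
    }

    // healthzURL builds the kubelet health check endpoint from the config.
    func healthzURL(cfg kubeletHealthzConfig) string {
        // net.JoinHostPort brackets IPv6 addresses correctly.
        hostPort := net.JoinHostPort(cfg.HealthzBindAddress, fmt.Sprintf("%d", cfg.HealthzPort))
        return fmt.Sprintf("http://%s/healthz", hostPort)
    }

    func main() {
        fmt.Println(healthzURL(kubeletHealthzConfig{HealthzBindAddress: "127.0.0.1", HealthzPort: 10248}))
    }
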
Adds the KUBE_BUILD_WINDOWS option to make release-images and quick-release-images,
which allows them to build a Windows kube-proxy image as well. That image can
then be used with Windows Host Process Containers to start the kube-proxy
service on Windows nodes.

Be smarter about finding the input packages for genclient et al. The
previous grep patterns were too generic. This caused code-generator, for
example, to pick up its own auto-generated packages. In this particular
case, having a status field in the type adds a comment to the
autogenerated code like:
// Add a +genclient:noStatus comment above the type...
This, in turn, causes problems in some scenarios where the input (API)
and the target package for the auto-generated code reside in separate Go
modules.
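
For illustration only, a hypothetical hand-written input type carrying the
real genclient marker: tag-based discovery should match comments like these
on hand-written types, not the marker text echoed inside generated files
(such as the hint quoted above). The type and its spec/status are made up:

    package example

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // +genclient
    // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

    // Example is a hypothetical API type; only the tag comments matter here.
    type Example struct {
        metav1.TypeMeta   `json:",inline"`
        metav1.ObjectMeta `json:"metadata,omitempty"`

        Spec   ExampleSpec   `json:"spec,omitempty"`
        Status ExampleStatus `json:"status,omitempty"`
    }

    // Hypothetical spec/status types to keep the snippet self-contained.
    type ExampleSpec struct{}
    type ExampleStatus struct{}
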
The load balancer status has gained new fields over the latest releases,
but the helper function used by the service load balancer controller was
not updated with all the new fields, and for the new IPMode field it did
not take into account that the field is a pointer.
Instead of checking fields one by one, use the DeepEqual function that
provides semantic equality for these types.
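
A minimal sketch, assuming the apimachinery semantic equality helper; the
function name is illustrative, not the controller's actual helper:

    package example

    import (
        v1 "k8s.io/api/core/v1"
        apiequality "k8s.io/apimachinery/pkg/api/equality"
    )

    // loadBalancerStatusEqual compares the whole status semantically, so newly
    // added fields (including pointer fields such as IPMode) are covered
    // without having to enumerate them one by one.
    func loadBalancerStatusEqual(a, b *v1.LoadBalancerStatus) bool {
        return apiequality.Semantic.DeepEqual(a, b)
    }
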
The test previously assumed that pod-to-other-node-nodeIP traffic would be
unmasqueraded, but this is not the case for most network plugins. Use
a HostNetwork exec pod to avoid these problems.
This also requires putting the client and endpoint on different nodes,
because with most network plugins, a node-to-same-node-pod connection
will end up using the internal "docker0" (or whatever) IP as the
source address rather than the node's public IP, and we don't know
what that IP is.
Also make it work with IPv6.
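
For illustration, a minimal sketch (not the e2e framework helper) of the
relevant part of such an exec pod: with HostNetwork set, connections
originate from the node's own IP rather than a plugin-internal address.
The function and its arguments are hypothetical:

    package example

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // hostNetworkExecPod returns a long-lived pod in the node's network
    // namespace that the test can exec into to make connections.
    func hostNetworkExecPod(name, image string) *v1.Pod {
        return &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: v1.PodSpec{
                HostNetwork: true, // share the node's network namespace
                Containers: []v1.Container{{
                    Name:    "exec",
                    Image:   image,
                    Command: []string{"sleep", "3600"},
                }},
            },
        }
    }
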
The existing test had two problems:
- It only made connections from within the cluster, so for VIP-type
LBs, the connections would always be short-circuited, and so this
only tested kube-proxy's LoadBalancerSourceRanges (LBSR) implementation,
not the cloud's.
- For non-VIP-type LBs, it would only work if pod-to-LB connections
were not masqueraded, which is not the case for most network
plugins.
Fix this by (a) testing connectivity from the test binary, so as to
test the filtering of external IPs and ensure we're testing the cloud's
behavior; and (b) using both pod and node IPs when testing the
in-cluster case.
Also some general cleanup of the test case.
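
A minimal sketch of the out-of-cluster check, i.e. connecting to the load
balancer from the test binary itself; the helper and its arguments are
illustrative (the real test would take the address from the Service's
load balancer ingress):

    package example

    import (
        "net"
        "time"
    )

    // lbReachable reports whether a TCP connection to the load balancer
    // host:port succeeds within the timeout, as seen from the test binary
    // (i.e. from outside the cluster).
    func lbReachable(host, port string, timeout time.Duration) bool {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), timeout)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }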