This actually uses pkg/util/node's GetHostname(), but takes the unit
tests from cmd/kubeadm/app/util's private fork of that function, since
they were more extensive. (Of course, the fact that kubeadm had a
private fork of this function is a strong argument for moving it to
component-helpers.)
When trying to bring up a cluster via kubeadm with these feature gates enabled,
kube-proxy fails because it doesn't recognize them:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
featureGates: {"DynamicResourceAllocation":true,"ContextualLogging":true}
runtimeConfig: {"resource.k8s.io/v1alpha1":"true"}
=>
2023-01-20T07:07:54.474966617Z stderr F E0120 07:07:54.474846 1 run.go:74] "command failed" err="failed complete: unrecognized feature gate: ContextualLogging"
The effect of the logging feature gates on kube-proxy is minor; supporting
them is mostly useful for the sake of consistency and to support kubeadm.
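The kube-proxy side of the fix is essentially to register the logging
feature gates; a minimal sketch, assuming the component-base logs API
(which provides AddFeatureGates), though the exact wiring in kube-proxy
may differ:

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	logsapi "k8s.io/component-base/logs/api/v1"
)

func init() {
	// Register ContextualLogging and the other logging-related gates
	// so kube-proxy recognizes them instead of failing at startup.
	utilruntime.Must(logsapi.AddFeatureGates(utilfeature.DefaultMutableFeatureGate))
}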
In the dual-stack case, iptables.NewDualStackProxier and
ipvs.NewDualStackProxier filtered the nodeport address values by IP
family before creating the single-stack proxiers. But in the
single-stack case, the kube-proxy startup code just passed the values
to the single-stack proxiers without validation, so they had to
re-check them themselves. Fix that.
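A minimal sketch of the kind of per-family filtering involved (the
helper name is hypothetical, not the actual kube-proxy code):

import netutils "k8s.io/utils/net"

// filterByIPFamily keeps only the nodeport address CIDRs matching the
// proxier's IP family, mirroring what the dual-stack constructors did.
func filterByIPFamily(cidrs []string, ipv6 bool) []string {
	var out []string
	for _, cidr := range cidrs {
		if netutils.IsIPv6CIDRString(cidr) == ipv6 {
			out = append(out, cidr)
		}
	}
	return out
}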
Kube-proxy was checking that iptables supports both IPv4 and IPv6, and
falling back to single-stack if not. But it always fell back to the
primary IP family, regardless of which family iptables actually supported.
Fix it so that kube-proxy bails out entirely if the primary IP family
isn't supported.
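Roughly, the new decision looks like this (a sketch with hypothetical
names, not the actual code):

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func chooseFamilies(primary, secondary v1.IPFamily, supported map[v1.IPFamily]bool) ([]v1.IPFamily, error) {
	if !supported[primary] {
		// Previously this silently became a single-stack proxy for the
		// primary family even though iptables couldn't handle it.
		return nil, fmt.Errorf("iptables does not support the primary IP family %q", primary)
	}
	if !supported[secondary] {
		return []v1.IPFamily{primary}, nil // legitimate single-stack fallback
	}
	return []v1.IPFamily{primary, secondary}, nil
}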
Kube-proxy, in NodeCIDR local detector mode, uses the node.Spec.PodCIDRs
field to build the Services iptables rules.
The Node object depends on the kubelet, but if kube-proxy runs as a
static pod or as a standalone binary, it is not possible to guarantee
that the values obtained at bootstrap are valid, causing traffic outages.
Kube-proxy has to react to node changes to avoid these problems; it
simply restarts if it detects that the node PodCIDRs have changed.
In case the Node has been deleted, kube-proxy will only log an
error and keep working, since restarting may break graceful shutdown of
the node.
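A rough sketch of that reaction logic, assuming an informer-style
update handler (names are illustrative, not the actual kube-proxy code):

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

func onNodeUpdate(oldNode, newNode *v1.Node) {
	if !reflect.DeepEqual(oldNode.Spec.PodCIDRs, newNode.Spec.PodCIDRs) {
		// The CIDRs used to build the iptables rules are stale; exit so
		// the supervisor restarts kube-proxy with the fresh values.
		klog.Fatalf("node %s PodCIDRs changed from %v to %v, restarting",
			newNode.Name, oldNode.Spec.PodCIDRs, newNode.Spec.PodCIDRs)
	}
}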
Some of the unit tests cannot pass on Windows for various reasons:
- fsnotify does not have a Windows implementation.
- Proxy mode IPVS is not supported on Windows.
- Seccomp is not supported on Windows.
- VolumeMode=Block is not supported on Windows.
- iSCSI volumes are mounted differently on Windows, and iscsiadm is a
Linux utility.
wire up feature_gate.go with metrics via AddMetrics method
Change-Id: I9b4f6b04c0f4eb9bcb198b16284393d21c774ad8
wire in metrics to kubernetes components
Change-Id: I6d4ef8b26f149f62b03f32d1658f04f3056fe4dc
rename metric since we're using the value to determine if enabled is true or false
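For reference, the resulting per-gate gauge ends up looking roughly
like this (gate name taken from the kubeadm example above; exact
metric name and labels may differ by version):

kubernetes_feature_enabled{name="ContextualLogging",stage="ALPHA"} 1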
Change-Id: I13a6b6df90a5ffb4b9c5b34fa187562413bea029
Update staging/src/k8s.io/component-base/featuregate/feature_gate.go
Co-authored-by: Jordan Liggitt <jordan@liggitt.net>
If the user passes "--proxy-mode ipvs", and it is not possible to use
IPVS, then error out rather than falling back to iptables.
There was never any good reason to be doing fallback; this was
presumably erroneously added to parallel the iptables-to-userspace
fallback (which only existed because we had wanted iptables to be the
default but not all systems could support it).
In particular, if the user passed configuration options for ipvs, then
they presumably *didn't* pass configuration options for iptables, and
so even if the iptables proxy is able to run, it is likely to be
misconfigured.
Back when iptables was first made the default, there were
theoretically some users who wouldn't have been able to support it due
to having an old /sbin/iptables. But kube-proxy no longer does the
things that didn't work with old iptables, and we removed that check a
long time ago. There is also a check for a new-enough kernel version,
but it's checking for a feature which was added in kernel 3.6, and no
one could possibly be running Kubernetes with a kernel that old. So
the fallback code never actually falls back any more, and it should
just be removed.
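The net behavior change, as a rough sketch (not the actual code):

import "fmt"

// Before: if IPVS was unusable, silently construct an iptables proxier.
// After: the requested mode is authoritative.
func validateProxyMode(mode string, canUseIPVS bool) error {
	if mode == "ipvs" && !canUseIPVS {
		return fmt.Errorf("IPVS proxier was requested, but IPVS is not usable on this host")
	}
	return nil
}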
This was implemented partly in server.go and partly in
server_others.go, even though the parts in server.go were entirely
Linux-specific too. Simplify things by putting it all in server_others.go
and getting rid of some unnecessary abstraction.
This commit adds the framework needed for the new local detection
modes BridgeInterface and InterfaceNamePrefix to work.
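For context, these modes are selected through the kube-proxy
configuration, along the lines of the following sketch (the bridge name
is just an example; field names follow the KubeProxyConfiguration
v1alpha1 API):

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
detectLocalMode: "BridgeInterface"
detectLocal:
  bridgeInterface: "cni0"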
Signed-off-by: Surya Seetharaman <suryaseetharaman.9@gmail.com>
This change adds 2 options for Windows:
--forward-healthcheck-vip: If true, forward the service VIP for the
health check port.
--root-hnsendpoint-name: The name of the HNS endpoint for the root
namespace attached to the l2bridge; the default is cbr0.
When --forward-healthcheck-vip is set to true and winkernel is used,
kube-proxy will add an HNS load balancer to forward health check requests
sent to lb_vip:healthcheck_port to node_ip:healthcheck_port.
Without this forwarding, the health check from the Google load balancer
will fail, and it will stop forwarding traffic to the Windows node.
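For example (flag names as added by this change; other flags elided):

kube-proxy --proxy-mode=kernelspace --forward-healthcheck-vip=true --root-hnsendpoint-name=cbr0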
This change fixes the following 2 cases for services:
- `externalTrafficPolicy: Cluster` (the default): healthcheck_port is
10256 for all services. Without this fix, traffic is never forwarded
directly to a Windows node; it always goes through a Linux node and
gets forwarded to Windows from there.
- `externalTrafficPolicy: Local`: a different healthcheck_port for each
service that is configured as local. Without this fix, this feature
doesn't work on Windows nodes at all. The feature preserves the IP of
the client connecting to its application running in a Windows pod.
Change-Id: If4513e72900101ef70d86b91155e56a1f8c79719
* kube-proxy cluster-cidr arg accepts comma-separated list
It is possible in dual-stack clusters to provide kube-proxy with
a comma-separated list containing an IPv4 and an IPv6 CIDR for pods.
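A minimal sketch of parsing such a value (the helper name is
hypothetical, not the actual kube-proxy code):

import (
	"fmt"
	"net"
	"strings"
)

// parseClusterCIDRs splits a comma-separated --cluster-cidr value and
// validates each entry, e.g. "10.244.0.0/16,fd00:10:244::/56".
func parseClusterCIDRs(value string) ([]*net.IPNet, error) {
	var cidrs []*net.IPNet
	for _, s := range strings.Split(value, ",") {
		_, ipnet, err := net.ParseCIDR(strings.TrimSpace(s))
		if err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %v", s, err)
		}
		cidrs = append(cidrs, ipnet)
	}
	return cidrs, nil
}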
Signed-off-by: Tyler Lloyd <Tyler.Lloyd@microsoft.com>
Signed-off-by: Tyler Lloyd <tylerlloyd928@gmail.com>
* Updating cluster-cidr comment description
Signed-off-by: Tyler Lloyd <tyler.lloyd@microsoft.com>