Got the proxy-server coming up in the master.
Added certs and have it coming up with those certs.
Added a daemonset to run the network-agent.
Added support for the agent running as a daemon set on every node.
Added quick hack to test that proxy server/agent were correctly
tunneling traffic to the kubelet.
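For context, the http-connect mode tunnels apiserver-to-kubelet traffic
through the proxy server. Below is a minimal sketch of that kind of tunnel
dial in Go, with placeholder addresses and no TLS; it is not the actual
konnectivity client library.

```go
package tunnelsketch

import (
	"bufio"
	"fmt"
	"net"
	"net/http"
)

// dialViaProxy opens a TCP connection to proxyAddr and asks it to tunnel to
// targetAddr (e.g. a kubelet address) using HTTP CONNECT. Both addresses are
// placeholders; the real konnectivity client also handles TLS client certs
// and a gRPC mode, which are omitted here.
func dialViaProxy(proxyAddr, targetAddr string) (net.Conn, error) {
	conn, err := net.Dial("tcp", proxyAddr)
	if err != nil {
		return nil, err
	}
	fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", targetAddr, targetAddr)

	// Read the proxy's reply; a 200 means the tunnel is established.
	resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
	if err != nil {
		conn.Close()
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		conn.Close()
		return nil, fmt.Errorf("proxy refused tunnel: %s", resp.Status)
	}
	// Simplification: a production client would keep using the buffered
	// reader, since it may already hold tunneled bytes.
	return conn, nil
}
```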
Added more WIP for reading network proxy configuration.
Get flags set correctly and fix connection services.
Adding missing ApplyTo
Added ConnectivityService.
Fixed build directives. Added connectivity service configuration.
Fixed log levels.
Fixed minor issues when the feature is turned off.
Fixed boilerplate and format.
Moved log dialer initialization earlier per liggitt's suggestion.
Fixed a few minor issues in the configuration for GCE.
Fixed scheme allocation
Adding unit test.
Added test for direct connectivity service.
Switching to injecting the Lookup method rather than using a Singleton.
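To illustrate the injection change in general terms (the type and struct
names below are hypothetical, not the real Kubernetes ones): the component
that needs to dial is handed a lookup function at construction time instead
of reaching for a package-level singleton, which also makes it easy to
substitute a fake in unit tests.

```go
package egresssketch

import (
	"context"
	"net"
)

// DialFunc and LookupFunc are illustrative stand-ins for the real types.
type DialFunc func(ctx context.Context, network, addr string) (net.Conn, error)

// LookupFunc returns the dialer to use for a named egress path
// (e.g. "cluster", "master", "etcd").
type LookupFunc func(egressName string) (DialFunc, error)

// KubeletConnector is a consumer that previously would have called a
// package-level singleton; after the change it is handed the lookup at
// construction time.
type KubeletConnector struct {
	lookup LookupFunc
}

func NewKubeletConnector(lookup LookupFunc) *KubeletConnector {
	return &KubeletConnector{lookup: lookup}
}

func (c *KubeletConnector) Dial(ctx context.Context, addr string) (net.Conn, error) {
	dial, err := c.lookup("cluster")
	if err != nil {
		return nil, err
	}
	return dial(ctx, "tcp", addr)
}
```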
First round of mikedanese's feedback.
Fixed deployment to use yaml and other changes suggested by MikeDanese.
Switched network proxy server/agent names to kebab-case rather than camelCase.
Picked up DIAL_RSP fix.
Factored in deads2k feedback.
Feedback from mikedanese
Factored in second round of feedback from David.
Fix path in verify.
Factored in anfernee's feedback.
First part of lavalamp's feedback.
Factored in more changes from lavalamp and mikedanese.
Renamed network-proxy to konnectivity-server and konnectivity-agent.
Fixed tolerations and config file checking.
Added missing strptr
Finished lavalamp's requested rename.
Disambiguated the konnectivity service by renaming it to egress selector.
Switched feature flag to KUBE_ENABLE_EGRESS_VIA_KONNECTIVITY_SERVICE
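Conceptually, the egress selector maps each named egress path to either a
direct dialer or a proxied one. The sketch below is hypothetical and
simplified; it is not the apiserver's egressselector package.

```go
package egressselectsketch

import (
	"context"
	"fmt"
	"net"
)

// DialFunc mirrors the shape of a context-aware dialer.
type DialFunc func(ctx context.Context, network, addr string) (net.Conn, error)

// directDial connects straight to the destination; this is roughly what a
// "direct" connectivity/egress configuration resolves to.
func directDial(ctx context.Context, network, addr string) (net.Conn, error) {
	d := net.Dialer{}
	return d.DialContext(ctx, network, addr)
}

// EgressSelector maps a named egress path ("cluster", "master", "etcd", ...)
// to the dialer configured for it; proxied entries would tunnel through
// konnectivity-server, direct entries use directDial.
type EgressSelector struct {
	dialers map[string]DialFunc
}

func (s *EgressSelector) Lookup(egressName string) (DialFunc, error) {
	d, ok := s.dialers[egressName]
	if !ok {
		return nil, fmt.Errorf("no egress configuration named %q", egressName)
	}
	return d, nil
}
```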
For file discovery, in case the user provides a CA file path in
the kubeconfig, make sure it's preloaded and embedded using the
new function EnsureCertificateAuthorityIsEmbedded() (a sketch
follows the cleanup list below).
This commit also applies cleanup:
- unroll validateKubeConfig() into ValidateConfigInfo() so that
the default cluster can be reused.
- in ValidateConfigInfo() reuse the variable config instead of creating
a new variable kubeconfig.
- make the Ensure* functions return descriptive errors instead of
wrapping the errors on the side of the callers.
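As a rough sketch of what embedding a file-based CA into a kubeconfig
cluster entry involves, using the client-go clientcmd API types (the helper
name and error messages are illustrative, not the exact kubeadm
implementation):

```go
package kubeconfigsketch

import (
	"fmt"
	"os"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// ensureCAIsEmbedded loads a CA referenced by file path and embeds its
// bytes directly in the cluster entry, so the kubeconfig is self-contained.
func ensureCAIsEmbedded(cluster *clientcmdapi.Cluster) error {
	if len(cluster.CertificateAuthorityData) > 0 {
		return nil // already embedded
	}
	if cluster.CertificateAuthority == "" {
		return fmt.Errorf("cluster has neither an embedded CA nor a CA file path")
	}
	caBytes, err := os.ReadFile(cluster.CertificateAuthority)
	if err != nil {
		return fmt.Errorf("could not read CA file %q: %w", cluster.CertificateAuthority, err)
	}
	cluster.CertificateAuthorityData = caBytes
	cluster.CertificateAuthority = ""
	return nil
}
```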
Secure serving was already enabled for kube-controller-manager.
Do the same for kube-scheduler, by passing the flags
"authentication-kubeconfig" and "authorization-kubeconfig"
to the binary in the static Pod.
This change allows the scheduler to perform reviews on incoming
requests, such as:
- authentication.k8s.io/v1beta1 TokenReview
- authorization.k8s.io/v1 SubjectAccessReview
The authentication and authorization checks for the
"system:kube-scheduler" user were previously enabled by PR 72491;
the flag change itself is sketched below.
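On the kubeadm side, the change roughly amounts to two more entries in the
argument map rendered into the kube-scheduler static Pod. The snippet below
is a simplified illustration, not the actual kubeadm code; the kubeconfig
path reuses the scheduler kubeconfig kubeadm already writes.

```go
package schedulersketch

import "fmt"

// buildSchedulerArgs sketches the flag map rendered into the kube-scheduler
// static Pod command.
func buildSchedulerArgs(kubeconfigPath string) []string {
	args := map[string]string{
		"kubeconfig":   kubeconfigPath,
		"leader-elect": "true",
		// Newly added: let the scheduler authenticate and authorize
		// incoming requests by delegating to the API server
		// (TokenReview / SubjectAccessReview).
		"authentication-kubeconfig": kubeconfigPath,
		"authorization-kubeconfig":  kubeconfigPath,
	}
	out := make([]string, 0, len(args))
	for k, v := range args {
		out = append(out, fmt.Sprintf("--%s=%s", k, v))
	}
	return out
}
```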
On local networks (such as the typical connection path between
control plane components) gzip compression increases CPU use and
end-to-end p99 latency rather than decreasing it. Disable
compression within the control plane components, matching how a
1.15 cluster would be configured.
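In client-go terms this boils down to setting DisableCompression on the
rest.Config the components use to reach the apiserver. A minimal sketch,
assuming the DisableCompression field available in recent client-go
releases:

```go
package compressionsketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newUncompressedClient builds a clientset that does not request gzip
// compression of API responses. On a low-latency local link
// (apiserver <-> controller-manager/scheduler) compression mostly costs
// CPU and tail latency without saving meaningful bandwidth.
func newUncompressedClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.DisableCompression = true // behave as a 1.15 cluster was configured
	return kubernetes.NewForConfig(cfg)
}
```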
Kube-proxy's iptables mode used to care whether utiliptables's
EnsureRule was able to use "iptables -C" or whether it had to
implement the check hackily using "iptables-save". That became
irrelevant when kube-proxy was reimplemented using
"iptables-restore", and no one ever noticed. So remove that check.
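For reference, the restore-based path that made the check moot writes a
complete rule set through iptables-restore on stdin rather than probing for
individual rules. A simplified sketch, not kube-proxy's utiliptables code:

```go
package iptablessketch

import (
	"bytes"
	"fmt"
	"os/exec"
)

// restoreRules pushes a full table snapshot through iptables-restore.
// With this model there is no need to check for a single rule first
// ("iptables -C" or grepping "iptables-save" output); the desired state
// is written as a whole on each sync.
func restoreRules(rules []byte) error {
	cmd := exec.Command("iptables-restore", "--noflush", "--counters")
	cmd.Stdin = bytes.NewReader(rules)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables-restore failed: %v (output: %s)", err, out)
	}
	return nil
}
```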
- common_test.go: use constants.CurrentKubernetesVersion
- diff_test.go: write temporary files instead of using testdata.
This allows us to avoid bumping kubernetesVersions in the
testdata files (now removed); see the sketch after this list.
- policy_test.go: apply fixes to tests that were previously
passing but that a bump in constants.go would break. These tests
now work for any version.
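The temporary-file approach looks roughly like the following; this is an
illustrative test, not the actual diff_test.go:

```go
package kubeadmtestsketch

import (
	"os"
	"path/filepath"
	"testing"
)

// TestWithGeneratedManifest writes its fixture at runtime instead of reading
// a checked-in testdata file, so nothing needs a version bump when
// constants.CurrentKubernetesVersion changes.
func TestWithGeneratedManifest(t *testing.T) {
	dir := t.TempDir() // cleaned up automatically when the test ends

	manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-apiserver\n")
	path := filepath.Join(dir, "kube-apiserver.yaml")
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		t.Fatalf("failed to write temp manifest: %v", err)
	}

	// ... call the code under test with `path` instead of a testdata path ...
}
```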
The ipvs `getProxyMode` test fails on Mac as
`utilipvs.GetRequiredIPVSMods` tries to read
`/proc/sys/kernel/osrelease` to find the version of the running
Linux kernel. The kernel version is used to determine the list of
required kernel modules for ipvs.
The logic to determine the kernel version is moved to a
GetKernelVersion method on LinuxKernelHandler, which implements
ipvs.KernelHandler. A mock KernelHandler is used in the test cases.
Reading and parsing the file is done in a Go function instead of
exec'ing cut.
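A simplified sketch of the refactor, with the types trimmed down to the
relevant method (the real interface and handler live in the ipvs proxier
package):

```go
package ipvssketch

import (
	"fmt"
	"os"
	"strings"
)

// KernelHandler abstracts kernel introspection so tests can stub it out.
type KernelHandler interface {
	GetKernelVersion() (string, error)
}

// LinuxKernelHandler reads the version from /proc, which only works on Linux.
type LinuxKernelHandler struct{}

func (h *LinuxKernelHandler) GetKernelVersion() (string, error) {
	const path = "/proc/sys/kernel/osrelease"
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("error reading osrelease file %q: %w", path, err)
	}
	// The file contains something like "5.4.0-42-generic\n"; callers split
	// the trimmed string to extract major/minor components.
	return strings.TrimSpace(string(data)), nil
}

// fakeKernelHandler lets the getProxyMode tests run on any OS (e.g. macOS).
type fakeKernelHandler struct{ version string }

func (f *fakeKernelHandler) GetKernelVersion() (string, error) {
	return f.version, nil
}
```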
Propagates the kubeadm FG to the individual k8s components
on the control-plane node.
* Note: Users who want to join worker nodes to the cluster
will have to specify the dual-stack FG to the kubelet using the
nodeRegistration.kubeletExtraArgs option as part of their
join config. Alternatively, they can use KUBELET_EXTRA_ARGS
(see the sketch below).
kubeadm FG: kubernetes/kubeadm#1612
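For illustration, rendering the kubelet feature-gates value from a map could
look like this; the helper is hypothetical, and IPv6DualStack is assumed to
be the gate name in question:

```go
package dualstacksketch

import (
	"fmt"
	"sort"
	"strings"
)

// featureGatesValue renders the value the kubelet expects for its
// --feature-gates flag, e.g. "IPv6DualStack=true".
func featureGatesValue(gates map[string]bool) string {
	pairs := make([]string, 0, len(gates))
	for name, enabled := range gates {
		pairs = append(pairs, fmt.Sprintf("%s=%t", name, enabled))
	}
	sort.Strings(pairs) // deterministic ordering
	return strings.Join(pairs, ",")
}

// A joining worker would carry featureGatesValue(map[string]bool{"IPv6DualStack": true})
// under the "feature-gates" key of nodeRegistration.kubeletExtraArgs, or pass
// it via KUBELET_EXTRA_ARGS as --feature-gates=IPv6DualStack=true.
```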