MultiCIDRServiceAllocator implements a new ClusterIP allocator based on
IPAddress objects to solve the problems and limitations caused by the
existing bitmap allocators.
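As a rough illustration, the IPAddress object backing an allocation might look like the sketch below, built with the networking.k8s.io/v1beta1 Go types (the group version depends on the Kubernetes release, and the Service name is a made-up example):

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1beta1 "k8s.io/api/networking/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Each allocated ClusterIP is represented by an IPAddress object whose
	// name is the IP itself and whose parentRef points at the owning Service.
	ip := networkingv1beta1.IPAddress{
		ObjectMeta: metav1.ObjectMeta{Name: "10.96.0.10"},
		Spec: networkingv1beta1.IPAddressSpec{
			ParentRef: &networkingv1beta1.ParentReference{
				Group:     "",         // core API group
				Resource:  "services", // resource that owns this IP
				Namespace: "default",
				Name:      "my-service", // hypothetical Service name
			},
		},
	}
	out, _ := json.MarshalIndent(ip, "", "  ")
	fmt.Println(string(out))
}
```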
However, during the rollout of new versions, deployments need to support
a skew of one version between kube-apiservers. To avoid the possible
problem where Service requests handled by skewed apiservers could
allocate the same IP to different Services, the new allocator will
implement a dual-write strategy, controlled by the
DisableAllocatorDualWrite feature gate.
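A minimal sketch of the dual-write idea, with hypothetical interfaces rather than the actual kube-apiserver code: while dual-write is in effect, every allocation must succeed in both the legacy bitmap allocator and the new IPAddress-based allocator, so a skewed apiserver that only reads the bitmap still observes the reservation.

```go
package allocator

import "net"

// Allocator is a hypothetical interface standing in for both the legacy
// bitmap-backed allocator and the new IPAddress-backed allocator.
type Allocator interface {
	Allocate(ip net.IP) error
	Release(ip net.IP) error
}

// dualWriteAllocator reserves every IP in both backends so that apiservers
// of skewed versions never hand out the same IP to different Services.
type dualWriteAllocator struct {
	bitmap    Allocator // legacy opaque bitmap object
	ipAddress Allocator // new IPAddress objects
}

func (d *dualWriteAllocator) Allocate(ip net.IP) error {
	// Reserve in the legacy bitmap first so older apiservers see it.
	if err := d.bitmap.Allocate(ip); err != nil {
		return err
	}
	// Then create the IPAddress object; roll back the bitmap reservation
	// if that fails so both views of the allocation stay consistent.
	if err := d.ipAddress.Allocate(ip); err != nil {
		_ = d.bitmap.Release(ip)
		return err
	}
	return nil
}
```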
After MultiCIDRServiceAllocator is GA, DisableAllocatorDualWrite can be
enabled safely, as all apiservers will run with the new allocators. The
graduation of DisableAllocatorDualWrite can also be used to clean up the
opaque API object that contains the old bitmaps.
If MultiCIDRServiceAllocator is enabled, DisableAllocatorDualWrite is
disabled, and the cluster is a new environment, no bitmap object has been
created yet; in that case the apiserver will initialize one so that it can
write to it.
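A rough sketch of that initialization step, using a hypothetical registry interface rather than the real storage types: if the opaque bitmap object is missing, the apiserver creates an empty one covering the service CIDR before it starts dual-writing.

```go
package allocator

import (
	"context"
	"net"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// RangeRegistry is a hypothetical stand-in for the storage of the opaque
// bitmap ("range allocation") object.
type RangeRegistry interface {
	Get(ctx context.Context) (any, error)
	CreateEmpty(ctx context.Context, cidr *net.IPNet) error
}

// ensureBitmap creates an empty bitmap object when none exists yet, for
// example in a cluster bootstrapped with the new allocator only, so that
// dual-write can proceed.
func ensureBitmap(ctx context.Context, store RangeRegistry, cidr *net.IPNet) error {
	if _, err := store.Get(ctx); err == nil {
		return nil // bitmap already present, nothing to initialize
	} else if !apierrors.IsNotFound(err) {
		return err
	}
	return store.CreateEmpty(ctx, cidr)
}
```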
The current results with 100 workers and 15k Services on an n2-standard-48 machine (48 vCPUs, 192 GB RAM) are:
Old allocator:
perf_test.go:139: [RESULT] Duration 1m9.646167533s: [quantile:0.5 value:0.462886801 quantile:0.9 value:0.496662838 quantile:0.99 value:0.725845905]
New allocator:
perf_test.go:139: [RESULT] Duration 2m12.900694343s: [quantile:0.5 value:0.481814448 quantile:0.9 value:1.3867615469999999 quantile:0.99 value:1.888190671]
The new allocator has higher latency but, in contrast, allows a larger
number of Services: when tested with 65k Services, the old allocator
causes etcd to crash with a storage-exceeded error.
The scenario is also not realistic, as a continuous and high load of
Service creation is not expected.
ServiceCIDRs are protected by finalizers and the CIDRs fields are
immutable once set; only the readiness state impacts the allocator,
as it can only allocate IPs if at least one ServiceCIDR is ready.
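A sketch of that readiness check (not the actual allocator code; field access follows the networking.k8s.io/v1beta1 types): an IP may only be allocated if at least one ServiceCIDR containing it has a Ready condition set to true.

```go
package allocator

import (
	"net/netip"

	networkingv1beta1 "k8s.io/api/networking/v1beta1"
	apimeta "k8s.io/apimachinery/pkg/api/meta"
)

// hasReadyCIDRFor reports whether the given IP falls inside at least one
// ServiceCIDR whose Ready condition is true; otherwise it must not be
// allocated.
func hasReadyCIDRFor(ip netip.Addr, serviceCIDRs []*networkingv1beta1.ServiceCIDR) bool {
	for _, sc := range serviceCIDRs {
		if !apimeta.IsStatusConditionTrue(sc.Status.Conditions, "Ready") {
			continue // not ready (for example, being deleted): skip it
		}
		for _, c := range sc.Spec.CIDRs {
			if prefix, err := netip.ParsePrefix(c); err == nil && prefix.Contains(ip) {
				return true
			}
		}
	}
	return false
}
```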
The Add/Update events trigger a reconciliation of the current state
of the ServiceCIDRs present in the informers with the existing IP
allocators.
The Delete events are handled directly to update or delete the
corresponding IP allocator.
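A minimal sketch of that event wiring using client-go; the reconcile and delete callbacks are hypothetical placeholders for the controller's real handlers:

```go
package servicecidr

import (
	networkingv1beta1 "k8s.io/api/networking/v1beta1"
	"k8s.io/client-go/tools/cache"
)

// registerHandlers wires the ServiceCIDR informer to the allocator logic:
// Add/Update events trigger a full reconciliation of the informer state
// against the IP allocators, while Delete events are handled directly.
func registerHandlers(
	informer cache.SharedIndexInformer,
	reconcile func(), // resync all ServiceCIDRs with the existing allocators
	onDelete func(*networkingv1beta1.ServiceCIDR), // update/remove one allocator
) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { reconcile() },
		UpdateFunc: func(oldObj, newObj interface{}) { reconcile() },
		DeleteFunc: func(obj interface{}) {
			if sc, ok := obj.(*networkingv1beta1.ServiceCIDR); ok {
				onDelete(sc)
			}
		},
	})
}
```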
* kubectl: add internalTrafficPolicy to Service describer
* kubectl: add loadBalancer ipMode to Service describer
* kubectl: fix duplicate IP fields in Service describer
For a LoadBalancer Service, there were two "IP" fields in the output of
`kubectl describe service` if its loadBalancerIP was not empty, which
was ambiguous.