The kubernetesservice controller is in charge of reconciling the
kubernetes.default service with the first IP in the service CIDR range
and port 443. It also maintains the Endpoints associated with the
Service using the configured EndpointReconciler.
Until now, the controller created the default namespace if it did not
exist, and likewise created the kubernetes.default service if it did
not exist. However, it polled the Service in each loop; with this
change we reuse the apiserver informers to watch the Service instead
of polling.
It also removes the logic to create the default namespace, since
this is part of the systemnamespaces controller now.
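A minimal sketch, using client-go shared informers (not the actual
controller code), of watching the Service instead of polling it in each
loop:
```go
// Sketch: react to changes of the kubernetes Service via an informer.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Factory scoped to the default namespace, where the kubernetes
	// Service lives.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 12*time.Hour, informers.WithNamespace(metav1.NamespaceDefault))

	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			// The informer delivers changes; no per-loop GET is needed.
			if svc := newObj.(*v1.Service); svc.Name == "kubernetes" {
				fmt.Println("kubernetes.default changed, reconciling")
			}
		},
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	<-stopCh
}
```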
Change-Id: I70954f8e6309e7af8e4b749bf0752168f0ec2c42
Signed-off-by: Antonio Ojea <aojea@google.com>
If something goes wrong during the Azure cloud detection, trying to cast
the returned value results in the following panic, which gives no clue
as to what the actual error was.
```
panic: interface conversion: cloudprovider.Interface is nil, not *azure.Cloud
goroutine 1 [running]:
k8s.io/kubernetes/test/e2e/framework/providers/azure.newProvider()
test/e2e/framework/providers/azure/azure.go:50 +0x2b5
k8s.io/kubernetes/test/e2e/framework.SetupProviderConfig({0xc0007966b8, 0x5})
test/e2e/framework/provider.go:82 +0x1a6
```
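A minimal sketch of the fix pattern, assuming a hypothetical detectCloud
helper in place of the real detection logic: propagate the error and use
a comma-ok type assertion so a nil interface yields a readable message
instead of a panic.
```go
package providers

import (
	"fmt"

	cloudprovider "k8s.io/cloud-provider"
	"k8s.io/legacy-cloud-providers/azure"
)

// detectCloud is a hypothetical stand-in for the existing detection
// logic; it may fail and return a nil interface.
func detectCloud() (cloudprovider.Interface, error) {
	return nil, fmt.Errorf("AZURE_CREDENTIAL_FILE is not set")
}

// newProvider checks the error and uses a comma-ok type assertion, so a
// failed detection surfaces the underlying error instead of panicking.
func newProvider() (*azure.Cloud, error) {
	cloud, err := detectCloud()
	if err != nil {
		return nil, fmt.Errorf("error detecting Azure cloud: %w", err)
	}
	azureCloud, ok := cloud.(*azure.Cloud)
	if !ok {
		return nil, fmt.Errorf("expected *azure.Cloud, got %T", cloud)
	}
	return azureCloud, nil
}
```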
Combining all prepare/unprepare operations for a pod enables plugins to
optimize execution. Plugins can continue to use the v1beta2 API for now,
but should switch. The new API is designed so that plugins that want to work
on each claim one-by-one can do so and then report errors for each claim
separately, i.e. partial success is supported.
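A minimal sketch of how a plugin can handle the batched call
claim-by-claim and still report partial success; the types here are
illustrative approximations, not the real generated gRPC API:
```go
package draplugin

import (
	"context"
	"fmt"
)

// Claim and PrepareResult approximate the batched request/response
// shapes; the field names are assumptions, not the generated types.
type Claim struct{ UID, Name, Namespace string }

type PrepareResult struct {
	CDIDevices []string // devices prepared for this claim
	Err        error    // non-nil if preparing this claim failed
}

type driver struct{}

// prepareClaim stands in for whatever per-claim work the plugin does.
func (d *driver) prepareClaim(ctx context.Context, c Claim) ([]string, error) {
	return []string{fmt.Sprintf("vendor.example.com/gpu=%s", c.UID)}, nil
}

// NodePrepareResources handles all claims of a pod in one call. Each
// claim gets its own result, so one failure does not abort the others.
func (d *driver) NodePrepareResources(ctx context.Context, claims []Claim) map[string]PrepareResult {
	results := make(map[string]PrepareResult, len(claims))
	for _, claim := range claims {
		devices, err := d.prepareClaim(ctx, claim)
		results[claim.UID] = PrepareResult{CDIDevices: devices, Err: err}
	}
	return results
}
```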
- Change the name to make it compliant with the Prometheus guidelines.
- Calculate the value on demand instead of periodically, to comply with
  Prometheus standards.
- Replace the "endpoint" label with "server" to make it semantically
  consistent with the storage factory.
The main problem probably was that
https://github.com/kubernetes/kubernetes/pull/118862 moved the creation of the
first pod before setting up the callback that blocks allocating one claim for
that pod. This is racy because allocations happen in the background.
The test also was unnecessarily complex and hard to read:
- The intended effect can be achieved with three instead of four claims.
- It wasn't clear which claim had "external-claim-other" as its name.
Using the claim variable avoids that ambiguity.
This is a combination of two related enhancements, sketched in the code below:
- By implementing a PreEnqueue check, the initial pod scheduling
attempt for a pod with a claim template gets avoided when the claim
does not exist yet.
- By implementing cluster event checks, only those pods get
scheduled for which something changed, and they get scheduled
immediately without delay.
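A minimal, self-contained sketch of the two checks, with simplified
types standing in for the real scheduler framework signatures:
```go
package scheduling

import "fmt"

type Pod struct {
	Name   string
	Claims []string // names of ResourceClaims the pod references
}

type claimLister interface {
	ClaimExists(name string) bool
}

type dynamicResources struct{ claims claimLister }

// PreEnqueue gates the pod before it enters the active scheduling
// queue: if a referenced claim does not exist yet, the initial
// scheduling attempt is skipped entirely instead of failing.
func (pl *dynamicResources) PreEnqueue(pod *Pod) error {
	for _, claim := range pod.Claims {
		if !pl.claims.ClaimExists(claim) {
			return fmt.Errorf("waiting for resource claim %s to be created", claim)
		}
	}
	return nil
}

// isSchedulableAfterClaimChange is the cluster-event side: when a claim
// is created or updated, only pods referencing that claim get requeued,
// and they get requeued immediately rather than after a backoff.
func (pl *dynamicResources) isSchedulableAfterClaimChange(pod *Pod, changedClaim string) bool {
	for _, claim := range pod.Claims {
		if claim == changedClaim {
			return true
		}
	}
	return false
}
```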
Informer callbacks must be prepared to get cache.DeletedFinalStateUnknown as
the deleted object. They can use that as a hint that some information may have
been missed, but typically they just retrieve the stored object inside it.
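A minimal sketch of that standard pattern for a DeleteFunc callback:
```go
package controller

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

func onDelete(obj interface{}) {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		// The watch may have been interrupted; the informer then wraps
		// the last known state in DeletedFinalStateUnknown. Unwrap it,
		// keeping in mind that intermediate updates may have been missed.
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			fmt.Printf("unexpected object type %T\n", obj)
			return
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			fmt.Printf("unexpected tombstone object type %T\n", tombstone.Obj)
			return
		}
	}
	fmt.Printf("pod %s/%s deleted\n", pod.Namespace, pod.Name)
}
```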