This adds a metric, kubeproxy_sync_proxy_rules_last_queued_timestamp,
that captures the last time a change was queued to be applied to the
proxy. This matches the healthz logic, which fails if a pending change
is stale.
This allows us to write alerts that mirror healthz.
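A minimal sketch of what registering and updating such a gauge can
look like with the plain Prometheus Go client (kube-proxy's actual
metrics plumbing differs, and the recordQueued hook below is
hypothetical):
```
package metrics

import "github.com/prometheus/client_golang/prometheus"

// SyncProxyRulesLastQueuedTimestamp reports the last time a change was
// queued to be applied to the proxy.
var SyncProxyRulesLastQueuedTimestamp = prometheus.NewGauge(
	prometheus.GaugeOpts{
		Subsystem: "kubeproxy",
		Name:      "sync_proxy_rules_last_queued_timestamp",
		Help:      "The last time a change was queued to be applied to the proxy.",
	},
)

func init() {
	prometheus.MustRegister(SyncProxyRulesLastQueuedTimestamp)
}

// recordQueued is a hypothetical hook, called wherever a pending change
// is enqueued; alerts can then compare this gauge against the last
// successful sync time, mirroring the healthz logic.
func recordQueued() {
	SyncProxyRulesLastQueuedTimestamp.SetToCurrentTime()
}
```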
Signed-off-by: Casey Callendrello <cdc@redhat.com>
This avoids unnecessary GCE API calls made by the getInstanceByName
helper, which iterates over all zones to find the one in which the
VM exists.
The ProviderID already contains all the information - it has the form:
gce://<VM URL> (the VM URL contains the project, zone, and VM name).
The ProviderID is propagated by the Kubelet on node registration and,
in case of bugs, backfilled by the node-controller.
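A minimal sketch of extracting the zone and name directly from the
ProviderID instead of scanning zones (the helper name is hypothetical,
and it assumes a gce://<project>/<zone>/<name> layout for the VM URL
described above):
```
package gce

import (
	"fmt"
	"strings"
)

// parseGCEProviderID splits a providerID of the form
// gce://<project>/<zone>/<name> into its components, so the zone is
// known without iterating over all zones via the GCE API.
func parseGCEProviderID(providerID string) (project, zone, name string, err error) {
	rest := strings.TrimPrefix(providerID, "gce://")
	if rest == providerID {
		return "", "", "", fmt.Errorf("unexpected providerID format: %q", providerID)
	}
	parts := strings.Split(rest, "/")
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("unexpected providerID format: %q", providerID)
	}
	return parts[0], parts[1], parts[2], nil
}
```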
The protobuf generator cannot handle embedded struct fields correctly.
When a field is embedded by pointer, it produces the error message
`unable to get name for tag from struct "...", field ...`
The reason is that the name determination, which evaluates the AST,
does not handle pointer types when the AST does not already contain an
explicit field name, which is the case for structs embedded by pointer.
Example:
```
type MyStruct struct {
	*OtherStruct `protobuf:"bytes,1,opt,name=otherStruct"`
}
```
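A hedged sketch of the missing case: when deriving a name from the
AST, a pointer type has to be unwrapped before looking for an
identifier (the function below is illustrative, not the generator's
actual code):
```
import "go/ast"

// embeddedFieldName derives a name for an embedded field from its type
// expression, stripping a leading pointer so *OtherStruct resolves the
// same way as OtherStruct.
func embeddedFieldName(expr ast.Expr) (string, bool) {
	if star, ok := expr.(*ast.StarExpr); ok {
		expr = star.X // *OtherStruct -> OtherStruct
	}
	switch t := expr.(type) {
	case *ast.Ident: // OtherStruct
		return t.Name, true
	case *ast.SelectorExpr: // pkg.OtherStruct
		return t.Sel.Name, true
	}
	return "", false // no recoverable field name in the AST
}
```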
'docker pull' is a time-consuming operation, so it makes sense to
check whether the image exists locally before pulling it from a
registry. Check whether the image exists by running 'docker inspect',
and only pull if it doesn't.
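A minimal sketch of the check-then-pull logic in Go via os/exec (the
helper name is hypothetical; it relies only on 'docker inspect'
exiting non-zero when the image is absent):
```
package main

import (
	"fmt"
	"os/exec"
)

// pullIfAbsent pulls an image only when `docker inspect` reports that
// it is not already present locally.
func pullIfAbsent(image string) error {
	// `docker inspect --type=image` exits non-zero if the image is missing.
	if err := exec.Command("docker", "inspect", "--type=image", image).Run(); err == nil {
		return nil // image already present, skip the expensive pull
	}
	out, err := exec.Command("docker", "pull", image).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker pull %s failed: %v: %s", image, err, out)
	}
	return nil
}
```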
The service allocator is used to allocate IP addresses for the
Service IP allocator and NodePorts for the Service NodePort
allocator. It uses a bitmap backed by etcd to store the allocations,
but tries to allocate the resources directly from local memory
instead of from etcd, which can cause issues in environments with
high concurrency.
In deployments with multiple apiservers, the resource allocation
information may be out of sync; this is more noticeable with
NodePorts. For example:
1. apiserver A creates a service with NodePort X
2. apiserver B deletes the service
3. apiserver A creates the service again
If the allocation data of apiserver A wasn't refreshed after the
deletion performed by apiserver B, apiserver A fails the allocation
because its data is out of sync. The repair loops solve the problem
later, but some use cases require improving the concurrency of the
allocation logic.
Instead of performing the Allocate and Release operations purely
locally, we can check whether the local data is up to date with etcd
and operate on the most recent version of the data, as sketched
below.
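A heavily hypothetical sketch of that idea: every attempt re-reads the
latest bitmap from storage and writes it back conditionally, retrying
on conflict, so concurrent apiservers never allocate from stale local
state (all types and method names below are stand-ins, not the
allocator's real API):
```
package main

import "errors"

var (
	ErrAllocated = errors.New("resource already allocated")
	ErrConflict  = errors.New("too many conflicting writers")
)

// Snapshot stands in for the allocation bitmap plus the etcd
// resourceVersion it was read at.
type Snapshot struct {
	Bits map[int]bool
	Rev  int64
}

// Storage abstracts the etcd-backed allocator store.
type Storage interface {
	Get() (Snapshot, error)             // read the latest bitmap
	UpdateIfUnchanged(s Snapshot) error // conditional write keyed on s.Rev
}

// allocateFresh ignores any cached local copy: it refreshes from etcd
// before each attempt and retries when another writer got there first.
func allocateFresh(st Storage, offset, maxRetries int) error {
	for i := 0; i < maxRetries; i++ {
		snap, err := st.Get()
		if err != nil {
			return err
		}
		if snap.Bits[offset] {
			return ErrAllocated
		}
		snap.Bits[offset] = true
		if err := st.UpdateIfUnchanged(snap); err == nil {
			return nil
		}
		// Another apiserver wrote in between; refresh and retry.
	}
	return ErrConflict
}
```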