Generally, try to wave away folks who see a particular event stream
and feel tempted to extrapolate and build tooling that expects the
same underlying resource transition chain to keep producing a similar
event stream as the underlying components evolve and are updated. New
controllers should not be constrained to be backwards-compatible with
previous versions with regard to Event emission. This is distinct from
the Event type itself, which has the usual Kubernetes API compatibility
commitments for versioned types.
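For concreteness, here is a minimal sketch of how a controller typically
wires up and uses client-go's record.EventRecorder; the component name,
reason, and message are arbitrary placeholders, and it is precisely such
free-form values (and when they get emitted) that carry no compatibility
guarantee:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
        "k8s.io/client-go/tools/record"
    )

    // newRecorder wires an EventRecorder the way most controllers do:
    // events are broadcast to a sink backed by the core/v1 Events API.
    func newRecorder(client kubernetes.Interface) record.EventRecorder {
        broadcaster := record.NewBroadcaster()
        broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
            Interface: client.CoreV1().Events(""),
        })
        return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "example-controller"})
    }

    // emitSynced records a Normal event against obj. The reason and message
    // strings are chosen by the controller and may change between releases.
    func emitSynced(recorder record.EventRecorder, obj runtime.Object) {
        recorder.Eventf(obj, v1.EventTypeNormal, "Synced", "object synced successfully")
    }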
The EventTTL default has been 1h since 7e258b85bd (Reduce TTL for
events in etcd from 48hrs to 1hr, 2015-03-11, #5315), and remains so
today:
$ git --no-pager log -1 --format='%h %s' origin/master
8e5c02255c Merge pull request #90942 from ii/ii-create-pod%2Bpodstatus-resource-lifecycle-test
$ git --no-pager grep EventTTL: 8e5c02255c cmd/kube-apiserver/app/options/options.go
8e5c02255cc:cmd/kube-apiserver/app/options/options.go: EventTTL: 1 * time.Hour,
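The default can be overridden with kube-apiserver's --event-ttl flag; its
registration in the same options.go looks roughly like this (help text
approximate):

    fs.DurationVar(&s.EventTTL, "event-ttl", s.EventTTL,
        "Amount of time to retain events.")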
The documentation in this space says [1,2]:
To avoid filling up master's disk, a retention policy is enforced:
events are removed one hour after the last occurrence. To provide
longer history and aggregation capabilities, a third party solution
should be installed to capture events.
...
Note: It is not guaranteed that all events happening in a cluster
will be exported to Stackdriver. One possible scenario when events
will not be exported is when event exporter is not running
(e.g. during restart or upgrade). In most cases it's fine to use
events for purposes like setting up metrics and alerts, but you
should be aware of the potential inaccuracy.
...
To prevent disturbing your workloads, event exporter does not have
resources set and is in the best effort QOS class, which means that
it will be the first to be killed in the case of resource
starvation.
Although that's talking more about export from etcd -> external
storage, and not about cluster components submitting events to etcd.
[1]: https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/
[2]: https://github.com/kubernetes/website/pull/4155/files#diff-d8eb69c5436aa38b396d4f3ed75e4792R10
As part of externalizing this function to the k8s.io/component-helpers repo,
this commit simplifies the function signature and makes its 2 helpers private
(nodeSelectorRequirementsAsSelector and nodeSelectorRequirementsAsFieldSelector).
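For context, the label-based half of that conversion can be sketched
against apimachinery's public labels package like this (the function name
toSelector is illustrative, not the helper's):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/labels"
        "k8s.io/apimachinery/pkg/selection"
    )

    // toSelector turns NodeSelectorRequirements into a labels.Selector,
    // roughly what the now-private helper does for label (non-field) terms.
    func toSelector(reqs []v1.NodeSelectorRequirement) (labels.Selector, error) {
        sel := labels.NewSelector()
        for _, r := range reqs {
            var op selection.Operator
            switch r.Operator {
            case v1.NodeSelectorOpIn:
                op = selection.In
            case v1.NodeSelectorOpNotIn:
                op = selection.NotIn
            case v1.NodeSelectorOpExists:
                op = selection.Exists
            case v1.NodeSelectorOpDoesNotExist:
                op = selection.DoesNotExist
            case v1.NodeSelectorOpGt:
                op = selection.GreaterThan
            case v1.NodeSelectorOpLt:
                op = selection.LessThan
            default:
                return nil, fmt.Errorf("unsupported operator %q", r.Operator)
            }
            req, err := labels.NewRequirement(r.Key, op, r.Values)
            if err != nil {
                return nil, err
            }
            sel = sel.Add(*req)
        }
        return sel, nil
    }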
Normally, the PV controller knows about the PVC that triggers the
creation of a PV before it sees the PV, because the PV controller must
set the volume.beta.kubernetes.io/storage-provisioner annotation that
tells an external provisioner to create the PV.
When restarting, the PV controller first syncs its caches, so that
case is also covered.
However, the creator of a PVC might decide to set that annotation
itself to speed up volume creation. While unusual, it's not forbidden
and thus part of the external Kubernetes API. Whether it makes sense
depends on the intentions of the user.
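For illustration, a PVC creator that sets the annotation itself might do
roughly the following with client-go (namespace, claim name, provisioner
name, and size are placeholders):

    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createAnnotatedPVC creates a PVC that already carries the
    // storage-provisioner annotation instead of waiting for the PV
    // controller to add it.
    func createAnnotatedPVC(ctx context.Context, client kubernetes.Interface) error {
        pvc := &v1.PersistentVolumeClaim{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "my-claim",
                Namespace: "myns",
                Annotations: map[string]string{
                    "volume.beta.kubernetes.io/storage-provisioner": "example.com/my-provisioner",
                },
            },
            Spec: v1.PersistentVolumeClaimSpec{
                AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
                Resources: v1.ResourceRequirements{
                    Requests: v1.ResourceList{
                        v1.ResourceStorage: resource.MustParse("1Gi"),
                    },
                },
            },
        }
        _, err := client.CoreV1().PersistentVolumeClaims("myns").Create(ctx, pvc, metav1.CreateOptions{})
        return err
    }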
When that is done and there is heavy load, an external provisioner
might see the PVC and create a PV before the PV controller sees the
PVC. If the PV controller then encounters the PV before the PVC, it
incorrectly concludes that the PV needs to be deleted instead of being
bound.
The same issue occurred earlier for external binding. The existing
code for looking up a PVC in the cache or in the apiserver solves the
issue for volume provisioning as well; it just needs to be enabled also
for PVs without the pv.kubernetes.io/bound-by-controller annotation.
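That lookup follows the usual cache-first, apiserver-second pattern; a
sketch with illustrative names (pvcLister, client), not the PV
controller's actual identifiers:

    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        corelisters "k8s.io/client-go/listers/core/v1"
    )

    // findClaim resolves a PV's claimRef namespace/name, trying the local
    // informer cache first and falling back to the apiserver when the
    // informer has not seen the PVC yet.
    func findClaim(ctx context.Context, pvcLister corelisters.PersistentVolumeClaimLister,
        client kubernetes.Interface, ns, name string) (*v1.PersistentVolumeClaim, error) {
        pvc, err := pvcLister.PersistentVolumeClaims(ns).Get(name)
        if err == nil {
            return pvc, nil
        }
        if !apierrors.IsNotFound(err) {
            return nil, err
        }
        return client.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
    }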
* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kubeproxy: call both proxier at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on IPFamily field AND add dual stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
The third phase of dual stack is a very complex change to the API;
essentially, it introduces dual-stack Services (a short sketch of the
new fields follows the list below). The main changes are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field, IPFamilyPolicy (of type IPFamilyPolicyType),
which can take 3 values to express the "dual-stack(mad)ness" of the
Service: SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
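As a sketch of the new surface (name, selector, and port are
placeholders), a Service requesting both families looks roughly like
this:

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // dualStackService shows the pluralized fields in use. With
    // RequireDualStack the apiserver allocates one ClusterIP per family;
    // ClusterIPs is left empty here so the allocator fills it in.
    func dualStackService() *v1.Service {
        policy := v1.IPFamilyPolicyRequireDualStack
        return &v1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
            Spec: v1.ServiceSpec{
                Selector:       map[string]string{"app": "example"},
                IPFamilyPolicy: &policy,
                IPFamilies:     []v1.IPFamily{v1.IPv4Protocol, v1.IPv6Protocol},
                Ports: []v1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(8080),
                }},
            },
        }
    }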
The goal is to add coverage to the services API operations,
taking into account the 6 different modes a cluster can have:
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>