* api: structure change
* api: defaulting, conversion, and validation
* [FIX] validation: auto remove second ip/family when service changes to SingleStack
* [FIX] api: defaulting, conversion, and validation
* api-server: clusterIPs alloc, printers, storage and strategy
* [FIX] clusterIPs default on read
* alloc: auto remove second ip/family when service changes to SingleStack
* api-server: repair loop handling for clusterIPs
* api-server: force kubernetes default service into single stack
* api-server: tie dualstack feature flag with endpoint feature flag
* controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* [FIX] controller-manager: feature flag, endpoint, and endpointSlice controllers handling multi family service
* kube-proxy: feature-flag, utils, proxier, and meta proxier
* [FIX] kube-proxy: call both proxiers at the same time
* kubenet: remove forced pod IP sorting
* kubectl: modify describe to include ClusterIPs, IPFamilies, and IPFamilyPolicy
* e2e: fix tests that depend on the IPFamily field AND add dual stack tests
* e2e: fix expected error message for ClusterIP immutability
* add integration tests for dualstack
the third phase of dual stack is a very complex change in the API,
basically it introduces Dual Stack services. Main changes are:
- It pluralizes the Service IPFamily field to IPFamilies,
and removes the singular field.
- It introduces a new field IPFamilyPolicy (of type IPFamilyPolicyType)
that can take 3 values to express the "dual-stack(mad)ness" of the service:
SingleStack, PreferDualStack and RequireDualStack
- It pluralizes ClusterIP to ClusterIPs.
The goal is to add coverage to the services API operations,
taking into account the 6 different modes a cluster can have:
- single stack: IPv4 or IPv6 (as of today)
- dual stack: IPv4 only, IPv6 only, IPv4 - IPv6, IPv6 - IPv4
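For illustration, a minimal sketch (assuming the k8s.io/api/core/v1 types from this change; the name and addresses are made up) of how the pluralized fields appear on a Service object:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical example: a Service that prefers dual-stack and lists both
	// families; ClusterIPs is plural and ordered to match IPFamilies.
	policy := corev1.IPFamilyPolicyPreferDualStack
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.ServiceSpec{
			IPFamilyPolicy: &policy,
			IPFamilies:     []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
			// ClusterIPs[0] must stay consistent with the legacy ClusterIP field.
			ClusterIPs: []string{"10.0.0.10", "fd00::10"},
			Ports:      []corev1.ServicePort{{Port: 80}},
		},
	}
	fmt.Println(svc.Spec.IPFamilies)
}
```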
* [FIX] add integration tests for dualstack
* generated data
* generated files
Co-authored-by: Antonio Ojea <aojea@redhat.com>
The provided DialContext wraps existing clients' DialContext in an attempt to
preserve any existing timeout configuration. In some cases, we may replace
infinite timeouts with golang defaults.
- scaleio: tcp connect/keepalive values changed from 0/15 to 30/30
- storageos: no change
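A minimal sketch of the wrapping approach described above (the helper name and values are illustrative, not the code in this change):
```go
package main

import (
	"context"
	"net"
	"time"
)

// wrapDialContext returns a DialContext that preserves the caller's dialer
// settings but substitutes a default timeout when none is configured.
// Hypothetical helper used only to illustrate the approach.
func wrapDialContext(d *net.Dialer, defaultTimeout time.Duration) func(ctx context.Context, network, addr string) (net.Conn, error) {
	if d.Timeout == 0 {
		d.Timeout = defaultTimeout // replace an "infinite" timeout with a default
	}
	return d.DialContext
}

func main() {
	dial := wrapDialContext(&net.Dialer{KeepAlive: 15 * time.Second}, 30*time.Second)
	_ = dial // plug into an http.Transport or similar as needed
}
```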
The implementation consists of
- identifying all places where VolumeSource.PersistentVolumeClaim has
a special meaning and then ensuring that the same code path is taken
for an ephemeral volume, with the ownership check
- adding a controller that produces the PVCs for each embedded
VolumeSource.EphemeralVolume
- relaxing the PVC protection controller so that it removes
the finalizer even before the pod is deleted (only
if the GenericEphemeralVolume feature is enabled): this is
needed to break a cycle where foreground deletion of the pod
blocks on removing the PVC, which in turn waits for deletion of the pod
The controller was derived from the endpointslices controller.
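A rough sketch of the ownership check mentioned above (simplified; the helper below is hypothetical, not the in-tree code): the PVC backing an ephemeral volume is only accepted if the pod is its controlling owner.
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isOwnedByPod reports whether the pod is the controlling owner of the PVC.
// Simplified illustration of the ownership check described above.
func isOwnedByPod(pvc *corev1.PersistentVolumeClaim, pod *corev1.Pod) bool {
	owner := metav1.GetControllerOf(pvc)
	return owner != nil &&
		owner.Kind == "Pod" &&
		owner.Name == pod.Name &&
		owner.UID == pod.UID
}

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "app-0", UID: "1234"}}
	pvc := &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{
		// Illustrative name; the controller derives it from pod and volume name.
		Name:            "app-0-scratch",
		OwnerReferences: []metav1.OwnerReference{*metav1.NewControllerRef(pod, corev1.SchemeGroupVersion.WithKind("Pod"))},
	}}
	fmt.Println(isOwnedByPod(pvc, pod))
}
```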
update tests
add comment
amend var name
update comment
add check for empty slice
fix tests
fix mask size in test
review feedback
add ipv4 and ipv6 flag for mask sizes
add to violation exception list
remove import alias
run update-openapi-spec
review feedback
run update-bazel
review feedback
review feedback
All controllers in controller-manager that deal with objects generically
work with those objects without needing the full object. Update the GC
and quota controllers to use PartialObjectMetadata input objects, which
is faster and more efficient.
The metadata client uses protobuf and returns only a subset of object
data (the metadata) which allows operations that act only on objects
generically to work much faster. Use the metadata client in the
namespace controller to reduce the amount of work the namespace controller
has to do in large namespaces.
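For reference, a small sketch of listing only object metadata with the metadata client (assumes a current client-go; the kubeconfig path and GroupVersionResource are placeholders):
```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	// Returns PartialObjectMetadata only: names, labels, owner refs, finalizers, etc.
	list, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.Name, item.UID)
	}
}
```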
This is the 2nd PR to move CSINodeInfo/CSIDriver APIs to
v1beta1 core storage APIs. It includes controller side changes.
It depends on the PR with API changes:
https://github.com/kubernetes/kubernetes/pull/73883
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add the flags as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
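For reference, a minimal standalone sketch of the explicit InitFlags() wiring mentioned above (not a file from this change):
```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// klog exposes InitFlags() to register its flags explicitly; do this
	// before flag.Parse(). Passing nil registers onto flag.CommandLine.
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.Info("logging via klog")
}
```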
Automatic merge from submit-queue (batch tested with PRs 63049, 59731). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
re-enable nodeipam in kube-controller-manager
**What this PR does / why we need it**:
Re-enables the nodeipam controller for external clouds. Also does a small refactor so that we don't need to pass `allocateNodeCidr` into the controller.
In v1.10 we made a change (9187b343e1 (diff-f11913dc67d80d36b3d06a93f61c49cf) in https://github.com/kubernetes/kubernetes/pull/57492) where nodeipam would be disabled for any cluster that sets `--cloud-provider=external`. The original intention behind this was that the nodeipam controller is cloud specific for some clouds (only GCE at the moment) so it should be moved to the CCM (cloud controller manager). After some discussions with wg-cloud-provider it makes sense to re-enable nodeipam controller in KCM and have GCE CCM enable its own cloud-specific IPAM controller as part of [Initialize()](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L33-L35). This would allow for GCE to run nodeipam in both KCM (by setting --cloud-provider=gce and --allocate-node-cidr) and in the CCM (once implemented in `Initialize()`) without disabling nodeipam in the KCM for all external clouds and avoids having to implement nodeipam in CCM.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Re-enable nodeipam controller for external clouds.
```
https://github.com/kubernetes/kubernetes/pull/62913 switched from using a client pool, where each groupVersionResource got its own rest client, to a single client.
This increases the QPS to account for the increased number of requests now going through a single rest client's rate limiter.
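A hedged sketch of raising QPS/Burst on the shared rest.Config (the values and kubeconfig path below are illustrative, not the ones chosen in this change):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// With a single shared rest client there is only one rate limiter, so the
	// budget previously spread across per-resource clients is raised here.
	cfg.QPS = 40   // illustrative value
	cfg.Burst = 60 // illustrative value
	fmt.Printf("qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}
```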
After K8s 1.10 is upgraded to K8s 1.11, the [kubernetes.io/pvc-protection] finalizer is added to PVCs
because the StorageObjectInUseProtection feature will be GA in K8s 1.11.
However, when K8s 1.11 is downgraded to K8s 1.10 and the StorageObjectInUseProtection feature is disabled,
the finalizers remain on the PVCs. As the pvc-protection-controller is not started in K8s 1.10, the finalizers
are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system
but remain in the Terminating phase.
The same applies to the pv-protection-controller and the [kubernetes.io/pv-protection] finalizer on PVs.
That's why the pvc-protection-controller is always started: it removes finalizers
from PVCs automatically when a PVC is not in active use by a pod.
Likewise, the pv-protection-controller is always started to remove finalizers from PVs automatically when a PV is not
Bound to a PVC.
Related issue: https://github.com/kubernetes/kubernetes/issues/60764
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Do not recycle volumes that are used by pods
**What this PR does / why we need it**:
Recycler should wait until all pods that use a volume are finished.
Consider this scenario:
1. User creates a PVC that's bound to an NFS PV.
2. User creates a pod that uses the PVC
3. User deletes the PVC.
Now the PV gets `Released` (the PVC no longer exists) and recycled, yet the PV is still mounted by a running pod. PVC protection won't help us, because it puts a finalizer on the PVC, which is under the user's control and can be removed by the user.
This PR checks that there is no pod that uses a PV before it recycles it.
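A simplified sketch of that check (illustrative only; the real code lives in the PV controller): before recycling, make sure no pod still mounts the claim the PV was bound to.
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPVUsedByPods reports whether any pod still mounts the claim that the
// released PV was bound to. Simplified sketch of the recycler's safety check.
func isPVUsedByPods(pv *corev1.PersistentVolume, pods []corev1.Pod) bool {
	claim := pv.Spec.ClaimRef
	if claim == nil {
		return false
	}
	for _, pod := range pods {
		if pod.Namespace != claim.Namespace {
			continue
		}
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil && vol.PersistentVolumeClaim.ClaimName == claim.Name {
				return true
			}
		}
	}
	return false
}

func main() {
	pv := &corev1.PersistentVolume{}
	pv.Spec.ClaimRef = &corev1.ObjectReference{Namespace: "default", Name: "nfs-claim"}
	fmt.Println("recycle safe:", !isPVUsedByPods(pv, nil))
}
```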
**Release note**:
```release-note
NONE
```
/sig storage
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. Splitting the node
controller into its two major pieces: life-cycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle controller's dependency on the cloud will be fixed in a follow-up PR.
Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
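A small sketch of the tolerant discovery pattern (standalone example with a placeholder kubeconfig path, not the garbage collector's actual code):
```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerPreferredResources returns whatever it could discover even when
	// some API groups fail; treat that specific error as non-fatal.
	resources, err := dc.ServerPreferredResources()
	if err != nil {
		if discovery.IsGroupDiscoveryFailedError(err) {
			fmt.Println("partial discovery failure, continuing with what was discovered:", err)
		} else {
			panic(err)
		}
	}
	for _, list := range resources {
		fmt.Println(list.GroupVersion, len(list.APIResources))
	}
}
```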
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
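The gist, as a hedged sketch (the constructor below is hypothetical; the metrics helper is assumed to be the in-tree k8s.io/kubernetes/pkg/util/metrics package): the registration error is now propagated instead of ignored.
```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/pkg/util/metrics"
)

// newBootstrapSignerSketch shows the pattern only: propagate the metrics
// registration error so the controller is not started when it fails.
func newBootstrapSignerSketch(client kubernetes.Interface) error {
	if client != nil && client.CoreV1().RESTClient().GetRateLimiter() != nil {
		// Previously the returned error was dropped; now it aborts construction.
		if err := metrics.RegisterMetricAndTrackRateLimiterUsage("bootstrap_signer", client.CoreV1().RESTClient().GetRateLimiter()); err != nil {
			return fmt.Errorf("failed to register rate limiter metric: %v", err)
		}
	}
	return nil
}

func main() {
	// With a nil client the registration is skipped entirely.
	if err := newBootstrapSignerSketch(nil); err != nil {
		panic(err)
	}
	fmt.Println("controller construction would proceed")
}
```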
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
ClusterCIDR and ServiceCIDR are settings that are only used if at least
AllocateNodeCIDRs is set. The route controller additionally requires
ConfigureCloudRoutes to be true as well. Since
AllocateNodeCIDRs is false by default, guard the parsing of these
settings in order to not unnecessarily spam the logs. Amend the
documentation of kube-controller-manager for the two settings to point
out that AllocateNodeCIDRs must be true as well.
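A hedged sketch of the guarded parsing (variable names are illustrative; in kube-controller-manager the values come from the component configuration):
```go
package main

import (
	"log"
	"net"
)

func main() {
	// Illustrative configuration values.
	allocateNodeCIDRs := false
	clusterCIDRFlag := ""
	serviceCIDRFlag := ""

	var clusterCIDR, serviceCIDR *net.IPNet
	// Only parse (and potentially warn about) the CIDRs when they are actually used.
	if allocateNodeCIDRs {
		var err error
		if len(clusterCIDRFlag) > 0 {
			if _, clusterCIDR, err = net.ParseCIDR(clusterCIDRFlag); err != nil {
				log.Printf("unsuccessful parsing of cluster CIDR %q: %v", clusterCIDRFlag, err)
			}
		}
		if len(serviceCIDRFlag) > 0 {
			if _, serviceCIDR, err = net.ParseCIDR(serviceCIDRFlag); err != nil {
				log.Printf("unsuccessful parsing of service CIDR %q: %v", serviceCIDRFlag, err)
			}
		}
	}
	_ = clusterCIDR
	_ = serviceCIDR
}
```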