A Clear() method is useful when a given set is shared through pointers
and therefore can't be replaced with a new empty set. This replaces
set.Delete(set.UnsortedList()...), and allows the compiler to optimize
the function to a call to runtime.mapclear() when the set content type
is reflexive for ==. That optimization *is* currently accessible using
    s := Set[...]{}
    ...
    for i := range s {
        delete(s, i)
    }
but this circumvents the published API for sets; calling s.Delete(i)
instead can't be optimized in this fashion.
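For illustration only (not part of this change), a minimal sketch of a
Clear() method that keeps the range-and-delete loop behind the published
API, assuming Set[T] is a map[T]Empty as in
k8s.io/apimachinery/pkg/util/sets:

    // Illustrative stand-ins for the real types in
    // k8s.io/apimachinery/pkg/util/sets.
    type Empty struct{}
    type Set[T comparable] map[T]Empty

    // Clear empties s in place. The range-and-delete loop is the shape
    // the compiler can lower to runtime.mapclear() when T is reflexive
    // for ==.
    func (s Set[T]) Clear() {
        for key := range s {
            delete(s, key)
        }
    }

Because the receiver is the map value itself, every holder of the same
underlying map observes the cleared state, and the loop stays eligible
for the mapclear lowering.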
Alternatives considered but discarded include:
* turning sets into a pointer type (this isn't possible because methods
  can't be declared on pointer types)
* using a pointer receiver for the Clear() function, i.e.
    func (s *Set[T]) Clear() {
        *s = New[T]()
    }
  but this doesn't update shared references (see the sketch after this
  list).
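To make that second point concrete, here is a small sketch (not from
this change, using an illustrative map-based Set[T] and a hypothetical
clearByReassign helper) of why reassigning through a pointer receiver
leaves other holders of the set untouched:

    package main

    import "fmt"

    type Empty struct{}
    type Set[T comparable] map[T]Empty

    // clearByReassign mimics the discarded pointer-receiver variant: it
    // re-points the local map header at a fresh map instead of clearing
    // the shared one.
    func clearByReassign[T comparable](s *Set[T]) {
        *s = Set[T]{}
    }

    func main() {
        a := Set[string]{"x": {}, "y": {}}
        b := a // b shares the same underlying map as a

        clearByReassign(&a)

        fmt.Println(len(a)) // 0: a now refers to a fresh, empty map
        fmt.Println(len(b)) // 2: b still refers to the original map
    }

Ranging over the receiver and deleting keys, as in the sketch above,
avoids this because it mutates the shared map rather than replacing it.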
Signed-off-by: Stephen Kitt <skitt@redhat.com>
Without this change, leaked goroutines were sometimes reported for
test/integration/scheduler_perf. The one that caused the cleanup to get
delayed was this one:
goleak.go:50: found unexpected goroutines:
[Goroutine 2704 in state chan receive, 2 minutes, with k8s.io/client-go/tools/cache.(*Reflector).watch on top of the stack:
goroutine 2704 [chan receive, 2 minutes]:
k8s.io/client-go/tools/cache.(*Reflector).watch(0xc00453f590, {0x0, 0x0}, 0x1f?, 0xc00a128080?)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:388 +0x5b3
k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch(0xc00453f590, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:324 +0x3bd
k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:279 +0x45
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc007aafee0)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x49
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc003e18150?, {0x75e37c0, 0xc00389c280}, 0x1, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xcf
k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00453f590, 0xc006e94900)
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:278 +0x257
k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:58 +0x3f
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:75 +0x74
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0xe5
watch() was stuck in an exponential backoff timeout. Logging confirmed that:
I0309 21:14:21.756149 1572727 reflector.go:387] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim returned Get "https://127.0.0.1:38269/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=7m47s&timeoutSeconds=467&watch=true": dial tcp 127.0.0.1:38269: connect: connection refused - backing off
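As background (not part of this change), the leaked goroutine follows
the usual stop-channel pattern visible in the stack trace: a goroutine
started via wait.Group.StartWithChannel keeps retrying until its stop
channel is closed, so test cleanup has to close that channel and wait
for the group. A minimal sketch of that pattern, with a stand-in retry
loop in place of a real Reflector:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        stopCh := make(chan struct{})
        var wg wait.Group

        // Like Reflector.Run in the stack trace above, this goroutine
        // only exits once stopCh is closed; if cleanup delays or forgets
        // that close, goleak reports the goroutine as leaked.
        wg.StartWithChannel(stopCh, func(stopCh <-chan struct{}) {
            wait.Until(func() {
                fmt.Println("watch failed, retrying")
            }, time.Second, stopCh)
        })

        time.Sleep(3 * time.Second)
        close(stopCh) // signal shutdown
        wg.Wait()     // block until the goroutine has actually exited
    }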