Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-23 19:56:01 +00:00)
Merge pull request #46377 from noah8713/master
Automatic merge from submit-queue (batch tested with PRs 45327, 46217, 46377, 46428, 46588)

Fix comment typo in kube-apiserver and cachesize

**What this PR does / why we need it**: Fix comment typo in files cmd/kube-apiserver/app/server.go and pkg/registry/cachesize/cachesize.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # Not a major issue, just a minor improvement.

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
Commit b6c00aeb10
cmd/kube-apiserver/app/server.go
@@ -536,7 +536,7 @@ func defaultOptions(s *options.ServerRunOptions) error {
 // This is the heuristics that from memory capacity is trying to infer
 // the maximum number of nodes in the cluster and set cache sizes based
 // on that value.
-// From our documentation, we officially recomment 120GB machines for
+// From our documentation, we officially recommend 120GB machines for
 // 2000 nodes, and we scale from that point. Thus we assume ~60MB of
 // capacity per node.
 // TODO: We may consider deciding that some percentage of memory will
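The comment touched by this hunk describes how the apiserver turns a memory budget into an estimated cluster size: ~60MB of capacity per node, anchored to the documented 120GB-for-2000-nodes guidance. A minimal sketch of that arithmetic (not the actual server.go code; the variable names are assumptions):

```go
package main

import "fmt"

func main() {
	// Documented guidance: a 120GB machine (~122880 MB) for 2000 nodes.
	targetRAMMB := 120 * 1024

	// The heuristic from the comment: assume ~60MB of capacity per node,
	// so the inferred cluster size is the memory budget divided by 60.
	clusterSize := targetRAMMB / 60

	// Prints roughly 2048, close to the documented 2000-node figure.
	fmt.Printf("inferred cluster size: %d nodes\n", clusterSize)
}
```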
pkg/registry/cachesize/cachesize.go
@@ -73,7 +73,7 @@ func InitializeWatchCacheSizes(expectedRAMCapacityMB int) {
 // This is the heuristics that from memory capacity is trying to infer
 // the maximum number of nodes in the cluster and set cache sizes based
 // on that value.
-// From our documentation, we officially recomment 120GB machines for
+// From our documentation, we officially recommend 120GB machines for
 // 2000 nodes, and we scale from that point. Thus we assume ~60MB of
 // capacity per node.
 // TODO: Revisit this heuristics
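InitializeWatchCacheSizes in pkg/registry/cachesize/cachesize.go applies the same per-node estimate, scaling per-resource watch cache sizes from the inferred cluster size. A minimal sketch of that idea, assuming hypothetical resource names, multipliers, and floor values (the real constants in cachesize.go may differ):

```go
package cachesize

// Resource names a registry whose watch cache is sized by the heuristic.
type Resource string

const (
	Pods  Resource = "pods"
	Nodes Resource = "nodes"
)

// watchCacheSizes holds the derived cache size for each resource.
var watchCacheSizes = map[Resource]int{}

// InitializeWatchCacheSizes infers the cluster size from the expected RAM
// capacity (~60MB per node, per the comment fixed in this PR) and scales
// each resource's watch cache from it. Multipliers and floors here are
// illustrative assumptions, not the actual Kubernetes values.
func InitializeWatchCacheSizes(expectedRAMCapacityMB int) {
	clusterSize := expectedRAMCapacityMB / 60

	watchCacheSizes[Pods] = maxInt(50*clusterSize, 1000)
	watchCacheSizes[Nodes] = maxInt(5*clusterSize, 1000)
}

// maxInt returns the larger of two ints, used to enforce a minimum cache size.
func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}
```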