Merge pull request #60720 from dashpole/allocatable_flake

Automatic merge from submit-queue (batch tested with PRs 60159, 60731, 60720, 60736, 60740). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

[Flaky Test] Increase amount of memory filled by memory allocatable eviction test

**What this PR does / why we need it**:
MemoryAllocatableEviction tests have been somewhat flaky: https://k8s-testgrid.appspot.com/sig-node-kubelet#kubelet-serial-gce-e2e&include-filter-by-regex=MemoryAllocatable
The failure on the flakes is ["Pod ran to completion"](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/3785#k8sio-memoryallocatableeviction-slow-serial-disruptive-when-we-run-containers-that-should-cause-memorypressure-should-eventually-evict-all-of-the-correct-pods).
Looking at [an example log](https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-node-kubelet-serial/3785/artifacts/tmp-node-e2e-6070a774-cos-stable-63-10032-71-0/kubelet.log) (search for memory-hog-pod), we can see that this pod fails admission because the allocatable memory eviction threshold has already been crossed:
`eviction manager: thresholds - ignoring grace period: threshold [signal=allocatableMemory.available, quantity=250Mi] observed 242404Ki`
There is likely residual memory usage from previous test containers: the allocatable cgroup is not under memory pressure, so the kernel has not reclaimed the pages those containers left behind. Of the 300Mi of capacity in the allocatable cgroup, 250Mi is reserved for the eviction threshold, leaving only 50Mi for the test pod. Increasing the cgroup limit to 400Mi, with 150Mi left for pods, should eliminate this flake.
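
As a rough illustration of that budget, here is a minimal, standalone Go sketch (the numbers come from the description above; `podHeadroom` is an illustrative helper, not part of the test):

```go
package main

import "fmt"

// podHeadroom returns how much memory (in Mi) pods can use before the
// allocatableMemory.available eviction threshold is crossed.
func podHeadroom(cgroupLimitMi, evictionThresholdMi int64) int64 {
	return cgroupLimitMi - evictionThresholdMi
}

func main() {
	// Before: 300Mi allocatable cgroup - 250Mi threshold = 50Mi for the test pod.
	fmt.Printf("before: %dMi\n", podHeadroom(300, 250))
	// After: 400Mi allocatable cgroup - 250Mi threshold = 150Mi for the test pod.
	fmt.Printf("after:  %dMi\n", podHeadroom(400, 250))
}
```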

**Release note**:
```release-note
NONE
```

/sig node
/kind bug
/priority critical-urgent
/assign @Random-Liu @yujuhong
Kubernetes Submit Queue 2018-03-02 18:35:55 -08:00 committed by GitHub
commit 63a05c8bc9

@@ -131,8 +131,8 @@ var _ = framework.KubeDescribe("MemoryAllocatableEviction [Slow] [Serial] [Disru
 	// Set large system and kube reserved values to trigger allocatable thresholds far before hard eviction thresholds.
 	kubeReserved := getNodeCPUAndMemoryCapacity(f)[v1.ResourceMemory]
 	// The default hard eviction threshold is 250Mb, so Allocatable = Capacity - Reserved - 250Mb
-	// We want Allocatable = 50Mb, so set Reserved = Capacity - Allocatable - 250Mb = Capacity - 300Mb
-	kubeReserved.Sub(resource.MustParse("300Mi"))
+	// We want Allocatable = 150Mb, so set Reserved = Capacity - Allocatable - 250Mb = Capacity - 400Mb
+	kubeReserved.Sub(resource.MustParse("400Mi"))
 	initialConfig.KubeReserved = map[string]string{
 		string(v1.ResourceMemory): kubeReserved.String(),
 	}
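
For context, a minimal sketch of the `resource.Quantity` arithmetic used above, assuming the standard `k8s.io/apimachinery/pkg/api/resource` package; the capacity value here is hypothetical, since the test reads the real value from the node via `getNodeCPUAndMemoryCapacity`:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Hypothetical node memory capacity; the test uses the real node value.
	kubeReserved := resource.MustParse("3785Mi")

	// Reserve everything except 400Mi, so Allocatable = Capacity - Reserved = 400Mi:
	// 250Mi is consumed by the hard eviction threshold, leaving 150Mi for the test pod.
	kubeReserved.Sub(resource.MustParse("400Mi"))

	fmt.Println("kube-reserved:", kubeReserved.String()) // 3385Mi
}
```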