Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-27 13:37:30 +00:00)
Merge pull request #32780 from dshulyak/vagrant_memory
Automatic merge from submit-queue

Increase default memory of vagrant slave nodes to 2048

With the current default (1024 MB), not all of the kube-system pods required for the e2e tests can be scheduled. The heapster replicas stayed in Pending, and the kube-scheduler logs made it clear this was exactly because of insufficient memory. To reproduce:

1. KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
2. Run any e2e test

A sketch of how to confirm the scheduling failure follows.
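A minimal verification sketch, assuming a heapster pod in the kube-system namespace; the pod name placeholder is hypothetical and the exact event wording varies by Kubernetes version:

    kubectl --namespace=kube-system get pods                      # heapster pods stuck in Pending
    kubectl --namespace=kube-system describe pod <heapster-pod>   # FailedScheduling events cite insufficient memory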
This commit is contained in: 78ecb0435b
Vagrantfile (vendored): 2 changed lines
@@ -111,7 +111,7 @@ end
 # When doing Salt provisioning, we copy approximately 200MB of content in /tmp before anything else happens.
 # This causes problems if anything else was in /tmp or the other directories that are bound to tmpfs device (i.e /run, etc.)
 $vm_master_mem = (ENV['KUBERNETES_MASTER_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1280).to_i
-$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1024).to_i
+$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 2048).to_i

 Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   if Vagrant.has_plugin?("vagrant-proxyconf")
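The || precedence above means the new 2048 default only applies when neither KUBERNETES_NODE_MEMORY nor KUBERNETES_MEMORY is set, so node memory can also be raised on a pre-merge checkout without patching the file. A hedged example, using the same invocation as the repro steps:

    # Override the node memory default (1024 MB before this merge) via environment variable
    KUBERNETES_NODE_MEMORY=2048 KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh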