
PART 2: vSphere DPM vs ESXi memory ballooning

In PART I of this series we learned how vSphere DPM works and what the DPM memory demand metric is. If you have not read PART I, I strongly recommend reading it first.

In PART 2 we will touch upon 2 important points, each with an example:
1. How was the earlier vSphere DPM behavior more aggressive from a memory perspective?
2. How can we now control DPM behavior with the new memory demand metric and avoid memory ballooning?

Let’s start with the first point: how was the earlier DPM behavior more aggressive?
The old DPM memory demand metric considered just active memory as memory demand. To understand it clearly, we will take an example: say we have 2 ESXi hosts (H1 & H2) with 2 VMs on each host (VM1, VM2 on H1 & VM3, VM4 on H2) in a DPM-enabled cluster. The memory configuration is as follows:
H1-4GB
H2-4GB
VM1-3GB
VM2-3GB
VM3-3GB
VM4-3GB
Clearly, the environment is memory over-committed. All VMs are already powered ON, host consumed memory on H1 is 3 GB, on H2 it is 1 GB, and current active memory usage on each VM is just 256 MB. You may wonder why host consumed memory is 3 GB on H1 when current active memory on H1 is just 512 MB (256 MB x 2 VMs). The reason is that initially the active memory usage of the VMs on host H1 was 3 GB, but by this point active memory usage on H1 has dropped to 512 MB. As per ESXi memory management, memory the VMs no longer use is not freed back to the host until ESXi applies a memory reclamation technique such as memory ballooning, and memory ballooning reclaims VM memory only when host memory usage crosses 96% of the total host memory.
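To make these numbers concrete, here is a minimal Python sketch of the example's cluster state and the 96% ballooning threshold (the helper name and data structure are illustrative, not ESXi internals):

```python
# Cluster state from the example (all values in MB).
hosts = {
    "H1": {"total": 4096, "consumed": 3072, "active": 512},  # 2 VMs x 256 MB active
    "H2": {"total": 4096, "consumed": 1024, "active": 512},  # 2 VMs x 256 MB active
}

def ballooning_triggered(consumed_mb, total_mb, threshold=0.96):
    """ESXi starts ballooning only once host memory usage crosses ~96% of total."""
    return consumed_mb / total_mb > threshold

for name, h in hosts.items():
    usage = h["consumed"] / h["total"]
    status = "ballooning" if ballooning_triggered(h["consumed"], h["total"]) else "no ballooning"
    print(f"{name}: active={h['active']} MB, consumed={h['consumed']} MB ({usage:.0%}) -> {status}")
# H1: active=512 MB, consumed=3072 MB (75%) -> no ballooning
# H2: active=512 MB, consumed=1024 MB (25%) -> no ballooning
```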

DPM evaluates the cluster against a Target Resource Utilization Range of 63% ± 18%, i.e. the default range is 45% to 81%. Based on the active memory usage on both hosts, it is clear that only 1 GB (256 MB x 4 VMs) of memory is being used actively in the cluster, which is just around 12.5% of the total memory (4 GB) available per host. Each ESXi host's resource utilization demand is calculated as the aggregate of the memory demanded by the VMs running on that host. Based on the target utilization range, DPM identifies one of the hosts as a candidate host (in our case H2) to put into standby mode.

DPM then runs a DRS simulation on the remaining host, H1 (the DRS simulation does not consider the candidate host that is to be powered off, in our case H2). The simulation uses the DPM demand metric formula, i.e. just active memory, to analyze whether the VMs on candidate host H2 (VM3, VM4) can be accommodated on host H1 without impacting the existing VMs (VM1, VM2). The overall active memory across all the VMs is 1 GB (256 MB x 4 VMs), and 1 GB is just 25% of the memory of H1, which is well below the upper target utilization bound of 81%. (Before putting a host into standby mode, DPM also makes sure that the remaining hosts' memory utilization does not cross the upper utilization bound of 81%.) Hence DPM will vMotion the VMs (with the help of DRS) from candidate host H2 to H1 and will put H2 into standby mode to save power.

Note that by this time the host consumed memory on H1 would be at least 3 GB (earlier) + 512 MB (the VMs migrated from H2) = 3.5 GB. If, say, the memory demand of the recently migrated VMs suddenly increased by even 200 MB each, host consumed memory on H1 would cross 96% of the total memory available. At this point host H1 is very short on memory, and it will immediately start memory ballooning in order to reclaim unused VM memory and satisfy the VMs' memory demand. It is clear that the old DPM memory demand metric did not consider future memory demand growth of the VMs now running on H1. Memory ballooning itself does not cause a performance impact, as the balloon driver first reclaims guest memory that is unused by the guest OS, which is perfectly safe (more on ballooning in the next post). But if memory is excessively over-committed and the memory reclaimed by ballooning is not enough, it can lead to host swapping, which severely impacts the performance of the VMs.
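The old behavior can be sketched in a few lines of Python, assuming the numbers from the example (this is an illustration of the logic, not DPM's actual implementation):

```python
# All values in MB.
HOST_TOTAL = 4096
UPPER_TARGET = 0.81        # upper bound of the default 45-81% target range
BALLOON_THRESHOLD = 0.96   # host usage at which ballooning starts

active_per_vm = 256
vm_count = 4

# Old DPM demand metric: active memory only.
old_demand = active_per_vm * vm_count                      # 1024 MB across the cluster
utilization_if_consolidated = old_demand / HOST_TOTAL
print(f"simulated utilization on H1: {utilization_if_consolidated:.0%}")    # 25%
print("consolidate onto H1:", utilization_if_consolidated <= UPPER_TARGET)  # True -> H2 goes to standby

# What the old metric ignores: consumed memory on H1 after the migration.
consumed_h1 = 3 * 1024 + 2 * active_per_vm                 # 3 GB + the two migrated VMs = 3584 MB
spike = 2 * 200                                            # each migrated VM suddenly needs 200 MB more
usage_after_spike = (consumed_h1 + spike) / HOST_TOTAL
print(f"H1 usage after spike: {usage_after_spike:.0%}")                     # ~97%
print("ballooning kicks in:", usage_after_spike > BALLOON_THRESHOLD)        # True
```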

Note: For the sake of simplicity, the examples do not consider the memory required by the virtualization layer itself.

Now you may be wondering: in the case above, does DPM not evaluate the cluster to bring back the standby host H2 to meet the memory demand? Of course it does, but since DPM evaluates hosts for power-on recommendations only every 5 minutes, DPM has to wait for that interval to complete. As soon as DPM is invoked, it evaluates the cluster to bring back the standby host to meet the memory demand; once the standby host is powered ON, DRS balances the memory load in the cluster and the memory ballooning stops. However, hosts in the cluster may experience memory ballooning for at least: up to 5 minutes of the DPM invocation interval for power-on recommendations + the time DPM takes to evaluate the hosts + the time required for the host to boot up from standby mode + the time required to vMotion VMs while rebalancing the memory load.
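As a rough back-of-the-envelope illustration of that window, the sketch below simply sums the components; every duration except the 5-minute DPM interval is a hypothetical placeholder:

```python
# Rough lower bound on how long ballooning can persist while waiting on DPM (minutes).
dpm_poweron_interval   = 5.0   # DPM evaluates power-on recommendations every 5 minutes
dpm_evaluation_time    = 1.0   # hypothetical: time DPM spends evaluating hosts
host_boot_from_standby = 4.0   # hypothetical: time for the host to resume from standby
vmotion_rebalance_time = 2.0   # hypothetical: time for DRS to vMotion VMs and rebalance

min_ballooning_window = (dpm_poweron_interval + dpm_evaluation_time +
                         host_boot_from_standby + vmotion_rebalance_time)
print(f"ballooning could last roughly {min_ballooning_window} minutes or more")
```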

Overall, DPM's old memory demand metric could lead to memory ballooning whenever active memory was low but host consumed memory was high. Host consumed memory can be high while active memory is low whether the allocated VM memory is overcommitted or the VM memory is fully backed by physical memory.

At this point I assume you clearly understand how the earlier memory demand metric (active memory only) was very aggressive.

Now it is time to see how we can control DPM behavior with the new memory demand metric.
In order to control DPM's aggressiveness, from vCenter 5.1 U2c onwards and in all versions of vCenter 5.5, DPM can be tuned to also consider idle consumed memory in the DPM memory demand metric, i.e. new DPM memory demand metric = active memory + X% of idle consumed memory. The default value of X is 25. The X value can be modified using the DRS advanced option "PercentIdleMBInMemDemand" at the cluster level. We can set this value in the range from 0 to 100. Refer to this KB on how to configure the PercentIdleMBInMemDemand advanced option.
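Expressed as a small Python function, the new metric looks roughly like this (a sketch of the formula only, not DPM's internal code):

```python
def dpm_mem_demand(active_mb, consumed_mb, percent_idle=25):
    """New DPM memory demand metric: active memory + X% of idle consumed memory.

    percent_idle corresponds to the DRS advanced option PercentIdleMBInMemDemand
    (default 25, configurable from 0 to 100). Idle consumed memory is the
    consumed memory that is not currently active.
    """
    idle_consumed_mb = consumed_mb - active_mb
    return active_mb + (percent_idle / 100.0) * idle_consumed_mb

# With the default X = 25, a host with 3 GB consumed and 512 MB active:
print(dpm_mem_demand(512, 3072))        # 1152.0 MB
# With X = 100, the entire consumed memory counts as demand:
print(dpm_mem_demand(512, 3072, 100))   # 3072.0 MB
```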

We will continue with the same example as above.
We set the "PercentIdleMBInMemDemand" option to 100, i.e. X is 100. Initially, on H1 host consumed memory was 3 GB and active memory usage was 512 MB (256 MB x 2 VMs); on H2 host consumed memory was 1 GB and active memory usage was 512 MB (256 MB x 2 VMs). When DPM evaluates the cluster, host H2 has less than 45% memory usage, so DPM picks H2 as the candidate host to put into standby mode. DPM runs the DRS simulation without considering host H2, as it is the candidate host identified to be put into standby mode. The DRS simulation uses the new DPM memory demand metric, i.e. active memory + X% of idle consumed memory. Active memory on H1 is 512 MB, hence idle consumed memory on H1 is 3 GB - 512 MB = 2.5 GB; with X at 100, the memory demand on H1 is 3 GB. Active memory of the VMs on H2 that would be migrated to H1 (only if DPM finalizes putting H2 into standby mode) is 512 MB, hence idle consumed memory on H2 is 1 GB - 512 MB = 512 MB; with X at 100, the memory demand of the VMs on H2 is 1 GB. The total memory demand computed by DPM is therefore 4 GB (3 GB from the VMs on H1 and 1 GB from the VMs on H2), and consolidating 4 GB of demand onto H1 would be way above the upper memory utilization target of 81%. Since the total memory demand of all the VMs exceeds the utilization target range (default 45% to 81%), DPM does not see any value in putting H2 into standby mode, because for DPM performance takes precedence over saving power. Hence DPM will not put any host into standby mode and consequently avoids memory ballooning. This example shows that when your environment has high consumed memory and low active memory usage, it is better to set the DRS advanced option PercentIdleMBInMemDemand to 100.
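Here is the same decision worked out in a short Python sketch, using the numbers above (the helper name is illustrative, not DPM code):

```python
# All values in MB; X = PercentIdleMBInMemDemand = 100.
HOST_TOTAL = 4096
UPPER_TARGET = 0.81
X = 100

def demand(active_mb, consumed_mb, x=X):
    """Active memory + X% of idle consumed memory."""
    return active_mb + (x / 100.0) * (consumed_mb - active_mb)

demand_h1_vms = demand(512, 3072)              # VMs already on H1 -> 3072 MB
demand_h2_vms = demand(512, 1024)              # VMs that would migrate from H2 -> 1024 MB
total_demand = demand_h1_vms + demand_h2_vms   # 4096 MB

utilization_if_consolidated = total_demand / HOST_TOTAL
print(f"simulated utilization on H1: {utilization_if_consolidated:.0%}")    # 100%
print("put H2 into standby:", utilization_if_consolidated <= UPPER_TARGET)  # False -> no standby, no ballooning
```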

Note: The DRS memory demand metric also uses the same formula, but in this post I have focused only on DPM.

Now I am sure you understand how DPM's new memory demand metric can be used to fine-tune DPM behavior, which in turn can help avoid memory ballooning.

Is there a direct relationship between the new DPM memory demand metric and ESXi host memory ballooning? The answer is NO. There is no direct relationship between the new DPM memory demand metric and VM memory ballooning. The new memory demand metric just gives us a configurable option to fine-tune DPM so that it considers more of the consumed memory as future memory demand while making host power ON/OFF decisions. This keeps more memory resources available in the cluster and hence should indirectly avoid ballooning.
It is also important to note that DPM will not power on a standby host just because ballooning is happening on another host in the cluster, as there is no direct relation between ballooning and DPM. In this case as well, when DPM gets invoked it checks the target utilization range, and only if host memory utilization exceeds the range does it start evaluating the standby hosts, based on the memory demand formula (active memory + X% of idle consumed memory), in order to take hosts out of standby mode.
However, memory ballooning on VMs while host(s) are in standby mode should happen only very rarely, because DPM would already have considered a conservative X% value before putting hosts into standby mode, i.e. DPM would have kept enough memory resources available in the cluster (depending, of course, on the value of X, i.e. PercentIdleMBInMemDemand) before putting any host into standby. If ballooning still happens, it means there is excessive memory over-commitment and/or the actual memory demand of the powered-ON VMs is higher than DPM anticipated (using active + X% of idle consumed memory).

Can the actual memory demand of powered-ON VMs be higher than what DPM anticipated (based on the new demand metric)? Yes, it can happen in very rare cases, due to a highly unpredictable increase in memory workload/usage. For example, say a cluster has 2 hosts (H1 and H2) with 1 VM on each. The VM on H1 has 8 GB of memory allocated but only 3 GB is consumed at the moment, and X is set to 100. With X at 100, DPM considers the entire consumed memory as memory demand. Based on a 3 GB memory demand, DPM puts the host into standby mode (assuming 3 GB is available on the other host H2). Unfortunately, the moment DPM puts the host into standby mode, the memory demand of the VM increases and its consumed memory reaches, say, 7 GB (a very corner case), which is 4 GB more than DPM had just anticipated. If host H2 does not have the memory to satisfy this demand, it can lead to ballooning. However, once DPM realizes that memory utilization is exceeding the target utilization range, it again evaluates the cluster to bring back the standby host. It is also worth noting that if VM shares, limits, and reservations are misconfigured, ballooning can occur even when there is plenty of memory available on the host (more on this in the next post).
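A tiny sketch of that corner case, using the numbers from the example (names and values are illustrative only):

```python
# Corner case from the example (values in MB, X = 100).
anticipated_demand = 3 * 1024   # consumed memory of the VM when DPM evaluated the cluster
actual_demand      = 7 * 1024   # consumed memory after the unpredictable spike
available_on_h2    = 3 * 1024   # memory H2 could spare when H1 went into standby

shortfall = actual_demand - anticipated_demand
print(f"DPM under-anticipated the demand by {shortfall} MB")          # 4096 MB
print("ballooning likely:", actual_demand > available_on_h2)          # True, until the standby host is brought back
```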

I hope you enjoyed this look at DPM memory behavior; please leave a comment if you have any queries. Stay tuned for the PART 3 post on ESXi memory ballooning and memory best practices.

If you want an even more in-depth understanding of DPM, please refer to the resources below:
1. White Paper on DPM by VMware
2. Great book by “Duncan & Frank”: VMware vSphere 5.1 Clustering Deepdive