Category Archives: Storage and Availability

All about storage and availability, including posts on backup, restore, and recovery.

vCenter Server 6.0U3c is live now: Some cool improvements to Storage DRS

Today vCenter Server 6.0U3c was released. Although this is a patch release, it brings a lot of improvements across various vCenter/ESXi components. I had an opportunity to work on some of the Storage DRS issues, and I thought I would share them with you. There are in fact multiple Storage DRS improvements in this release; below are some of the most interesting ones, taken from the release notes.

1. Storage DRS might place the thin provisioned disks into one datastore, instead of distributing them evenly

During the initial placement of thin provisioned disks, the Storage Distributed Resource Scheduler (SDRS) might miscalculate the entitled space requirement by excluding the reserved space. As a result, SDRS might use only the committed megabytes for the entitled space calculation, causing a wrong placement recommendation on one datastore, instead of distributing them evenly. This issue is resolved in this release.
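To make the fix concrete, here is a hypothetical, simplified sketch of the calculation described above. The method names and the committed-plus-reserved formula are illustrative assumptions, not the actual SDRS code; the point is only that for a thin-provisioned disk the entitled space must account for the reserved space, not just the committed megabytes.

```java
public class EntitledSpace {
    // Buggy behavior sketched: entitled space counts only committed MB,
    // so a thin disk with little data written looks tiny to SDRS and
    // many such disks can be stacked onto one datastore.
    static long entitledBeforeFix(long committedMB, long reservedMB) {
        return committedMB;
    }

    // Fixed behavior sketched: the reserved space is included, so initial
    // placement sees the real space requirement and spreads the disks.
    static long entitledAfterFix(long committedMB, long reservedMB) {
        return committedMB + reservedMB;
    }

    public static void main(String[] args) {
        long committed = 2048;  // 2 GB actually written
        long reserved = 40960;  // 40 GB provisioned
        System.out.println("before fix: " + entitledBeforeFix(committed, reserved) + " MB");
        System.out.println("after fix: " + entitledAfterFix(committed, reserved) + " MB");
    }
}
```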

2. Storage DRS generates only one suitable datastore for initial virtual machine placement

In case there are virtual machines with Raw Device Mapping (RDM) virtual disks, the Storage DRS might consider the actual mapping file size instead of the pointer file size, even though it does not consume any disk space. As a result, the Storage DRS might generate only one suitable datastore when creating new virtual machines. This issue is resolved in this release.

3. vCenter Server might fail when you attempt to create a virtual machine on a Storage DRS cluster using a script with null CPU or Mem Share values

vCenter Server might fail if you attempt to create a virtual machine on a Storage DRS cluster using a script with null CPU or Mem Share values in a RecommendDatastores() API call. This issue is resolved in this release.
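Until you upgrade, a simple client-side workaround is to make sure your script never sends null share values. The helper below is purely illustrative (it is not the vCenter fix itself, which is server-side, and the default of 4000 is just an example value); it shows the defensive defaulting a script could apply before building the spec for the RecommendDatastores() call.

```java
public class SharesGuard {
    // Illustrative guard: substitute a fallback when a script would
    // otherwise pass a null CPU or Mem share value in its spec.
    static int sharesOrDefault(Integer shares, int fallback) {
        return shares != null ? shares : fallback;
    }

    public static void main(String[] args) {
        System.out.println(sharesOrDefault(null, 4000));  // falls back to 4000
        System.out.println(sharesOrDefault(8000, 4000));  // keeps the explicit 8000
    }
}
```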

Based on the above, you should now have high-level insight into these Storage DRS improvements. In my opinion, all of them are really good improvements from a Storage DRS perspective. In future blog posts, I will dive deep into each of the above issues.

If you are using Storage DRS, I think this alone is a great reason to upgrade your environment (apart from important fixes in other vCenter/ESXi areas).

Important links

Download vCenter Server 6.0U3c bits here

vCenter server 6.0 U3c release notes

Yes, there is a corresponding ESXi release; please take a look at this KB.

It seems there are some great improvements to VMware vSAN as well; please take a look at this KB.

DRS rules PART II: How to create VM-VM affinity rules using vSphere API.

In my last post we learned how to create VM-Host DRS affinity rules using the vSphere API. Today, in PART II, we will see how to create VM-VM DRS affinity rules. Before jumping into the API code, I would like to list the VM-VM rules that exist and when we should use them.

What DRS VM-VM rules can we create on an ESXi host cluster?

1. VM-VM affinity rule: This rule keeps two or more VMs together on a host, i.e. DRS makes sure these VMs run on the same host at all times. Note that this is a soft rule: DRS can violate it if required in order to balance the CPU/memory load on the cluster. However, DRS will try its best to resolve the violation in the next DRS invocation (by default, DRS is invoked every 5 minutes).

Use cases:
1. If a group of VMs in a DRS cluster communicates frequently with each other, it makes sense to keep these VMs together on the same host to save network bandwidth and increase performance. If we keep such VMs on separate hosts, the traffic has to exit to the external physical network, which adds network latency.
2. If a group of VMs in a DRS cluster shares the same guest OS, applications, or user data, we can keep these VMs together on the same host to take advantage of the Transparent Page Sharing (TPS) memory reclamation technique, so that host memory is shared efficiently wherever there is an opportunity.
Both of the above use cases can be satisfied with a VM-VM affinity rule.

2. VM-VM anti-affinity rule: This rule is the exact opposite of the above: it keeps two or more VMs away from each other, i.e. all the VMs involved in the rule should run on separate hosts. Again, this is a soft rule and DRS can violate it if required.
Use case:
If you want to make two critical VMs highly available, it makes sense to configure a VM-VM anti-affinity rule on them. If one host goes down, the second VM will still be running.

I strongly suggest you read my blog post "is vSphere HA aware of DRS affinity rules?" in order to understand the impact of the above DRS rules on vSphere HA.

Note: DRS rules are very popular among VMware admins, but it is important to note that multiple rules configured on a DRS cluster can constrain DRS's load-balancing ability, since DRS has to satisfy the configured rules and this reduces its migration options. Hence, please use these rules only if absolutely required.

Creating these rules using the vSphere API
Now, with the above fundamentals on DRS rules, we are ready to create these rules using the vSphere API. I assume you are familiar with the vSphere API reference; go through the data object below in detail so that you can understand the code yourself. (Click on image to enlarge)
VMVMrules Dataobject
Refer: ClusterRuleInfo data object API reference

Below program creates both VM-VM affinity rule and VM-VM anti-affinity rule

package com.vmware.vijava;

import java.net.MalformedURLException;
import java.net.URL;
import java.rmi.RemoteException;

import com.vmware.vim25.ArrayUpdateOperation;
import com.vmware.vim25.ClusterAffinityRuleSpec;
import com.vmware.vim25.ClusterAntiAffinityRuleSpec;
import com.vmware.vim25.ClusterConfigSpec;
import com.vmware.vim25.ClusterRuleSpec;
import com.vmware.vim25.InvalidProperty;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.RuntimeFault;
import com.vmware.vim25.mo.ClusterComputeResource;
import com.vmware.vim25.mo.Folder;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ServiceInstance;
import com.vmware.vim25.mo.VirtualMachine;

public class DRSVMVMRules {

    public static void main(String[] args) throws InvalidProperty,
            RuntimeFault, RemoteException, MalformedURLException {
        // Pass vCenter URL, username and password as the 3 arguments
        ServiceInstance si = new ServiceInstance(new URL(args[0]), args[1],
                args[2], true);
        String clusterName = "BLR-NTP";             // cluster name
        String affineVM1 = "CentOS6_x64_2GB_1";     // first VM for affinity rule
        String affineVM2 = "CentOS6_x64_2GB_2";     // second VM for affinity rule
        String antiAffineVM1 = "CentOS6_x64_2GB_3"; // first VM for anti-affinity rule
        String antiAffineVM2 = "CentOS6_x64_2GB_4"; // second VM for anti-affinity rule
        Folder rootFolder = si.getRootFolder();

        ClusterComputeResource cluster = (ClusterComputeResource) new InventoryNavigator(
                rootFolder).searchManagedEntity("ClusterComputeResource", clusterName);

        // VM-VM affinity rule configuration
        VirtualMachine vm1 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", affineVM1);
        VirtualMachine vm2 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", affineVM2);
        ManagedObjectReference[] affineVmMors = new ManagedObjectReference[] {
                vm1.getMOR(), vm2.getMOR() };
        ClusterAffinityRuleSpec cars = new ClusterAffinityRuleSpec();
        cars.setName("VM-VM Affinity Rule");
        cars.setEnabled(true);
        cars.setVm(affineVmMors);
        ClusterRuleSpec crs1 = new ClusterRuleSpec();
        crs1.setOperation(ArrayUpdateOperation.add);
        crs1.setInfo(cars);

        // VM-VM anti-affinity rule configuration
        VirtualMachine vm3 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", antiAffineVM1);
        VirtualMachine vm4 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", antiAffineVM2);
        ManagedObjectReference[] antiAffineVmMors = new ManagedObjectReference[] {
                vm3.getMOR(), vm4.getMOR() };
        ClusterAntiAffinityRuleSpec caars = new ClusterAntiAffinityRuleSpec();
        caars.setName("VM-VM Anti-Affinity Rule");
        caars.setEnabled(true);
        caars.setVm(antiAffineVmMors);
        ClusterRuleSpec crs2 = new ClusterRuleSpec();
        crs2.setOperation(ArrayUpdateOperation.add);
        crs2.setInfo(caars);

        // Pass both rule specs and reconfigure the cluster
        ClusterConfigSpec ccs = new ClusterConfigSpec();
        ccs.setRulesSpec(new ClusterRuleSpec[] { crs1, crs2 });
        cluster.reconfigureCluster_Task(ccs, true);
        System.out.println("VM-VM rules created successfully");
    }
}

The code is self-explanatory; just map it to the data objects in the vSphere API reference. Note that for the sake of simplicity I have hard-coded the VM and cluster names; do make changes according to your environment. Please leave a comment if you have any doubts.

Below is the VI client view of the VM-VM DRS rules created using the above code.
VM-VM Rules

If you have still not set up your VI Java Eclipse environment: Getting started tutorial
Important tutorials to start with: Part I & Part II

DRS rules PART I: How to create VM-Host DRS affinity rule using vSphere API

At a high level there are 2 types of DRS rules: VM-VM affinity rules and VM-Host affinity rules. Today, in Part I, I will briefly cover the VM-Host affinity rules DRS offers and the use cases where they apply. The main focus, however, is on how to create VM-Host rules on an ESXi cluster using the vSphere API. In general, VM-Host rules allow us to run a particular group of VMs on a specific group of ESXi hosts, or keep them away from a specific group of ESXi hosts.

What DRS VM-Host rules can we create on an ESXi host cluster?
1. VM-Host must affinity rule: a mandatory rule where a specific group of VMs must run on a specific group of hosts.
2. VM-Host must anti-affinity rule: a mandatory rule where a specific group of VMs must not run on a specific group of hosts.
3. VM-Host should affinity rule: a soft rule where a specific group of VMs should run on a specific group of hosts. DRS or the user can violate this rule whenever required, but DRS will make a best effort to correct the violation in the next DRS invocation.
4. VM-Host should anti-affinity rule: a soft rule where a specific group of VMs should not run on a specific group of hosts. DRS or the user can violate this rule whenever required, but DRS will make a best effort to correct the violation in the next DRS invocation.
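To make the mapping between these four rule types and the API concrete, here is a minimal, self-contained sketch. The RuleInfo class below is a simplified stand-in for the real ClusterVmHostRuleInfo data object (it is not VI Java code); the point is that each of the four variants is just a combination of which host-group property you set and the mandatory flag.

```java
// Simplified stand-in for ClusterVmHostRuleInfo, for illustration only.
class RuleInfo {
    String affineHostGroupName;      // set for affinity rules
    String antiAffineHostGroupName;  // set for anti-affinity rules
    boolean mandatory;               // true = "must" (hard), false = "should" (soft)
}

public class VmHostRuleTypes {
    // Builds one of the four VM-Host rule variants.
    static RuleInfo build(boolean affinity, boolean must, String hostGroup) {
        RuleInfo r = new RuleInfo();
        if (affinity) {
            r.affineHostGroupName = hostGroup;
        } else {
            r.antiAffineHostGroupName = hostGroup;
        }
        r.mandatory = must;
        return r;
    }

    public static void main(String[] args) {
        RuleInfo mustAffinity = build(true, true, "HostGroup_1");
        System.out.println("must-affinity -> affine=" + mustAffinity.affineHostGroupName
                + ", mandatory=" + mustAffinity.mandatory);
        RuleInfo shouldAnti = build(false, false, "HostGroup_1");
        System.out.println("should-anti -> antiAffine=" + shouldAnti.antiAffineHostGroupName
                + ", mandatory=" + shouldAnti.mandatory);
    }
}
```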

What are the use cases for these rules?
1. Some products are licensed per ESXi host CPU, meaning you need to license each ESXi host that runs the VMs. The classic example is Oracle database licensing. In this case, we can configure a VM-Host must affinity rule so that Oracle DB VMs only run on the set of hosts with Oracle licenses.
2. A VM-Host anti-affinity rule can be used to increase VM availability, e.g. running some critical VMs on a set of hosts with different power supplies than the rest of the hosts in the cluster, so that even when there is a power failure, the critical VMs can still be running on the hosts with different power supplies.
I strongly suggest you read my blog post "is vSphere HA aware of DRS affinity rules?" in order to understand the impact of DRS rules on vSphere HA.

Now it is time to look into creating these rules using the vSphere API. Before jumping into the code, it would help to get familiar with the ClusterConfigSpecEx data object required for creating VM-Host rules. Specifically, focus on the data object properties highlighted in the screenshot below.
RuleSpec_GroupSpec

Below is the code for creating a VM-Host must affinity rule; afterwards I will explain how to adapt it to create the other 3 rules. It is really very simple.

package com.vmware.vijava;

import java.net.URL;

import com.vmware.vim25.ArrayUpdateOperation;
import com.vmware.vim25.ClusterConfigSpecEx;
import com.vmware.vim25.ClusterGroupSpec;
import com.vmware.vim25.ClusterHostGroup;
import com.vmware.vim25.ClusterRuleSpec;
import com.vmware.vim25.ClusterVmGroup;
import com.vmware.vim25.ClusterVmHostRuleInfo;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.mo.ClusterComputeResource;
import com.vmware.vim25.mo.Folder;
import com.vmware.vim25.mo.HostSystem;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ServiceInstance;
import com.vmware.vim25.mo.VirtualMachine;

public class VMHOSTrule {

    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.out.println("Usage: java VMHOSTrule [url] [username] [password]");
            return;
        }

        // Connect to vCenter Server: pass https://x.y.z.r/sdk, username, password
        ServiceInstance si = new ServiceInstance(new URL(args[0]), args[1],
                args[2], true);

        String vmGroupName = "vmGroup_1";
        String hostGroupName = "HostGroup_1";
        Folder rootFolder = si.getRootFolder();

        ClusterComputeResource ccr = (ClusterComputeResource) new InventoryNavigator(
                rootFolder).searchManagedEntity("ClusterComputeResource", "India_Cluster");

        HostSystem host1 = (HostSystem) new InventoryNavigator(rootFolder)
                .searchManagedEntity("HostSystem", "10.10.1.1");
        HostSystem host2 = (HostSystem) new InventoryNavigator(rootFolder)
                .searchManagedEntity("HostSystem", "10.10.1.2");
        VirtualMachine vm1 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", "CentOS6_x64_2GB_1");
        VirtualMachine vm2 = (VirtualMachine) new InventoryNavigator(rootFolder)
                .searchManagedEntity("VirtualMachine", "CentOS6_x64_2GB_2");
        ManagedObjectReference[] vmMors = new ManagedObjectReference[] {
                vm1.getMOR(), vm2.getMOR() };
        ManagedObjectReference[] hostMors = new ManagedObjectReference[] {
                host1.getMOR(), host2.getMOR() };

        // VM group: the set of VMs that will be part of the rule
        ClusterVmGroup vmGroup = new ClusterVmGroup();
        vmGroup.setVm(vmMors);
        vmGroup.setUserCreated(true);
        vmGroup.setName(vmGroupName);

        // Host group: the set of hosts that will be part of the rule
        ClusterHostGroup hostGroup = new ClusterHostGroup();
        hostGroup.setHost(hostMors);
        hostGroup.setUserCreated(true);
        hostGroup.setName(hostGroupName);

        // Rule: VMs in vmGroup must run on hosts in hostGroup.
        // Note: the group names set here must match the groups created above.
        ClusterVmHostRuleInfo vmHostAffRule = new ClusterVmHostRuleInfo();
        vmHostAffRule.setEnabled(true);
        vmHostAffRule.setName("VMHOSTAffinityRule");
        vmHostAffRule.setAffineHostGroupName(hostGroupName);
        vmHostAffRule.setVmGroupName(vmGroupName);
        vmHostAffRule.setMandatory(true); // "must" rule; false (or unset) = "should" rule

        // GroupSpec for creating the VM & host groups on the cluster
        ClusterGroupSpec[] groupSpec = new ClusterGroupSpec[2];
        groupSpec[0] = new ClusterGroupSpec();
        groupSpec[0].setInfo(vmGroup);
        groupSpec[0].setOperation(ArrayUpdateOperation.add);
        groupSpec[1] = new ClusterGroupSpec();
        groupSpec[1].setInfo(hostGroup);
        groupSpec[1].setOperation(ArrayUpdateOperation.add);

        // RuleSpec for the rule populated here
        ClusterRuleSpec[] ruleSpec = new ClusterRuleSpec[1];
        ruleSpec[0] = new ClusterRuleSpec();
        ruleSpec[0].setInfo(vmHostAffRule);
        ruleSpec[0].setOperation(ArrayUpdateOperation.add);

        ClusterConfigSpecEx ccs = new ClusterConfigSpecEx();
        ccs.setRulesSpec(ruleSpec);
        ccs.setGroupSpec(groupSpec);

        // Reconfigure the cluster with the new groups and rule
        ccr.reconfigureComputeResource_Task(ccs, true);
        System.out.println("VM-Host rule created successfully");
    }
}


Connection setup (ServiceInstance): refer to my blog post on the ServiceInstance data object to understand these lines.
Entity lookups (cluster, hosts, VMs): refer to my blog post on InventoryNavigator to understand these lines.
VM group & host group population: we populate the VMGroup (the set of VMs that will be part of the rule) and the HostGroup (the set of hosts that will be part of the rule). These groups are attached to the cluster via the GroupSpec further down.
ClusterVmHostRuleInfo creation: this data object identifies which host group and VM group are associated with this rule, since several other host and VM groups may already exist or be planned for other rules. Note that the setAffineHostGroupName() call is what makes this an affinity rule rather than an anti-affinity rule.
GroupSpec population: here we populate the GroupSpec required for creating the VM & host groups on the cluster, based on the groups built above.
RuleSpec population: here we populate the RuleSpec from the ClusterVmHostRuleInfo object.
Finally, we populate the ClusterConfigSpecEx data object with the RuleSpec and GroupSpec and pass it to the reconfigureComputeResource_Task() method, which actually reconfigures the cluster with the created VM-Host must affinity rule.

Here is the created VM-Host rule view from vSphere Client.
VM Host Rule

Noteworthy points:
1. For the sake of simplicity, we have hard-coded some IPs and the cluster, host & VM names. Please do make changes if you are using this in a production environment.
2. In order to create an anti-affinity rule, you just need to replace the setAffineHostGroupName() call with vmHostAffRule.setAntiAffineHostGroupName("HostGroup_1"); is it not simple?
3. In order to make this rule soft, just change the setMandatory() call to vmHostAffRule.setMandatory(false); Note that the rule created by default (i.e. if we do not set the property) is a soft rule.

I hope you enjoyed this post, stay tuned for PART II on VM-VM affinity rules.

Learn more on VM-Host rules here.
If you have still not set up your Eclipse environment: Getting started tutorial

vSphere HA VM protection when VM restart priority and VM monitoring are disabled

Some time back I got a question on VMTN, and one of my friends had the same doubt, so I thought it was worth writing a small post on this. The question was: even when the VM restart priority and VM monitoring settings are disabled for a particular VM in an HA-enabled cluster, why does vCenter Server report that VM as HA protected?
First of all, take a look at the screenshot below, taken from the VI client VM summary tab.
HA protected state

According to the VM summary, the following conditions must be met for vCenter to report a VM as HA protected:
- The VM is in a vSphere HA enabled cluster.
- The VM powered on successfully after a user-initiated power on.
- vSphere HA has recorded that the power state of this VM is on.

Note that vSphere HA maintains a file called "protectedlist" for all the VMs in an HA-enabled cluster in order to identify which VMs are protected. Below is the sequence of steps that takes place when we power on a VM in an HA cluster.
1. When we power on a VM from vCenter, vCenter instructs the ESXi host to power on the VM.
2. When the VM is powered on, the host informs vCenter that the VM is powered on.
3. vCenter then contacts the vSphere HA master to make the powered-on VM protected.
4. The HA master adds an entry to the "protectedlist" file, signifying that the master is now responsible for restarting the VM when there is a failure.
5. Finally, the HA master informs vCenter that it has added the entry to the "protectedlist" file, and vCenter then reports the VM as HA protected in the VI client, as we see in the above screenshot.

The original question, "When VM restart priority & VM monitoring are disabled, why does vCenter report the VM as HA protected?", is still unanswered. Here is the answer:
By design, the "VM restart priority" and "VM monitoring" settings are orthogonal to vSphere HA protection: the HA protection state has nothing to do with these settings, whether they are enabled or disabled. Now one more question arises: is it possible that HA removes a VM from the protected list when we disable either or both of these settings? The diplomatic answer is: it depends. Let us see what HA does to a VM's protected state, in case of failure, when either or both of these settings are disabled.

1. If VM restart priority is disabled & VM monitoring is enabled:
- Host is up and the VM is up: vSphere HA keeps the VM on the protected list.
- When the ESXi host fails, the VM cannot be restarted on another available host. Even if the failed host comes back, the VM will not be restarted; it stays powered off, and vSphere HA removes it from the protected list.
- When the guest OS fails (e.g. BSOD), HA resets the guest on the same host and keeps the VM on the protected list. This shows that VM monitoring is orthogonal to restart priority as well.

2. If VM restart priority is disabled & VM monitoring is also disabled:
- Host is up and the VM is up: vSphere HA keeps the VM on the protected list.
- When the ESXi host fails, the VM cannot be restarted on another available host. Even if the failed host comes back, the VM will not be restarted; it stays powered off, and vSphere HA removes it from the protected list.
- When the guest OS fails, HA cannot reset the guest, but as the VM itself does not have any issue (the VM is on but you cannot access the guest), HA keeps the VM on the protected list.

3. If VM restart priority is enabled & VM monitoring is disabled:
- Host is up and the VM is up: vSphere HA keeps the VM on the protected list.
- When the ESXi host fails, the VM is restarted on another available host. After the restart, the VM is powered on and HA keeps it on the protected list.
- When the guest OS fails, HA cannot reset the guest, but as the VM itself does not have any issue (the VM is on but you cannot access the guest), HA keeps the VM on the protected list.
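The three scenarios above can be condensed into a small decision function. This is a plain-Java summary of the behavior described in this post, not VMware code; the enum and method names are made up for illustration. Notice that, per the scenarios, VM monitoring never changes protected-list membership; it only decides whether the guest gets reset, so the function needs only the restart-priority setting.

```java
public class HaProtectionDemo {
    enum Event { NO_FAILURE, HOST_FAILED, GUEST_OS_FAILED }

    // Returns true if vSphere HA keeps the VM on the protected list
    // after the given event, per the scenarios described above.
    static boolean staysProtected(boolean restartPriorityEnabled, Event event) {
        switch (event) {
            case HOST_FAILED:
                // Disabled restart priority: VM stays powered off and is removed
                return restartPriorityEnabled;
            case GUEST_OS_FAILED:
                // The VM object is still powered on either way
                return true;
            case NO_FAILURE:
            default:
                // Host up, VM up
                return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("priority disabled, host failed -> "
                + staysProtected(false, Event.HOST_FAILED));     // false
        System.out.println("priority disabled, guest failed -> "
                + staysProtected(false, Event.GUEST_OS_FAILED)); // true
        System.out.println("priority enabled, host failed -> "
                + staysProtected(true, Event.HOST_FAILED));      // true
    }
}
```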

There are 3 HA states for a VM. Today's post focused on the protected state; there are 2 more, i.e. unprotected and N/A, and I will write another post later on those two.

Learn more on vSphere HA here

I hope you enjoyed this post. Please let me know if you have any additional doubts.

is vSphere HA aware of DRS affinity rules?

Recently I was exploring vSphere HA interop with DRS affinity rules. It is quite interesting to know how, as of vCenter Server 5.5, vSphere HA deals with the various DRS affinity rules.

Here are the DRS rules we can currently configure on a DRS cluster:
1. VM-VM affinity rule: intended to keep a group of VMs together on a single host.

2. VM-VM anti-affinity rule: intended to keep a group of VMs away from each other at all times.

3. VM-Host affinity rule: restricts a group of VMs to run on a group of hosts, i.e. VMs in the VM group should/must always run on hosts in the host group. This rule can be hard (must) or soft (should).

4. VM-Host anti-affinity rule: exactly the opposite of the VM-Host affinity rule; it does not allow a group of VMs to run on a group of hosts. This rule can be hard (must) or soft (should).

Now the question is: is vSphere HA aware of DRS affinity rules? The answer is yes; as of vSphere 5.5, vSphere HA is aware of 2 DRS rules. Here are the rules honored by vSphere HA:

1. VM-Host must affinity/anti-affinity rule
2. VM-VM anti-affinity rule

vSphere HA honors these rules, which means that in case of a host failure, if restarting the VMs on an available host would violate a rule, vSphere HA will not restart the VMs that were on the failed host; it will raise an error instead.

Example: say you have an HA-DRS enabled cluster of 2 hosts (H1, H2) with 2 VMs (VM1 on H1, VM2 on H2), one on each host, in the powered-on state. Now you configure a VM-Host must affinity rule with HostGroup: H1 and VMGroup: VM1, meaning VM1 must always run on H1. Once you configure this rule, say host H1 fails. vSphere HA would try to restart VM1 on H2, but because HA knows the VM-Host must rule is configured, it will not restart VM1 on H2.
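The two-host example above boils down to a simple membership check. The sketch below (hypothetical names, not VMware code) shows the decision HA effectively makes before restarting a VM covered by a must rule.

```java
import java.util.Set;

public class MustRuleRestartCheck {
    // HA honors a VM-Host "must" affinity rule: a failed-over VM may only
    // be restarted on a host inside the rule's host group.
    static boolean canRestartOn(String candidateHost, Set<String> mustRunOn) {
        return mustRunOn.contains(candidateHost);
    }

    public static void main(String[] args) {
        Set<String> hostGroup = Set.of("H1"); // VM1 must run on H1
        System.out.println("restart VM1 on H2 allowed? " + canRestartOn("H2", hostGroup));
        System.out.println("restart VM1 on H1 allowed? " + canRestartOn("H1", hostGroup));
    }
}
```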

It is important to note that, by default, VM-Host must affinity/anti-affinity rules are honored by vSphere HA; you just need to configure the rule. However, in order to make vSphere HA aware of VM-VM anti-affinity rules, you have to set the HA advanced option "das.respectVmVmAntiAffinityRules" to true (the default value of this option is false). I repeat: this is an HA advanced option, not a DRS advanced option. You can configure it from the web client through Cluster >> Manage >> vSphere HA >> Advanced Options, and you can configure it from the desktop client as well.
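For reference, the option is entered as a plain key/value pair in the cluster's HA advanced options:

```
das.respectVmVmAntiAffinityRules = true
```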

It is also important to note that even when DRS is disabled on the cluster, HA continues to honor these rules. As per the current design, the rules cannot be removed while DRS is disabled. Hence, care must be taken when disabling DRS: either remove the rules before you disable DRS, or, when you later want to remove them, re-enable DRS for a while (in conservative migration threshold mode), remove the rules, and then disable DRS again.

As I mentioned earlier, in case of a host failure, HA will not restart VMs if doing so would violate a configured rule. Hence, admins need to be very cautious when configuring these rules, as they can impact availability. These rules should be configured only when absolutely required.