
vCenter Server 6.0 U3c is live now: some cool improvements to Storage DRS

vCenter Server 6.0 U3c was released today. Though this is a patch release, it looks like a lot of improvements have been made to various vCenter/ESXi components. I had an opportunity to work on some of the Storage DRS issues, and I thought I would share them with you. In fact, multiple improvements have been made from the Storage DRS perspective, but I am sharing some of the most interesting ones. Below is the content from the release notes with respect to Storage DRS.

1. Storage DRS might place thin-provisioned disks on one datastore instead of distributing them evenly

During the initial placement of thin-provisioned disks, Storage Distributed Resource Scheduler (SDRS) might miscalculate the entitled space requirement by excluding the reserved space. As a result, SDRS might use only the committed megabytes for the entitled space calculation, producing placement recommendations that land all the disks on one datastore instead of distributing them evenly. This issue is resolved in this release.

2. Storage DRS generates only one suitable datastore for initial virtual machine placement

If there are virtual machines with Raw Device Mapping (RDM) virtual disks, Storage DRS might consider the actual mapping file size instead of the pointer file size, even though the mapping file does not consume any disk space. As a result, Storage DRS might generate only one suitable datastore when creating new virtual machines. This issue is resolved in this release.

3. vCenter Server might fail when you attempt to create a virtual machine on a Storage DRS cluster using a script with null CPU or memory share values

vCenter Server might fail if you attempt to create a virtual machine on a Storage DRS cluster using a script that passes null CPU or memory share values in a RecommendDatastores() API call. This issue is resolved in this release.
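If you are on a build without the fix, a defensive workaround on the scripting side is to always populate the share values in the config spec passed to RecommendDatastores(). Below is a minimal, untested vim25/yavijava-style sketch of that idea; the wiring into your StoragePlacementSpec is left as a comment since it depends on your script:

// Minimal sketch (assumption): always set CPU and memory shares explicitly
// so the RecommendDatastores() call never receives null share values.
VirtualMachineConfigSpec configSpec = new VirtualMachineConfigSpec();

SharesInfo cpuShares = new SharesInfo();
cpuShares.setLevel(SharesLevel.normal); // or SharesLevel.custom plus setShares(int)
ResourceAllocationInfo cpuAlloc = new ResourceAllocationInfo();
cpuAlloc.setShares(cpuShares);
configSpec.setCpuAllocation(cpuAlloc);

SharesInfo memShares = new SharesInfo();
memShares.setLevel(SharesLevel.normal);
ResourceAllocationInfo memAlloc = new ResourceAllocationInfo();
memAlloc.setShares(memShares);
configSpec.setMemoryAllocation(memAlloc);

// storagePlacementSpec.setConfigSpec(configSpec);
// si.getStorageResourceManager().recommendDatastores(storagePlacementSpec);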

Based on the information above, you should have high-level insight into these Storage DRS improvements. If you ask me, all of them are really good improvements from the Storage DRS perspective. In future blog posts, I will dive deep into each of the above issues.

If you are using Storage DRS, I think this release is one of the great reasons to upgrade your environment (apart from important fixes in other vCenter/ESXi areas).

Important links

Download the vCenter Server 6.0 U3c bits here

vCenter Server 6.0 U3c release notes

There is a corresponding ESXi release as well; please take a look at this KB.

It seems there are some great improvements made to VMware vSAN as well; please take a look at this KB.

Greatest vSphere API ever: Part I: placeVM() API, which places the VM on the best possible host and datastore

This is the vSphere API I had personally been waiting a long time for, and I believe it is one of the most powerful vSphere APIs ever. The API I am talking about was introduced as part of vSphere 6.0: placeVM(). Here is the API reference for the same.

Why is this API so powerful?
- Basically, this API helps place a VM on an appropriate host and datastore. The placement can happen as part of creating the VM, relocating the VM, cloning the VM, or reconfiguring an existing VM.
- How does this API find the best host from a CPU and memory perspective and the best datastore from a storage perspective? As per the API reference, it can be invoked to ask DRS (Distributed Resource Scheduler) for a set of recommendations for placing a virtual machine and its virtual disks into a DRS cluster.
- This API offers great flexibility: from a storage perspective, it can take as input a set of datastores of our choice as well as a set of SDRS (Storage DRS) pods, and we can even specify one particular datastore. From a compute perspective, it takes a set of hosts or one particular host as input. How cool is that, with both DRS and SDRS pod(s) involved?
- It also gives us the flexibility not to specify any hosts or datastores at all; in that case, it automatically considers all the hosts inside the DRS cluster and all the datastores connected to those hosts.
- Another beauty of this API is that it works perfectly with SPBM (Storage Policy Based Management) as well as across vCenter Servers. Is that not a great set of capabilities in a single API?

Let us now learn how to use this API hands-on. For the sake of simplicity, I have divided this post into 2 parts.

Part I: How to relocate a powered-on VM from a DRS-enabled cluster to another DRS-enabled cluster (one SDRS pod as input) within a single vCenter
Part II: How to relocate a powered-on VM from a DRS-enabled cluster to another DRS-enabled cluster (multiple SDRS pods as input) across vCenters.

Below is the placeVM API sample that achieves Part I: we invoke the placeVM API to get a set of recommendations for the VM (CPU, memory, storage) and then invoke the RelocateVM API to apply the first of those recommendations.

The same sample is available on my GitHub repository as well as on VMware Sample Exchange.

//:: # Author: Vikas Shitole
//:: # Website: www.vThinkBeyondVM.com
//:: # Product/Feature: vCenter Server/DRS
//:: # Reference:
//:: # Description: Tutorial: PlaceVM API: Live relocate a VM from one DRS cluster to another DRS cluster (in a Datacenter or across Datacenter)
//:: # How cool is it when DRS takes care of placement from a CPU/memory perspective and at the same time SDRS takes care of storage placement
//::# How to run this sample: http://vthinkbeyondvm.com/getting-started-with-yavi-java-opensource-java-sdk-for-vmware-vsphere-step-by-step-guide-for-beginners/

package com.vmware.yavijava;

import java.net.MalformedURLException;
import java.net.URL;
import com.vmware.vim25.ClusterAction;
import com.vmware.vim25.ClusterRecommendation;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.PlacementAction;
import com.vmware.vim25.PlacementResult;
import com.vmware.vim25.PlacementSpec;
import com.vmware.vim25.VirtualMachineMovePriority;
import com.vmware.vim25.VirtualMachineRelocateSpec;
import com.vmware.vim25.mo.ClusterComputeResource;
import com.vmware.vim25.StoragePlacementSpecPlacementType;
import com.vmware.vim25.mo.Datacenter;
import com.vmware.vim25.mo.Datastore;
import com.vmware.vim25.mo.Folder;
import com.vmware.vim25.mo.HostSystem;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ManagedEntity;
import com.vmware.vim25.mo.ResourcePool;
import com.vmware.vim25.mo.ServiceInstance;
import com.vmware.vim25.mo.StoragePod;
import com.vmware.vim25.mo.VirtualMachine;

public class PlaceVMRelocate {

    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.out.println("Usage: PlaceVMRelocate url username password");
            System.exit(-1);
        }

        URL url = null;
        try {
            url = new URL(args[0]);
        } catch (MalformedURLException urlE) {
            System.out.println("The URL provided is NOT valid. Please check it...");
            System.exit(-1);
        }
        String username = args[1];
        String password = args[2];
        String sourceClusterName = "Cluster1"; // Source cluster name; it does not need DRS enabled
        String destinationClusterName = "Cluster2"; // Destination cluster with DRS enabled
        String sdrsClusterName1 = "POD_1"; // SDRS pod
        String vmToBeRelocated = "VM2"; // Name of the VM to be relocated to the other cluster
        ManagedEntity[] hosts = null;

        // Initialize the system, set up web services
        ServiceInstance si = new ServiceInstance(url, username, password, true);
        if (si == null) {
            System.out.println("ServiceInstance returned NULL, please check that your vCenter is up and running");
            System.exit(-1);
        }
        Folder rootFolder = si.getRootFolder();
        StoragePod pod1 = null;

        // Getting the datacenter object
        Datacenter dc = (Datacenter) new InventoryNavigator(rootFolder)
                .searchManagedEntity("Datacenter", "vcqaDC");

        // Getting the SDRS pod object
        pod1 = (StoragePod) new InventoryNavigator(rootFolder)
                .searchManagedEntity("StoragePod", sdrsClusterName1);
        ManagedObjectReference podMor1 = pod1.getMOR();
        ManagedObjectReference[] pods = { podMor1 };

        // Getting the source cluster object; DRS does NOT need to be enabled on the source cluster
        ClusterComputeResource cluster1 = (ClusterComputeResource) new InventoryNavigator(rootFolder)
                .searchManagedEntity("ClusterComputeResource", sourceClusterName);

        // Getting the object of the VM to be relocated
        VirtualMachine vm = (VirtualMachine) new InventoryNavigator(cluster1)
                .searchManagedEntity("VirtualMachine", vmToBeRelocated);
        ManagedObjectReference vmMor = vm.getMOR();

        // Getting the destination cluster object; DRS must be enabled on the destination cluster
        ClusterComputeResource cluster2 = (ClusterComputeResource) new InventoryNavigator(rootFolder)
                .searchManagedEntity("ClusterComputeResource", destinationClusterName);

        // Getting all the host objects from the destination cluster
        hosts = new InventoryNavigator(cluster2).searchManagedEntities("HostSystem");
        System.out.println("Number of hosts in the destination cluster::" + hosts.length);
        ManagedObjectReference[] hostMors = new ManagedObjectReference[hosts.length];
        int i = 0;
        for (ManagedEntity hostEntity : hosts) {
            hostMors[i++] = hostEntity.getMOR();
        }

        // Building the placement spec to be sent to the placeVM API
        PlacementSpec placeSpec = new PlacementSpec();
        placeSpec.setPlacementType(StoragePlacementSpecPlacementType.relocate.name());
        placeSpec.setPriority(VirtualMachineMovePriority.highPriority);
        // placeSpec.setDatastores(dss); // We can pass an array of datastores of our choice as well
        placeSpec.setStoragePods(pods); // Destination SDRS pod(s)
        placeSpec.setVm(vmMor); // VM to be relocated
        placeSpec.setHosts(hostMors); // Destination DRS cluster hosts; we can keep this unset as well
        placeSpec.setKey("xvMotion placement");
        VirtualMachineRelocateSpec vmrelocateSpec = new VirtualMachineRelocateSpec();
        vmrelocateSpec.setPool(cluster2.getResourcePool().getMOR()); // Destination cluster root resource pool
        vmrelocateSpec.setFolder(dc.getVmFolder().getMOR()); // Destination datacenter VM folder
        placeSpec.setRelocateSpec(vmrelocateSpec);
        PlacementResult placeRes = cluster2.placeVm(placeSpec);
        System.out.println("PlaceVM() API is called");

        // Getting the recommendations generated by the placeVM API and applying the first one
        ClusterRecommendation[] clusterRec = placeRes.getRecommendations();
        if (clusterRec == null || clusterRec.length == 0) {
            System.out.println("placeVM() returned no recommendations");
            si.getServerConnection().logout();
            return;
        }
        ClusterAction[] action = clusterRec[0].action;
        VirtualMachineRelocateSpec vmrelocateSpecNew = ((PlacementAction) action[0]).getRelocateSpec();
        vm.relocateVM_Task(vmrelocateSpecNew, VirtualMachineMovePriority.highPriority);

        si.getServerConnection().logout();
    }

}

Notes:
- For the sake of simplicity I have hardcoded some variables; you can change them as per your environment.
- We can leverage this sample either within a single vCenter datacenter or across vCenter datacenters.
- All the required documentation is added inside the sample itself. The source cluster does not need to be DRS enabled.
- The same sample can be used not only for relocate operations but also for clone, create, and reconfigure VM operations; a minimal sketch of the clone variant follows below.
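To give a taste of the clone variant mentioned in the last note, here is a minimal, untested sketch that reuses the objects from the sample above (vmMor, pods, hostMors, cluster2) and additionally needs the com.vmware.vim25.VirtualMachineCloneSpec import. Treat the exact field wiring and the clone name as assumptions based on the PlacementSpec API reference:

// Minimal sketch (assumptions above): placeVM() for a clone operation instead of relocate
PlacementSpec clonePlaceSpec = new PlacementSpec();
clonePlaceSpec.setPlacementType(StoragePlacementSpecPlacementType.clone.name());
clonePlaceSpec.setVm(vmMor); // VM to be cloned
clonePlaceSpec.setStoragePods(pods); // destination SDRS pod(s)
clonePlaceSpec.setHosts(hostMors); // candidate destination hosts
clonePlaceSpec.setCloneName("VM2-clone"); // hypothetical name for the new VM

VirtualMachineCloneSpec cloneSpec = new VirtualMachineCloneSpec();
cloneSpec.setLocation(new VirtualMachineRelocateSpec()); // let DRS/SDRS fill in host/datastore
cloneSpec.setPowerOn(false);
cloneSpec.setTemplate(false);
clonePlaceSpec.setCloneSpec(cloneSpec);

// The returned recommendations then carry a PlacementAction whose relocate spec
// can be fed into a cloneVM_Task() call, analogous to the relocate flow above.
PlacementResult cloneRes = cluster2.placeVm(clonePlaceSpec);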

If you have still not set up your YAVI Java Eclipse environment: Getting started tutorial

Important tutorials to start with: Part I & Part II

One of the cool vSphere features: Tutorial: How Storage DRS works with storage policies in SPBM

SDRS integration with storage profiles is one of the cool features I had been waiting for since vSphere 5.5. In vSphere 6.0 GA, there were some public references to its support, but no detailed documentation was available. As per this KB, I am happy to learn that in vCenter Server 6.0.0b and above, Storage DRS fully supports storage profile enforcement. Now SDRS is aware of storage policies in SPBM (Storage Policy Based Management). Recently I got an opportunity to play with this feature, and I thought I would write a detailed post that can help everybody. Here we go.

As part of this SDRS integration with storage profiles/policies, one SDRS cluster-level advanced option was introduced: "EnforceStorageProfiles". The advanced option EnforceStorageProfiles takes one of the integer values 0, 1, or 2, where the default value is 0.

When the option is set to 0, there is NO storage profile enforcement on the SDRS cluster.

When the option is set to 1, there is SOFT storage profile enforcement on the SDRS cluster. SDRS will try its best to comply with the storage profile/policy; however, if required, SDRS will violate storage profile compliance.

When the option is set to 2, there is HARD storage profile enforcement on the SDRS cluster. SDRS will not violate the storage profile in any case.

Refer to KB 2142765 to learn how to configure the SDRS advanced option "EnforceStorageProfiles" using the vSphere Web Client or the vSphere Client.
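If you prefer setting this option through the vSphere API instead of the clients, a minimal yavijava-style sketch is below. It assumes a connected ServiceInstance si, an SDRS pod named "POD_1", and the vim25 imports (OptionValue, StorageDrsConfigSpec, StorageDrsPodConfigSpec); treat the integer value type as an assumption as well:

// Minimal sketch (assumptions above): set EnforceStorageProfiles on an SDRS pod
StoragePod pod = (StoragePod) new InventoryNavigator(si.getRootFolder())
        .searchManagedEntity("StoragePod", "POD_1");

OptionValue enforceProfiles = new OptionValue();
enforceProfiles.setKey("EnforceStorageProfiles");
enforceProfiles.setValue(1); // 0 = no enforcement (default), 1 = SOFT, 2 = HARD

StorageDrsPodConfigSpec podSpec = new StorageDrsPodConfigSpec();
podSpec.setOption(new OptionValue[] { enforceProfiles });

StorageDrsConfigSpec sdrsConfigSpec = new StorageDrsConfigSpec();
sdrsConfigSpec.setPodConfigSpec(podSpec);

// true = modify the existing SDRS configuration rather than replacing it
si.getStorageResourceManager().configureStorageDrsForPod_Task(pod, sdrsConfigSpec, true);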

Now I will walk you through the vSphere Web Client workflows below so you can play with this cool feature. Think of it as a tutorial.

1. Configuring the SDRS advanced option to enable SOFT (1) storage profile enforcement with Storage DRS.

- Create an SDRS cluster (aka pod) with 3 datastores, namely DS1, DS2, and DS3.
- Go to SDRS cluster >> Manage >> Settings >> Storage DRS in the Web Client. Click Edit and configure the option "EnforceStorageProfiles" as shown in the screenshot below.

Adding SDRS option

You can see I set this option to 1, i.e. SOFT enforcement.

2. Creating 2 tags named "Gold" and "Silver".

Tags need to be attached to all the datastores in the datastore cluster (as per each datastore's capability) and are also required to create tag-based VM storage policies.

- Go to Home >> Tags >> click on New Tag and create a "Gold" tag with the "FC SAN" category, as shown in the screenshot below.

Create Tag

Similar to the Gold tag, please create a "Silver" tag with an "iSCSI SAN" category. At this point, make sure you have 2 tags: Gold and Silver.

3. Assigning the tags to datastores in the SDRS pod.

- Go to SDRS pod >> DS1 >> Manage >> Tags >> click on Assign Tags, as shown in the screenshot below.

Assign tags to Datastore

Please assign the "Gold" tag to DS1 as well as DS2. Finally, assign the "Silver" tag to datastore DS3. You can see the assigned tags for each datastore in the screenshot below.

Assigned-tag-on-DS3

You can see in the above screenshot that DS3 was assigned the "Silver" tag.

4. Creating VM storage policies

- Go to Home >> Policies and Profiles >> VM Storage Policies >> click on Create VM Storage Policy, as shown in the screenshot below.

Create VM storage policy

- Once you click on "Create a new VM storage policy", specify the policy name as "Gold VM Storage Policy" and click Next.
- On the Rule-Sets page, click on Add tag-based rule and select the "Gold" tag under the "FC SAN" category, as shown in the screenshot below.
Adding tag based rule
- On the Storage compatibility page, keep the defaults and click Next. Finally, click Finish at the end, as shown in the screenshot below.
VM storage policy finish

- Similar to the "Gold VM Storage Policy" creation, repeat the above steps to create a "Silver VM Storage Policy" based on the "Silver" tag. At this moment you will have 2 VM storage policies: "Gold VM Storage Policy" and "Silver VM Storage Policy".

Now we are ready to play with the SDRS storage profile integration feature. Let us try out some workflows from the SDRS perspective to verify whether profiles are really being considered by SDRS. Note that in the very first step we set the "EnforceStorageProfiles" SDRS option to 1, i.e. SOFT profile enforcement.

1. Create VM workflow, i.e. the SDRS initial placement workflow.

- From the Web Client, start the Create VM workflow >> give the VM name as "GoldVM" >> select a compute resource >> under Select storage, select the "Gold VM Storage Policy" VM storage policy that we created in the last section, and as storage select the SDRS pod we created initially, as shown in the screenshot below.

Warning on selecting policy create VM

In the above screenshot, the SDRS pod is listed under "incompatible" storage. Note that this is expected, as the SDRS pod has 3 datastores, 2 of which have the "Gold" tag attached while the 3rd has "Silver" attached. The warning in the screenshot says the same, i.e. "Datastore does not satisfy compatibility since it does not support one or more required properties. Tags with name "Gold" not found on datastore". As I said, this is expected, since we selected the "Gold VM Storage Policy" and NOT all the datastores in the SDRS pod have the "Gold" tag attached. Overall, no panic; just click Next.

- Finally, on the finish page of VM creation, we can see the SDRS initial placement recommendations, as shown in the screenshot below.
SDRS recommendations

From the above screenshot you can see that the SDRS placement recommendations on DS2 are absolutely spot on, as the DS2 datastore has the "Gold" tag attached.

Now you can check whether the created "GoldVM" VM was placed by SDRS on a datastore compatible with the Gold VM storage policy. You can see in the screenshot below that SDRS placed the GoldVM on the right datastore and the VM storage policies are compliant.

VM-compliant

Below is a screenshot of the "GoldVM" VM files. You can see all the VM files are placed on datastore DS2, which has the Gold tag attached. Is that not cool?

Datastore files on DS2

Based on the above screenshots, we can say that SDRS is aware of VM storage policies. Now let us try another SDRS workflow: putting a datastore into maintenance mode.

2. Putting a datastore into maintenance mode.

As we know, all the "GoldVM" VM files are on datastore DS2 as expected. Now we will put DS2 into maintenance mode, and we expect SDRS to Storage vMotion the VM to DS1, as DS1 is the only other datastore with the "Gold" tag attached.

From the Web Client, go to SDRS pod >> DS2 >> right-click on DS2 and select "Enter Maintenance Mode". As soon as we do, we get SDRS migration recommendations for evacuating the DS2 datastore, as shown in the screenshot below.

After putting into MM

You can see in the above screenshot that SDRS recommended migrating the VM files to DS1, as expected. How cool is that?

To see whether SOFT profile enforcement works as documented, I went ahead and put DS1 into maintenance mode as well (with DS2 still in maintenance mode), and I observed that the VM files were moved to DS3. This is expected, since we set the "EnforceStorageProfiles" SDRS option to 1, i.e. SOFT profile enforcement, which may violate the profile when no compliant datastore is available.

3. Configuring SDRS affinity rules.

I tested the first case with VMDK anti-affinity (intra-VM) and the second case with VM anti-affinity (inter-VM). In both cases, I observed that affinity rules take precedence over the storage profiles attached to the VMs/datastores: SDRS obeys the affinity rules first. A sketch of configuring such a rule through the API follows below.
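For completeness, here is a minimal, untested yavijava sketch of how an intra-VM VMDK anti-affinity rule can be configured on an SDRS pod through the API. The si, pod, and vmMor objects, the disk device keys (2000, 2001), and the rule name are all assumptions for illustration; the types come from vim25 (VirtualDiskAntiAffinityRuleSpec, StorageDrsVmConfigInfo, StorageDrsVmConfigSpec, StorageDrsConfigSpec, ArrayUpdateOperation):

// Minimal sketch (assumptions above): intra-VM VMDK anti-affinity rule on an SDRS pod
VirtualDiskAntiAffinityRuleSpec rule = new VirtualDiskAntiAffinityRuleSpec();
rule.setName("keep-goldvm-disks-apart"); // hypothetical rule name
rule.setEnabled(true);
rule.setDiskId(new int[] { 2000, 2001 }); // device keys of the VM's virtual disks

StorageDrsVmConfigInfo vmConfigInfo = new StorageDrsVmConfigInfo();
vmConfigInfo.setVm(vmMor); // MOR of the VM, e.g. the GoldVM from this tutorial
vmConfigInfo.setIntraVmAntiAffinity(rule);

StorageDrsVmConfigSpec vmConfigSpec = new StorageDrsVmConfigSpec();
vmConfigSpec.setOperation(ArrayUpdateOperation.add);
vmConfigSpec.setInfo(vmConfigInfo);

StorageDrsConfigSpec sdrsSpec = new StorageDrsConfigSpec();
sdrsSpec.setVmConfigSpec(new StorageDrsVmConfigSpec[] { vmConfigSpec });
si.getStorageResourceManager().configureStorageDrsForPod_Task(pod, sdrsSpec, true);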

Finally, I even tested some SDRS workflows where I filled the datastores past the SDRS space threshold and then invoked SDRS to see whether it really considers storage profiles while generating space migration recommendations. I observed that SDRS does consider storage profiles, as expected.

Overall, the tutorial above should help you get started. Testing SDRS with HARD storage profile enforcement I leave to you. Enjoy!

One caveat: as noted in this KB, vCloud Director (vCD) backed by an SDRS cluster does NOT support Soft (1) or Hard (2) storage profile enforcement. vCD works well with the default (0) option.

Let me know if you have comments.

Want to vMotion a VM from one vCenter Server to another using the vSphere API? Here you go

As part of a customer case, I was testing vMotion across two 6.0 vCenter Servers connected to the same SSO domain. As we already know, when 2 vCenter Servers are connected to the same SSO domain, you can manage both VCs using the Web Client. Initially I tested vMotion from the Web Client; later, in order to automate bulk VM migration across vCenters, I looked for vSphere APIs and found the 2 below.

1. RelocateVM_Task() under the VirtualMachine managed object
2. placeVM() under the ClusterComputeResource managed object

I went through the vSphere API reference for both. RelocateVM_Task() was already there; however, in vSphere 6.0 it was greatly improved in order to support vMotion between 2 vCenters. On the other hand, placeVM() is a brand new API introduced in vSphere 6.0. Initially, I decided to try RelocateVM_Task() for automating vMotion across 2 vCenters with the same SSO domain. After automating this, I tested my Java SDK script on vCenters with the same SSO domain, and it worked with no issues. Later, I thought I would give it a try across vCenters connected to different SSO domains, and to my surprise, it worked like a charm. Is that not super cool? How easy it is now to migrate your workloads across datacenters/VCs/clusters!

So vMotion between 2 completely different vCenters is supported in vSphere 6.0, but there is NO UI support in the Web Client at the moment. If you want this functionality, i.e. vMotion between 2 VCs with different SSO domains, the vSphere API is the only way.

After the successful vMotion across different SSO domains, I was really excited and thought I would play with this a little more by creating a DRS cluster in both VCs. I created a VM-VM affinity rule with a couple of VMs in the DRS cluster in VC1, as shown in the screenshot below.

VC1_Rule_before_vMotion

Then I initiated vMotion from the first VC to the DRS cluster in the second VC using the same Java SDK script, and I could see that the DRS rule associated with the migrated VM also got migrated, as shown in the screenshot below. How cool is that!

VC2_Rule_After_vMotion

Below is the complete code sample, which can help you quickly vMotion a VM from one VC to another.

Note that this sample works fine with VCs in the same SSO domain or in different SSO domains. You do not even need shared storage between the VCs. The script will also work within a single VC (with a small change).

The same script is available on my GitHub repository: ExVC_vMotion.java


//:: # Author: Vikas Shitole
//:: # Website: www.vThinkBeyondVM.com
//:: # Product/Feature: vCenter Server/DRS/vMotion
//:: # Description: Extended cross-VC vMotion using the RelocateVM_Task() API: vMotion between vCenters (with the same SSO domain or different SSO domains)

package com.vmware.yavijava;

import java.net.MalformedURLException;
import java.net.URL;
import com.vmware.vim25.ManagedObjectReference;
import com.vmware.vim25.ServiceLocator;
import com.vmware.vim25.ServiceLocatorNamePassword;
import com.vmware.vim25.VirtualDevice;
import com.vmware.vim25.VirtualDeviceConfigSpec;
import com.vmware.vim25.VirtualDeviceConfigSpecOperation;
import com.vmware.vim25.VirtualDeviceDeviceBackingInfo;
import com.vmware.vim25.VirtualEthernetCard;
import com.vmware.vim25.VirtualMachineMovePriority;
import com.vmware.vim25.VirtualMachineRelocateSpec;
import com.vmware.vim25.mo.ClusterComputeResource;
import com.vmware.vim25.mo.Datastore;
import com.vmware.vim25.mo.Folder;
import com.vmware.vim25.mo.HostSystem;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ServiceInstance;
import com.vmware.vim25.mo.Task;
import com.vmware.vim25.mo.VirtualMachine;

public class ExVC_vMotion {

    public static void main(String[] args) throws Exception {
        if (args.length != 7) {
            // Parameters you need to pass
            System.out.println("Usage: ExVC_vMotion srcVCIP srcVCusername srcVCpassword destVCIP destVCusername destVCpassword destHostIP");
            System.exit(-1);
        }

        URL url1 = null;
        try {
            url1 = new URL("https://" + args[0] + "/sdk");
        } catch (MalformedURLException urlE) {
            System.out.println("The URL provided is NOT valid. Please check it.");
            System.exit(-1);
        }

        String srcusername = args[1];
        String srcpassword = args[2];
        String destVC = args[3];
        String destusername = args[4];
        String destpassword = args[5];
        String destvmhost = args[6];

        // Hardcoded parameters for simplification
        String vmName = "VM1"; // Name of the VM to be migrated
        String vmNetworkName = "VM Network"; // Destination vSphere VM port group the VM will be connected to
        String destClusterName = "ClusterVC2"; // Destination VC cluster the VM will be migrated to
        String destdatastoreName = "DS1"; // Destination datastore the VM will be migrated to
        String destVCThumbPrint = "c7:bc:0c:a3:15:35:57:bd:fe:ac:60:bf:87:25:1c:07:a9:31:50:85"; // SSL thumbprint (SHA-1) of the destination VC

        // Initialize the source VC
        ServiceInstance vc1si = new ServiceInstance(url1, srcusername, srcpassword, true);
        URL url2 = null;
        try {
            url2 = new URL("https://" + destVC + "/sdk");
        } catch (MalformedURLException urlE) {
            System.out.println("The URL provided is NOT valid. Please check it.");
            System.exit(-1);
        }

        // Initialize the destination VC
        ServiceInstance vc2si = new ServiceInstance(url2, destusername, destpassword, true);
        Folder vc1rootFolder = vc1si.getRootFolder();
        Folder vc2rootFolder = vc2si.getRootFolder();

        // Virtual machine object of the VM to be migrated
        VirtualMachine vm = (VirtualMachine) new InventoryNavigator(vc1rootFolder)
                .searchManagedEntity("VirtualMachine", vmName);

        // Destination host object the VM will be migrated to
        HostSystem host = (HostSystem) new InventoryNavigator(vc2rootFolder)
                .searchManagedEntity("HostSystem", destvmhost);
        ManagedObjectReference hostMor = host.getMOR();

        // Destination cluster object creation
        ClusterComputeResource cluster = (ClusterComputeResource) new InventoryNavigator(vc2rootFolder)
                .searchManagedEntity("ClusterComputeResource", destClusterName);

        // Destination datastore object creation
        Datastore ds = (Datastore) new InventoryNavigator(vc2rootFolder)
                .searchManagedEntity("Datastore", destdatastoreName);
        ManagedObjectReference dsMor = ds.getMOR();

        VirtualMachineRelocateSpec vmSpec = new VirtualMachineRelocateSpec();
        vmSpec.setDatastore(dsMor);
        vmSpec.setHost(hostMor);
        vmSpec.setPool(cluster.getResourcePool().getMOR());

        // Device change spec: re-point the VM's network adapter(s) at the destination port group
        VirtualDeviceConfigSpec vdcSpec = new VirtualDeviceConfigSpec();
        VirtualDevice[] devices = vm.getConfig().getHardware().getDevice();
        for (VirtualDevice device : devices) {
            if (device instanceof VirtualEthernetCard) {
                VirtualDeviceDeviceBackingInfo vddBackingInfo = (VirtualDeviceDeviceBackingInfo) device.getBacking();
                vddBackingInfo.setDeviceName(vmNetworkName);
                device.setBacking(vddBackingInfo);
                vdcSpec.setDevice(device);
            }
        }
        vdcSpec.setOperation(VirtualDeviceConfigSpecOperation.edit);
        VirtualDeviceConfigSpec[] vDeviceConSpec = { vdcSpec };
        vmSpec.setDeviceChange(vDeviceConSpec);

        // Below is the ServiceLocator, which is the key to making this vMotion happen:
        // it tells the source VC how to reach and authenticate against the destination VC
        ServiceLocator serviceLoc = new ServiceLocator();
        ServiceLocatorNamePassword credential = new ServiceLocatorNamePassword();
        credential.setPassword(destpassword);
        credential.setUsername(destusername);
        serviceLoc.setCredential(credential);

        String instanceUuid = vc2si.getServiceContent().getAbout().getInstanceUuid();
        serviceLoc.setInstanceUuid(instanceUuid);
        serviceLoc.setSslThumbprint(destVCThumbPrint);
        serviceLoc.setUrl("https://" + destVC);
        vmSpec.setService(serviceLoc);

        System.out.println("VM relocation started....please wait");
        Task task = vm.relocateVM_Task(vmSpec, VirtualMachineMovePriority.highPriority);
        // Wait for the relocation task to complete before declaring success
        if (Task.SUCCESS.equals(task.waitForTask())) {
            System.out.println("VM is relocated to the 2nd vCenter Server");
        }
        vc1si.getServerConnection().logout();
        vc2si.getServerConnection().logout();
    }
}

Notes:
- There is one important parameter you need to pass in the migration spec: the destination VC thumbprint. There are several ways to get it, as shown here. I personally used the following way to get the destination VC thumbprint (a programmatic alternative is sketched after these notes).
From the Google Chrome browser: URL box of the VC (beside the https:// lock symbol) >> View site information >> Certificate information >> Details >> scroll down to the end, as shown in the screenshot below.

Thumbprint

- For the sake of simplicity, I have hard-coded some parameters; you can change them based on your environment.
- You can scale the same code to vMotion multiple VMs across vCenter Servers.
- As I said, there is another API for vMotion across VCs: placeVM() under the ClusterComputeResource managed object. Please stay tuned for my next blog post on the same.
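Since this is an API-driven post, you may prefer to fetch the destination VC's SHA-1 thumbprint programmatically rather than copying it from the browser. Below is a minimal, untested sketch using only standard Java APIs; note that the trust manager accepts any certificate, so treat it as lab-only code:

import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class VCThumbprint {

    // Returns the SHA-1 thumbprint of the server certificate in the
    // "c7:bc:0c:..." format expected by ServiceLocator.setSslThumbprint().
    public static String get(String host) throws Exception {
        TrustManager[] trustAll = { new X509TrustManager() {
            public void checkClientTrusted(X509Certificate[] certs, String authType) {}
            public void checkServerTrusted(X509Certificate[] certs, String authType) {}
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        } };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, null);
        SSLSocket socket = (SSLSocket) ctx.getSocketFactory().createSocket(host, 443);
        try {
            socket.startHandshake();
            X509Certificate cert = (X509Certificate) socket.getSession().getPeerCertificates()[0];
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(cert.getEncoded());
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < digest.length; i++) {
                if (i > 0) sb.append(':');
                sb.append(String.format("%02x", digest[i]));
            }
            return sb.toString();
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(get(args[0])); // e.g. pass the destination VC IP/FQDN
    }
}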

If you want to automate the same use case using PowerCLI, here is a great post by William Lam.

If you have still not set up your YAVI Java Eclipse environment: Getting started tutorial

Important tutorials to start with: Part I & Part II

vSphere 6.0 cool APIs to mark a local host HDD as SSD, or an SSD as HDD: sample API script

Recently I wanted to test vFRC (vSphere Flash Read Cache) feature interop with vSphere DRS and vSphere HA. I was figuring out ways to emulate my hosts' local HDDs as SSDs, as I did not have real SSDs in my lab. There are a couple of ways I had used to fake/emulate a local HDD as an SSD earlier, but this time I found another cool way to automate this quickly using a readily available API: markAsSsd(). This API was introduced in vSphere 6.0. It is important to know why it was made officially available: the primary reason is that SSDs behind some controllers might sometimes not be recognized as SSDs correctly. Other use cases include testing VMware vSAN (just playing around), vFRC, host flash cache, interop testing, and so on. Of course, the performance of a real SSD is incomparable with that of a fake SSD.

Below is the complete code sample, which can help you quickly mark a local LUN of the host as an SSD.


//:: # Author: Vikas Shitole
//:: # Website: www.vThinkBeyondVM.com
//:: # Product/Feature: vCenter Server/Storage
//:: # Description: Mark a local LUN of the host as SSD for testing purposes.

package com.vmware.yavijava;

import java.net.MalformedURLException;
import java.net.URL;
import com.vmware.vim25.ScsiLun;
import com.vmware.vim25.mo.Folder;
import com.vmware.vim25.mo.HostStorageSystem;
import com.vmware.vim25.mo.HostSystem;
import com.vmware.vim25.mo.InventoryNavigator;
import com.vmware.vim25.mo.ServiceInstance;
import com.vmware.vim25.mo.Task;

public class MarkAsSSD {

    public static void main(String[] args) throws Exception {
        if (args.length != 4) {
            System.out.println("Usage: MarkAsSSD url username password hostip/fqdn");
            System.exit(-1);
        }

        URL url = null;
        try {
            url = new URL(args[0]);
        } catch (MalformedURLException urlE) {
            System.out.println("The URL provided is NOT valid. Please check it.");
            System.exit(-1);
        }
        String username = args[1]; // vCenter username
        String password = args[2]; // vCenter password
        String hostname = args[3]; // IP of the host on which the local HDD is available
        String lunDisplayName = "Local VMware Disk (mpx.vmhba1:C0:T2:L0)"; // Display name of the LUN as seen from the VI Client or NGC

        // Initialize the system, set up web services
        ServiceInstance si = new ServiceInstance(url, username, password, true);

        Folder rootFolder = si.getRootFolder();
        HostSystem host = (HostSystem) new InventoryNavigator(rootFolder)
                .searchManagedEntity("HostSystem", hostname);

        if (host == null) {
            System.out.println("Host not found on vCenter");
            si.getServerConnection().logout();
            return;
        }

        HostStorageSystem hhostsystem = host.getHostStorageSystem();
        ScsiLun[] scsilun = hhostsystem.getStorageDeviceInfo().getScsiLun();
        boolean flag = false;
        for (ScsiLun lun : scsilun) {
            System.out.println("Display Name: " + lun.getDisplayName());
            if (lun.getDisplayName().equals(lunDisplayName)) {
                // Wait for the mark task to complete so the message below is accurate.
                // To revert, use markAsNonSsd_Task() instead.
                String result = hhostsystem.markAsSsd_Task(lun.getUuid()).waitForTask();
                flag = Task.SUCCESS.equals(result);
                break;
            }
        }
        if (flag) {
            System.out.println("LUN is marked as SSD successfully");
        } else {
            System.out.println("LUN is NOT marked as SSD; please check whether the local LUN is in use");
        }

        si.getServerConnection().logout();
    }
}

Notes:
- For the sake of simplicity, I have hard-coded the LUN display name; you can change it based on your environment.
- You can scale the same code to mark all the local HDDs as SSDs on all the hosts across one or more datacenters.
- I would also like to draw your attention to other new, handy vSphere 6.0 APIs such as "MarkAsNonSsd", "MarkAsLocal", and "MarkAsNonLocal". The same sample can be leveraged to automate these related APIs; a small sketch follows below.
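For instance, assuming the same setup as the sample above (with hhostsystem and lun in scope), the related calls look like this; which one you use depends on what you want to emulate:

// Related vSphere 6.0 marking APIs on HostStorageSystem (same setup as the sample above)
hhostsystem.markAsSsd_Task(lun.getUuid());      // emulate the local HDD as an SSD
hhostsystem.markAsNonSsd_Task(lun.getUuid());   // revert: mark the SSD back as an HDD
hhostsystem.markAsLocal_Task(lun.getUuid());    // mark the LUN as a local device
hhostsystem.markAsNonLocal_Task(lun.getUuid()); // mark the LUN as a remote (non-local) device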

If you have still not set up your YAVI Java Eclipse environment: Getting started tutorial

Important tutorials to start with: Part I & Part II