Deep dive: New REST APIs to manage VM Service and Namespace Self-Service

A couple of weeks back, vCenter Server 7.0 U2a (a monthly patch focused on vSphere with Tanzu) was released with two super cool features. In this post, I would like to take you through the new REST APIs introduced as part of these features, along with key notes on how these features and APIs behave.

Virtual Machine Service

In my view, this is one of the key features (like vMotion) in the history of vSphere. In brief, it enables managing virtual machines using the Kubernetes control plane. With this feature, users can declare the desired state of VMs, virtual networks, and virtual storage devices. How cool is it that you can create VMs (with customization) using a simple kubectl command like “kubectl apply -f vm.yaml”! To learn more about it, I highly recommend reading the official deep dive blog and associated video.
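To give a feel for that flow, here is a minimal, hypothetical sketch of creating a VM programmatically with the official Python kubernetes client (the equivalent of the kubectl apply above). The namespace, VM class, image, and storage class names are placeholders, and the spec fields follow the open-source vm-operator v1alpha1 CRD, so do verify them against your Supervisor cluster.

from kubernetes import client, config

# Load the Supervisor cluster kubeconfig (e.g. obtained via kubectl vsphere login).
config.load_kube_config()

# Desired state of the VM, mirroring what vm.yaml would declare.
vm = {
    "apiVersion": "vmoperator.vmware.com/v1alpha1",
    "kind": "VirtualMachine",
    "metadata": {"name": "demo-vm", "namespace": "demo-namespace"},
    "spec": {
        "className": "best-effort-xsmall",   # a VM class associated with the namespace
        "imageName": "centos-stream-8",      # placeholder image from the content library
        "storageClass": "demo-storage-policy",
        "powerState": "poweredOn",
    },
}

# Equivalent of "kubectl apply -f vm.yaml" for a brand new VM.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="vmoperator.vmware.com",
    version="v1alpha1",
    namespace="demo-namespace",
    plural="virtualmachines",
    body=vm,
)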

Namespace Self-Service

Prior to this release, the Supervisor namespace life cycle was completely managed by the vSphere admin, which limited the flexibility available to k8s users. With this feature, a k8s user can manage the life cycle (create/delete) of their own Supervisor namespaces, while the resource constraints (CPU, memory, storage policy) are still controlled by the vSphere admin. A k8s user can simply run “kubectl create ns” to create their own namespace in order to deploy k8s objects such as vSphere Pods, guest clusters (aka Tanzu Kubernetes Clusters/TKCs), and now, with this release, VMs as well. Note that if this feature is not activated in your environment, the vSphere admin can continue to manage Supervisor namespaces as before. To learn more about this feature, please go through this quick and cool blog post.

New vSphere with Tanzu APIs in action

If you are completely new to vSphere with Tanzu REST APIs (specifically the Supervisor cluster APIs exposed by the wcpsvc service running on vCenter Server), I highly recommend first reading the “Introduction to Supervisor cluster REST APIs” post.
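All the REST calls in this post need an authenticated session. As a minimal sketch (assuming Python with the requests library, a placeholder vCenter FQDN, and lab credentials), you can obtain a session token from the standard /api/session endpoint and reuse it in the vmware-api-session-id header:

import requests

VCENTER = "vcenter.example.com"  # placeholder vCenter FQDN

# POST /api/session with basic auth returns a session token string.
resp = requests.post(
    f"https://{VCENTER}/api/session",
    auth=("administrator@vsphere.local", "password"),  # placeholder credentials
    verify=False,  # lab only: skips TLS verification for self-signed certs
)
resp.raise_for_status()
session_headers = {"vmware-api-session-id": resp.json()}

The later snippets in this post assume VCENTER and session_headers from this sketch.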

VM-class

A VM class is nothing but a t-shirt size available for deploying vSphere Pods/TKCs/VMs, either under a namespace created by a k8s user or under a traditional Supervisor namespace (one created from the H5C UI or REST API) created by the vSphere admin. The k8s user needs to pass these classes in the TKC or VM creation YAML manifest. Below is how we create a custom VM class.

POST: https://{api_host}/api/vcenter/namespace-management/virtual-machine-classes
{
  "cpu_count": 2,
  "cpu_reservation": 0,
  "description": "my vm class",
  "id": "custom-class",
  "memory_MB": 1024,
  "memory_reservation": 0
}

Here is the super cool REST API documentation for managing VM classes.
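If you prefer scripting it, here is a small sketch of the same call using the session from the earlier snippet; the class id and sizing values simply mirror the JSON body above.

# Create a custom VM class (mirrors the JSON body above).
vm_class = {
    "cpu_count": 2,
    "cpu_reservation": 0,
    "description": "my vm class",
    "id": "custom-class",
    "memory_MB": 1024,
    "memory_reservation": 0,
}
resp = requests.post(
    f"https://{VCENTER}/api/vcenter/namespace-management/virtual-machine-classes",
    json=vm_class,
    headers=session_headers,
    verify=False,
)
resp.raise_for_status()  # raises if vCenter returns a non-2xx status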

Associating VM-class and Content library with a namespace

This is one of the super important operations that must be performed by the vSphere admin: every Supervisor namespace, whether created by a k8s user through namespace self-service or by the vSphere admin, must have a VM class and a Content Library associated with it.
To configure this association, the existing namespace APIs were extended. Let’s see how to update an existing namespace with the VM class we created above and a couple of content libraries that I had created already.

PATCH: https://{api_host}/api/vcenter/namespaces/instances/{namespace}
{
  "vm_service_spec": {
    "vm_classes": [
      "best-effort-xsmall",
      "custom-class"
    ],
    "content_libraries": [
      "62537201-8194-4d98-aea3-47b95f17077b",
      "14268a2c-847b-4f84-9a1a-a97b524e263b"
    ]
  }
}


“vm_classes”: These are simply the names (ids) of the VM classes. I have passed one custom VM class and one default VM class.
“namespace”: The name of the existing Supervisor namespace to be updated (the path parameter above).
“content_libraries”: These are content library ids, which can be fetched using the content library GET API. I have passed two content libraries: one for VM Service OVA images and another for TKC/guest cluster OVA images. See the sketch below for fetching the ids and applying the PATCH.

Here is the super cool REST API documentation for managing Supervisor namespaces.
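Here is a sketch of the full flow in Python, continuing with the session from earlier: list the content library ids via the standard GET /api/content/library endpoint, then apply the vm_service_spec PATCH. The namespace name is a placeholder, and picking the first two library ids is just for illustration; select the ids of your actual VM Service and TKC libraries.

# List content library ids (GET /api/content/library returns a list of ids).
resp = requests.get(
    f"https://{VCENTER}/api/content/library",
    headers=session_headers,
    verify=False,
)
resp.raise_for_status()
library_ids = resp.json()

# Associate VM classes and content libraries with an existing namespace.
namespace = "demo-namespace"  # placeholder Supervisor namespace name
spec = {
    "vm_service_spec": {
        "vm_classes": ["best-effort-xsmall", "custom-class"],
        "content_libraries": library_ids[:2],  # illustration only: pick your actual ids
    }
}
resp = requests.patch(
    f"https://{VCENTER}/api/vcenter/namespaces/instances/{namespace}",
    json=spec,
    headers=session_headers,
    verify=False,
)
resp.raise_for_status()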

Key notes on VM-class/CL association

  1. VM class & Content Library association with new Supervisor namespaces is required both for TKC/guest clusters and for VMs created through VM Service. However, the TKC/guest cluster content library can also be configured at the Supervisor cluster level (as it has been from the beginning).
  2. Supervisor namespaces created prior to this release will have all the default VM classes configured automatically, so no new or existing TKC/guest clusters are impacted. This automatic association happens as part of the first k8s version upgrade at the Supervisor cluster level.
  3. In future releases, I personally expect associating VM classes and content libraries to get further simplified, so that the vSphere admin is not forced to monitor newly created namespaces and associate these mandatory constructs each time.

Namespace self-service workflows

As shown in this post, we need to activate this feature by setting controls such as CPU, memory, storage policy, and users/groups. Let’s take a look at how to configure this using the API. This is the API doc for the same, i.e. create a self-service template and then activate it.

POST: https://{api_host}/api/vcenter/namespaces/namespace-self-service/{cluster}?action=activateWithTemplate
{
  "permissions": [
    {
      "domain": "vsphere.local",
      "subject": "devops1",
      "subject_type": "USER"
    },
    {
      "domain": "vsphere.local",
      "subject": "devops-group",
      "subject_type": "GROUP"
    }
  ],
  "resource_spec": {
    "cpu_limit": 10000,
    "memory_limit": 20480,
    "storage_request_limit": 204800
  },
  "storage_specs": [
    {
      "limit": 104800,
      "policy": "7327e17f-15fb-47a6-a82e-e7ec54839b59"
    }
  ],
  "template": "my-first-template"
}

“permissions”: Here we configure the SSO users and groups. A user/group can come from vsphere.local or any custom identity source. Note that the administrators group is configured by default, so we do not need to explicitly configure any administrator.
“resource_spec”: These are the resource limits for the self-service namespaces created by the configured k8s users.
“storage_specs”: The storage policies from which self-service namespaces will get their storage.
“template”: The name of the template. Note that it must not contain any spaces.

Here is the super cool REST API documentation for managing self-service namespaces.
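And a sketch of the activation call itself, again assuming the session from earlier; the cluster identifier (a managed object id such as domain-c8), the user/group names, and the storage policy id are placeholders for your environment.

# Activate namespace self-service with a template (note the action query param).
cluster = "domain-c8"  # placeholder Supervisor cluster managed object id
template_spec = {
    "permissions": [
        {"domain": "vsphere.local", "subject": "devops1", "subject_type": "USER"},
        {"domain": "vsphere.local", "subject": "devops-group", "subject_type": "GROUP"},
    ],
    "resource_spec": {
        "cpu_limit": 10000,
        "memory_limit": 20480,
        "storage_request_limit": 204800,
    },
    "storage_specs": [
        {"limit": 104800, "policy": "7327e17f-15fb-47a6-a82e-e7ec54839b59"}
    ],
    "template": "my-first-template",  # must not contain spaces
}
resp = requests.post(
    f"https://{VCENTER}/api/vcenter/namespaces/namespace-self-service/{cluster}"
    "?action=activateWithTemplate",
    json=template_spec,
    headers=session_headers,
    verify=False,
)
resp.raise_for_status()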

Introduction of the “OWNER” role

  1. A new Supervisor namespace role, “OWNER”, has been introduced; earlier we had only the “EDIT” and “VIEW” roles. Note that this role is specifically introduced as part of the Namespace Self-Service feature. Users are not expected to use it directly from the H5C UI, and even if they do, it behaves the same as the “EDIT” role from a user standpoint.
  2. When a k8s user creates a Supervisor namespace from kubectl after activating the Namespace Self-Service feature, every such namespace gets this role by default. This is what enables creating/deleting these namespaces from kubectl itself.

Key notes on namespace self service APIs

  1. Currently the H5C UI supports only one storage policy, but using the API you can configure multiple storage policies; namespaces then automatically get multiple storage policies configured, even though the self-service template might show only one policy.
  2. If you look closely, there is a storage “limit” param under “resource_spec” as well as under “storage_specs”. The limit in “storage_specs” applies to an individual storage policy, while the limit in “resource_spec” is the overall storage limit for the self-service namespaces created by k8s users.
  3. Another important behavior: the above API can be used for two operations, first to initially create the template and activate it, and second to update the same template (the only exception I see is that the template name cannot be changed once created). Usually an update operation is done via a PATCH API, but this API is an exception, as both operations go through POST.
  4. Note that there are a few more separate APIs for managing self-service namespace templates; there you can update the template with a PATCH API.
  5. Currently only one template per Supervisor cluster is supported, which could be the reason the template name need not (and cannot) be changed once created. The UI also does not provide an option for setting the name; if you activate this feature for the first time from the H5C UI, the template name will be “default”. In future releases, we can expect support for multiple templates.
  6. One more important factor: since only one template is currently supported, once you create the template for the first time there is no way to delete it; you can only update its configuration or simply deactivate the feature.
  7. How do we know whether a given namespace was created by a k8s user or by the vSphere admin? A GET call on the namespace returns a Boolean property, self_service_namespace (see the sketch after this list).
  8. When the Supervisor cluster is not in a running state, users are not expected to activate/deactivate or update the template. The cluster may not be in a running state while a Supervisor cluster upgrade is in progress, or when the cluster is unhealthy due to some issue. Even if you do trigger these operations, they will keep waiting for the cluster to get into a running state and then proceed as expected.
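For note 7 above, here is a tiny sketch (the namespace name is a placeholder) that reads the Boolean back:

# Check whether a namespace was created through self-service.
namespace = "demo-namespace"  # placeholder Supervisor namespace name
resp = requests.get(
    f"https://{VCENTER}/api/vcenter/namespaces/instances/{namespace}",
    headers=session_headers,
    verify=False,
)
resp.raise_for_status()
print(resp.json().get("self_service_namespace"))  # True if created by a k8s user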

Automating the above operations using SDKs

  1. It is important to note that 7.0 U2a is probably the first vSphere patch release that introduces new APIs or modifications to existing APIs (specifically vSphere with Tanzu REST APIs). Usually only major or update releases had such changes.
  2. In order to write automation around the above workflows, you must upgrade your SDK to the latest available on GitHub.
  3. You can simply refer to my post on automating Supervisor cluster operations through the Java SDK. I highly encourage you to contribute more samples around these features. The Java SDK documentation for these operations is here: VM-class, Self-service namespace & templates.
  4. Apart from Java, there is an official Python SDK as well.

VM operator

The VM Service (and even the Namespace Self-Service) we explored above is driven specifically by the wcpsvc service running on vCenter Server. There is another key VM Service component on the Kubernetes side (running as part of the Supervisor cluster) as well, i.e. the VM operator. The beauty is that this critical component is completely open source. How cool is that!

Further learning

You can learn more about vSphere with Tanzu & its APIs here.
Detailed posts on VM Service by Frank and Cormac here and here.
A cool post by William here on how VM Service capabilities can be used for a use-case like Nested ESXi.


If you have any query or comment, please feel free to reach out to me on Twitter.