If you enjoy podcasts or have a long commute and don’t mind listening to people talk about PowerShell, then I can highly recommend the PowerScripting Podcast. They share great PowerShell information every week, and if you have not listened before, you already have 219 episodes to catch up on!
Recently I had the pleasure of being interviewed on the PowerScripting Podcast by Hal and Jonathan. We talked about what is new with PowerCLI, what is cool in the world of PowerShell, and also what I would do on a trip to Mars (I know – random!).
For more information and ways to download the podcast visit their site here: http://powerscripting.wordpress.com/2013/03/13/episode-219-alan-renouf-from-vmware-on-powercli/
For more information on what’s new in PowerCLI 5.1 R2 make sure you check out this blog post: http://blogs.vmware.com/vipowershell/2013/02/powercli-5-1-release-2-now-available.html
As more organizations leverage software-defined datacenter technology to increase resource utilization and automate IT processes, what does this mean for how IT can organize itself to optimize results?
There are a variety of ways IT can transform itself to increase agility, reduce costs, and improve quality of service.
Cloud Computing: 4 Ways To Overcome IT Resistance (Kyle Falkenhagen, ReadWrite)
Enterprise cloud adoption is a transformative shift – these organizational change strategies can help IT departments fight fear as they move to cloud computing.
Secrets of a DevOps Ninja: Four Techniques to Overcome Deployment Roadblocks (Jonathan Thorpe, Serena Software)
Process consistency and automation help development and operations work closely together to get software that delivers value to customers faster.
On IT’s Influence on Technology Buying Decisions. Role #1: Get Out of the Way (Ben Kepes, Diversity Limited)
IT needs to help set parameters, then get out of the way and let the business and users drive the process.
The Orchestrated Cloud (Venyu)
The Software Defined Admin – orchestrates provisioning, scaling, incident response and disaster recovery.
When all resources in the datacenter can be manipulated via API (Software-defined data center), the traditional role of the IT admin and how admins are grouped in the IT organization will change.
This means that IT has a great opportunity to reinvent itself as a strategic business enabler. The question is whether you’re ready to rise to the occasion.
Follow us on Twitter at @VMwareCloudOps for future updates, and join the conversation using the #CloudOps and #SDDC hashtags.
Watch as Undeleeb Din, VMware Enablement Lead for the VMware Certified Instructor Program, details the VMware Cloud Certification programs.
By: Chris Colotti
So all week I have been posting tidbits about the vCloud Director hybrid cloud I have been building. What was my purpose for doing so? I did it to make a couple of points, of course. The following is the final outcome, formed into a bit of a case study that you can digest for a while. The main reason I did this is that I feel we are still struggling with how to CONSUME the hybrid cloud model. We’ve spent a lot of time architecting vCloud Director implementations in both the public and private cloud space, so I decided to look at this from the consumer’s point of view: the people who would come to those of you that are vCloud Director providers and need help understanding HOW to use these public clouds.
Setting the Stage For vCloud Director Hybrid Clouds
So who are these consumers and users I am trying to help? It could be any one of us, but for the purpose of this case study I want to take two specific examples that fit many possible situations out there.
- A new startup with NO Infrastructure
- An enterprise that has reached the limit of their current Datacenter
In both cases the need is simple: they both need to find new infrastructure without having to build it themselves. In both cases I am focusing on them not building more themselves, but rather leveraging the vCloud providers out there. They could consume in either a public cloud fashion or a hosted private cloud fashion. For the purposes of this study, let’s assume they have decided to go to public cloud providers. I will play the role of the consumer as we continue forward, and I will take the second scenario above: I have a datacenter that has reached its limits of compute, memory, and storage.
Choosing your Providers
To be clear, I am not suggesting where you go, but I happened to already have resources at two vCloud public providers running vCloud Director 5.1, so I decided to split my Infrastructure as a Service (IaaS) between the two for some level of redundancy. I also personally think that leveraging two different providers makes you a smart IT person. For my scenario, as we know, I have been using:
Obviously you can choose whomever you want, but in this case we are focusing on providers that use vCloud Director 5.1 for its flexibility and simplicity in building your new organization. Once I have decided on the providers I am going to use, the next steps are fairly simple and frankly no different from what you would do if you were building a physical datacenter, except now we are building a Software-Defined Datacenter (SDDC).
Build your SDDC – Start with the Networking
Like any new datacenter, you need to get the basic things configured. As I have shown in previous posts, vCloud Director 5.1 provides a lot of power to the organization administrator….YOU. The first order of business, in my mind, is the networking. You want to design this separately for each site, as you would for a new physical site. Most of your traffic will leverage the Edge Gateway as well.
- Decide on and configure your routed networks
- Decide on and configure any isolated networks
- Configure your SNAT rules
- Configure basic outbound internet access firewall rules
- Determine DHCP settings and Static IP Rules if any
- Be sure to obtain the public IPs you need from your provider
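One item in this checklist is easy to automate: verifying that the subnets you pick for the different sites do not overlap. Here is a minimal sketch in Python (illustrative only; the site names and CIDR ranges are made up):

```python
import ipaddress

def find_overlaps(site_subnets):
    """Return (site, site, subnet, subnet) tuples whose subnets overlap.

    site_subnets: dict mapping site name -> list of CIDR strings.
    """
    overlaps = []
    sites = list(site_subnets.items())
    for i, (name_a, nets_a) in enumerate(sites):
        for name_b, nets_b in sites[i + 1:]:
            for a in nets_a:
                for b in nets_b:
                    if ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b)):
                        overlaps.append((name_a, name_b, a, b))
    return overlaps

# Example site plan (addresses are illustrative only)
plan = {
    "datacenter": ["192.168.1.0/24"],
    "cloud-a":    ["192.168.10.0/24"],
    "cloud-b":    ["192.168.10.0/24"],   # mistake: collides with cloud-a
}
print(find_overlaps(plan))
```

An empty result means the site-to-site VPN routing described below has a chance of working; any tuple returned points at a subnet you need to renumber.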
Once you have figured this out in your design of the two remote datacenters, you can move forward. It goes without saying that you don’t want overlapping network subnets between sites or VPN will not work. At this point you will also want to establish VPN connectivity between the sites and write the basic firewall rules to pass traffic as you wish. This will be important as you begin to stand up your infrastructure as a service.
Build your SDDC – Setup vCloud Connector, Import or Build New Templates
Here you can basically download and import the vCloud Connector Nodes into your two public clouds. However, some providers are now building multi-tenant nodes based on vCloud Connector 2.0 that you can simply leverage. If this is the case, you only need to build your vCloud Connector Server hosted in one of your clouds, though you may want one in both.
Once you have this, you can choose to move templates you already have in your current datacenter, or build fresh ones. You can upload ISO images and build new templates if you want to be sure things are set up fresh. Either way, you have the option, so proceed as you wish. At this point we have networking, templates, and site-to-site VPN connectivity established. Now we just need to build out the infrastructure we need to get started.
Build your SDDC – Active Directory
Like any new datacenter, the first thing we probably need is localized Active Directory. Assuming you have Active Directory servers in your first datacenter, you will want to make sure you set up new Sites and Services with the correct IP ranges. Now, I am no Active Directory expert; I am just trying to cover the basics. Below you can see that in my scenario I have set up the three sites and installed at least one Active Directory server in each of the new sites. This will become the local authentication and DNS server for any new Windows infrastructure in that site.
Once you have pre-configured Active Directory Sites and Services on your physical datacenter controllers, you can install from templates and promote the controllers in the other sites. At this point you are ready to continue installing application servers or other IaaS you want to add to your enterprise using your new vCloud Director hybrid setup. These can be things like public DNS, public SMTP servers, maybe even desktops at some point, although that’s neither tested nor supported on vCloud Director.
Some Final Thoughts And Diagram
Although this has been a basic study of how you can leverage vCloud Director hybrid clouds to expand your enterprise, it should give you a foundation to start thinking about. The diagram below is a much more expanded view of the possibilities for hosting many services in your new public vCloud Director hybrid cloud. Really, the point is that this is just like building a new physical datacenter, only in most cases it’s much faster. Of course, as network virtualization and storage virtualization move along, this will only get better. I will be presenting this on next week’s vBrown Bag as well, so we can open up the discussion.
Chris is a Consulting Architect with the VMware vCloud Delivery Services team with over 10 years of experience working with IT hardware and software solutions. He holds a Bachelor of Science Degree in Information Systems from the Daniel Webster College. Prior to VMware he served a Fortune 1000 company in southern NH as a Systems Architect/Administrator, architecting VMware solutions to support new application deployments. At VMware, in the roles of a Consultant and now Consulting Architect, Chris has guided partners as well as customers in establishing a VMware practice and consulted on multiple customer projects ranging from datacenter migrations to long-term residency architecture support. Currently, Chris is working on the newest VMware vCloud solutions and architectures for enterprise-wide private cloud deployments.
A common question that I see asked on the VMTN community forums is the ability to programmatically identify which guest OSes (Operating Systems) are supported in vSphere using the vSphere APIs. This request usually comes in handy for folks looking to build their own custom provisioning solution or portal to provide to their end users.
Similar to the way the vSphere Web Client / C# Client provides a list of supported guestOSes and recommended configurations and maximums, you can also generate this list dynamically using the vSphere API.
This functionality is exposed in what’s called the Environment Browser, which provides the list of supported virtual machine configurations, device targets, and capabilities for a given ESXi host. Within the Environment Browser, there is a method called QueryConfigOption() that accepts an ESXi host as input to determine the capabilities and returns a VirtualMachineConfigOption object. Among this object’s various properties, the one we are interested in is guestOSDescriptor, which provides a list of all the supported guest OSes, including the supported virtual hardware and configuration maximums for each OS type.
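As a rough illustration of the same API flow (this is not the Perl script discussed below), here is a sketch using the open-source pyVmomi Python bindings. The vCenter hostname, credentials, and cluster name would all be your own:

```python
# Illustrative sketch only -- assumes the third-party pyVmomi package and a
# reachable vCenter; all connection details are placeholders.

def summarize_descriptors(descriptors):
    """Reduce guestOSDescriptor entries to (id, full name) pairs."""
    return [(d.id, d.fullName) for d in descriptors]

def print_supported_guest_oses(vc_host, user, pwd, cluster_name):
    """Connect to vCenter and list supported guest OSes for a cluster."""
    from pyVim.connect import SmartConnect, Disconnect  # third-party import
    si = SmartConnect(host=vc_host, user=user, pwd=pwd)
    try:
        content = si.RetrieveContent()
        for dc in content.rootFolder.childEntity:
            for cluster in dc.hostFolder.childEntity:
                if cluster.name != cluster_name:
                    continue
                # QueryConfigOption on the cluster's environment browser
                # returns a VirtualMachineConfigOption whose guestOSDescriptor
                # property lists every supported guest OS type
                cfg = cluster.environmentBrowser.QueryConfigOption(None, None)
                for gid, name in summarize_descriptors(cfg.guestOSDescriptor):
                    print(gid, name)
                return
        raise ValueError("cluster not found: " + cluster_name)
    finally:
        Disconnect(si)
```

Each descriptor also carries fields such as the supported virtual hardware, so the summary function above is just the smallest useful projection.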
Even though it is possible to retrieve the list of guest OSes this way, it is neither exhaustive nor definitive, as it does not cover minor OS releases or OSes that may have been added recently to the support matrix. For the official list, you should still refer to the VMware Compatibility Guide (VCG) website for the complete list of supported guest OSes for a particular version of vSphere.
Disclaimer: This script is provided for informational/educational purposes only. It should be thoroughly tested before use in a production environment.
To demonstrate the QueryConfigOption method, I have created a simple vSphere SDK for Perl script called getSupportedGuestOSes.pl which lists all supported guestOSes given a vSphere Cluster as input.
Here is the syntax for the script:
./getSupportedGuestOSes.pl --server [VCENTER] --username [USERNAME] --cluster [CLUSTER-NAME]
Here is a screenshot of the output from the script:
The output above contains both the guestOS identifier and the guestOS full name, which are just a subset of the various properties that can be retrieved for each guestOS. By leveraging this API, you can dynamically generate the list of supported guestOSes during provisioning, and you no longer have to hard-code a static list of guestOSes and their identifiers.
Get notification of new blog postings and more by following VMware Automation on Twitter: @VMWAutomation
Happy Friday to all of our VMware KBTV fans!
We have a new video for your viewing pleasure today and this video will be of specific interest to any of our vSphere vCenter Server Appliance users.
This video discusses and demonstrates upgrading the VMware vCenter Server Appliance from version 5.0.x to 5.1. This tutorial is based on VMware Knowledge Base article Upgrading vCenter Server Appliance from 5.0.x to 5.1 (2033990). All of the steps performed within this video demonstration are also documented in that KB.
The upgrade process is relatively straightforward and easy to follow.
Note: For best viewing results ensure that the 720p HD quality setting is selected and view in full screen mode.
VMware for Small-Medium Business Blog: VMware’s Partner Exchange Conference – Highlights for Small and Mid-Size Businesses
Weeks after the VMware Partner Exchange Conference, which drew more than 4,000 attendees to Las Vegas, partners are continuing the conversation and collaboration: learning new techniques for identifying customer needs, discovering best practices for acquiring new customers, and accelerating their business with go-to-market selling strategies.
One of our partners, MicroAge, told us why PEX is important to them and their business.
- VMware’s virtualization and cloud computing products, solutions, and services
From the desktop to the datacenter, VMware offers the most robust and reliable foundation, and has been adopted by the world’s leading organizations, across all industries and in companies of all sizes. This also attracts Partners/Resellers, like MicroAge, to work with us and share the knowledge and offerings with you the customer. MicroAge’s COO, Mark McKeever, was focused on learning more about desktop virtualization and Horizon Suite: “Offering these products gives MicroAge a great market opportunity to serve SMBs.”
- Expansive Partner Network to help implement solutions
The VMware partner network has more than 55,000 partners, a huge advantage for mid-market and small businesses to leverage. MicroAge finds that access to key people at VMware is the best benefit of PEX. Another highlight for MicroAge was meeting Kristen Carnes, Director of Channel Sales with Nimble. The meeting jump-started our relationship, and Nimble was at our offices right after PEX to meet with our sales staff.
- Industry-recognized certification training
MicroAge finds VMware to be such a critical partner that their whole sales staff is VSP certified, with many also earning their VTSP. Partners get training and certifications for their company’s specific needs, which brings expertise and builds confidence for VMware customers, giving reassurance of a top-standard design and deployment.
VMware and its Partner Network are critical in helping mid-market and small businesses build IT infrastructures and bringing success for these businesses.
Let us know – whether a Partner or small to mid-size business – what are your top reasons for partnering with VMware?
By Peter Brown, Senior R&D Manager, VMware, London, UK
With the EUC Solutions Management and Technical Marketing team
What Is USB Device Redirection?
We are all used to USB devices on laptop or desktop machines. If you are working in a VDI environment such as VMware Horizon View*, you may want to use your USB devices in that virtualized desktop too. USB device redirection is functionality in Horizon View that allows the USB device to be connected to the virtualized desktop as if it had been physically plugged into it.
USB Redirection Changes in VMware View 5.1
The USB device is redirected from the physical device to the virtual desktop using network redirection of the USB request block (URB). The USB device driver needs to be installed on the VDI desktop (but it does not need to be installed on the client machine). Recent enhancements in VMware View 5.1 have greatly improved device compatibility as well as support for USB redirection on Windows, Mac, and Linux hosts.
At a high level, the changes between VMware View 5.0 and 5.1 include:
- Integration with other VMware components (allowing devices to be used between VMware applications, such as between Horizon View, VMware Workstation, and VMware Fusion).
- Broader device support, adding devices such as SanDisk Cruzer and IronKey.
- A new filtering mechanism on the client and agent, which allows specific devices to be blocked from redirection. These filtering rules can be applied locally on a client or via administrative policy using GPOs.
- A splitting mechanism allowing complex composite USB devices to be partially forwarded.
- Devices that reset themselves during operation are automatically re-forwarded (notably, Blackberry or iPhone system updates, SanDisk Cruzer, and IronKey).
- The driver for a device does not need to be installed on the client machine.
… and much, much more!
For details about USB device redirection in Horizon View, read on.
Horizon View Clients to Support New USB Redirection Features
The latest Horizon View clients can be downloaded from here. The Horizon View Windows client v5.1 and later supports the new USB redirection functionality. This support was added to the Linux and Arm clients in v1.5, and more recently we added it to the Mac OSX client in v1.7.
USB Device Support in Virtual Environments
Horizon View does not implement anything to explicitly block USB devices from working. However, some devices are not designed to work in a virtualized environment. For example:
- Webcams are not officially supported in Horizon View. Some may work, but it is not recommended to use them at any scale. Webcams typically send uncompressed images, which require a huge amount of bandwidth. Therefore, redirected webcams are unsuitable for large-scale use. Testing in our lab shows that some webcams running at 640×480 at 15 fps can consume 62Mbps!
- Some third-party device drivers contain internal timeouts. If the network latency causes messages to exceed these timeouts, then the device may not work.
- Some security USB devices explicitly check if they are plugged into a local machine and are not being redirected. These devices will therefore present problems for redirection.
In general, most devices redirect correctly, although, depending on latency, the performance may be slower than if they were connected locally.
USB Device Filtering
USB device filtering allows specific devices, device families (e.g., storage devices), or vendor product models to be restricted from being forwarded to the virtualized desktop. These rules can be applied locally at the client, or at the virtualized desktop. Administrative group policy (GPOs) can be applied, too, allowing company-wide configurations to be applied across all or some desktops.
USB device filtering is often used by companies to disable the use of mass storage devices on virtualized desktops, or perhaps to block a specific device which a user never wants to be forwarded (e.g., USB-to-Ethernet adapter).
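Conceptually, a filter of this kind is an ordered rule list matched against a device’s vendor and product IDs. The sketch below is illustrative Python, not Horizon View’s actual filter syntax, and the IDs are made up:

```python
def device_allowed(device, rules):
    """Evaluate ordered filter rules against a USB device.

    device: dict with 'vid' and 'pid' (vendor/product IDs as hex strings).
    rules:  ordered list of (action, vid, pid) where vid/pid may be '*'.
    The first matching rule wins; the default is allow.
    """
    for action, vid, pid in rules:
        if vid in ("*", device["vid"]) and pid in ("*", device["pid"]):
            return action == "allow"
    return True

# Block everything from a hypothetical vendor 0781 except one specific model
rules = [
    ("allow", "0781", "5530"),
    ("block", "0781", "*"),
]
print(device_allowed({"vid": "0781", "pid": "5530"}, rules))  # True
print(device_allowed({"vid": "0781", "pid": "5567"}, rules))  # False
```

The "allow one model, block the rest of the vendor" pattern above mirrors the complex-rule example in the next paragraph.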
Complex filter rules can be constructed – for example, to disallow all products from a specific vendor, except for a specific device model. When used in conjunction with USB device splitting (see below), the configuration options can be very powerful. A previously posted engineering blog on this topic is Filtering and Splitting for USB Devices in VMware View 5.1.
USB Device Splitting
Some USB devices are composite devices. Many such devices exist; for example, a single physical device may contain a speaker, microphone, keypad, and mouse. In Horizon View 5.1 and later, it is possible to split this device such that some parts of the device (e.g., mouse) are left local to the client machine, and other parts are forwarded to the virtualized desktop. This can result in a much more effective user experience.
Check out the blog post What’s New with USB Redirection in VMware View 5.1? for more information and a practical USB-device splitting example.
Does It Matter If I’m Using an RDP or PCoIP Display Protocol?
No – VMware Horizon View USB redirection works independently of the display protocol.
USB1 / USB2 / USB3 Compatibility
USB redirection operates over a network. The throughput (performance) of forwarded devices will depend directly on your network latency. The higher the latency, the lower the throughput. USB1 and USB2 devices are supported in Horizon View, but with high network latency, it is likely that you will have slower performance with lower throughput than if the devices were used locally.
Super-speed USB3 devices are not currently supported in Horizon View. USB3 devices will, however, often work (in USB2 mode) when plugged into a USB2 port on the client machine. This method should always work when running Windows 8. However, we have found that on other operating systems, depending on the USB chipset on the client motherboard, these USB3 devices may not work properly in USB2 mode when redirected to the virtualized desktop.
USB Redirection Performance in a LAN Compared to a WAN
As mentioned above, the performance of the redirected USB device will vary greatly depending on the network latency and reliability. For example, a single USB storage device read-request requires three roundtrips between the client and virtualized desktop. A read of a complete file may need multiple USB read operations, and the larger the latency, the longer the roundtrips will take. An unreliable network link will cause retries, and the performance can be further reduced.
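A back-of-the-envelope model makes the latency cost concrete. Assuming, as described above, a fixed number of roundtrips per USB read, total time is roughly the per-read latency cost plus the raw transfer time. The read size and link numbers below are illustrative only:

```python
def transfer_time_s(file_bytes, read_bytes, rtts_per_read, rtt_ms, bandwidth_mbps):
    """Estimate USB-over-network read time for a file.

    Simplified model: each USB read of `read_bytes` costs `rtts_per_read`
    network roundtrips, plus the raw transfer time at the link bandwidth.
    """
    reads = -(-file_bytes // read_bytes)              # ceiling division
    latency_cost = reads * rtts_per_read * (rtt_ms / 1000.0)
    transfer_cost = file_bytes * 8 / (bandwidth_mbps * 1e6)
    return latency_cost + transfer_cost

# 10 MB file, hypothetical 64 KB USB reads, 3 roundtrips per read (as above)
lan = transfer_time_s(10 * 2**20, 64 * 2**10, 3, rtt_ms=1,  bandwidth_mbps=100)
wan = transfer_time_s(10 * 2**20, 64 * 2**10, 3, rtt_ms=80, bandwidth_mbps=100)
print(round(lan, 2), round(wan, 2))  # → 1.32 39.24
```

Even with identical bandwidth, the 80 ms link is roughly 30 times slower in this model: the roundtrips, not the bytes, dominate.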
For this reason, some devices do not work well over a high-latency network such as a WAN. Examples include USB DVD writers, which require a steady bit rate of data to allow the burn operation to complete correctly, or USB audio and video devices, which require low latency for the data to be useful.
It is possible to simulate WAN conditions with tools such as WANem. This simulation can be useful for testing specific device performance in a virtual desktop over high-latency or unreliable networks in advance of deploying the virtual desktops to end users.
USB Storage Device Performance
Due to the way that USB storage devices work, performance can be slow over a WAN. This is because before the USB device can appear in the Windows operating system, the file structure needs to be read from the device. The file structure can be very large (depending on how the device has been formatted) and can take significant time to read, so the device may take a long time to appear for use. There are some tricks that can help improve the performance – for example, formatting a USB device as NTFS rather than FAT helps to decrease the initial connection time. The KB article Redirecting a USB flash drive might take several minutes explains this trick in more detail.
Auto-Connecting USB Devices to a Virtual Desktop
Configuration options allow USB devices to be automatically forwarded to the virtualized desktop after they are connected to the client device. Alternatively, on Windows and Mac clients the menu allows manual selection of which devices are forwarded.
Is USB Data Encrypted?
Yes, from VMware View 5.0 onward. Redirected USB data is encoded in an SSL channel from the client right through to the desktop. USB redirection requires port 32111 to be open on your firewalls.
Is It Possible to Disable USB Redirection?
Some highly security-sensitive applications require that USB redirection be disabled to virtualized desktops. This can be achieved in one of several ways:
- Horizon View pool policy can be used to disable USB redirection for a specific pool. This can be configured from the VMware Horizon View Administrator UI:
User overrides can also be applied to enable or disable USB redirection on a per user basis in a specific pool.
- The ExcludeAllDevices configuration option can be applied on the agent or the client side to prevent any devices from being forwarded. (Note: This can be used in conjunction with an “AllowFilter” rule to permit only a specific device to work and to block all others.)
- During installation of the View Agent on the Horizon View desktop, you can de-select the USB redirection components. Without these components installed, it is absolutely not possible to do USB redirection!
Using USB devices to listen to audio from your virtualized desktop has always been possible. However, in VMware View 5.1 and earlier – depending on what you were “doing” in the desktop – redirection of USB audio devices could cause audio quality problems. Depending on the specific USB device and also on the way you plan to use that device, an enhancement in Horizon View 5.2 can improve the audio quality. This enhancement isn’t a fix-all solution, and this functionality is disabled by default. However, if you do experience low-quality audio for your device and application, then it might be worth experimenting with this new option.
For example, this enhancement has improved audio-out performance with the Olympus DR-2000 Speech Mike device.
To enable the new audio-out enhancement, you need to set a registry key in your Horizon View guest desktop. For best-quality audio, set the following registry key:
Windows XP: HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB\AudioOutDeviceFlags = 0x600
Win Vista/7/8: HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware VDM\USB\AudioOutDeviceFlags = 0x700
Wrap-Up
The enhancements for USB redirection in Horizon View 5.1 and 5.2 enable you to do just about anything you want. Give it a try, and join the conversation on the Horizon View USB Community Forum.
* We changed the name from VMware View to VMware Horizon View with the 5.2 release. We use the legacy name here for the 5.0 and 5.1 releases, but we use the new name when referring to 5.2 alone or when aggregated with prior releases.
vCloud Director has a nice feature that lets you organize vApps into catalogs. If, like me, you need to replicate these catalogs across organizations or even across different vCloud hosts, you may find your solution here.
I wrote a solution for this a long time ago but never had the time to release it publicly. It is now done, and here are the features.
Features
This package comes with a rich set of features.
Catalog comparison and replication
Replicate a catalog from a source catalog to a destination organization. If a catalog with the same name exists in the destination, it is modified to have the same content as the source catalog. If not, a new catalog with the same name is created. The replication is one way (source to destination).
The actions taken are:
- Delete catalog items in the destination that are not present in the source
- Update the name and description of previously copied catalog items that have been changed in the destination
- Copy new source catalog items to the destination (download once to vCO, upload to one or more vCDs)
Each of these actions is optional. The destination VDC is also optional: if provided, all copied elements will use the provided vDC as their destination; if not, the workflow finds destination VDCs with matching names, and raises an exception if none match.
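The comparison logic behind these three actions can be sketched in a few lines. This is illustrative Python, not the actual vCO workflow code, and the item names and versions are made up:

```python
def plan_sync(source, destination):
    """Compute one-way catalog sync actions (source -> destination).

    source/destination: dicts mapping catalog item name -> item content
    (here just a version string, for illustration).
    """
    delete = sorted(set(destination) - set(source))            # extra in dest
    copy   = sorted(set(source) - set(destination))            # missing in dest
    update = sorted(n for n in source
                    if n in destination and source[n] != destination[n])
    return {"delete": delete, "copy": copy, "update": update}

src = {"web-tpl": "v2", "db-tpl": "v1"}
dst = {"web-tpl": "v1", "old-tpl": "v1"}
print(plan_sync(src, dst))
# → {'delete': ['old-tpl'], 'copy': ['db-tpl'], 'update': ['web-tpl']}
```

Because each action set is computed independently, it is easy to see how any of the three can be switched off without affecting the others.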
When using the "Replicate catalog" workflow, a single vCO server downloads the vApp Templates from the vCloud Director hosting the source vApp Templates and uploads them to the other vCloud Director. This requires having the two vCloud hosts configured in vCO.
vCloud Director source -----------> vCO -----------> vCloud Director Destination
While the vCO server could be located on any network with access to both vCloud servers, it should be placed to optimize network speed. For example, if the upload speed between the source and destination is much slower than the download speed, the vCO server should be on the same network as the destination.
Several instances of the same workflow can run on the same vCO server with different destinations (there is a locking system to avoid uploading a file that is still being downloaded). Below is an example of a vCO server replicating a catalog from one cloud source to two cloud destinations.
Replication from filesystem
When using the "Export catalog to filesystem" workflow, a first vCO server downloads the vApp Templates from the vCloud Director hosting the source vApp Templates. XML files describing the catalog and the catalog items are also downloaded.
You can then use the "Replicate catalog from filesystem" workflow to compare the file based catalog to the one on the destination server.
The main idea behind this two-step replication is to optimize the replication process by using a third-party replication system with deduplication, compression, and safe transfers.
vCloud Director source -> vCO 1 -> Third party replication system source -----------> Third party replication system destination -> vCO 2 -> vCloud Director Destination
A first vCO server is located on the same network as the source vCloud Director to download the vApp Templates as quickly as possible to a filesystem hosted on a replication system (this requires having the filesystem accessible from vCO). The replication system copies the changed blocks to the destination replication system. Once the replication finishes, the destination vCO server uploads the new files to the vCloud Director on the same network.
The workflows make heavy use of asynchronous sub-workflows so that several operations run at the same time, and the VMDK files within a vApp Template are copied in parallel. The level of parallelism is configurable.
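The actual workflows are implemented in vCO, but the parallel-copy idea can be sketched with a thread pool. This is illustrative Python; the copy function here is a stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def copy_files_parallel(files, copy_one, parallelism=4):
    """Copy files concurrently with a configurable level of parallelism.

    copy_one: callable performing one file copy; results keep input order.
    """
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(copy_one, files))

# Stand-in copy function for illustration only
results = copy_files_parallel(
    ["disk-0.vmdk", "disk-1.vmdk", "disk-2.vmdk"],
    lambda f: "copied " + f,
    parallelism=2,
)
print(results)
```

Capping the pool size is what makes the parallelism "configurable": copies beyond the cap simply queue until a worker frees up.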
The "Replicate catalog" workflow ran.
It started one to many "Copy catalog item" workflows in parallel.
Each "Copy catalog item" workflow ran a single "Copy vApp Template multithreaded" workflow,
which in turn ran one to many "Copy file across vAppTemplates" workflows in parallel,
which in turn run:
- the "Download single file from vApp Template" workflow (not shown in the picture, since in this case the file had already been downloaded and the workflow optimizes the process by not downloading the vApp Template again)
- the "Upload single file to vApp Template" workflow, which runs several "Upload file chunk to vAppTemplate" workflows serially (parallel upload of the chunks is not supported; they need to arrive in the right order)
Sending a multi-GB file over HTTP can cause reliability issues: if the upload fails, you have to restart from scratch. To avoid this, the workflow uploads files in chunks of a configurable size. If a chunk fails to upload, the workflow retries that chunk, avoiding the need to restart the whole transfer.
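The chunked-upload-with-retry idea can be sketched as follows. This is illustrative Python, not the vCO workflow itself; the flaky sender simulates a transient network failure:

```python
def upload_in_chunks(data, chunk_size, send, max_retries=3):
    """Upload `data` in fixed-size chunks, retrying each chunk on failure.

    `send(offset, chunk)` performs one chunk upload and may raise.
    Chunks are sent serially and in order, as the upload API requires.
    """
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]
        for attempt in range(max_retries):
            try:
                send(offset, chunk)
                break                         # chunk uploaded, move on
            except IOError:
                if attempt == max_retries - 1:
                    raise                     # give up after the last retry
        offset += len(chunk)
    return offset  # total bytes uploaded

# Simulate a transient failure on the second chunk's first attempt
calls = {"n": 0}
def flaky_send(offset, chunk):
    calls["n"] += 1
    if calls["n"] == 2:
        raise IOError("network glitch")

sent = upload_in_chunks(b"x" * 10, chunk_size=4, send=flaky_send)
print(sent)  # → 10
```

Only the failed chunk is resent, which is exactly why the experiment described next shows one extra chunk upload rather than a full restart.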
Here I have run the same replication as before. At 14:35 I unplugged my vCO server’s network connection to make the file chunk being uploaded fail, then plugged it back in to simulate a temporary network issue. As you can see below, the upload was resumed. The file chunk whose upload started at 14:36:32 was the same as the previous one (this can be checked by clicking on the workflow run and checking the variables tab). This is why we had 11 chunks uploaded in the previous replication and 12 in this one.
The other advantage is that since vCO is an orchestration engine supporting checkpointing, if the replication workflow were stopped because of vCO server maintenance (e.g., a Windows update reboot), the workflow would resume from the last uploaded chunk (as long as the maintenance window is not longer than the timeout vCloud Director allows while waiting on the vApp Template update, which is one hour according to my observations).
vApp Template Metadata
The vApp Template metadata is copied as part of the catalog replication (connected or from the filesystem). If the metadata is copied from a vCloud host connection using a System admin organization and credentials to a host connection using a specific organization, then the hidden and read-only metadata will be created as read / write (this is the only option when connecting as an org admin).
There are a number of limitations, due either to lack of API support in vCloud Director or to lack of time to add more features:
- Media files are not replicated: there is no API yet to download a media item, so no replication is possible.
- The vApp Template is not reconnected to the networks of the target cloud. This requires deploying the template, deleting it from the catalog, matching the networks to the ones in the new environment, and capturing the modified vApp as a vApp Template. You can do this manually (a one-time operation) or create your own workflow to do it (something similar is already implemented in the mass VM import workflow in the communities). The updated vApp Template will not be considered a new one and will not be deleted / overwritten by the replication mechanism.
- Downloaded files are not cleaned up. A separate workflow (scheduled or run by policy) could do this.
- Updating a catalog item does not update its metadata. I will consider adding this if people ask for it.
Import the replication package using the vCO client.
The "Catalog Replication Settings" workflow saves all the configuration in a configuration element. Open either the vSphere Web Client or the vCO client and search for the "Catalog Replication Settings" workflow.
Catalog Replication Settings
Click on it, then right click / "Run a workflow".
The first step is to define a path where vCO will download the vApp Templates. The prerequisites are:
- There is enough disk space in this path (if you use the appliance you will have to mount an NFS share or create an additional drive)
- You have set proper share / file permissions
- The user running the vCO service has access to this path (by default this is not the case if this is a network share)
- You have changed the vCO access rights as documented here.
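As an illustration, here is a minimal Python sketch of checks corresponding to these prerequisites (the real validation happens inside the vCO workflow; the function name and messages are my own):

```python
import os
import shutil
import tempfile

def check_download_path(path, required_bytes):
    """Validate the download-path prerequisites: the path exists, has enough
    free disk space, and is writable by the user running the service."""
    if not os.path.isdir(path):
        return False, "path does not exist"
    if shutil.disk_usage(path).free < required_bytes:
        return False, "not enough free space"
    try:
        # Creating (and deleting) a temp file proves real write access,
        # which matters for network shares with restrictive permissions.
        with tempfile.TemporaryFile(dir=path):
            pass
    except OSError:
        return False, "path is not writable"
    return True, "ok"

ok, reason = check_download_path(tempfile.gettempdir(), 1)
missing, _ = check_download_path(os.path.join(tempfile.gettempdir(), "no-such-dir"), 1)
```

The write-access probe is the check most often missed in practice: a network share can look mounted yet be read-only for the service account.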
Replicate a catalog settings
The next step is configuring the vApp Template transfer. These are the settings used when running the "Replicate a Catalog" workflow. You can set the number of parallel copies (one copy = 1 download + 1 upload) and a timeout. If the timeout is reached the copy is canceled; this avoids a catalog replication waiting forever on a stalled copy.
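As a sketch of the idea, here is how a configurable number of parallel copies with a per-copy timeout could look. This is illustrative only (the actual implementation is a vCO workflow), and all names are assumptions:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as CopyTimeout

def replicate(items, copy_one, max_parallel, timeout_s):
    """Run up to `max_parallel` copies at once and report any copy that did
    not finish within `timeout_s` seconds. `copy_one` is a caller-supplied
    function performing one download + upload."""
    completed, timed_out = [], []
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(copy_one, item): item for item in items}
        for future, item in futures.items():
            try:
                future.result(timeout=timeout_s)
                completed.append(item)
            except CopyTimeout:
                timed_out.append(item)  # a real workflow would cancel the copy here
    return completed, timed_out

def copy_one(template):
    time.sleep(0.01)  # stand-in for one download + upload
    return template

done, late = replicate(["tmpl-a", "tmpl-b", "tmpl-c"], copy_one,
                       max_parallel=2, timeout_s=5)
```

Capping parallelism keeps the transfer directory and the network link from being saturated by many multi-GB copies at once.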
The next setting is the number of parallel copies within a single vApp Template. For example, if a vApp Template has several VMs with several disks, these can be copied concurrently.
Export a catalog to a filesystem
These settings apply to the "Export a catalog to a filesystem" workflow. They are similar to the settings above, except there is an option to keep some vApp Template settings during the (lossless) download. If this option is selected it is not possible to download files concurrently within a vApp Template.
If the lossless option was not selected, it is possible, as before, to provide the number of concurrent downloads.
Replicate a catalog from the filesystem
The same settings as above but this time when uploading the vApp Templates.
And the same settings for the vApp Template files. The last one specifies the size of each chunk when uploading. This allows safer uploads that can be resumed in case of a network issue or vCO host maintenance.
Operations
Now that the replication configuration is done it is possible to start the replication workflows.
Replicating a catalog using a single vCO server
To replicate a catalog using a single vCO orchestrating a source and destination cloud run the "Replicate a catalog" workflow.
The first parameter is the source catalog.
The next step is to provide a destination organization and a destination VDC. If the catalog does not exist in the destination organization it is created. If it exists it will be updated. A one-time full copy will be required before the workflow can synchronize the changes.
The next step defines what changes to apply. Beware that if you already have catalog items in the destination catalog and the copy and delete options are selected, the workflow will delete these on the first run and then copy the ones from the source catalog.
You have the option to run the synchronization now or to schedule it. Scheduling a recurring task is a good way to automate the synchronization. Another way is to trigger the workflow using vCloud Director notifications on catalog changes.
The workflow will synchronize the catalog.
Replicating a catalog to and from the filesystem
To optimize the replication speed / bandwidth usage you may want to use a third-party system leveraging deduplication, compression, and other mechanisms. To do this, run the "Export a catalog to a filesystem" workflow on the vCO server local to your source vCloud Director. You need to provide the catalog that you want to replicate.
Then you need to provide the export path. By default it will use the one you configured previously.
As before you can run the workflow immediately or schedule it.
Once the export and your third-party replication have completed, run "Replicate a catalog from the filesystem" on the vCO server local to your destination cloud. This vCO server must have access to a copy of the files that were exported. Provide the full path to the catalog file (the catalog file name is the catalog name + ".xml").
Select the destination organization and VDC.
Select the synchronization options.
And as always you can run it right away or schedule it.
You can find the package on the vCenter Orchestrator Communities.
The American Recovery and Reinvestment Act (ARRA), with its Health Information Technology for Economic and Clinical Health (HITECH) Act provision, kick-started a healthcare technology modernization wave. The Patient Protection and Affordable Care Act (a.k.a. "ObamaCare") ignited a national dialogue about healthcare. These and other initiatives have spawned new and disruptive business models, blurred lines between traditional providers, and opened a window for new players to enter an industry that historically has not welcomed outsiders. Concurrently, information technology has matured. Virtualization, for example, has gone from being an interesting cost-savings tool to becoming the very foundation for a new era in IT—cloud computing.
The tone of today’s discussions with healthcare CIOs is serious and business focused. Providers want to understand how they can better leverage IT and their staff to meet meaningful use deadlines, build new partnerships, and differentiate their brand through the services and care they provide—all while keeping their hospital’s name off the violations listed on the U.S. Department of Health and Human Services website. What has been most striking in my recent conversations has been how eager healthcare IT executives are—now that they have a seat at the decision-making table—to show what IT can do to help reinvent the healthcare industry. And as their technology partners, we can help drive this incredible transformation.
Last week, I wrote about how healthcare IT executives and their colleagues need to know technology companies are working to build reliable IT bridges to the future—paths based on proven technology and paved to support whatever the future of patient care brings. Today, I’m pleased to introduce a way to futureproof healthcare IT investments.
VMware vCloud® for Healthcare is an end-to-end care cloud computing platform for exchanging information and delivering products and services that can help lead to better outcomes. It is our new solution for helping healthcare IT organizations define and build the right cloud models for their organizations, and it includes a proven roadmap for how to fully realize the benefits of cloud computing.
Built specifically for healthcare, the vCloud for Healthcare framework of solutions and services leverages and builds on existing investments in VMware skills and platforms. It incorporates the most commonly requested and fundamental services a healthcare private cloud should deliver, including the following:
- Integrated industry security and compliance
- Point of care virtual desktops and workspaces
- Self-service end-user application provisioning
- Secure mobility and management for mobile devices
- Virtual and physical systems and application analytics
- Care systems and application automation
- Care systems and application disaster recovery and continuity
And when healthcare IT is ready, it includes a hybrid cloud connector to safely and securely connect a private healthcare cloud to one or more public clouds.
vCloud for Healthcare brings all of the products in our VMware vCloud Suite and our new VMware Horizon Suite™ together with our rich ecosystem of partners, including those that specialize in healthcare, and the more than 200 certified vCloud Datacenter Providers that support hybrid cloud computing. The integrated solution is built on our industry-leading VMware vSphere® platform, which is KLAS rated and supported by the world’s leading healthcare application vendors. When customers and partners choose vCloud for Healthcare, they can leverage validated architectures and services to deploy point of care systems such as VMware AlwaysOn Point of Care.
With vCloud for Healthcare, we are helping to remove uncertainty by providing both a vision and a roadmap that address the unique needs of a healthcare provider. We are connecting critical technologies and services to help organizations efficiently and cost-effectively meet real business goals and mandates—from meaningful use and compliance audits to establishing new business models and services, like accountable care organizations, to even becoming providers of IT services to other hospitals.
I’ve seen healthcare and IT change dramatically over the past several years. As the rate of change continues—and even accelerates—VMware and our healthcare team are dedicated to making sure that our new bridge to the future of healthcare IT remains strong during this historic industry transformation.
In a previous article I explained how we could extend vCloud Director with vCenter Orchestrator, and then provided a full implementation. It used the vCloud Director blocking tasks and notifications feature, which allows extending existing vCloud Director operations.
vCloud Director 5.1 introduced a new feature called API extensions, which basically allows a cloud provider to extend the vCloud Director API with custom services leveraging VMware or third-party applications. In this article I discuss why someone would want to leverage this feature and explain the "service builder" solution I have created around it.
The first question you may ask is "Why would you want to extend the vCloud Director API?"
As the vCloud Director Administrator I need to provide a feature not available in the vCloud API to cloud API consumers (i.e. the portal development team)
- It is a combination of several vCloud API calls, and or API calls with predetermined parameters I can define
- It is included in another VMware or third party API
- It is a mix of both
Let's see some of the common portal / cloud implementations.
The first one is the old-fashioned portal calling the vCloud and other APIs. Here are two variations, one exposing the vCloud API and one not. Exposing the vCloud API allows leveraging all the applications consuming the vCloud API. For example, a service provider will want his tenants to use vCloud Connector to copy vApps back and forth between vSphere and vCloud. This design is IMHO flawed in several respects:
The portal must include all the automation you would typically put in an orchestration solution. This means re-inventing the wheel to re-create a lot of the facilities offered by an orchestration engine, making these operations usable only by the portal, or creating a custom web service when thousands of customers already use standard orchestration APIs. Such a solution is difficult to support and tied to the scripting language used to consume the underlying cloud and third-party APIs, which support different types of web services and authentication.
- Please note - I have used a vCloud Automation Center picture to represent the portal. This could be any portal. vCloud Automation Center is not only a portal but also a workflow engine.
The second approach is that orchestration rules them all! The portal calls the orchestrator, which calls the underlying systems. The two designs below use this approach, keeping the orchestration API private and either exposing the vCloud REST API or not. The Orchestrator API could have been made public to expose the additional services, but there are serious workflow / resource access control considerations involved.
This approach was a good one when there was no cloud API and the target system was vSphere / vCenter. The Orchestrator would expose a subset of the functionality, using several API calls chained together to perform the desired automation. Using orchestration has several benefits, including supportability, modularity, use of hundreds of supported workflows, API abstraction, caching, high availability, and many other things I detail in this article.
However this has a few flaws as well. An orchestration engine is meant to automate complex operations, but a lot of the API calls started from the portal will be near-real-time operations such as refreshing a list of vApps, presenting information in a panel, and so on. I have written about this in detail in this article: Building your custom cloud portal - Knowing when to use the vCloud API and the vCenter Orchestrator web service.
Several successful cloud implementations involving custom services that I have seen use the cloud & orchestration design below on the left. While this resolves some of the issues mentioned previously, it still makes the custom services available only to the portal. With vCloud Director 1.5, blocking tasks and notifications allowed customizing existing vCloud Director operations, diminishing the number of calls to be made from the portal to the orchestration engine, but direct calls to the orchestrator are still often required for operations that cannot be started by vCloud blocking tasks and notifications. With vCloud Director 5.1 you can use the bottom-right design.
The reasons for preferring the design on the right are the following: as a cloud administrator I do not want to give direct access to my cloud back-end applications:
- Access: I provide one single point of access, the vCloud Director API, and I want to publish as a service only a given operation, not give access to the whole API of any of my cloud back-end components.
- Security: The only authentication and session management is the vCloud one.
- Consistency: My customers only use the vCloud API, which reduces their development effort (avoiding different web services and authentications).
Giving API access to the functionality exposed in the portal is also an important differentiator between cloud providers. It provides ways to develop client applications for specific uses and to build automation. At VMware, API parity with the functionality exposed in the UI is an important engineering goal for these reasons.
OK, but what for?
There are several use cases for providing functionality that is not available out of the box, for example:
- vApp custom provisioning or decommissioning
- vApp backup
- vApp load balancing
- vApp fault tolerance
- vApp updates
- vApp VMs security hardening
- vApp VMs affinity / anti-affinity rules
- vShield security groups
- Install agents, inject parameters in vApp VMs
- Database as a service
- Update user password in Active Directory
- iSCSI target for MSCS Clusters
A lot of these are achievable with the components of the vCloud Suite. There are many more use cases when integrating third-party systems such as management systems. You can "cloudify" the whole datacenter if needed.
How?
My colleagues at VMware have done an excellent job exploring the API documentation and explaining vCloud API extensions. Here are some references I have used to create "service builder". If you are not afraid of going deep into the technical details, I definitely recommend going through these:
- Blog articles written by Christopher Knowles:
- Blog articles written by Thomas Kraus: vCloud Director API Extensions (implementation example)
- VMworld 2013 session OPS-CSM1379
By using Service Builder. Service Builder is:
- A wizard-based workflow to create a custom service and service operations (called service links in the vCloud extensibility API)
- A way to expose workflows as service operations (/api/myservice/myOperation)
- A way to expose these workflows in the "Workflow run service" so they can be managed (/api/workflow/workflowID/executionId)
In addition service builder creates a "workflow run service" that allows vCloud Director API to manage vCenter Orchestrator workflows.
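To give an idea of what consuming such a service looks like, here is a Python sketch that builds the request for a custom operation. The service name, operation name, and payload are made up; only the URL shape (/api/myservice/myOperation) and the use of the standard vCloud session token come from the text above:

```python
from urllib.request import Request

def build_service_call(base_url, service, operation, vcloud_token, body):
    """Build the HTTP request for a custom service operation exposed through
    the vCloud Director API. vCloud Director relays such calls to the vCO
    workflow bound to the operation; the caller only needs its usual vCloud
    session token."""
    url = "%s/api/%s/%s" % (base_url.rstrip("/"), service, operation)
    return Request(
        url,
        data=body.encode("utf-8"),
        headers={
            "x-vcloud-authorization": vcloud_token,  # standard vCloud session header
            "Accept": "application/*+xml;version=5.1",
            "Content-Type": "application/xml",       # payload type is service-specific
        },
        method="POST",
    )

# Hypothetical host, service, and operation names for illustration.
req = build_service_call("https://vcloud.example.com", "myservice", "myOperation",
                         "session-token", "<Params/>")
```

From the consumer's point of view this is just another vCloud API call: same endpoint, same authentication, no separate orchestration web service to learn.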
And I forgot to mention it is free.
How do I get started with service builder?
I have put together an online presentation explaining in detail:
- What is "vCloud service builder" and "vCloud workflow run service"
- Animations showing how it works
- Examples for calling the newly created APIs
- Custom operations monitoring with vCloud Director UI
- Download link
Note: the presentation requires your email and organization. This is solely for me to better understand who is interested in this tool and, possibly, to contact you if I need further information on your submitted comments. This information will not be used for anything else and will not be transmitted.
If you want to go directly to the download page you can find it here.