Getting started with Nutanix Community Edition on the Intel NUC Skull Canyon NUC6i7KYK

Today I want to show you how simple it is to set up Nutanix Community Edition on the latest and greatest Intel NUC model, the NUC6i7KYK.

Basically four steps are required to get Community Edition up and running:

  • Get the latest Nutanix Community Edition image
  • Transfer the image to a USB stick
  • Add the latest Intel NIC driver
  • Start your NUC

Get the latest Nutanix Community Edition image

This one is easy, just register on nutanix.com and download the image from the forum.

Transfer the image to a USB stick

For this you need a USB stick with at least 8 GB of capacity.
I recommend using Rufus as it is the easiest way to “dd” the Community Edition image onto your USB stick.

[Screenshot: Rufus writing the Community Edition image to the USB stick]
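If you prefer the command line over Rufus, the same can be done with dd on a Linux or macOS machine. This is just a rough sketch: the image file name and the target device are placeholders you have to adapt to your download and your stick, and writing to the wrong device will destroy its contents.

# identify the USB stick first (lsblk on Linux, diskutil list on macOS)
# "ce.img" and /dev/sdX are placeholders for your downloaded image and your stick
sudo dd if=ce.img of=/dev/sdX bs=4M
sync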

Add the latest Intel NIC driver

As the Skull Canyon NUC uses the new Intel I219-LM NIC, the required drivers are not yet built into the current Nutanix Community Edition release.
But fear not, Nutanix released the drivers as a download which can easily be integrated. You can find the download at the end of this post.
After you have put the image onto your USB drive you can mount its ext4-based partition using Ext2Fsd.

[Screenshot: Ext2Fsd showing the Community Edition partitions of the USB stick]

After you have downloaded and extracted the driver, replace the old e1000e.ko file with the new one.
It is located on the USB stick's ext4 partition in the following location:

/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
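If you have a Linux machine at hand you can also skip Ext2Fsd and swap the file there. A minimal sketch, assuming the stick shows up as /dev/sdX, its ext4 partition is the first one and the extracted e1000e.ko sits in your home directory (adapt all of this to your environment):

# mount the ext4 partition of the USB stick
sudo mkdir -p /mnt/ce
sudo mount /dev/sdX1 /mnt/ce
# keep a copy of the original module, then overwrite it with the new one
cd /mnt/ce/lib/modules/3.10.0-229.4.2.el7.nutanix.20150513.x86_64/kernel/drivers/net/ethernet/intel/e1000e
sudo cp e1000e.ko e1000e.ko.orig
sudo cp ~/e1000e.ko e1000e.ko
cd && sudo umount /mnt/ce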

Now remove the USB stick and put it in your NUC.

Start your NUC

When the NUC has started you first have to log on as user “nutanix” with the password “nutanix/4u”.
Now we load the driver, restart the networking stack and log off by issuing the following commands:

modprobe e1000e
service network restart
exit

And that’s it! Proceed by installing the Nutanix Community Edition as it is documented in the Getting Started Guide.

This blog post was inspired by the article The Prestige Continues – Community Edition and my article was just a documentation of my progress setting up the Nutanix Community Edition on the Intel NUC.

Attached file: e1000e-ko.zip.

Bandwidth management with VMware Mirage

Using Mirage in large, distributed networks or even small ones sometimes can be a bit problematic. This is especially true during times in which layers are updated, machines are migrated and so on.

The reason for this is the limited amount of bandwidth available between the Mirage servers and clients. While Mirage works perfectly fine over slow and somewhat limited network connections, it still takes what it gets. This means if you have, for example, a 10 Mbit line and you are updating a Mirage-managed end point over this connection, the line will most certainly congest. Because, as I already said, Mirage will use as much bandwidth as is available – like almost every other protocol.

This is the reason why it is recommended to use Quality of Service (QoS) in environments where Mirage will be used, especially when branch offices with limited bandwidth come into play. Configuring the existing QoS solution to work with Mirage most of the time is very easy, because Mirage only uses one port (TCP 8000) for communication between client and server. But often no QoS is implemented and implementing it as part of the Mirage project is most often not possible.

Quality of service
While the new Mirage bandwidth limiting feature works very well and is easy to implement, implementing a proper QoS in the network infrastructure still has some advantages, for example allowing Mirage to use more bandwidth if the line utilization is low.

Based on this experience, a new feature called bandwidth limiting was introduced in version 5.1 of Mirage. With the new bandwidth management features you are able to limit the bandwidth Mirage uses without the need for 3rd party QoS solutions. You can specify the maximum amount of bandwidth (in KB/s) Mirage may use for upload and download operations based on the client's IP subnet or Active Directory site. Note that you set the bandwidth limit from the server's point of view: you limit outgoing traffic (download from the client's point of view) and incoming traffic (upload from the client's point of view).

[Screenshot: bandwidth limiting rules in the Mirage management console]

To set bandwidth limits you create a CSV file that specifies how much bandwidth may be consumed for outgoing and incoming traffic respectively, based on the location of the client. The location is identified by either the IP subnet or the AD site. Here is an example:

[Screenshot: example bandwidth limitation rules]
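To give you an idea of what such a file contains, here is a rough sketch based on the rules used in the example below. The column names and their order shown here are purely illustrative – the exact file format is defined in the official documentation linked further down.

Type,Location,UploadLimitKBps,DownloadLimitKBps
ADSite,Branch,2500,2500
Subnet,192.168.87.0/24,8000,8000
Subnet,192.168.87.100/32,5000,5000
Subnet,192.168.87.101/32,5000,5000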

For more information on how to set up bandwidth rules in the Mirage management console have a look at the official documentation: Managing Bandwidth Limitation Rules.

Now, let’s talk about the priority of the rules. First of all the order of the entries has no effect (besides one exception I will cover in a moment) on which rule is applied to which end point. Have a look at the screenshot above. As you can see, you can specify a rule based on:

  • the Active Directory site,
  • the IP (v4) subnet
  • and a single end point (also based on an IP subnet rule).

For each of these rules you can set a limit for outgoing and incoming traffic. So, to stick to my example, I limited the bandwidth for the AD site “Branch” to 2500 KB/s, the IP subnet 192.168.87.0-254 to 8000 KB/s and each of the IP addresses to 5000 KB/s. I set each limit to the same value for incoming and outgoing traffic. To keep it simple, for now we will only talk about outgoing traffic (client download).

First of all, both clients identified by their specific IP address can download with a maximum speed of 5000 KB/s. Theoretically that would allow them to consume 10,000 KB/s in total in the subnet 192.168.87.0/24. But because the subnet is limited to 8000 KB/s, the maximum amount of bandwidth that can be consumed in total is 8000 KB/s. Still, each client itself is limited to 5000 KB/s. Now, because both clients – or to be more precise the IP subnet – belong to the Active Directory site “Branch”, the bandwidth is further limited to 2500 KB/s. So regardless of the bandwidth limits for the specific device or subnet, the Active Directory site rule wins in this case. But the rule does not win because it is listed last or because AD sites have a higher priority; it is simply the rule with the most restrictive limit that wins. If I had set a limit of 500 KB/s for the IP 192.168.87.100 then this limit would be enforced and not the subnet or site limit.

As I mentioned before, the order of the entries has more or less no effect unless you specify the same rule twice. In that case the latter one will be used.

Now that the priority of the rules is sorted out, let's talk about the limitation of incoming and outgoing traffic. As you can see you can set both limits (upload and download) independently of each other. This means Mirage bandwidth limitations can even be used on asymmetric connections where the upload bandwidth may be lower than the download bandwidth. For rules covering only a single device, upload and download operations will never run at the same time as the Mirage client is either uploading or downloading. Rules based on AD sites or subnets will most definitely see upload and download operations running at the same time as they include more than one device. Please be aware of this fact and plan your bandwidth limits accordingly. Also make sure you understand that you specify the maximum amount of bandwidth that Mirage may use.

While the priority of the rules and the maximum amount of used bandwidth are the basics you need to know to work with the Mirage bandwidth limiting feature, the following facts are also very helpful to know:

  • As soon as you import new rules they take effect immediately. No restart of the Mirage server services or the clients is necessary.
  • Mirage does not guarantee fairness between clients, but from personal testing it looks like bandwidth is divided equally under full load.
  • If a Mirage client is configured as a branch reflector all bandwidth limitations still apply. Layer downloads to the reflector will be limited by the rules applied to it.
  • Bandwidth limits do not apply to transfers between branch reflectors and clients. So clients that download their layers from a branch reflector will not be limited in any way.
  • The auto update feature of the Mirage client is also affected by the configured bandwidth limits.
  • Bandwidth limits are divided between servers proportionally to the number of connected clients so that each server gets a fair share of bandwidth. For example, if you have five servers and a bandwidth limit of 5000 KB/s set for a subnet, each server gets 1000 KB/s under full load. Likewise, if you have two servers with a limit of 5000 KB/s set for a subnet and three clients connect to the first server and two to the second, the first server gets 3000 KB/s of bandwidth and the second one 2000 KB/s.
  • And of course bandwidth rules can be imported and exported via the Mirage server CLI using the getBandwidthRules and setBandwidthRules options.

That's about it. How do you like the new feature? Anything missing in regards to bandwidth management that you would like to see in future versions of Mirage?

CloudVolumes, Mirage and ThinApp: when to use what?

Just last week VMware bought a company called CloudVolumes. CloudVolumes is a solution that uses a technique called layering to deploy applications in real time. After the news broke many customers asked what will happen to Mirage – isn't that layering as well? And what about ThinApp? Do I still need ThinApp?

First of all we need to take a step back and have a look at how these products actually work.

Mirage

VMware Mirage, which originally came from Wanova, was developed with physical machines in mind. Mirage operates completely inside the Windows operating system and uses core Windows technologies like VSS. Besides deploying the operating system and applications, called base layer and application layers, Mirage supports additional functions like backup and recovery of end points as well as Windows migration scenarios. A huge benefit of Mirage, especially in distributed environments with WAN connections, branch offices and roaming users, are the included optimisation techniques. Mirage uses file- and block-level deduplication as well as compression to reduce the amount of data transferred between the Mirage servers and end points as much as possible.

Of course Mirage works with virtual machines and VDI environments (using full clones), because it operates purely inside the Windows operating system and therefore doesn't care whether it runs on a physical machine, a virtual desktop on top of VMware Workstation/Fusion or even a Hyper-V virtual machine. Mirage also introduced some optimisations especially for VDI environments, for example disabling compression and block-level deduplication as well as the possibility to limit concurrent operations. But still, the way Mirage works isn't really optimised for VDI environments: each layer update requires a reboot, each deployment operation is done inside each individual desktop including the file dedupe calculation, and non-persistent desktops / linked clones are not supported. In addition VMware doesn't support backup/recovery scenarios in virtual environments, even though it is technically possible.

While Mirage can be used in virtual desktop environments and absolutely makes sense for some use cases, e.g. persistent full-clone desktops and containerised desktops, there are some use cases where it doesn't fit quite right and there is a better way – introducing CloudVolumes.

CloudVolumes

CloudVolumes works very differently compared to Mirage. It uses hypervisor technologies to optimise the delivery of layers; in CloudVolumes terms a layer is referred to as a CloudVolume. Because CloudVolumes uses hypervisor technologies, out of the box it only works with virtual machines running on top of VMware vSphere.

A layer, or a CloudVolume, is basically a VMDK containing the application executables, registry keys and all supporting application data. When the application layer is deployed to a virtual desktop or user, the VMDK is mounted to the corresponding virtual machine and the CloudVolumes agent running inside Windows integrates the mounted VMDK so that it is not represented as an additional drive but merged into the native file system and registry. For example, if you deploy Mozilla Firefox using a CloudVolume it is not represented as E:\Mozilla Firefox\firefox.exe; it is integrated into the native file system and looks like a natively installed application located at C:\Program Files\Mozilla Firefox\firefox.exe.

Because the VMDK is read-only, the same VMDK can be used for a virtually unlimited number of virtual desktops. Another huge benefit is that layers can be assigned on demand without the need for a reboot.

In addition CloudVolumes supports non-persistent and persistent desktops as well as full and linked clones.

ThinApp

CloudVolumes as well as Mirage are technologies to deploy applications. Simply put, they are both just a way of transporting application files and registry keys to a Windows desktop. While ThinApp can also be used as a delivery mechanism, especially in combination with Workspace Portal, the true power of ThinApp is the ability to isolate an application.

An application deployed using Mirage or CloudVolumes behaves like a natively installed application. It has full access to all installed applications and operating system components and vice versa. This makes it simple to deploy applications and gives us a very high success rate, but it also brings the same limitations as any other deployment mechanism that is not application virtualization. You will not be able to run multiple versions of an application (e.g. run an older version of Internet Explorer in parallel with the latest one) and you won't be able to prevent DLL conflicts, just to give you two examples.

With ThinApp, because it adds a layer of virtualization, you are able to run applications isolated from each other and from the operating system. This allows running applications independently of other natively installed and virtualized applications and therefore prevents conflicts. It also makes the virtualized applications, to a certain degree, independent of the underlying operating system.

When to use what?

First of all, let's discuss when to use Mirage and when to use CloudVolumes. Currently it totally depends on the use case.

When it comes to managing physical end points and containerised desktops, Mirage is the way to go. The features Mirage offers, e.g. distributing layers in a highly optimised fashion (file- and block-level dedupe, compression) and backup/recovery functionality, are huge benefits and must-have features in these environments.

In VDI environments, especially environments in which linked clones are used, CloudVolumes is the perfect fit. It allows us, in a best-case scenario, to use just one golden master and add all user- and department-specific applications using layers. In addition, adding an application to a desktop is, simply put, just a mount of a VMDK, so the overhead from a performance point of view is negligible, especially in comparison to Mirage.

What about full-clone, persistent VDI environments? In this case Mirage as well as CloudVolumes offer great functionality and both need to be evaluated. But what if I tell you that CloudVolumes enables you to have a persistent desktop experience on top of a non-persistent linked clone? CloudVolumes allows users to install their own applications. All changes are dynamically redirected to a user-specific writable CloudVolume. This volume is automatically mounted when a user logs on to a non-persistent desktop. This makes CloudVolumes a very good fit for persistent desktop environments because you can still efficiently manage your master image using linked clones (View Composer), add user/department-specific applications using CloudVolumes and optionally even support user-installed applications using writable CloudVolumes.

Last but not least there is server-based computing. While Mirage does not support managing server operating systems, this can be done with CloudVolumes today.

Did I forget ThinApp? No, I didn't. ThinApp can and should be used complementarily on top of Mirage and/or CloudVolumes. If you require the benefits of isolating an application, e.g. running an old version of Java for a specific application, you can just add this package to a Mirage application layer or a CloudVolume. Because, as I already said, CloudVolumes and Mirage are both just a way to deliver an application, and an application virtualized using ThinApp is still an application that needs to be delivered.

Upgrading VMware Mirage 4.x to 5.0

As VMware Mirage 5.0 is now available many want to upgrade to the latest version. While the way of doing an upgrade from 4.x to 5.0 has not changed, it will change going forward from 5.0.

Up to version 5.0 you need to uninstall the old versions of the Mirage server components first and then install the new ones. This needs to be done in a specific order:

  1. Uninstall all Mirage servers
  2. Uninstall Mirage management server
  3. Uninstall additional components (Mirage management console, WebAccess and WebManagement)

After everything is uninstalled the components need to be installed as follows:

  1. Install Mirage management server
  2. Install Mirage servers
  3. Install additional components

It is most important to (un)install the Mirage management server and the Mirage servers in the right order. The remaining components can be (un)installed in no particular order. For more information about upgrading Mirage have a look at the following article: Best practices for upgrading VMware Horizon Mirage.
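If you want to script the 4.x to 5.0 upgrade, a hedged sketch of the sequence on an all-in-one test server could look like the following. The MSI file names are purely hypothetical placeholders for the packages of your Mirage build, in a distributed installation the commands run on the respective servers, and the new installers will ask for (or need to be passed) your database, storage and service account settings:

rem 1. uninstall in the documented order: Mirage servers first, then the management server, then the remaining components
msiexec /x mirage.server.4.4.x64.msi /qn
msiexec /x mirage.management.server.4.4.x64.msi /qn
msiexec /x mirage.management.console.4.4.x64.msi /qn
rem 2. install the new version: management server first, then the Mirage servers, then the remaining components
msiexec /i mirage.management.server.5.0.x64.msi /qn
msiexec /i mirage.server.5.0.x64.msi /qn
msiexec /i mirage.management.console.5.0.x64.msi /qn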

For future upgrades, for example from Mirage 5.0 to Mirage 5.next, it will not be required to uninstall the Mirage server components first. You will be able to just run the MSI of the new Mirage version; it will automatically detect all existing settings (like database configuration, service account used, cache and storage location) and then do the upgrade.

From a technical point of view the upgrade from 4.4.x to 5.0 could also be done using this new mechanism. But because it relies on some advanced error handling in the Mirage server software, which was implemented with Mirage 5.0 and is not available in prior versions, this upgrade method is not supported and therefore not recommended when upgrading from 4.4.x to 5.0. The traditional uninstall/install upgrade method must be used.

Post-scripts in VMware Mirage

For many operations VMware Mirage allows you to run so-called post-scripts.
Post-scripts are scripts that run on the endpoint after one of the following operations is completed:

  • Windows migration
  • Base layer provisioning
  • Base layer assignment
  • App layer deployment

A post-script allows you to run scripts and programs and do customisations.

Practical use cases are:

  • Delete the Windows.old directory after a Windows migration
  • Customise configuration files based on computer name
  • Install OEM software based on the specific hardware type
  • Deploy applications that are not yet compatible with app layers
  • and many more

All post-scripts are located under %ProgramData%\Wanova\Mirage Service and need to be included in the base layer, or in the app layer in the case of the post-app layer deployment script.
Below you find an overview of the different script names that are used.

Script file name           Execution time
post_migration.bat         Post-Windows migration
post_provisioning.bat      Post-Base Layer Provisioning
post_core_update.bat       Post-Base Layer Assignment
post_layer_update_*.bat    Post-App Layer Deployment

By default the post_migration.bat and a file called post_bi_update.bat are located inside the %ProgramData%\Wanova\Mirage Service directory. While you can freely modify the post_migration.bat to execute programs and scripts after a Windows migration, it is highly recommended not to modify the post_bi_update.bat script. The post_bi_update.bat script is more or less just a wrapper for post_provisioning.bat and post_core_update.bat.

So, if you want to run scripts after a base layer provisioning or assignment, you first have to create the corresponding script (post_provisioning.bat or post_core_update.bat) inside the %ProgramData%\Wanova\Mirage Service directory. Those two scripts are called by post_bi_update.bat, so there is no need to modify this file directly.

Inside the scripts you can basically do whatever you want, you just have to be aware of the following (a small example follows below the list):

  1. The scripts, and therefore everything you do inside them, are executed in the context of the local system account.
  2. Make sure your script returns a proper error code. Return a zero (0) if the script execution is successful. Everything else is interpreted as an error and logged as such in the Mirage event log.
  3. By default there is a 300-second (5 minute) timeout for post-scripts. If the script isn't finished within this timeframe Mirage will no longer wait for it and proceed.
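Putting these three points together, here is a minimal sketch of a post_migration.bat that removes the Windows.old directory (one of the use cases listed above) and writes a simple log. The log directory used here is the Mirage service log directory mentioned later in this post, but any writable path works; also note that deleting Windows.old may additionally require adjusting permissions (takeown/icacls), which I left out for brevity.

@echo off
rem runs under the local system account after a Windows migration has completed
set LOG=%ProgramFiles%\Wanova\Mirage Service\Logs\post_migration_custom.log
echo %date% %time% removing Windows.old >> "%LOG%"
rd /s /q "%SystemDrive%\Windows.old" >> "%LOG%" 2>&1
echo %date% %time% finished with errorlevel %errorlevel% >> "%LOG%"
rem always return zero so Mirage logs the script execution as successful
exit /b 0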

[Screenshots: post-script error and timeout messages in the Mirage event log]

While most scripts are placed inside the base layer, the post-app layer deployment script needs to be included in an app layer. During the app layer recording process create a file called post_layer_update_*.bat in the folder %ProgramData%\Wanova\Mirage Service. Of course you need to replace the asterisk (*) with a unique name. You have to make sure that each app layer has a unique script name.

Example: post_layer_update_firefox22_0485345.bat

Unfortunately troubleshooting post-scripts can be a bit cumbersome as you have no chance to watch the script execution interactively. Therefore it is recommended to implement logging in your script so that each step is recorded in a log file. A perfect location for log files created by post-layer scripts is the Mirage service log directory located at %ProgramFiles%\Wanova\Mirage Service\Logs. Just make sure you choose a unique log file name.

VMware Horizon Mirage driver library best practice

Mirage offers many great functionalities which can be applied to different types of hardware. With Mirage you are able to do hardware migrations (e.g. from one hardware vendor to another), single image management (one image for different types of hardware), disaster recovery (e.g. from physical hardware to a VM) and of course Windows migrations.

But all of these operations need one crucial thing to work: device drivers.

Without the correct drivers for your hardware many operations will fail. For example, Mirage will not be able to migrate a client to new hardware if no suitable drivers are available. The good thing though is that Mirage will never render a system unbootable, as it actively checks whether all boot-critical drivers are available. Mirage will not check whether you have, for example, the proper network or sound card drivers available; instead it will warn you if you do not have a matching driver profile for your hardware.

Driver profile

What is a driver profile? Mirage uses driver folders and driver profiles to dynamically provide drivers to a type of hardware.

A driver profile consists of two things: (1) matching rules that specify to which hardware / endpoints drivers should be provided and (2) which drivers out of the driver folders should be used.

From my experience driver profiles should be created with matching rules that are as specific as possible, because wrong driver profile rules may result in unnecessary traffic (drivers will be downloaded to devices which do not require them) and wasted disk space on the endpoint. Also, you want to make sure that each device only gets the drivers meant for this specific device. This prevents untested/unwanted drivers from being installed on a device.

Most of the time I use a combination of the following rules:

  • Vendor (e.g. Hewlett-Packard)
  • Model (e.g. HP Compaq 8200 Elite)
  • Version / Part of the serial number (if you have the same hardware model in different revisions)
  • Operating system (e.g. Win7 (x64))

[Screenshot: driver profile matching rules]

As the profile name I use a combination of the values above, for example: HP Compaq 8200 Elite – Windows 7 x64.

[Screenshot: driver profile name]

Driver folders

Driver folders are more or less the same as a folder structure in your file system to sort and store your drivers. My personal preference is to set up the folders in the following format:

  • Vendor
    • Model
      • Operating System
        • audio
        • graphics
        • network
        • storage

If you use Mirage only for Windows migrations you can disregard the operating system folder as you are only using one OS. A production structure may look something like this:

[Screenshot: example driver folder structure]

Of course the driver library is also deduplicated. So if you import a driver more than once it will not consume more space on the Mirage volume because of dedupe.

Drivers

For Mirage you need “raw” device drivers with an .inf and a .cat file. Most of the time, if the customer does not provide specific drivers, I rely on the SCCM driver packs offered by the hardware vendors. Most vendors, including Dell, Lenovo and HP, offer ready-to-go driver packs. While these driver packs are designed to be used with SCCM/MDT, they are in “raw” format and can also be used by Mirage.

Driver profile, driver folders and drivers combined

If you combine all three components the result is called the driver library.

Mirage driver library at work

The driver library is downloaded in almost all Mirage operations. To be specific:

  • Centralisation
  • Windows migration
  • Hardware migration and restore
  • Machine cleanup (Layer enforcement with remove user applications option set)
  • Base layer assignment, update and provisioning
  • Apply driver library

When a driver library download is triggered, the drivers, which are dynamically matched using the driver profile, are downloaded to the endpoint to the following location: %windir%\WDLDrv. This folder is also added to the DevicePath registry value (for more information see Configure Windows to Search Additional Folders for Device Drivers).
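If you want to check on an endpoint whether the folder has actually been appended, you can simply query the value from a command prompt (plain Windows functionality, nothing Mirage-specific):

reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v DevicePath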

Now, when Mirage starts the plug and play driver detection (e.g. during a migration, base layer update, etc.), Windows first looks in the Windows driver store for suitable drivers. If no driver is found there, it looks for a suitable driver in the locations listed in the DevicePath registry value – in this case %windir%\WDLDrv.

By the way, the “Apply driver library” option only triggers the download but no plug and play driver detection. Also, every driver library download operation is optimised: if the drivers have already been downloaded they will not be downloaded again, because Mirage's deduplication functionality is used.

Prioritise drivers in the Mirage driver library

Sometimes, even if the correct drivers are available in the driver library, Windows does not detect/install the driver provided by the driver library and instead uses a basic Windows driver. This happens because for some drivers it is specified that the basic driver is sufficient and therefore after a driver is found in the Windows driver store no search will be done in the DevicePath locations.

Setting the following registry entry on your endpoint (or in your base layer) enables the driver search in both locations (Windows driver store and DevicePath) regardless of whether the driver configuration specifies that a basic driver from the Windows driver store is sufficient.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching]
"SearchOrderConfig"=dword:00000000
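If you prefer to set this from a script (for example from one of the post-scripts described earlier) instead of importing a .reg file, the same change can be made with reg.exe:

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching" /v SearchOrderConfig /t REG_DWORD /d 0 /f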

What’s new in VMware Horizon Mirage 4.4 (Tech Edit)

Today VMware released version 4.4 of Horizon Mirage. Even though the version number suggests just a minor change, there are major new features in this release.

In this article I try, like I did for ThinApp 5.0, to summarise and give an overview of all the new features included in version 4.4 from a technical point of view.

Windows 8 / 8.1 support for disaster recovery scenarios

First of all, the installation of the Mirage client is now supported on Windows 8 and 8.1. While the full feature set, especially layer management and migrations, isn't supported yet, the disaster recovery scenario is fully supported in Mirage 4.4. The disaster recovery scenario covers file/full system recovery and restore as well as self-service recovery using the Mirage file portal and the client context menu. One important thing to know is that restore operations for Windows 8 devices can only be performed within the same operating system version, for example Windows 8.0 to Windows 8.0 or Windows 8.1 to Windows 8.1. But using Mirage 4.4 you are able to downgrade a Windows 8/8.1 device to Windows 7. This feature is more than welcome when you get new hardware preinstalled with Windows 8(.1) and want to deploy your corporate standard Windows 7 image using Mirage.

To support Windows 8 and Windows 8.1, Mirage now supports version 6.3 of the User State Migration Tool (USMT). Mirage now actually supports three versions of USMT:

  • USMT 4 or 5 for Windows XP to Windows 7 migrations and user data restores
  • USMT 6.3 for Windows 8 and 8.1 user data restores

[Screenshot: USMT settings in Mirage 4.4]

The import function was also enhanced to block the import of USMT 4.0 if it does not include the Office 2010 hotfix.

Mirage DMZ edge gateway

The ability to connect external / roaming users to Mirage without a VPN was requested by many customers. While Mirage has always worked over VPN, most users aren't connected to the VPN all the time. Therefore Mirage 4.4 introduces the Mirage gateway. The Mirage gateway is a hardened Windows service which will normally be deployed in a DMZ and made available over the internet. It allows the Mirage client to securely connect (SSL is a requirement) to the internal Mirage infrastructure by tunnelling through the Mirage gateway, provided the machine is successfully authenticated. The authentication is done by the user when the Mirage client connects to the edge gateway for the first time. When the username and password are validated, a token is created and stored for future authentications. It is possible to set a time-out for the token so that a re-authentication, to make sure the device is still allowed to connect to Mirage, is required. A Mirage edge gateway supports up to 1000 end points per server, and multiple edge gateways can be deployed.

One thing to mention is that the Mirage gateway allows features like file restore, layer management and centralisation to work for distributed users. But operations which require the end point itself to communicate with Active Directory, for example full system restores of domain-joined computers or Windows migrations, are not supported through the gateway because it only tunnels Mirage-specific traffic and nothing more.

Support for updating Horizon View agents via Mirage (app) layers

Following the support for managing full-clone persistent desktops introduced in Mirage 4.3, the newest release of Mirage now supports updating Horizon View agents from version 5.3 to future releases. As you can see the integration of View and Mirage proceeds further and makes managing persistent desktops much easier. The ability to do Horizon View agent upgrades via Mirage application or base layers makes migrations to newer View releases much easier, as all persistent desktops can be updated using a simple 2-step process:

  1. Create an updated app or base layer containing a future View agent release
  2. Deploy the layer to all clients and after a reboot the update is done

Additionally, no 3rd party tools are required to manage full clone persistent desktops.

Do Windows migrations without centralising the end point

In Mirage 4.3 the ability to do layer management only was introduced. This allowed you to deploy application and base layers to desktops without centralising them. This way you can do central image management for all your end points without using the Mirage backup / disaster recovery functionalities. In Mirage 4.4 this function was enhanced to support Windows 7 migrations without the need to centralise the client first.

While you lose the possibility to revert your desktop to the old Windows version in case something goes wrong, the storage requirements are much lower. Therefore the most complex part of a Mirage infrastructure design – the storage – is largely no longer needed. You just need a small amount of storage to hold your base and application layers, drivers and USMT.

Enhancements to the file portal and web management

The most prominent change to the Mirage file portal and the web management is the requirement for an SSL certificate. Access to both services is now only possible through an HTTPS connection. This decision was made to protect user (and admin) credentials and data accessed through both portals. So when you upgrade to version 4.4, please make sure to have an SSL certificate ready to use on the IIS hosting the file portal and web management.

The file portal was enhanced to allow users to select multiple files at once.

[Screenshot: selecting multiple files in the Mirage file portal]

The web management got a minor facelift and also a new mass centralisation functionality which allows you to centralise many clients at once by simply selecting them or applying a rule. The mass centralisation functionality can be accessed by clicking the “pending devices” tile on the dashboard.

[Screenshot: mass centralisation in the Mirage web management]

Better client identification

Last but not least, the client identification was enhanced in Mirage 4.4. In prior versions of Mirage the client was identified by its UUID and BIOS identifier. Unfortunately, when these values were not available or could not be read, Mirage sometimes misbehaved. To solve this issue all clients are now identified not only by the UUID and BIOS identifier but also by an additional attribute, an auto-generated GUID.

Release notes, documentation and download

Using the buttons below you find direct access to the VMware Horizon Mirage 4.4 release notes, the documentation and the download (requires a MyVMware account). With Mirage 4.4 a nice new VMware Horizon Mirage Getting Started Guide (PDF direct link) has also been released.

I hope you enjoy this release as much as I do and if you have any comments or questions just leave them below.

Release Notes | Documentation | Download

How to deploy application installers using Horizon Mirage application layers

In one of my last blog posts I wrote about the different ways to use Mirage app layers to deploy applications. I also introduced a new way to deploy application installers using application layers.

Currently there are two types of applications which cannot be deployed using a Horizon Mirage application layer: apps which create users and/or groups during the installation, and applications which modify the local disk like some disk encryption software does.

Still, you may want to use Mirage application layers because you may not have another ESD solution, or because you want to use all the compression, deduplication and branch reflector functionalities for the deployment and installation of native application installers.

Now I will show you how to deploy an application installer using application layers and how to automate the installation using post-application layer deployment scripts. I will use VMware Player as an example.

I assume that you already have practice in creating and deploying application layers using Mirage and therefore an application layer reference machine is available. If this is not the case please have a look at my article on how Horizon Mirage application layering works.

First you have to start the recording process for a new application layer. During the recording process you copy the installation source files of the application you want to install to the application layer reference machine. I normally use a location under the Program Files directory, something like

%ProgramFiles%\MirageAppLayerDeployment\%AppLayerName%

In my example the placeholder %AppLayerName% is replaced by VMwarePlayer6. So I copy the VMware Player installation executable into this directory.

After that I create a new post-application layer deployment script. For each application layer the script needs to have a unique name using the format “post_layer_update_*.bat”. The asterisk needs to be replaced with a unique value. I recommend using the application name for identification and a random number at the end.

For example: post_layer_update_vmwareplayer6_7716.bat

The script has to be placed inside the directory “%ProgramData%\Wanova\Mirage Service”.

The following command line creates a unique post-script in the correct directory and automatically opens it up using Notepad.

notepad.exe "%ProgramData%\Wanova\Mirage Service\post_layer_update_AppName_%random%.bat"

Please make sure to replace “AppName” with the name of the application you want to install and to run the command as administrator.

After the script is created and Notepad is opened up you have to add two things: (1) the command line to install the application silently and (2) optionally a command to delete the installation source files after the installation to save local disk space.

For my example I  created the following post-app layer script:

rem install VMware Player silently and wait for the installer to finish
start /wait "Installing VMware Player" "%ProgramFiles%\MirageAppLayerDeployment\VMwarePlayer6\VMware-player-6.0.1-1379776.exe" /s /v EULAS_AGREED=1 COMPONENTDOWNLOAD=0 SIMPLIFIEDUI=1 ADDLOCAL=ALL AUTOSOFTWAREUPDATE=0 DESKTOP_SHORTCUT=0 QUICKLAUNCH_SHORTCUT=0
rem remove the installation sources afterwards to free up disk space (/q suppresses the confirmation prompt)
rd /s /q "%ProgramFiles%\MirageAppLayerDeployment\VMwarePlayer6"

Of course this script can contain anything you want. You could also run a PowerShell script or something else. One thing you need to keep in mind is that the script runs under the local system account, so user-specific changes (HKCU, %AppData%, etc.) will not end up with the locally logged-on user but with the local system account.

After you have copied the installation source files to the corresponding directory and created the script file, you can finish the application layer.

Now, when you assign the application layer to a managed device the following will happen:

  1. The installation source files and the post-application layer deployment script will be downloaded to the client.
  2. A reboot is required to apply the application layer.
  3. After the reboot is done the post-layer script will run and install the application.

As you can see, the installation is not done right when the application layer is downloaded but after the application layer is applied (after the reboot/pivot phase). Another gotcha to be aware of is that when the application layer is removed, the application will not be uninstalled. An application layer can only remove files and registry keys which are included in the layer itself, nothing else. So when you remove the layer only the installation source files will be removed, but not the application installed by the post-layer script.

If you also want to use application layers to remove applications installed this way, just create an uninstall application layer. This layer contains nothing more than a post-layer script which silently runs the uninstallation of your specific application.
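To stick with the VMware Player example from above, such an uninstall layer could contain nothing but a post-layer script along these lines. The product code is a placeholder and the silent uninstall switches differ per application, so treat this purely as a hypothetical illustration you need to adapt and test for your installer:

rem post_layer_update_vmwareplayer6_uninstall_4711.bat - hypothetical example
rem silently uninstall the previously installed application and report success to Mirage
msiexec /x {00000000-0000-0000-0000-000000000000} /qn
exit /b 0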

Virtualize Internet Explorer with VMware ThinApp

For many years now application virtualization solutions, and especially ThinApp, have supported the virtualization of Internet Explorer.

A lot of companies need to run old or even multiple versions of Internet Explorer for several reasons, for example:

  • Web applications only work within a specific Internet Explorer version
  • A browser plug-in (Active X, Java, etc.) requires a specific Internet Explorer version

ThinApp allows you to run multiple versions of Internet Explorer side by side and also to run old IE versions on newer Windows operating systems, e.g. running Internet Explorer 6 on Windows 8.1.

[Screenshot: ThinApp-virtualized Internet Explorer 6 running on Windows 8.1]

All supported scenarios can be found in the following knowledge base article: Support Policy for Internet Explorer virtualized with VMware ThinApp

While virtualizing Internet Explorer is fairly straightforward, please follow the procedures outlined in the KB article. If you want to virtualize IE10 you should also have a look at another article of mine: Virtualizing Internet Explorer 10 with ThinApp 5.0. You should also make sure to always use the stand-alone installer of Internet Explorer, and not Windows Update, to virtualize it.

The last thing to consider is the way virtual instances of Internet Explorer are supported by Microsoft. Officially, Microsoft does not support running multiple instances of Internet Explorer on a single instance of Windows. Microsoft will also probably not support a version of IE virtualized using ThinApp, as Microsoft suggests the following solutions to virtualize Internet Explorer: MED-V, Windows XP Mode or Terminal Services.

While these solutions all work, it is a massive effort to build up a terminal services environment just to run Internet Explorer. Also, running a virtual machine (MED-V, Windows XP Mode) just to run a single version of IE is a waste of resources.

Virtualizing Internet Explorer using ThinApp is a much leaner way, and adding ThinDirect into the mix makes it much more user friendly. But is this a feasible solution if it is not supported by Microsoft? Of course it is. First of all, you will still get support from Microsoft if your IE problem can be recreated with a native installation. So if there is a problem with your web application or browser plug-in in a virtualized IE instance, just try to recreate it in a native one. If the problem still exists, just contact Microsoft Support.

But what happens when the problem only exists in the virtual Internet Explorer instance? This is also no problem at all. Just contact VMware Technical Support. As long as you have virtualized Internet Explorer according to the Support Policy for Internet Explorer virtualized with VMware ThinApp, you will get support from VMware.

For more information about support and licensing of ThinApp's virtualized Microsoft Internet Explorer 6 on Windows have a look at the following knowledge base article: Support and licensing of ThinApp's virtualized Microsoft Internet Explorer 6 on Windows 7.

Three ways to deploy applications using Horizon Mirage application layers

Application layers in Horizon Mirage are a pretty flexible way to deploy applications. In fact you have three ways of deploying applications using Mirage application layering:

  1. Deploy applications using native application layer functionality
  2. Deploy ThinApps using application layers
  3. Deploy application installers using application layers

The first way is obviously the preferred one for any application you want to deploy and use natively. While I normally recommend installing core applications in the base layer, some applications may be better suited to deployment via application layers. Examples are departmental apps or applications which are only deployed to a few users. Also, applications that need to be updated often and need to be used natively (not virtualized) are good application layer candidates.

The second option is to use ThinApp inside an application layer. Using ThinApp has many great benefits compared to the native application layer deployment, for example running different application versions at the same time, isolating applications to prevent conflicts, getting old Windows XP applications up and running on Windows 7 more easily, running old versions of Internet Explorer and browser plug-ins, and so on.
While application layers are not yet optimized to deliver ThinApps (you still need to reboot), this is nevertheless a viable option, especially if you have no other deployment system or are deploying ThinApps to branch office users. When deploying ThinApps to branch offices using Mirage application layering, the data will be cached on the Mirage branch reflector and therefore only transferred over the WAN once. This of course is true regardless of which way you use application layers.

The third way is a bit special. When deploying application layers Mirage allows you to run so-called post-application layer deployment scripts. Using these scripts you can do more or less anything you want after an application layer is deployed. So why not install an application this way? This may seem a bit cumbersome at first but makes sense for some applications, e.g. to deploy disk encryption software which is not supported using application layers, or to deploy applications which do not work with application layers yet, like VMware Workstation/Player or Microsoft SQL Server. Again, deploying applications this way is highly optimized for branch office scenarios: the application installation files, which are placed inside the application layer, and the post-script itself are cached on a branch reflector.

Looking at all three options you see that Mirage application layers can be used in many ways. You can deploy any type of application using application layers.