Bandwidth management with VMware Mirage

Using Mirage in large, distributed networks, or even in small ones, can sometimes be a bit problematic. This is especially true at times when layers are updated, machines are migrated, and so on.

The reason for this is the limited amount of bandwidth available between the Mirage servers and clients. While Mirage works perfectly fine over slow and somewhat limited network connections, it still takes what it can get. This means that if you have, for example, a 10 Mbit line and you update a Mirage-managed endpoint over this connection, the line will almost certainly congest, because Mirage, like almost every other protocol, will use as much bandwidth as is available.

This is why it is recommended to use Quality of Service (QoS) in environments where Mirage will be used, especially when branch offices with limited bandwidth come into play. Configuring an existing QoS solution to work with Mirage is usually very easy, because Mirage only uses a single port (TCP 8000) for communication between client and server. But often no QoS is in place, and implementing it as part of the Mirage project is usually not possible.

Quality of service
While the new Mirage bandwidth limiting feature works very well and is easy to implement, implementing proper QoS at the network infrastructure level still has some advantages, for example allowing Mirage to use more bandwidth when line utilization is low.

Based on this experience, a new feature called bandwidth limiting was introduced in Mirage version 5.1. With the new bandwidth management features you can limit the bandwidth Mirage uses without the need for a third-party QoS solution. You can specify the maximum amount of bandwidth (in KB/s) Mirage may use for upload and download operations based on the client's IP subnet or Active Directory site. Note that you set the bandwidth limit from the server's point of view: you limit outgoing traffic (a download from the client's point of view) and incoming traffic (an upload from the client's point of view).

[Screenshot 2014-11-03 12.43.23: Mirage bandwidth limiting configuration]

To set bandwidth limits you create a CSV file that specifies how much bandwidth may be consumed for outgoing and incoming traffic, based on the location of the client. The location is identified by either the IP subnet or the AD site. Here is an example:

[Screenshot 2014-11-03 12.43.33: example bandwidth limitation rules in the management console]
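
To give you a rough idea, the underlying CSV could look something like the following sketch. Please note that the column names and rule-type keywords here are illustrative only; check the official documentation linked below for the exact format your Mirage version expects:

    Type, Value, Download Limit (KB/s), Upload Limit (KB/s)
    ActiveDirectorySite, Branch, 2500, 2500
    Subnet, 192.168.87.0/255.255.255.0, 8000, 8000
    Subnet, 192.168.87.100/255.255.255.255, 5000, 5000

A single endpoint is simply expressed as a one-address subnet rule; the second endpoint from the example discussed below would get an analogous row.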

For more information on how to set up bandwidth rules in the Mirage management console have a look at the official documentation: Managing Bandwidth Limitation Rules.

Now, let’s talk about the priority of the rules. First of all, the order of the entries has no effect (besides one exception I will cover in a moment) on which rule is applied to which endpoint. Have a look at the screenshot above. As you can see, you can specify a rule based on:

  • the Active Directory site,
  • the IP (v4) subnet,
  • and a single endpoint (technically also expressed as an IP subnet rule).

For each of these rules you can set a limit for outgoing and incoming traffic. So, to stick with my example, I limited the bandwidth for the AD site “Branch” to 2500 KB/s, the IP subnet 192.168.87.0-254 to 8000 KB/s, and each of the two IP addresses to 5000 KB/s. I set each limit to the same value for incoming and outgoing traffic. To keep it simple, for now we will only talk about outgoing traffic (client downloads).

First of all, both clients identified by their specific IP address can download at a maximum speed of 5000 KB/s. Theoretically that would allow them to consume 10,000 KB/s in total in the subnet 192.168.87.0/24. But because the subnet is limited to 8000 KB/s, the maximum amount of bandwidth that can be consumed in total is 8000 KB/s; still, each client itself is limited to 5000 KB/s. Now, because both clients, or to be more precise the IP subnet, belong to the Active Directory site “Branch”, the bandwidth is further limited to 2500 KB/s. So regardless of the bandwidth limits for the specific devices or the subnet, the Active Directory site rule wins in this case. But it does not win because it is listed last or because AD sites have a higher priority; it wins because it is the rule with the most restrictive limit. If I had set a limit of 500 KB/s for the IP 192.168.87.100, then that limit would be enforced instead of the subnet or site limit.
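
To make this selection logic more tangible, here is a minimal sketch in plain Python. It is purely illustrative and not how Mirage implements it internally; it just shows that the effective limit for a client is the most restrictive of all matching rules:

    import ipaddress

    # Illustrative only: models the "most restrictive matching rule wins"
    # behaviour described above. Limits are in KB/s, taken from the example.
    rules = [
        ("ad_site", "Branch", 2500),
        ("subnet", "192.168.87.0/24", 8000),
        ("endpoint", "192.168.87.100", 5000),
    ]

    def effective_download_limit(client_ip, client_ad_site):
        """Return the lowest limit of all rules matching the client (None = unlimited)."""
        ip = ipaddress.ip_address(client_ip)
        matching = []
        for scope, value, limit in rules:
            if scope == "ad_site" and value == client_ad_site:
                matching.append(limit)
            elif scope == "subnet" and ip in ipaddress.ip_network(value):
                matching.append(limit)
            elif scope == "endpoint" and ip == ipaddress.ip_address(value):
                matching.append(limit)
        return min(matching) if matching else None

    print(effective_download_limit("192.168.87.100", "Branch"))  # 2500 - the AD site rule is the tightest

Swap in the 500 KB/s endpoint rule mentioned above and the same function returns 500 instead, simply because the tightest limit always wins.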

As I mentioned before, the order of the entries has more or less no effect, unless you specify the same rule twice; in that case the latter one will be used.

Now that the priority of the rules is sorted out, let’s talk about limiting incoming and outgoing traffic. As you can see, you can set both limits (upload and download) independently from each other. This means Mirage bandwidth limitations can even be used on asymmetric connections where the upload bandwidth is lower than the download bandwidth. For rules covering only a single device, upload and download operations will never run at the same time, as the Mirage client is either uploading or downloading. Rules based on AD sites or subnets, however, will almost certainly see upload and download operations running at the same time, as they include more than one device. Please be aware of this and plan your bandwidth limits accordingly. Also make sure you understand that you specify the maximum amount of bandwidth that Mirage may use.

While the priority of the rules and the maximum amount of bandwidth used are the basics you need to know to work properly with the Mirage bandwidth limiting feature, the following facts are also very helpful to know:

  • As soon as you import new rules, they take effect immediately. No restart of the Mirage server services or the clients is necessary.
  • Mirage does not guarantee fairness between clients, but from personal testing it looks like the bandwidth is divided equally under full load.
  • If a Mirage client is configured as a branch reflector, all bandwidth limitations still apply. Layer downloads to the reflector will be limited by the rules that apply to it.
  • Bandwidth limits do not apply to transfers between branch reflectors and clients. So clients that download their layer from a branch reflector will not be limited in any way.
  • The auto update feature of Mirage clients is also affected by the configured bandwidth limits.
  • Bandwidth limits are divided between servers proportionally to the number of connected clients, so each server gets a fair share of the bandwidth. This means, for example, that if you have five servers and a bandwidth limit of 5000 KB/s set for a subnet, each server gets 1000 KB/s under full load. Likewise, if you have two servers with a limit of 5000 KB/s set for a subnet, and three clients connect to the first server and two to the second, the first server gets 3000 KB/s of bandwidth and the second one 2000 KB/s (see the short calculation after this list).
  • And of course bandwidth rules can be imported and exported via the Mirage server CLI using the getBandwidthRules and setBandwidthRules options.
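
To illustrate the proportional split mentioned above, here is a quick back-of-the-envelope calculation. This is plain Python and purely illustrative; it only reproduces the arithmetic of the two examples, not any actual Mirage code:

    # Illustrative arithmetic only: a subnet (or site) limit is divided between
    # servers according to how many clients from that subnet connect to each.
    def per_server_limits(total_limit_kbps, clients_per_server):
        total_clients = sum(clients_per_server)
        return [total_limit_kbps * c / total_clients for c in clients_per_server]

    print(per_server_limits(5000, [1, 1, 1, 1, 1]))  # five equally loaded servers -> 1000 KB/s each
    print(per_server_limits(5000, [3, 2]))           # three vs. two clients -> 3000 and 2000 KB/s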

That’s about it. How do you like the new feature? Is there anything missing with regard to bandwidth management that you would like to see in a future version of Mirage?

Three ways to deploy applications using Horizon Mirage application layers

Application layers in Horizon Mirage are a pretty flexible way to deploy applications. In fact you have three ways of deploying applications using Mirage application layering:

  1. Deploy applications using native application layer functionality
  2. Deploy ThinApps using application layers
  3. Deploy application installers using application layers

The first way is obviously the preferred one for any application you want to deploy and use natively. While I normally recommend installing core applications in the base layer, some applications are better suited to deployment via application layers. Examples are departmental apps or applications that are only deployed to a few users. Applications that need to be updated often and need to run natively (not virtualized) are also good application layer candidates.

The second option is to use ThinApp packages inside an application layer. Using ThinApp has many great benefits compared to native application layer deployment: for example, running different application versions at the same time, isolating applications to prevent conflicts, getting old Windows XP applications up and running on Windows 7 more easily, running old versions of Internet Explorer and browser plug-ins, and so on.
While application layers are not yet optimized to deliver ThinApps (you still need to reboot), this is nevertheless a viable option, especially if you have no other deployment system or are deploying ThinApps to branch office users. When deploying ThinApps to branch offices using Mirage application layering, the data is cached on the Mirage branch reflector and is therefore only transferred over the WAN once. This, of course, is true regardless of the way you use application layers.

The third way is a bit special. When deploying application layers, Mirage allows you to run so-called post-application-layer deployment scripts. Using these scripts you can do more or less anything you want after an application layer is deployed, so why not install an application this way? This may seem a bit cumbersome at first, but it makes sense for some applications, e.g. to deploy disk encryption software, which is not supported using application layers, or to deploy applications which do not yet work with application layers, like VMware Workstation/Player or Microsoft SQL Server. Again, deploying applications this way is highly optimized for branch office scenarios: the application installation files placed inside the application layer and the post-script itself are cached on a branch reflector.
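
Just to illustrate the idea: the core of such a post-deployment script usually boils down to launching a silent installation from the files delivered inside the layer. The following is a hypothetical sketch, not an official Mirage sample; real-world post-scripts are typically plain batch or PowerShell files, and the installer path and switches below are made up:

    # Hypothetical post-application-layer deployment logic: silently run an
    # installer that was delivered as part of the application layer.
    # The path and the command-line switches are invented for illustration only.
    import subprocess
    import sys

    INSTALLER = r"C:\LayerPayload\ExampleApp\setup.exe"  # hypothetical location inside the layer

    def install_silently():
        # "/quiet /norestart" is a common pattern for MSI-based setups, but the
        # actual switches depend entirely on the application being installed.
        return subprocess.run([INSTALLER, "/quiet", "/norestart"]).returncode

    if __name__ == "__main__":
        sys.exit(install_silently())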

Looking at all three options, you can see that Mirage application layers can be used in many ways and that you can deploy any type of application using them.

Release: VMware Horizon Mirage 4.2.3

VMware has just released a new version of Horizon Mirage. The new version 4.2.3 comes with a lot of bug fixes and some new features.

  • Better storage disconnect handling – Horizon Mirage servers now identify high-load conditions on storage that can cause disconnects and can throttle down to allow storage utilization to return to normal.
  • SSL configuration option in Horizon Mirage server installation – The IT manager can now configure SSL communication during server installation. There is no need to perform this operation with the Management console after installing.
  • Branch reflector enhancements – The IT manager can better design and monitor branch reflector deployment by simulating the branch reflector selection on endpoints. The IT manager can now click any device from the Management console and see if it has a branch reflector in its vicinity.

The Mirage documentation also got a major overhaul. There are now an administrator guide, an installation guide, and a web manager guide, all available in PDF and HTML versions.

If you want to learn more about this release have a look at the release notes.

Update: Due to some installation issues, version 4.2.3 was temporarily not available for download. In the meantime VMware has re-released version 4.2.3 with additional bug fixes. Everyone who has already updated to the first release, please consider upgrading your environment to the latest release.

Release Notes | Documentation | Download