Tag Archives: Virtualization

Beware of Vendor-Sponsored “Analysis” Reports

Mark Twain is often credited with the famous line: "Figures don’t lie, but liars figure." That’s a good thing to keep in mind any time you’re reading a report that was sponsored ("sponsored" = "paid for") by a vendor and that concludes the sponsor’s product is better than the other guy’s.

Maybe it is better than the other guy’s. But you might want to look closely at what was tested, how it was tested, and whether they were, shall we say, selective in the facts they present.

Case in point: The Tolly Group’s report, released May 27, comparing VMware View 4.6 Premier Edition to Citrix XenDesktop 5 Platinum Edition. There are several interesting aspects to this report, which are dealt with in detail in Tal Klein’s post over on the Citrix Community blog site. Here are a few of the more egregious items:

  • VMware View 4.6 Premier licensing costs less than XenDesktop 5 Platinum. Absolutely true, and absolutely irrelevant. That’s like pointing out that if you load every possible dealer option onto your new car, it’s going to cost more than the basic model. Thank you, Captain Obvious. If you want an "apples-to-apples" comparison, you need to compare VMware View to the XenDesktop VDI Edition. But wait, if you do that, XenDesktop is actually less expensive, and that would be an awkward point to publish in a paper that’s being paid for by VMware.
  • VMware’s PCoIP provides a more consistent multi-media experience than XenDesktop 5. (Over a LAN. Using a single thin client device that did not support any of the Citrix HDX media acceleration features.) Sorry, guys, but once again this is not an apples-to-apples comparison. And did they publish any results of testing across a WAN link? Nope…and for the same reason they didn’t use XenDesktop VDI Edition for their price comparison.
  • It’s easier to upgrade View 4.5 to View 4.6 than it is to upgrade XenDesktop 4 to XenDesktop 5. Once again, both true and irrelevant. It’s easier to give your kitchen a new coat of paint than it is to rip out the cabinets and completely remodel it. Anybody surprised by that? There are significant architectural changes from XenDesktop 4 to XenDesktop 5. It shouldn’t be surprising to anyone that this will involve more effort than a "dot release" upgrade.

I’ve always been skeptical of vendor-sponsored "analysis" reports, and, to be fair, Citrix has used the Tolly Group in the past for its own sponsored reports – but it seems to me that this one is just over the top. Apparently, former Gartner analyst Simon Bramfitt agrees. His pithy assessment of the report: "There are undiscovered tribes lost in the darkest parts of the Amazon jungle that would know exactly what to do if a vendor airdropped a pile of competitive marketing literature authored by the Tolly Group; send it back, and asked [sic] that it be re-printed on more absorbent paper."

What do you think?

Thin Clients vs. Cheap PCs

We have long been fans of thin client devices. However, if you run the numbers, it turns out that thin clients may not be the most cost-effective client devices for a VDI deployment.

Just before writing this post, I went to the Dell Web site and priced out a low-end Vostro Mini Tower system: 3.2 GHz Intel E5800 dual-core processor, 3 GB RAM, 320 GB disk drive, integrated Intel graphics, Windows 7 Professional 64-bit OS, 1 year next-business-day on-site service. Total price: $349.00.

When you buy a new PC with an OEM license of Windows on it, you have 90 days to add Microsoft Software Assurance to that PC. That will cost you $109.00 for two years of coverage. You’re now out of pocket $458.00. However, one of the benefits of Software Assurance is that you don’t need any other Microsoft license component to access a virtual desktop OS. You also have the rights, under SA, to install Windows Thin PC (WinTPC) on the system, which strips out a lot of non-essential stuff and allows you to administratively lock it down – think of WinTPC as Microsoft’s own tool kit for turning a PC into a thin client device.

Now consider the thin client option. A new Wyse Winterm built on Windows Embedded Standard 7 carries an MSRP of $499. There are less expensive thin clients, but this one would be the closest to a Windows 7 PC in terms of the user experience (media redirection to a local Windows Media Player, Windows 7 user interface, etc.). However, having bought the thin client, you must now purchase a Microsoft Virtual Desktop Access (VDA) license to legally access your VDI environment. The VDA license is only available through the Open Value Subscription model, and will cost you $100/year forever. So your total cost over two years is $699 for the Wyse device vs. $458 for the Dell Vostro.

After the initial two year term, you’ll have to renew Software Assurance on the PC for another two years. That will continue to cost you roughly $54.50/year vs. $100/year to keep paying for that VDA license.
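
If you want to run the same numbers against your own quotes, the arithmetic is simple enough to script. Here’s a minimal sketch using the list prices quoted above; the prices and the two-year term are the only inputs, so swap in your own figures.

```python
# A minimal sketch of the two-year cost comparison above, using the list
# prices quoted in this post. Swap in your own quotes before drawing conclusions.

def two_year_cost_pc(pc_price=349.00, sa_two_year=109.00):
    """Low-end PC plus two years of Microsoft Software Assurance (covers VDI access)."""
    return pc_price + sa_two_year

def two_year_cost_thin_client(device_price=499.00, vda_per_year=100.00, years=2):
    """Windows Embedded thin client plus a per-year VDA subscription."""
    return device_price + vda_per_year * years

if __name__ == "__main__":
    pc = two_year_cost_pc()
    tc = two_year_cost_thin_client()
    print(f"PC + Software Assurance, 2 years: ${pc:,.2f}")       # $458.00
    print(f"Thin client + VDA, 2 years:       ${tc:,.2f}")       # $699.00
    print(f"Difference in favor of the PC:    ${tc - pc:,.2f}")  # $241.00
```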

Arguably, the Wyse thin client is a better choice for some use cases. It will work better in a hostile environment – like a factory floor – because it has no fan to pull dust and debris into the case. In fact, it has no moving parts at all, and will likely last longer as a result…although PC hardware is pretty darned reliable these days, and at that price point, the low-end PC becomes every bit as disposable as a thin client device.

So, as much as we love our friends at Wyse, the bottom line is…well, it’s the bottom line. And if you’re looking at a significant VDI deployment, it might be worth running the numbers both ways before you decide for sure which way you’re going to go.

IntelliCache and the IOPS Problem

If you’ve been following this blog for any length of time, you know that we’ve written extensively about XenDesktop, and spent a lot of time on best practices and problems to avoid. And one of the biggest problems to avoid is poor storage design resulting in poor VDI performance.

In a nutshell, the problem is that a Windows desktop OS uses disk far differently than a Windows server OS. Thanks to the way Windows uses the swap file, disk writes outnumber disk reads by about 2 to 1. You can build your virtual desktop infrastructure on the latest and greatest server hardware, with tons of processing power and insanely huge amounts of RAM, but if all of the disk I/O for all of those virtual desktops is hitting your SAN, you’ve got a scalability problem on your hands.

Provisioning Services (“PVS”) can help to mitigate this in two ways (assuming for sake of argument that you’re provisioning multiple virtual systems from a common, read-only image): First, if you build your Provisioning Servers correctly, you should be able to serve up most of the OS read operations from the Provisioning Server’s own cache memory. Second, you can use the virtualization host’s local disk storage as the required “write cache” – because all of those write operations have to go somewhere while the virtual system is running.

But XenDesktop 5 introduced a new way to provision desktops called “Machine Creation Services” (“MCS”). We wrote about this in the April edition of our Moose Views newsletter, so if you’re not familiar with all the pros and cons of MCS vs. PVS, I’d encourage you to take a brief time out and read that article. Suffice it to say that, despite all the advantages of MCS, the biggest downside of using MCS to provision pooled desktops was that all of the IOPS hit your SAN storage, which limited the scalability of an MCS-provisioned VDI deployment.

But all of that just changed, with the release of XenDesktop 5 Service Pack 1, which was made available for download a week ago (May 13). With SP1, XenDesktop 5 is now able to take advantage of the “IntelliCache” feature that was introduced as part of XenServer v5.6 Service Pack 2. Using MCS with the combination of XenDesktop 5 SP1 and XenServer SP2…

  • The first time a virtual desktop is booted on a given XenServer, the boot image is cached on that XenServer’s local storage.
  • Subsequent virtual desktops booted on that same XenServer will boot and run from that locally cached image.
  • You can use the XenServer’s local storage for the write cache as well.

The bottom line is that you can move as much as 90% of the IOPS off of the SAN and onto local XenServer storage, removing nearly all of the scalability limitations from an MCS-provisioned environment.

With most of the IOPS for running VMs taking place on local storage, it’s pretty straightforward to figure out how many VMs you can expect to support on a given virtualization host. Dan Feller’s blog post does a great job of walking through the process of calculating the functional IOPS that your local XenServer storage repository should be able to support, and inferring from that number how many light, normal, or power users you should be able to support as a result.
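
For the curious, here’s a minimal sketch of that kind of arithmetic. The RAID write-penalty formula is the commonly used one; the drive specs and per-user IOPS figures are illustrative assumptions of mine, not numbers taken from Dan’s post.

```python
# A minimal sketch of the functional-IOPS arithmetic described above. The RAID
# write-penalty formula is the commonly used one; the drive specs and per-user
# IOPS figures are illustrative assumptions, not numbers from Dan Feller's post.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 10: 2}  # physical writes per logical write

def functional_iops(disks, raw_iops_per_disk, write_ratio, raid_level):
    """Usable IOPS of a storage repository after applying the RAID write penalty."""
    raw = disks * raw_iops_per_disk
    read_ratio = 1.0 - write_ratio
    return raw * read_ratio + (raw * write_ratio) / RAID_WRITE_PENALTY[raid_level]

if __name__ == "__main__":
    # Example: four 15k SAS drives (~175 raw IOPS each) in RAID 10, with the
    # roughly 2:1 write-to-read mix typical of a Windows desktop OS.
    usable = functional_iops(disks=4, raw_iops_per_disk=175, write_ratio=0.66, raid_level=10)
    print(f"Functional IOPS: {usable:.0f}")

    # Assumed steady-state IOPS per user type (your measurements may differ):
    for label, per_user in (("light", 5), ("normal", 10), ("power", 20)):
        print(f"{label:>6} users supported: {usable / per_user:.0f}")
```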

This also means that using XenServer as the hypervisor for your XenDesktop 5 deployment is going to yield a significant performance advantage over any other hypervisor, unless and until the other guys come out with similar local caching features. So, if you’re a VMware shop, my advice is this: Go ahead and virtualize all of the supporting XenDesktop server components on your vSphere infrastructure. Run your XenDesktop 5 VMs on XenServer hosts, and just don’t tell anyone! If you’re asked, just say, “Oh, yeah, these are my XenDesktop host systems – they’re completely separate from our vSphere infrastructure, because we don’t need the (insert favorite vSphere feature) function for these systems.” Your infrastructure will run better, and no one will know but you…

Top Ten VDI Mistakes (According to Dan Feller)

Dan Feller is a Lead Architect with the Citrix Consulting group, and has written extensively about XenDesktop. We found his series on the top ten mistakes people make when implementing desktop virtualization to be quite enlightening. In case you missed it, we thought we’d share his “top ten” list here, with links to the individual posts. We would highly recommend that you take the time to read through the series in its entirety:

#10 – Not calculating user bandwidth requirements
Back in the “good old days” of MetaFrame, when we didn’t particularly care about 3D graphics, multimedia content, etc., we could get by with roughly 20 Kbps of network bandwidth per user session. That’s not going to cut it for a virtualized desktop, for a number of reasons that Dan outlines in his blog post. He provides the following estimates for the average bandwidth required both with and without the presence of a pair of Citrix Branch Repeaters (which have some secret sauce that is specifically designed to accelerate Citrix traffic) between the client device and the virtual desktop session:

Parameter | XenDesktop Bandwidth without Branch Repeater | XenDesktop Bandwidth with Branch Repeater
Office Productivity Apps | 43 Kbps | 31 Kbps
Internet | 85 Kbps | 38 Kbps
Printing | 553 – 593 Kbps | 155 – 180 Kbps
Flash Video (with HDX redirection) | 174 Kbps | 128 Kbps
Standard WMV Video (with HDX redirection) | 464 Kbps | 148 Kbps
HD WMV Video (with HDX redirection) | 1812 Kbps | 206 Kbps

NOTE: These are estimates – your mileage may vary!

One thing that should come across loud and clear from the table above is what a huge difference the Citrix Branch Repeater can make in your bandwidth utilization. And as we’ve always said: you only buy hardware once – bandwidth costs go on forever!
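
To translate those per-user figures into a link-sizing estimate for a branch office, a simple weighted sum gets you most of the way there. Here’s a rough sketch; the per-user numbers are the “with Branch Repeater” column from the table above, while the 50-seat workload mix and the 20% headroom factor are assumptions for illustration only.

```python
# A rough sketch of turning the per-user estimates above into a branch-office
# link size. The per-user figures come from the "with Branch Repeater" column;
# the 50-seat workload mix and the 20% headroom factor are invented for illustration.

KBPS_WITH_BRANCH_REPEATER = {
    "office": 31,
    "internet": 38,
    "printing": 180,      # upper end of the printing range
    "flash_video": 128,
}

def branch_bandwidth_kbps(user_mix, headroom=1.2):
    """Weighted sum of per-user bandwidth across the mix, plus burst headroom."""
    total = sum(KBPS_WITH_BRANCH_REPEATER[workload] * users
                for workload, users in user_mix.items())
    return total * headroom

if __name__ == "__main__":
    # Hypothetical 50-seat branch: mostly office work, some browsing, a little
    # printing and Flash video at any given moment.
    mix = {"office": 35, "internet": 10, "printing": 3, "flash_video": 2}
    print(f"Estimated branch link requirement: {branch_bandwidth_kbps(mix) / 1000:.1f} Mbps")
```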

#9 – Not considering the user profile
It should go without saying that user profiles are important. But if it’s number 9 on the list of things people most often screw up, then apparently it doesn’t. In a nutshell: If you mess up the users’ profiles, the users won’t be happy – logon/logoff performance will suffer, and settings (including personalization) will be lost. If the users aren’t happy, they will be extremely vocal about it, and your VDI deployment will fail for lack of user buy-in and support. There are some great tools available for managing user profiles, including Citrix Profile Manager and AppSense Environment Manager. AppSense can even maintain a consistent user experience across platforms – making sure that the user profile is the same regardless of whether the user is logged onto a Windows XP system, a Windows 7 system, or a Windows Server 2008 R2-based XenApp server.

Do yourself a favor and make sure you understand what your users’ profile requirements are, then investigate the available tools and plan accordingly.

#8 – Lack of an application virtualization strategy
How many applications are actually deployed in your organization? Do you even know? Are the versions consistent across all users? Which users use which applications? You have to understand the application landscape before you can decide how you’re going to deploy applications in your new virtualized desktop environment.

You have three basic choices on how to deliver apps:

  1. You can install every application into a single desktop image. That means that whenever an application changes, you have to change your base image, and do regression testing to make sure that the new or changed application didn’t break something else.
  2. You can create multiple desktop images with different application sets in each image, depending on the needs of your different user groups. Now if an application changes, you may have to change and do regression testing on multiple images. It’s worth noting that many organizations have been taking this approach in managing PC desktop images for years…but part of the promise of desktop virtualization is that, if done correctly, you can break out of that cycle. But to do that, you must…
  3. Remove the applications from the desktop image and deliver them some other way: either by running them on a XenApp server, or by streaming the application using either the native XenApp streaming technology or Microsoft’s App-V (or some other streaming technology of your choice).

Ultimately, you may end up with a mixed approach, where some core applications that everyone uses are installed in the desktop image, and the rest are virtualized. But, once again, it’s critical to first understand the application landscape within your organization, and then plan (and test) carefully to determine the best application delivery approach.

#7 – Improper resource allocation
Quoting Dan: “Like me, many users only consume a fraction of their total potential desktop computing power, which makes desktop virtualization extremely attractive. By sharing the resources between all users, the overall amount of required resources is reduced. However, there is a fine line between maximizing the number of users a single server can support and providing the user with a good virtual desktop computing experience.”

This post provides some great guidelines on how to optimize the environment, depending on the underlying hypervisor you’re planning to use.

#6 – Protection from Anti-Virus (as well as protection from viruses)
If you are provisioning desktops from a shared read-only image (e.g., Citrix Provisioning Services), then any virus infection will go away when the virtual PC is rebooted, because changes to the base image – including the virus – are discarded by design. But you still need AV protection, because the virus can use the interval between infection and reboot to propagate itself to other systems. The gotcha here is that the AV software itself can cause serious performance issues if it is not configured properly. Dan provides a great outline in this post for how to approach AV protection in a virtual desktop environment.

#5 – Managing the incoming storm
In most organizations, the majority of users arrive and start logging into their desktops at approximately the same time. What you don’t want is dozens, or hundreds, of virtual desktops trying to start up simultaneously, because it will hammer your virtualization environment. There are some very specific things you need to do to survive the “boot storm,” and Dan outlines them in this post.
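
As a back-of-the-envelope illustration (mine, not Dan’s), you can estimate how long it takes to pre-boot a pool if you cap concurrent boots at what your storage can absorb. The ~26 IOPS-per-booting-VM figure below is a commonly cited rule of thumb, and every other number is an assumption to be replaced with your own.

```python
# A back-of-the-envelope sketch (mine, not Dan's) of why staggered power-on helps:
# estimate how long it takes to pre-boot a pool when you cap concurrent boots at
# what your storage can absorb. The ~26 IOPS-per-booting-VM figure is a common
# rule of thumb, and every number here is an assumption to be replaced with yours.

import math

def boot_window(total_vms, functional_iops, boot_iops_per_vm=26, boot_minutes_per_vm=2):
    """Return (minutes to boot the pool, concurrent boots the storage can sustain)."""
    concurrent = max(1, functional_iops // boot_iops_per_vm)
    waves = math.ceil(total_vms / concurrent)
    return waves * boot_minutes_per_vm, concurrent

if __name__ == "__main__":
    minutes, concurrent = boot_window(total_vms=300, functional_iops=1500)
    print(f"Boot {concurrent} VMs at a time -> pool ready in roughly {minutes} minutes")
    # Practical takeaway: schedule idle/pre-booted desktops well ahead of the
    # morning logon rush instead of letting them all power on at once.
```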

#4 – Not optimizing the virtual desktop image
Dan provides several tips on things you should do to optimize your desktop image for the virtual environment. He also has specific sections on his blog that deal with recommended optimizations for Windows 7.

#3 – Not spending your cache wisely
Specifically, we’re talking about configuring the system cache on your Provisioning Server appropriately, depending on the OS and amount of RAM in your Provisioning Server, and the type of storage repository you’re using for your vDisk(s).
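
As a rough illustration of the sizing involved, here’s a sketch of the frequently quoted rule of thumb for PVS memory: give the Provisioning Server enough RAM that the Windows system cache can hold the hot blocks of each actively streamed vDisk. The 2–4 GB-per-vDisk guideline is a general assumption, not a figure from Dan’s post, so validate it against your own environment.

```python
# A sketch of the frequently cited rule of thumb for PVS memory sizing: give the
# Provisioning Server enough RAM that the Windows system cache can hold the hot
# blocks of each actively streamed vDisk. The 2-4 GB-per-vDisk guideline is a
# general assumption, not a figure from Dan's post; validate it for your build.

def pvs_ram_estimate_gb(active_vdisks, per_vdisk_gb=3, os_overhead_gb=4):
    """Estimated Provisioning Server RAM, in GB, for a given number of active vDisks."""
    return os_overhead_gb + active_vdisks * per_vdisk_gb

if __name__ == "__main__":
    for vdisks in (1, 3, 5):
        print(f"{vdisks} active vDisk(s): plan for roughly {pvs_ram_estimate_gb(vdisks)} GB RAM")
```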

#2 – Using VDI defaults
Default settings are great for getting a small Proof of Concept up and running quickly. But as you scale up your VDI environment, there are a number of things you should do. If you ignore them, performance will suffer, which means that users will be upset, which means that your VDI project is more likely to fail.

#1 – Improper storage design
This shouldn’t be a surprise, because we’ve written about this before, and even linked to a Citrix TV video of Dan discussing this very thing as part of developing a reference architecture for an SMB (under 500 desktops) deployment. We’re talking here about how to calculate the “functional IOPS” available from a given storage system, and what that means in relation to the number of IOPS a typical user will need at boot time, logon time, working hours (which will vary depending on the users themselves), and logoff time.

Just to round things out, Dan also tossed in a few “honorable mentions,” like the improper use of NIC teaming or not optimizing the NIC configuration in Provisioning Servers, trying to provision images to hardware with mismatched hardware device drivers (generally not an issue if you’re provisioning into a virtual environment), and failing to have a good business reason for launching a VDI project in the first place.

Again, this post was intended to whet your appetite by giving you enough information that you’ll want to read through Dan’s individual “top ten” posts. We would heartily recommend that you do that – you’ll probably learn a lot. (We certainly did!)

How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough that I’ve seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. Didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In all of its incarnations, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their operating systems to run it on and their software to manage it. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDnet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July, 2008. Their northern Virginia data center was affected by a lightning strike in July of 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December of 2009.
  • Gmail experienced a system-wide outage for a period of time in August, 2008, then was down again for over 1 ½ hours in September, 2009.
  • The Microsoft/Danger outage in October, 2009, caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January, 2010, failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.