
Top Ten VDI Mistakes (According to Dan Feller)

Dan Feller is a Lead Architect with the Citrix Consulting group, and has written extensively about XenDesktop. We found his series on the top ten mistakes people make when implementing desktop virtualization to be quite enlightening. In case you missed it, we thought we’d share his “top ten” list here, with links to the individual posts. We would highly recommend that you take the time to read through the series in its entirety:

#10 – Not calculating user bandwidth requirements
Back in the “good old days” of MetaFrame, when we didn’t particularly care about 3D graphics, multimedia content, etc., we could get by with roughly 20 Kbps of network bandwidth per user session. That’s not going to cut it for a virtualized desktop, for a number of reasons that Dan outlines in his blog post. He provides the following estimates for the average bandwidth required both with and without the presence of a pair of Citrix Branch Repeaters (which have some secret sauce that is specifically designed to accelerate Citrix traffic) between the client device and the virtual desktop session:

Parameter                                     Without Branch Repeater    With Branch Repeater
Office Productivity Apps                      43 Kbps                    31 Kbps
Internet                                      85 Kbps                    38 Kbps
Printing                                      553 – 593 Kbps             155 – 180 Kbps
Flash Video (with HDX redirection)            174 Kbps                   128 Kbps
Standard WMV Video (with HDX redirection)     464 Kbps                   148 Kbps
HD WMV Video (with HDX redirection)           1812 Kbps                  206 Kbps

NOTE: These are estimates – your mileage may vary!

One thing that should come across loud and clear from the table above is what a huge difference the Citrix Branch Repeater can make in your bandwidth utilization. And as we’ve always said: you only buy hardware once – bandwidth costs go on forever!
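If you want to turn those per-user numbers into a rough WAN sizing estimate, the arithmetic is simple enough to script. Here’s a minimal sketch in Python: the per-user figures come from the table above, but the user mix is a made-up example, so plug in your own counts.

```python
# Rough WAN sizing sketch based on the per-user estimates above (in Kbps).
# The user mix below is a made-up example -- substitute your own numbers.

PER_USER_KBPS = {                 # (without Branch Repeater, with Branch Repeater)
    "office":      (43, 31),
    "internet":    (85, 38),
    "printing":    (593, 180),    # using the high end of the printing range
    "flash_video": (174, 128),
}

user_mix = {"office": 60, "internet": 25, "printing": 10, "flash_video": 5}

def total_mbps(mix, with_branch_repeater=False):
    idx = 1 if with_branch_repeater else 0
    kbps = sum(PER_USER_KBPS[workload][idx] * users for workload, users in mix.items())
    return kbps / 1000.0  # Kbps -> Mbps

print(f"Without Branch Repeater: {total_mbps(user_mix):.1f} Mbps")
print(f"With Branch Repeater:    {total_mbps(user_mix, True):.1f} Mbps")
```

For the example mix of 100 users, that works out to roughly 11.5 Mbps without a Branch Repeater versus about 5.3 Mbps with one – which is exactly the point of the table.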

#9 – Not considering the user profile
It should go without saying that user profiles are important. But if it’s number 9 on the list of things people most often screw up, then apparently it doesn’t. In a nutshell: if you mess up the users’ profiles, the users won’t be happy – logon/logoff performance will suffer, and settings (including personalization) will be lost. If the users aren’t happy, they will be extremely vocal about it, and your VDI deployment will fail for lack of user buy-in and support. There are some great tools available for managing user profiles, including the Citrix Profile Manager and the AppSense Environment Manager. AppSense can even maintain a consistent user experience across platforms – making sure that the user profile is the same regardless of whether the user is logged onto a Windows XP system, a Windows 7 system, or a Windows Server 2008 R2-based XenApp server.

Do yourself a favor and make sure you understand what your users’ profile requirements are, then investigate the available tools and plan accordingly.

#8 – Lack of an application virtualization strategy
How many applications are actually deployed in your organization? Do you even know? Are the versions consistent across all users? Which users use which applications? You have to understand the application landscape before you can decide how you’re going to deploy applications in your new virtualized desktop environment.

You have three basic choices on how to deliver apps:

  1. You can install every application into a single desktop image. That means that whenever an application changes, you have to change your base image, and do regression testing to make sure that the new or changed application didn’t break something else.
  2. You can create multiple desktop images with different application sets in each image, depending on the needs of your different user groups. Now if an application changes, you may have to change and do regression testing on multiple images. It’s worth noting that many organizations have been taking this approach in managing PC desktop images for years…but part of the promise of desktop virtualization is that, if done correctly, you can break out of that cycle. But to do that, you must…
  3. Remove the applications from the desktop image and deliver them some other way: either by running them on a XenApp server, or by streaming the application using either the native XenApp streaming technology or Microsoft’s App-V (or some other streaming technology of your choice).

Ultimately, you may end up with a mixed approach, where some core applications that everyone uses are installed in the desktop image, and the rest are virtualized. But, once again, it’s critical to first understand the application landscape within your organization, and then plan (and test) carefully to determine the best application delivery approach.

#7 – Improper resource allocation
Quoting Dan: “Like me, many users only consume a fraction of their total potential desktop computing power, which makes desktop virtualization extremely attractive. By sharing the resources between all users, the overall amount of required resources is reduced. However, there is a fine line between maximizing the number of users a single server can support and providing the user with a good virtual desktop computing experience.”

This post provides some great guidelines on how to optimize the environment, depending on the underlying hypervisor you’re planning to use.
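To give a feel for the kind of math involved, here’s a back-of-the-envelope sketch of desktop density on a single host. Every number in it (core count, RAM per desktop, overcommit ratio) is an illustrative assumption rather than Citrix guidance; the point is simply that you plan to the lower of the CPU-bound and RAM-bound limits, and then validate with real user testing.

```python
# Back-of-the-envelope density estimate: how many virtual desktops a single
# host can support, taking the lower of the CPU-bound and RAM-bound limits.
# All of the numbers below are illustrative assumptions, not Citrix guidance.

host_physical_cores = 16
host_ram_gb = 96
hypervisor_ram_overhead_gb = 4

vcpus_per_desktop = 1
ram_per_desktop_gb = 1.5          # e.g., a Windows 7 desktop with some headroom
vcpu_to_core_ratio = 6            # how far you're willing to overcommit CPU

cpu_bound = (host_physical_cores * vcpu_to_core_ratio) // vcpus_per_desktop
ram_bound = int((host_ram_gb - hypervisor_ram_overhead_gb) / ram_per_desktop_gb)

print(f"CPU-bound limit: {cpu_bound} desktops")
print(f"RAM-bound limit: {ram_bound} desktops")
print(f"Plan for roughly {min(cpu_bound, ram_bound)} desktops per host")
```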

#6 – Protection from Anti-Virus (as well as protection from viruses)
If you are provisioning desktops from a shared read-only image (e.g., Citrix Provisioning Services), then any virus infection will go away when the virtual PC is rebooted, because changes to the base image – including the virus – are discarded by design. But you still need AV protection, because the virus can use the interval between infection and reboot to propagate itself to other systems. The gotcha here is that the AV software itself can cause serious performance issues if it is not configured properly. Dan provides a great outline in this post for how to approach AV protection in a virtual desktop environment.

#5 – Managing the incoming storm
In most organizations, the majority of users arrive and start logging into their desktops at approximately the same time. What you don’t want is dozens, or hundreds, of virtual desktops trying to start up simultaneously, because it will hammer your virtualization environment. There are some very specific things you need to do to survive the “boot storm,” and Dan outlines them in this post.

#4 – Not optimizing the virtual desktop image
Dan provides several tips on things you should do to optimize your desktop image for the virtual environment. He also has specific sections on his blog that deal with recommended optimizations for Windows 7.

#3 – Not spending your cache wisely
Specifically, we’re talking about configuring the system cache on your Provisioning Server appropriately, depending on its operating system, the amount of RAM it has, and the type of storage repository you’re using for your vDisk(s).

#2 – Using VDI defaults
Default settings are great for getting a small Proof of Concept up and running quickly. But as you scale up your VDI environment, there are a number of things you should do. If you ignore them, performance will suffer, which means that users will be upset, which means that your VDI project is more likely to fail.

#1 – Improper storage design
This shouldn’t be a surprise, because we’ve written about this before, and even linked to a Citrix TV video of Dan discussing this very thing as part of developing a reference architecture for an SMB (under 500 desktops) deployment. We’re talking here about how to calculate the “functional IOPS” available from a given storage system, and what that means in relation to the number of IOPS a typical user will need at boot time, at logon time, during working hours (which will vary depending on the users themselves), and at logoff time.
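For the curious, here’s a commonly used approximation of functional IOPS (we’re not claiming it’s Dan’s exact formula, so treat it as illustrative): the raw IOPS the spindles can deliver are discounted by the RAID write penalty, weighted by the read/write mix of the workload. The disk count, per-disk IOPS, and the write-heavy VDI mix in the example are all assumed numbers.

```python
# A commonly used approximation of "functional" (front-end) IOPS -- not
# necessarily the exact formula Dan uses, so treat it as illustrative.
# Raw spindle IOPS are discounted by the RAID write penalty, weighted by
# the read/write mix of the workload.

RAID_WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(disks, iops_per_disk, raid_level, write_pct):
    raw = disks * iops_per_disk
    read_pct = 1.0 - write_pct
    return raw * read_pct + (raw * write_pct) / RAID_WRITE_PENALTY[raid_level]

# Example: 16 x 15K SAS spindles (~180 IOPS each), RAID10, and a VDI steady-state
# workload that is roughly 80% writes -- all assumed numbers.
usable = functional_iops(disks=16, iops_per_disk=180, raid_level="RAID10", write_pct=0.8)
print(f"Functional IOPS: {usable:.0f}")
print(f"Users supported at ~10 steady-state IOPS each: {usable / 10:.0f}")
```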

Just to round things out, Dan also tossed in a few “honorable mentions,” like the improper use of NIC teaming or an unoptimized NIC configuration in Provisioning Servers, trying to provision images to target hardware with mismatched device drivers (generally not an issue if you’re provisioning into a virtual environment), and failing to have a good business reason for launching a VDI project in the first place.

Again, this post was intended to whet your appetite by giving you enough information that you’ll want to read through Dan’s individual “top ten” posts. We would heartily recommend that you do that – you’ll probably learn a lot. (We certainly did!)

How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough to have seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. It didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In all of its incarnations, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their operating systems to run it on and their software to manage it. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDNet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July, 2008. Their Northern Virginia data center was affected by a lightning strike in July of 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December of 2009.
  • Gmail experienced a system-wide outage in August, 2008, then was down again for over 1 ½ hours in September, 2009.
  • The Microsoft/Danger outage in October, 2009, caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January, 2010, failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.

DataCore Lowers Prices on SANsymphony-V

Back at the end of January, DataCore announced the availability of a new product called SANsymphony-V. This product replaces SANmelody in their product line, and is the first step in the eventual convergence of SANmelody and SANsymphony into a single product with a common user interface.

Note: In case you’re not familiar with DataCore, they make software that will turn an off-the-shelf Windows server into an iSCSI SAN node (Fibre Channel is optional) with all the bells and whistles you would expect from a modern SAN product. You can read more about them on our DataCore page.

We’ve been playing with SANsymphony-V in our engineering lab, and our technical team is impressed with both the functionality and the new user interface – but that’s another post for another day. This post is focused on the packaging and pricing of SANsymphony-V, which in many cases can come in significantly below the old SANmelody pricing.

First, we need to recap the old SANmelody pricing model. SANmelody nodes were priced according to the maximum amount of raw capacity that node could manage. The full-featured HA/DR product could be licensed for 0.5 TB, 1 TB, 2 TB, 3 TB, 4 TB, 8 TB, 16 TB, or 32 TB. So, for example, if you wanted 4 TB of mirrored storage (two 4 TB nodes in an HA pair), you would purchase two 4 TB licenses. At MSRP, including 1 year of software maintenance, this would have cost you a total of $17,496. But what if you had another 2 TB of archival data that you wanted available, but didn’t necessarily need it mirrored between your two nodes? Then you would want 4 TB in one node, and 6 TB in the other node. However, since there was no 6 TB license, you’d have to buy an 8 TB license. Now your total cost is up to $21,246.

SANsymphony-V introduced the concept of separate node licenses and capacity licenses. The node license is based on the maximum amount of raw storage that can exist in the storage pool to which that node belongs. The increments are:

  • “VL1” – Up to 5 TB – includes 1 TB of capacity per node (more on this in a moment)
  • “VL2” – Up to 16 TB – includes 2 TB of capacity per node
  • “VL3” – Up to 100 TB – includes 8 TB of capacity per node
  • “VL4” – Up to 256 TB – includes 40 TB of capacity per node
  • “VL5” – More than 256 TB – includes 120 TB of capacity per node

In my example above, with 4 TB of mirrored storage and 2 TB of non-mirrored storage, there is a total of 10 TB of storage in the storage pool: (4 x 2) + 2 = 10. Therefore, each node needs a “VL2” node license, since the total storage in the pool is more than 5 TB but less than 16 TB. We also need a total of 10 TB of capacity licensing. We’ve already got 4 TB, since 2 TB of capacity were included with each node license. So we need to buy an additional six 1 TB capacity licenses. At MSRP, this would cost a total of $14,850 – substantially less than the old SANmelody price.

The cool thing is, once we have our two VL2 nodes and our 10 TB of total capacity licensing, DataCore doesn’t care how that capacity is allocated between the nodes. We can have 5 TB of mirrored storage, we can have 4 TB in one node and 6 TB in the other, or we can have 3 TB in one node and 7 TB in the other. We can divide it up any way we want to.

If we now want to add asynchronous replication to a third SAN node that’s off-site (e.g., in our DR site), that SAN node is considered a separate “pool,” so its licensing would be based on how much capacity we need at our DR site. If we only cared about replicating 4 TB to our DR site, then the DR node would only need a VL1 node license and a total of 4 TB of capacity licensing (i.e., a VL1 license + three additional 1 TB capacity licenses, since 1 TB of capacity is included with the VL1 license).
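If you like seeing the licensing math spelled out, here’s a small sketch of the calculation described above. The tier table reflects the node-license increments listed in this post; the function itself and the example pools are just our illustration, so double-check any real quote with DataCore.

```python
# A small sketch of the SANsymphony-V licensing math described above.
# The tier table reflects the node-license increments listed in this post;
# the function and the example pools are just illustration.

NODE_TIERS = [   # (tier, max pool size in TB, capacity included per node in TB)
    ("VL1", 5, 1),
    ("VL2", 16, 2),
    ("VL3", 100, 8),
    ("VL4", 256, 40),
    ("VL5", float("inf"), 120),
]

def licenses_for_pool(total_pool_tb, nodes):
    """Pick the node-license tier for a pool and the extra 1 TB capacity licenses needed."""
    for tier, max_tb, included_tb in NODE_TIERS:
        if total_pool_tb <= max_tb:
            extra_capacity = max(0, total_pool_tb - included_tb * nodes)
            return tier, extra_capacity

# The example from this post: 4 TB mirrored (4 TB on each of two nodes = 8 TB raw)
# plus 2 TB non-mirrored = 10 TB in the pool.
print(licenses_for_pool(total_pool_tb=10, nodes=2))   # -> ('VL2', 6)

# The DR example: a single off-site node replicating 4 TB is its own pool.
print(licenses_for_pool(total_pool_tb=4, nodes=1))    # -> ('VL1', 3)
```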

At this point, no new SANmelody licenses are being sold – although, if you need to, you can still upgrade an existing SANmelody license to handle more storage. If you’re an existing SANmelody customer with current software maintenance, rest assured that you will be entitled to upgrade to SANsymphony-V as a benefit of your software maintenance coverage. However, there will not be a mechanism that allows for an easy in-place upgrade until sometime in Q3. In the meantime, an upgrade from SANmelody to SANsymphony-V would entail a complete rebuild from the ground up. (Which we would be delighted to do for you if you just can’t wait for the new features.)

Machine Creation Services and KMS

We’ve written extensively here about the challenges of using Citrix Provisioning Services to provision VMs that require key activation (i.e., Vista, Win7, and Server 2008/2008 R2). We publicly rejoiced when the news broke that PVS 5.6 SP1 supported both KMS and MAK activation.

But now, with the advent of XenDesktop 5, there is a new way to provision desktops: Machine Creation Services (“MCS”). As a public service to those who follow this blog, I thought I’d share Citrix’s official statement regarding MCS and KMS activation:

MCS does not support or work with KMS based Microsoft Windows 7 activation by default, however the following workaround has been provided and will be supported by Citrix Support should an issue arise.

For details on the workaround, click through the link above to the KB article.

It does not appear that there is a workaround that will allow MCS to be used with MAK activation, and I saw a comment by a Citrix employee on a forum post that indicated that there were “no plans to support it in the near future.” So…MCS with KMS, yes; MCS with MAK, no.

Not having MAK support probably isn’t a big deal. The main reason to choose MAK activation over KMS is having fewer than 25 desktops to activate (25 is the minimum client count KMS requires), and if you have fewer than 25 virtual desktops, you may as well stick with 1-to-1 images instead of messing around with provisioning anyway. But we thought you should know.

You’re welcome.

The Future Is Now

I recently discovered a video on “Citrix TV” that does as good a job as I’ve ever seen in presenting the big picture of desktop and application virtualization using XenApp and XenDesktop (which, as we’ve said before, includes XenApp now). The entire video is just over 17 minutes long, which is longer than most videos we’ve posted here (I prefer to keep them under 5 minutes or so), but in that 17 minutes, you’re going to see:

  • How easy it is for a user to install the Citrix Receiver
  • Self-service application delivery
  • Smooth roaming (from a PC to a MacBook)
  • Application streaming for off-line use
  • A XenDesktop virtual desktop following the user from an HP Thin Client…
    • …to an iPad…
    • …as the iPad switches to 3G operation aboard a commuter train…
    • …to a Mac in the home office…
    • …to a Windows multi-touch PC in the kitchen…
    • …to an iPhone on the golf course.
  • And a demo of XenClient to wrap things up.

I remember, a few years ago, sitting through the keynote address at a Citrix conference and watching a similar video on where the technology was headed. But this isn’t smoke and mirrors, and it isn’t a presentation of some future, yet-to-be-released technology. All of this functionality is available now, and it’s all included in a single license model. The future is here. Now.

I think you’ll find that it’s 17 minutes well spent.