Category Archives: Daas

Cloud-Based VDI vs. DaaS - Is There a Difference?

With nearly all new technologies in the IT space comes confusion over terminology. Some of the confusion arises simply because the technology is new, and we’re all trying to understand how it works and how – or whether – it fits the needs of our businesses. Unfortunately, some of it is caused by technology vendors who want to label their products in a way that associates them with whatever is perceived as new, cool, innovative, or cutting-edge. Today, we’re seeing that happen with terms like “cloud,” “DaaS,” and “VDI.”

“VDI” stands for Virtual Desktop Infrastructure. Taken literally, it’s an infrastructure that delivers virtual desktops to users. What is a virtual desktop? It is a (usually Windows) desktop computing environment where the user interface is abstracted and delivered to a remote user over a network using some kind of remote display protocol such as ICA, RDP, or PCoIP. That desktop computing environment is most often virtualized using a platform such as VMware, Hyper-V, or XenServer, but could also be a blade PC or even an ordinary desktop PC. If the virtual desktop is delivered by a service provider (such as VirtualQube) for a monthly subscription fee, it is often referred to as “Desktop as a Service,” or “DaaS.”

There are a number of ways to deliver a virtual desktop to a user:

  • Run multiple, individual instances of a desktop operating system (e.g., Windows 7 or Windows 8) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. Citrix XenDesktop, VMware View, and Citrix VDI-in-a-Box are all products that enable this model.
  • Run multiple, individual instances of a server operating system (e.g., Windows Server 2008 R2 or 2012 R2) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. In such a case, a policy pack can be applied that will make the 2008 R2 desktop look like Windows 7, and the 2012 R2 desktop look like Windows 8. In a moment we’ll discuss why you might want to do that.
  • Run multiple, individual desktops on a single, shared server operating system, using Microsoft Remote Desktop Services (with or without added functionality from products such as Citrix XenApp). This “remote session host,” to use the Microsoft term, can be a virtual server or a physical server. Once again, the desktop can be made to look like a Windows 7 or Windows 8 desktop even though it’s really a server OS.
  • Use a brokering service such as XenDesktop to allow remote users to connect to blade PCs in a data center, or even to connect to their own desktop PCs when they’re out of the office.
  • Use client-side virtualization to deliver a company-managed desktop OS instance that will run inside a virtualized “sandbox” on a client PC, such as is the case with Citrix XenClient, or the Citrix Desktop Player for Macintosh. In this case, the virtual desktop can be cached on the local device’s hard disk so it can continue to be accessed after the client device is disconnected from the network.

Although any of the above approaches could be lumped into the “VDI” category, the common usage that seems to be emerging is to use the term “VDI” to refer specifically to approaches that deliver an individual operating system instance (desktop or server) to each user. From a service provider perspective, we would characterize that as cloud-based VDI. So, to answer the question we posed in the title of this post, cloud-based VDI is one variant of DaaS, but not all DaaS is delivered using cloud-based VDI – and for a good reason.

Microsoft has chosen not to put its desktop operating systems on the Service Provider License Agreement (“SPLA”). That means there is no legal way for a service provider such as VirtualQube to provide a customer with a true Windows 7 or Windows 8 desktop and charge by the month for it. The only way that can be done is for the customer to purchase all the licenses that would be required for their own on-site VDI deployment (and we’ve written extensively about what licenses those are), and provide those licenses to the service provider, which must then provision dedicated hardware for that customer. That hardware cannot be used to provide any services to any other customer. (Anyone who tells you that there’s any other way to do this is either not telling you the truth, or is violating the Microsoft SPLA!)

Unfortunately, the requirement for dedicated hardware will, in many cases, make the solution unaffordable. Citrix recently published the results of a survey of Citrix Service Providers. They received responses from 718 service providers in 25 countries. 70% of them said that their average customer had fewer than 100 employees. 40% said their average customer had fewer than 50 employees. It is simply not cost-effective for a service provider to dedicate hardware to a customer of that size, and unlikely that it could be done at a price the customer would be willing to pay.

On the other hand, both Microsoft and Citrix have clean, easy-to-understand license models for Remote Desktop Services and XenApp, which is the primary reason why nearly all service providers, including VirtualQube, use server-hosted desktops as their primary DaaS delivery method. We all leverage the policy packs that can make a Server 2008 R2 desktop look like a Windows 7 desktop, and a 2012 R2 desktop look like a Windows 8 desktop, but you’re really not getting Windows 7 or Windows 8, and Microsoft is starting to crack down on service providers who fail to make that clear.

Unfortunately, there are still some applications out there that will not run well – or will not run at all – in a remote session hosted environment. There are a number of reasons for this:

  • Some applications check for the OS version as part of their installation routines, and simply abort the installation if you’re trying to install them on a server OS.
  • Some applications will not run on a 64-bit platform – and Server 2008 R2 and 2012 R2 are both exclusively 64-bit platforms.
  • Some applications do not follow proper programming conventions, and insist on doing things like writing temp files to a hard-coded path like C:\temp. If you have multiple users running that application on the same server via Remote Desktop Services, and each instance of the application is trying to write to the same temp file, serious issues will result. Sometimes we can use application isolation techniques to redirect the writes to a user-specific path, but sometimes we can’t. (A sketch of this redirection idea follows this list.)
  • Some applications are so demanding in terms of processor and RAM requirements that anyone else trying to run applications on the same server will experience degraded performance.
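
To make the temp-file point concrete, here is a minimal Python sketch. It is purely illustrative: the paths and function names are hypothetical, and real application isolation is done by the platform (file system and registry redirection) rather than by changing the application. It simply shows why a shared hard-coded path collides across concurrent sessions, and how redirecting writes to a per-user path avoids the collision.

```python
import getpass
import os

HARD_CODED_TEMP = r"C:\temp\app_scratch.tmp"  # the path a badly behaved app insists on using


def write_scratch_shared(data: str) -> str:
    """Every user session writes to the same file, so concurrent sessions clobber each other."""
    os.makedirs(os.path.dirname(HARD_CODED_TEMP), exist_ok=True)
    with open(HARD_CODED_TEMP, "w") as f:
        f.write(data)
    return HARD_CODED_TEMP


def write_scratch_isolated(data: str) -> str:
    """What an isolation layer effectively does: silently redirect the write to a per-user path."""
    per_user_dir = os.path.join(r"C:\temp", getpass.getuser())  # hypothetical redirect target
    os.makedirs(per_user_dir, exist_ok=True)
    per_user_path = os.path.join(per_user_dir, "app_scratch.tmp")
    with open(per_user_path, "w") as f:
        f.write(data)
    return per_user_path
```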

There’s not much that a service provider can do to address the first two of these issues, short of going the dedicated-hardware route (for those customers who are large enough to afford it) and provisioning true Windows 7 or Windows 8 desktops. But there is a creative solution for the third and fourth issues, and that’s to use VDI technology to provision individual instances of Server 2008 R2 or Server 2012 R2 per user. From the licensing perspective, it’s no different than supporting multiple users on a remote session host. Once the service provider has licensed a virtualization host for Windows Datacenter edition, there is no limit to the number of Windows Server instances that can be run on that host – you can keep spinning them up until you don’t like the performance anymore. And the Citrix and Microsoft user licensing is the same whether the user has his/her own private server instance, or is sharing the server OS instance with several other users via Remote Desktop Services.

On the positive side, this allows an individual user to be guaranteed a specified amount of CPU and RAM to handle those resource-intensive applications, avoids “noisy neighbor” issues where a single user impacts the performance of other users who happen to be sharing the same Remote Desktop Server, and allows support of applications that just don’t want to run in a multi-user environment. It’s even possible to give the user the ability to install his/her own applications – this may be risky in that the user could break his/her own virtual server instance, but at least the user can’t affect anyone else.

On the negative side, this is a more expensive alternative simply because it is a less efficient way to use the underlying virtualization host. Our tests indicate that we can probably support an average of 75 individual virtual instances of Server 2008 or Server 2012 for VDI on a dual-processor virtualization host with, say, 320 GB or so of RAM. We can support 200 – 300 concurrent users on the same hardware by running multiple XenApp server instances on it rather than an OS instance per user.
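
A rough way to see that cost difference is to spread a host’s monthly cost across the two densities. In the sketch below, only the density figures come from our tests; the host cost is a hypothetical placeholder, not an actual price.

```python
# Hypothetical monthly cost for a dual-processor host with roughly 320 GB of RAM.
# The cost figure is a placeholder for illustration; the densities are from the text above.
host_monthly_cost = 3000.0

vdi_users_per_host = 75    # one Server 2008 R2 / 2012 R2 instance per user
rds_users_per_host = 250   # midpoint of 200 - 300 concurrent users on shared XenApp servers

print(f"Per-user host cost, cloud-based VDI:     ${host_monthly_cost / vdi_users_per_host:.2f}/month")
print(f"Per-user host cost, remote session host: ${host_monthly_cost / rds_users_per_host:.2f}/month")
print(f"Density advantage of session hosting:    {rds_users_per_host / vdi_users_per_host:.1f}x")
```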

That said, we believe there are times when the positives of cloud-based VDI are worth the extra money, which is why we offer both cloud-based VDI and remote session hosted DaaS powered by Remote Desktop Services and XenApp.

Why Desktop as a Service?

This morning, I ran across an interesting article over on techtarget.com talking about the advantages of the cloud-hosted desktop model. Among other things, it listed some of the reasons why businesses are deploying DaaS, which align quite well with what we’ve experienced:

  • IaaS - Businesses are finding that as they move their data and server applications into the cloud, the user experience can degrade, because they’re moving farther and farther away from the clients and users who access them. That’s reminiscent of our post a few months ago about the concept of “Data Gravity.” In that post, we made reference to the research by Jim Gray of Microsoft, who concluded that, compared to the cost of moving bytes around, everything else is essentially free. Our contention is that your application execution platform should be wherever your data is. If your data is in the cloud, it just makes sense to have a cloud-hosted desktop to run the applications that access that data.
  • Seasonality - Businesses whose employee count varies significantly over the course of the year may find that the pay-as-you-go model of DaaS makes more sense than building an on-site infrastructure that will handle the seasonal peak. (A back-of-the-envelope comparison follows this list.)
  • DR/BC - This can be addressed two ways: First, simply having your data and applications in a state-of-the-art data center gives you protection against localized disasters at your office location. If your cloud hosting provider offers data replication to geo-redundant data centers, that’s even better, because you’re also protected against a catastrophic failure of the data center itself. Second, you can replicate the data (and, optionally, even replicate server images) from your on-site infrastructure to a cloud storage repository, and have your hosting provider provision servers and desktops on demand in the event of a disaster - or, although this would cost a bit more, have them already provisioned so they could simply be turned on.
  • Cost - techtarget.com points out that DaaS allows businesses to gain the benefits of virtual desktops without having to acquire the in-house knowledge and skills necessary to deploy VDI themselves. While this is a true statement, it may be difficult to build a reliable ROI justification around it. We’ve found that it often is possible to see a positive ROI if you compare the cost of doing a “forklift upgrade” of servers and server software to the cost of simply moving everything to the cloud and never buying servers or server software again.
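
To put a number on the Seasonality point above, here is a back-of-the-envelope comparison. Every figure in it is a hypothetical placeholder; the only thing it illustrates is that a pay-as-you-go bill tracks headcount month by month, while on-site capacity has to be sized (and paid for) at the seasonal peak.

```python
# Hypothetical seasonal headcount and per-user DaaS rate (placeholders for illustration only).
monthly_headcount = [40, 40, 45, 50, 60, 80, 100, 100, 80, 60, 45, 40]
daas_rate_per_user = 100.0                       # $/user/month, placeholder
onsite_annual_cost_at_peak = 100 * 100.0 * 12    # capacity sized for the 100-user peak

daas_annual_cost = sum(users * daas_rate_per_user for users in monthly_headcount)

print(f"Pay-as-you-go DaaS:           ${daas_annual_cost:,.0f}/year")
print(f"On-site, sized for the peak:  ${onsite_annual_cost_at_peak:,.0f}/year (same per-user rate assumed)")
```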

It’s worth taking a few minutes to read the entire article on techtarget.com (note - registration may be required to access some content). And, of course, it’s always nice to know we’re not the only ones who think there are some compelling advantages to cloud-hosted desktops!

So You Want to Be a Hosting Provider? (Part 3)

In Part 1 of this series, we discussed the options available to aspiring hosting providers:

  1. Buy hardware and build it yourself.
  2. Rent hardware and build it yourself.
  3. Rent VMs (e.g., Amazon, Azure) and build it yourself.
  4. Partner with someone who has already built it.

We went on to address the costs and other considerations of buying or renting hardware.

Then, in Part 2, we discussed using the Amazon EC2 cloud, with cost estimates based on the pricing tool that Citrix provides as part of the Citrix Service Provider program. We stressed that Amazon has built a great platform for building a hosting infrastructure for thousands of users, provided that you’ve got the cash up front to pay for reserved instances, and that your VMs only need to run for an average of 14 hours per day.

Our approach is a little different.

First, we believe that VARs and MSPs need a platform that will do an excellent job for their smaller customers – particularly those who do not have a large staff of IT professionals, or those who are using what AMI Partners, in a study they did on behalf of Microsoft, referred to as an “Involuntary IT Manager” (IITM). These are the people who end up managing their organizations’ IT infrastructures because they have an interest in technology, or perhaps because they just happen to be better at it than anyone else in the organization, but who have other job responsibilities unrelated to IT. Often these individuals are senior managers, partners, or owners, and in nearly all cases could bring more value to the organization if they could spend 100% of their time doing what they were originally hired to do. Getting rid of on-site servers and moving data and applications to a private hosted cloud will allow these people to regain that lost productivity.

Second, we believe that most of these customers are going to need access to their cloud infrastructure on a 24/7 basis. Smaller companies tend to be headed by entrepreneurial people who don’t work traditional hours, and who tend to hire managers who also don’t work traditional hours. Turning their systems off for 10 hours per day to save on run-time costs simply isn’t going to be acceptable.

Third, we believe that the best mix of security and cost-effectiveness for most customers is to have a multi-tenant Active Directory, Exchange, and SharePoint infrastructure, but to dedicate one or more XenApp server(s) to each customer, along with a file server and whatever other application servers they may require (e.g., SQL Server, accounting server, etc.). This is done not only for security reasons, but to avoid “noisy neighbor” problems from poorly behaved applications (or users).

In VirtualQube’s multi-tenant hosting infrastructure, each customer is a separate Organizational Unit (OU) in our Active Directory. Each customer’s servers are in a separate OU, and are isolated on a customer-specific VLAN. Access from the public Internet is secured with a common WatchGuard perimeter firewall and a Citrix NetScaler SSL/VPN appliance. Multi-tenant customers who need a permanent VPN connection to one or more office locations can have their own Internet port and their own firewall.
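
One way to picture that isolation model is as a per-tenant record. The sketch below is shorthand for illustration only, not our actual provisioning schema; the field names and sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Tenant:
    """Illustrative model of the per-customer isolation described above (not a real schema)."""
    name: str
    ad_organizational_unit: str      # each customer is a separate OU in Active Directory
    vlan_id: int                     # customer servers are isolated on a customer-specific VLAN
    dedicated_servers: List[str] = field(default_factory=list)  # XenApp, file, and app servers
    shared_services: List[str] = field(default_factory=lambda: ["Exchange", "SharePoint"])
    dedicated_firewall: bool = False  # only for tenants needing a permanent site-to-site VPN


example = Tenant(
    name="Example Customer",
    ad_organizational_unit="OU=ExampleCustomer,OU=Tenants,DC=cloud,DC=local",
    vlan_id=110,
    dedicated_servers=["XA-EXAMPLE-01", "FS-EXAMPLE-01"],
)
print(example)
```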

We also learned early on that some customers prefer not to participate in any kind of multi-tenant infrastructure, and others are prevented from doing so by security and compliance regulations. To accommodate these customers, we provision completely isolated environments with their own Domain Controllers, Exchange Servers, etc. A customer that does not participate in our multi-tenant infrastructure always gets a customer-specific firewall and NetScaler, and customer-specific Domain Controllers. At their option, they can still use our multi-tenant Exchange Server, or have their own.

Finally, we believe that many VARs and MSPs will benefit from prescriptive guidance for not just how to build a hosting infrastructure, but how to sell it. That’s why our partners have access to a document template library that covers how to do the necessary discovery to properly scope a cloud project, how to determine what cloud resources will be required and how to price out a customized private hosted cloud environment, how to position the solution to the customer, how to write the final proposal, how to handle customer data migration, and much, much more.

We believe that partnering with VirtualQube makes sense for VARs and MSPs because that’s the world we came from. Our hosting platform was built by a VAR/MSP for VARs/MSPs, and we used every bit of the experience we gained from twenty years of working with Citrix technology. That’s the VirtualQube difference.

So You Want to Be a Hosting Provider? (Part 2)

In Part 1 of this series, we talked about the options available to prospective hosting providers, and specifically about the costs of purchasing your own equipment. In this post we’re going to drill down into the costs of building a Citrix Service Provider hosting infrastructure on Amazon.

Amazon has some great offerings, and Citrix has spent a lot of time lately talking about using the EC2 infrastructure as a platform for Citrix Service Providers. There was an entire breakout session devoted to this subject at the 2014 Citrix Summit conference in Orlando. Anyone who signs up as a Citrix Service Provider can get access to a spreadsheet that allows you to input various assumptions about your infrastructure (e.g., number of users to support, assumed number of users per XenApp server, number of tenants in your multi-tenant environment, etc.) and calculates how many of what kind of compute instances you will need as well as the projected costs (annualized over three years). At first glance, these costs may look fairly attractive. But there are a number of assumptions built into the cost model that should make any aspiring service provider think twice:

  • It assumes that you’ve got enough users lined up that you can get the economies of scale from building an infrastructure for several hundred, if not thousands, of users.
  • It assumes that you’ve got enough free cash to pay up front for 3-year reserved instances of all the servers you’ll be provisioning.
  • It assumes that, on average, your servers will need to run only 14 hours per day. If your customers expect to be able to work when they want to work, day or night, this will be a problem.
  • It assumes that you will be able to support an average of 150 concurrent users on a XenApp server that’s running on a “Cluster Compute Eight Extra Large” instance. Anyone who has worked with XenApp knows that these assumptions must be taken with a very large grain of salt, as the number of concurrent users you can support on a XenApp server is highly dependent on the application set, and doesn’t necessarily scale linearly as you throw more processors at it.

If all of these assumptions are correct, the Citrix-provided spreadsheet says that you can build an EC2 infrastructure that will support 1,000 concurrent users (assuming 10 customers with 100 users each for the multi-tenancy calculation) for an average cost/user/month of $45.94 over a three year period. But that number is misleading, because you have to come up with $377,730 up front to reserve your EC2 instances for three years. So your first-year cost is not $551,270, but $803,081 – that’s actually $66.92/user/month for the first year, and then it drops to $35.45/user/month in years two and three, then back to $66.92/user/month in the fourth year, because you’ll have to come up with the reservation fees again at the beginning of year four.
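
The year-by-year math behind those figures can be reproduced from the numbers in the preceding paragraph (1,000 users, $45.94/user/month averaged over 36 months, and $377,730 in up-front reservation fees). The sketch below just makes the arithmetic explicit; small differences from the quoted figures are rounding.

```python
users = 1000
avg_rate = 45.94                  # $/user/month averaged over the 3-year term
upfront_reservations = 377_730.0  # paid at the start of the term

total_3yr = avg_rate * users * 36             # roughly $1,653,840
ongoing_per_month = (total_3yr - upfront_reservations) / 36

year1_cost = upfront_reservations + ongoing_per_month * 12
year2_cost = ongoing_per_month * 12

print(f"Average annual cost:      ${total_3yr / 3:,.0f}")                      # text quotes ~$551,270
print(f"Year 1 cash outlay:       ${year1_cost:,.0f}")                         # text quotes $803,081
print(f"Year 1 effective rate:    ${year1_cost / users / 12:.2f}/user/month")  # text quotes $66.92
print(f"Years 2-3 effective rate: ${year2_cost / users / 12:.2f}/user/month")  # text quotes $35.45
```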

There are a couple of other things about this model that are troublesome:

  1. By default, it assumes only a single file server for 1,000 users, meaning that you would administer security strictly via AD permissions. It also means that if anything happens to that file server, all of your tenants are impacted. If we instead provision ten file servers, so that each of the ten tenants has a dedicated file server, it bumps the average cost by roughly $5/user/month.
  2. If your user count is 100 users per tenant, but you’re expecting to support 150 users per XenApp server, you’ll obviously have users from multiple tenant organizations running concurrently on the same XenApp server. This, in turn, means that if a user from one tenant organization does something that impacts XenApp performance – e.g., launches the Production Planning Spreadsheet from Hell that pegs the processor for five minutes recalculating the entire spreadsheet whenever a single cell is changed – it will affect more than just that tenant organization. (And, yes, I know that there are ways to protect against runaway processor utilization - but that’s still something else you have to set up and manage, and, depending on how you approach the problem, potentially another licensing component you have to pay for.) If we assume only 100 users per XenApp server, so that we can dedicate one XenApp server to each tenant organization, it bumps the average cost by roughly another $1.50/user/month.

“But wait,” you might say, “not many VARs/MSPs will want to – or be able to – build an infrastructure for 1,000 users right off the bat.” And you would be correct. So let’s scale it back a bit. Let’s look at an infrastructure that’s built for 250 users, and let’s assume that breaks down into five tenants, with 50 users each. Let’s further assume, for reasons touched on above, that each customer will get a dedicated file server, and one dedicated XenApp server. We’ll dial those XenApp servers back to “High CPU Extra Large” instances, which have 4 vCPUs and 7.5 GB of RAM each. Your average cost over three years, still assuming 3-year reserved instances, jumps to $168.28/user/month, and you must still be prepared to write a check for just over $350,000 for the 3-year reservation fees. Why the big jump? Primarily because there is a minimum amount of “overhead” in the server resources required simply to manage the Citrix infrastructure, the multi-tenant Active Directory and Exchange infrastructure, etc., and that overhead is now spread across fewer users.
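
The jump from $45.94 to $168.28 per user per month is mostly a matter of fixed overhead divided by fewer users. The sketch below illustrates that effect with placeholder figures for the shared management layer and the per-tenant servers; it is not calibrated to the Citrix spreadsheet, and only the user counts come from the scenarios above.

```python
def per_user_monthly(fixed_overhead: float, per_tenant: float,
                     tenants: int, users_per_tenant: int) -> float:
    """Spread shared-infrastructure overhead plus per-tenant servers across all users."""
    total_monthly = fixed_overhead + per_tenant * tenants
    return total_monthly / (tenants * users_per_tenant)


# Placeholder monthly figures (illustrative only) for the shared Citrix / AD / Exchange
# management layer, and for one dedicated XenApp server plus one file server per tenant.
FIXED_OVERHEAD = 20_000.0
PER_TENANT = 2_500.0

print(f"1,000 users (10 tenants x 100): ${per_user_monthly(FIXED_OVERHEAD, PER_TENANT, 10, 100):.2f}/user/month")
print(f"  250 users (5 tenants x 50):   ${per_user_monthly(FIXED_OVERHEAD, PER_TENANT, 5, 50):.2f}/user/month")
```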

Now consider that all of the prices we’ve been looking at so far cover only the compute and storage resources. We haven’t begun to factor in the monthly cost of Citrix or Microsoft Service Provider licensing. In round numbers, that will add another $25/user/month or so to your cost, including MS Office. Nor have we accounted for the possibility that some of your users may need additional SPLA applications, such as Visio or Project, or that some tenants may require a SQL server or some other additional application server. Nor have we accounted for the possibility that some of your tenants may require access to the infrastructure on a 24×7 basis, meaning that their servers have to run 24 hours per day, not just 14.
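
Folding just that licensing adder into the 250-user scenario gives a rough all-in figure, before any additional SPLA applications, extra application servers, or 24×7 run time:

```python
compute_per_user = 168.28  # 250-user scenario above: compute and storage only
spla_per_user = 25.00      # rough Citrix + Microsoft SPLA adder, including MS Office

print(f"Approximate all-in cost: ${compute_per_user + spla_per_user:.2f}/user/month")
```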

This is why, at the aforementioned session at the 2014 Citrix Summit conference in Orlando, the numbers presented were challenged by several people during the ensuing Q&A, the general feedback being that they simply didn’t work in the real world.

So let’s quickly review where we are: As stated in Part 1 of this series, an aspiring hosting provider has four basic choices:

  1. Buy hardware and build it yourself. This was discussed in Part 1.
  2. Rent hardware (e.g., Rackspace) and build it yourself. This was not covered in detail, but once you’ve developed the list of equipment for option #1, it’s easy enough to get quotes for option #2.
  3. Rent VMs, as we have discussed above, and build it yourself.
  4. Partner with someone who has already built the required infrastructure.

We would respectfully submit that, for most VARs/MSPs, option #4 makes the most sense. But we’re biased, because (full disclosure again) VirtualQube has already built the infrastructure, and we know that our costs are significantly less than it would take to replicate our infrastructure on EC2. And we’re looking for some good partners.

In Part 3, we’ll go into what we believe an infrastructure needs to look like for a DaaS hosting provider that’s targeting the SMB market, so stay tuned.

So You Want to Be a Hosting Provider? (Part 1)

If you’re a VAR or MSP, you’ve been hearing voices from all quarters telling you that you’ve got to get into cloud services:

  • The 451 Research Group has estimated that, by 2015, the market for all kinds of “virtual desktops” will be as large as $5.6 Billion. IDC estimates that the portion of these virtual desktops sourced solely from the cloud could be over $600 Million by 2016, and growing at more than 84% annually.
  • John Ross, technology consultant and former CTO of GreenPages Technology Solutions, was quoted in a crn.com article as saying, “This is the last time we are going to see hardware purchases through resellers for many, many years.” He predicts that 50% of the current crop of resellers will either be gone or have changed to a service provider model by 2018.
  • The same article cited research by UBM Tech Channel (the parent company of CRN) which indicated that “vintage VARs” that stay with the current on-premises model will have to add at least 50% more customers in the next few years to derive the same amount of sales, which will require them to increase their marketing budgets by an order of magnitude.
  • Dave Rice, co-founder and CTO of TrueCloud in Tempe, AZ, predicted in the same article that fewer than 20% of the current crop of solution providers will be able to make the transition to the cloud model. He compares the shift to cloud computing to the kind of transformational change that took place when PCs were first introduced to the enterprise back in the 1980s.

If you place any credence at all in these predictions, it’s pretty clear that you need to develop a cloud strategy. But how do you do it?

First of all, let’s be clear that, in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for some customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, backup, and repair when it breaks. Moreover, you are giving up a great deal of account control, and account “stickiness,” when you sell Office 365.

In our opinion, a cloud strategy should include the ability to make your customers’ servers go away entirely, move all of their data and applications into the cloud, and provide them with a Windows desktop, delivered from the cloud, that the user can access any time, from any location where Internet access is available. (Full disclosure: That’s precisely what we do here at VirtualQube, so we have an obvious bias in that direction.) There’s a pretty good argument to be made that if your data is in the cloud, your applications should be there too, and vice versa.

The best infrastructure for such a hosting environment (in the opinion of a lot of hosting providers, VirtualQube included) is a Microsoft/Citrix-powered environment. Currently, the most commonly deployed infrastructure is Windows Server 2008 R2 with Citrix XenApp v6.5. Microsoft and Citrix both have Service Provider License Agreements available so you can pay them monthly as your user count goes up. However, once you’ve signed those agreements, you’re still going to need some kind of hosting infrastructure.

Citrix can help you there as well. Once you’ve signed up with them, you can access their recommended “best practice” reference architecture for Citrix Service Providers. That architecture looks something like this:

When you’ve become familiar enough with the architectural model to jump into the deep end of the pool and start building servers, your next task is to find some servers to build. Broadly speaking, your choices are:

  1. Buy several tens of thousands of dollars (at least) of server hardware, storage systems, switches, etc., secure some space in a co-location facility, rack up the equipment, and start building servers. Repeat in a second location, if geo-redundancy is desired. Then sweat bullets hoping that you can sign enough customers to not only pay for the equipment you bought, but make enough profit that you can afford to refresh that hardware in three or four years.
  2. Rent hardware from someone like Rackspace, and build on their platform. Again, if you want geo-redundancy, you’re going to need to pay for hardware in at least two separate Rackspace facilities to ensure that you have something to fail over to if the need ever arises.
  3. Rent VMs from someone like Amazon or Azure. Citrix has been talking a lot about this lately, and has even produced some helpful pricing tools that will allow you to estimate your cost/user/month on these platforms.
  4. Partner with someone who has already built it, so you can start small and “pay as you grow.”

Now, in all fairness, the reference architecture above is what you would build if you wanted to scale your hosting service to several thousand customers. A wiser approach for a typical VAR or MSP would be to start much smaller. Still, you will need at least two beefy virtualization hosts – preferably three so if you lose one, your infrastructure is still redundant – a SAN with redundancy built in, a switching infrastructure, a perimeter firewall, and something like a Citrix NetScaler (or NetScaler VPX) for SSL connections into your cloud.

Both VMware and Hyper-V require server-based management tools (vCenter and System Center, respectively), so if you’ve chosen one of those products as your virtualization platform, don’t forget to allocate resources for the management servers. Also, if you’re running Hyper-V, you will need at least one physical Domain Controller (for technical reasons that are beyond the scope of this article). Depending on how much storage you want to provision, and whose SAN you choose, you’re conservatively looking at $80,000 - $100,000. Again, if geo-redundancy is desired, double the numbers, and don’t forget to factor in the cost of one or more co-location facilities.
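
As a sanity check on that range, here is a rough bill-of-materials sketch. Every line-item price is a hypothetical placeholder (real pricing varies widely by vendor and capacity); only the overall $80,000 - $100,000 range and the doubling for geo-redundancy come from the discussion above.

```python
# Hypothetical line-item estimates (placeholders, not quotes) for a small hosting build-out.
bill_of_materials = {
    "virtualization hosts (3 x dual-socket, generous RAM)": 36_000,
    "redundant SAN":                                         30_000,
    "switching infrastructure":                               8_000,
    "perimeter firewall":                                     4_000,
    "NetScaler / NetScaler VPX for SSL access":               8_000,
    "racks, PDUs, cabling, spares":                           4_000,
}

single_site = sum(bill_of_materials.values())
print(f"Single site:   ${single_site:,}")  # lands in the $80,000 - $100,000 range cited above
print(f"Geo-redundant: ${single_site * 2:,}, plus co-location fees for each facility")
```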

Finally, you should assume at least 120 – 150 hours of work effort (per facility) to get everything put together and tested before you should even think of putting a paying customer on that infrastructure.

If you’re not put off by the prospect of purchasing the necessary equipment, securing the co-lo space, and putting in the required work effort to build the infrastructure, you should also begin planning the work required to successfully sell your services: Creating marketing materials, training materials, and contracts will take considerable work, and creating repeatable onboarding and customer data migration processes will be critical to creating a manageable and scalable solution. If, on the other hand, this doesn’t strike you as a good way to invest your time and money, let’s move on to other options.

Once you’ve created your list of equipment for option #1, it’s easy enough to take that list to someone like Rackspace and obtain a quote for renting it so you can get a feeling for option #2. The second part of this series will take a closer look at the next option.