Category Archives: Application Hosting

Ingram Micro Cloud Summit 2014

On Monday afternoon, I walked past the beautiful three-story atrium and into the conference center attached to the Westin Diplomat Hotel in Hollywood, FL. It was torturous: after experiencing a March in Seattle with three times the normal rainfall, I was so excited to see the beautiful blue sky and feel the 70-degree temperatures, and all of it was just a few feet beyond me as I walked down the long hallway to the conference center.

Minutes later, I headed into my first session, titled “Effective Executive Leadership Skills” and led by Gary Beechum of SPC International. If you haven’t met Gary, you really should. He’s no-nonsense, direct, inspirational, and articulate. He often references his time in the military, and even uses some of the tools he picked up while in the Army in his presentation. I definitely learned some things to bring back to our Leadership Team. One of the best parts of his presentation was the 14 Traits of Leaders.

At the reception that followed our classroom sessions, I met a ton of new people. Many were from across the country and wanted to work with a firm like VirtualQube, and some wanted to partner with us to deliver new bundles to customers. Our story really resonated with the attendees: there are a number of MSPs looking for a white-labeled cloud offering, and people would actually overhear my conversations and ask me for a card. One of the great benefits of this conference was that, because it was focused on “cloud,” every MSP in the room already had some idea of how they were going to deliver cloud services. Many had concluded that they would rather hire a solid cloud vendor than reinvent the wheel and build their own hardware. Our story was music to their ears. And we’ve even written about it recently here.

All in all, the first day of the conference was so valuable that I’m excited not only for the rest of the conference, but also for working more closely with Ingram Micro over the coming months.

Karl Burns

So You Want to Be a Hosting Provider? (Part 3)

In Part 1 of this series, we discussed the options available to aspiring hosting providers:

  1. Buy hardware and build it yourself.
  2. Rent hardware and build it yourself.
  3. Rent VMs (e.g., Amazon, Azure) and build it yourself.
  4. Partner with someone who has already built it.

We went on to address the costs and other considerations of buying or renting hardware.

Then, in Part 2, we discussed using the Amazon EC2 cloud, with cost estimates based on the pricing tool that Citrix provides as part of the Citrix Service Provider program. We stressed that Amazon has built a great platform on which to build a hosting infrastructure for thousands of users – provided that you’ve got the cash up front to pay for reserved instances, and that your VMs only need to run for an average of 14 hours per day.

Our approach is a little different.

First, we believe that VARs and MSPs need a platform that will do an excellent job for their smaller customers – particularly those who do not have a large staff of IT professionals, or those who rely on what AMI Partners, in a study they did on behalf of Microsoft, called an “Involuntary IT Manager” (IITM). These are the people who end up managing their organizations’ IT infrastructures because they have an interest in technology, or perhaps because they just happen to be better at it than anyone else in the organization, but whose primary job responsibilities have nothing to do with IT. Often these individuals are senior managers, partners, or owners, and in nearly all cases they could bring more value to the organization if they could spend 100% of their time doing what they were originally hired to do. Getting rid of on-site servers and moving data and applications to a private hosted cloud allows these people to regain that lost productivity.

Second, we believe that most of these customers are going to need access to their cloud infrastructure on a 24/7 basis. Smaller companies tend to be headed by entrepreneurial people who don’t work traditional hours, and who tend to hire managers who also don’t work traditional hours. Turning their systems off for 10 hours per day to save on run-time costs simply isn’t going to be acceptable.

Third, we believe that the best mix of security and cost-effectiveness for most customers is to have a multi-tenant Active Directory, Exchange, and SharePoint infrastructure, but to dedicate one or more XenApp server(s) to each customer, along with a file server and whatever other application servers they may require (e.g., SQL Server, accounting server, etc.). This is done not only for security reasons, but to avoid “noisy neighbor” problems from poorly behaved applications (or users).

In VirtualQube’s multi-tenant hosting infrastructure, each customer is a separate Organizational Unit (OU) in our Active Directory. Each customer’s servers are in a separate OU, and are isolated on a customer-specific vLAN. Access from the public Internet is secured with a common WatchGuard perimeter firewall and a Citrix NetScaler SSL/VPN appliance. Multi-tenant customers who need a permanent VPN connection to one or more office locations can have their own Internet port and their own firewall.
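To make that isolation model concrete, here is a minimal sketch in Python of the kind of per-tenant record this architecture implies. Every name and value in it (the tenant, the OU path, the vLAN ID, the server names) is hypothetical; the shape is the point – one OU, one vLAN, and dedicated XenApp and file servers per customer, with the shared perimeter by default:

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    """One customer in a multi-tenant hosting infrastructure (illustrative only)."""
    name: str
    ou_path: str                  # the customer's dedicated OU in the shared AD
    vlan_id: int                  # customer-specific vLAN isolating the tenant's servers
    file_server: str              # one dedicated file server per tenant
    xenapp_servers: list = field(default_factory=list)  # dedicated, never shared
    app_servers: list = field(default_factory=list)     # e.g., SQL Server, accounting
    dedicated_vpn: bool = False   # True => own Internet port and firewall

# A purely hypothetical tenant record:
contoso = Tenant(
    name="Contoso",
    ou_path="OU=Contoso,OU=Customers,DC=cloud,DC=example,DC=com",
    vlan_id=112,
    file_server="CONTOSO-FS01",
    xenapp_servers=["CONTOSO-XA01"],
    dedicated_vpn=False,          # shared WatchGuard firewall and NetScaler by default
)
print(contoso.ou_path)
```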

We also learned early on that some customers prefer not to participate in any kind of multi-tenant infrastructure, and others are prevented from doing so by security and compliance regulations. To accommodate these customers, we provision completely isolated environments with their own Domain Controllers, Exchange Servers, etc. A customer that does not participate in our multi-tenant infrastructure always gets a customer-specific firewall and NetScaler, and customer-specific Domain Controllers. At their option, they can still use our multi-tenant Exchange Server, or have their own.

Finally, we believe that many VARs and MSPs will benefit from prescriptive guidance for not just how to build a hosting infrastructure, but how to sell it. That’s why our partners have access to a document template library that covers how to do the necessary discovery to properly scope a cloud project, how to determine what cloud resources will be required and how to price out a customized private hosted cloud environment, how to position the solution to the customer, how to write the final proposal, how to handle customer data migration, and much, much more.

We believe that partnering with VirtualQube makes sense for VARs and MSPs because that’s the world we came from. Our hosting platform was built by a VAR/MSP for VARs/MSPs, and we used every bit of the experience we gained from twenty years of working with Citrix technology. That’s the VirtualQube difference.

So You Want to Be a Hosting Provider? (Part 2)

In Part 1 of this series, we talked about the options available to prospective hosting providers, and specifically about the costs of purchasing your own equipment. In this post we’re going to drill down into the costs of building a Citrix Service Provider hosting infrastructure on Amazon.

Amazon has some great offerings, and Citrix has spent a lot of time lately talking about using the EC2 infrastructure as a platform for Citrix Service Providers. There was an entire breakout session devoted to this subject at the 2014 Citrix Summit conference in Orlando. Anyone who signs up as a Citrix Service Provider can get access to a spreadsheet that allows you to input various assumptions about your infrastructure (e.g., number of users to support, assumed number of users per XenApp server, number of tenants in your multi-tenant environment, etc.) and calculates how many of what kind of compute instances you will need as well as the projected costs (annualized over three years). At first glance, these costs may look fairly attractive. But there are a number of assumptions built into the cost model that should make any aspiring service provider think twice:

  • It assumes that you’ve got enough users lined up that you can get the economies of scale from building an infrastructure for several hundred, if not thousands, of users.
  • It assumes that you’ve got enough free cash to pay up front for 3-year reserved instances of all the servers you’ll be provisioning.
  • It assumes that, on average, your servers will need to run only 14 hours per day. If your customers expect to be able to work when they want to work, day or night, this will be a problem.
  • It assumes that you will be able to support an average of 150 concurrent users on a XenApp server that’s running on a “Cluster Compute Eight Extra Large” instance. Anyone who has worked with XenApp knows that these assumptions must be taken with a very large grain of salt, as the number of concurrent users you can support on a XenApp server is highly dependent on the application set, and doesn’t necessarily scale linearly as you throw more processors at it.

If all of these assumptions are correct, the Citrix-provided spreadsheet says that you can build an EC2 infrastructure that will support 1,000 concurrent users (assuming 10 customers with 100 users each for the multi-tenancy calculation) for an average cost/user/month of $45.94 over a three year period. But that number is misleading, because you have to come up with $377,730 up front to reserve your EC2 instances for three years. So your first-year cost is not $551,270, but $803,081 – that’s actually $66.92/user/month for the first year, and then it drops to $35.45/user/month in years two and three, then back to $66.92/user/month in the fourth year, because you’ll have to come up with the reservation fees again at the beginning of year four.
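If you want to check that math yourself, here’s a quick back-of-the-envelope script in Python that re-derives those per-user figures from the numbers quoted above. Nothing in it is new data; it just makes the arithmetic explicit:

```python
# Re-deriving the EC2 numbers quoted above. Inputs come straight from the
# post; small rounding differences (e.g., ~$803,100 vs. the post's $803,081)
# are an artifact of starting from the rounded $45.94 average.

USERS = 1_000
MONTHS = 36
AVG_PER_USER_MONTH = 45.94     # spreadsheet's blended 3-year average
UPFRONT = 377_730.00           # 3-year reserved-instance fees, paid up front

total_3yr = AVG_PER_USER_MONTH * USERS * MONTHS   # ~ $1,653,840
run_rate = (total_3yr - UPFRONT) / MONTHS         # monthly cost after the upfront fee

year1_total = UPFRONT + run_rate * 12
print(f"First-year total: ${year1_total:,.0f}")                           # ~ $803,100
print(f"Year 1:           ${year1_total / 12 / USERS:.2f}/user/month")    # ~ $66.92
print(f"Years 2 and 3:    ${run_rate / USERS:.2f}/user/month")            # ~ $35.45
```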

There are a couple of other things about this model that are troublesome:

  1. By default, it assumes only a single file server for 1,000 users, meaning that you would administer security strictly via AD permissions. It also means that if anything happens to that file server, all of your tenants are impacted. If we instead provision ten file servers, so that each of the ten tenants has a dedicated file server, it bumps the average cost by roughly $5/user/month.
  2. If your user count is 100 users per tenant, but you’re expecting to support 150 users per XenApp server, you’ll obviously have users from multiple tenant organizations running concurrently on the same XenApp server. This, in turn, means that if a user from one tenant organization does something that impacts XenApp performance – e.g., launches the Production Planning Spreadsheet from Hell that pegs the processor for five minutes recalculating the entire spreadsheet whenever a single cell is changed – it will affect more than just that tenant organization. (And, yes, I know that there are ways to protect against runaway processor utilization – but that’s still something else you have to set up and manage, and, depending on how you approach the problem, potentially another licensing component you have to pay for.) If we assume only 100 users per XenApp server, so that we can dedicate one XenApp server to each tenant organization, it bumps the average cost by roughly another $1.50/user/month.

“But wait,” you might say, “not many VARs/MSPs will want to – or be able to – build an infrastructure for 1,000 users right off the bat.” And you would be correct. So let’s scale it back a bit. Let’s look at an infrastructure that’s built for 250 users, and let’s assume that breaks down into five tenants with 50 users each. Let’s further assume, for reasons touched on above, that each customer will get a dedicated file server and one dedicated XenApp server. We’ll dial those XenApp servers back to “High CPU Extra Large” instances, which have 4 vCPUs and 7.5 GB of RAM each. Your average cost over three years, still assuming 3-year reserved instances, jumps to $168.28/user/month, and you must still be prepared to write a check for just over $350,000 for the 3-year reservation fees. Why the big jump? Primarily because there is a minimum amount of “overhead” in the server resources required simply to manage the Citrix infrastructure, the multi-tenant Active Directory and Exchange infrastructure, etc., and that overhead is now spread across fewer users.
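Parameterizing the same arithmetic makes that overhead effect easy to see. This is only a sketch using the figures quoted above, with the “just over $350,000” reservation fee rounded to $350,000:

```python
def per_user_costs(users, avg_per_user_month, upfront, months=36):
    """Split a blended $/user/month average into first-year and steady-state rates."""
    total = avg_per_user_month * users * months
    run_rate = (total - upfront) / months            # monthly cost after the upfront fee
    year1 = (upfront + run_rate * 12) / 12 / users   # first-year $/user/month
    steady = run_rate / users                        # years two and three
    return round(year1, 2), round(steady, 2)

print(per_user_costs(1_000, 45.94, 377_730))  # 1,000-user model: ~ (66.92, 35.45)
print(per_user_costs(250, 168.28, 350_000))   # 250-user model: a far higher first year
```

Run it and the 250-user model works out to roughly $129/user/month in years two and three, with a first-year figure near $246/user/month, which is the overhead problem in a nutshell.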

Now consider that all of the prices we’ve been looking at so far cover only the compute and storage resources. We haven’t begun to factor in the monthly cost of Citrix or Microsoft Service Provider licensing. In round numbers, that will add another $25/user/month or so to your cost, including MS Office. Nor have we accounted for the possibility that some of your users may need additional SPLA applications, such as Visio or Project, or that some tenants may require a SQL server or some other additional application server. Nor have we accounted for the possibility that some of your tenants may require access to the infrastructure on a 24×7 basis, meaning that their servers have to run 24 hours per day, not just 14.

This may be why, at the aforementioned 2014 Citrix Summit session in Orlando, several attendees challenged the presented numbers during the ensuing Q&A; the general feedback was that they simply didn’t work in the real world.

So let’s quickly review where we are: As stated in Part 1 of this series, an aspiring hosting provider has four basic choices:

  1. Buy hardware and build it yourself. This was discussed in Part 1.
  2. Rent hardware (e.g., Rackspace) and build it yourself. This was not covered in detail, but once you’ve developed the list of equipment for option #1, it’s easy enough to get quotes for option #2.
  3. Rent VMs, as we have discussed above, and build it yourself.
  4. Partner with someone who has already built the required infrastructure.

We would respectfully submit that, for most VARs/MSPs, option #4 makes the most sense. But we’re biased, because (full disclosure again) VirtualQube has already built the infrastructure, and we know that our costs are significantly less than it would take to replicate our infrastructure on EC2. And we’re looking for some good partners.

In Part 3, we’ll go into what we believe an infrastructure needs to look like for a DaaS hosting provider that’s targeting the SMB market, so stay tuned.

So You Want to Be a Hosting Provider? (Part 1)

If you’re a VAR or MSP, you’ve been hearing voices from all quarters telling you that you’ve got to get into cloud services:

  • The 451 Research Group has estimated that, by 2015, the market for all kinds of “virtual desktops” will be as large as $5.6 Billion. IDC estimates that the portion of these virtual desktops sourced solely from the cloud could be over $600 Million by 2016, growing at more than 84% annually.
  • John Ross, technology consultant and former CTO of GreenPages Technology Solutions, was quoted in a crn.com article as saying, “This is the last time we are going to see hardware purchases through resellers for many, many years.” He predicts that 50% of the current crop of resellers will either be gone or have changed to a service provider model by 2018.
  • The same article cited research by UBM Tech Channel (the parent company of CRN) which indicated that “vintage VARs” that stay with the current on-premises model will have to add at least 50% more customers in the next few years to derive the same amount of sales, which will require them to increase their marketing budgets by an order of magnitude.
  • Dave Rice, co-founder and CTO of TrueCloud in Tempe, AZ, predicted in the same article that fewer than 20% of the current crop of solution providers will be able to make the transition to the cloud model. He compares the shift to cloud computing to the kind of transformational change that took place when PCs were first introduced to the enterprise back in the 1980s.

If you place any credence at all in these predictions, it’s pretty clear that you need to develop a cloud strategy. But how do you do it?

First of all, let’s be clear that, in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for some customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, back up, and repair when it breaks. Moreover, you are giving up a great deal of account control, and account “stickiness,” when you sell Office 365.

In our opinion, a cloud strategy should include the ability to make your customers’ servers go away entirely, move all of their data and applications into the cloud, and provide them with a Windows desktop, delivered from the cloud, that the user can access any time, from any location where Internet access is available. (Full disclosure: That’s precisely what we do here at VirtualQube, so we have an obvious bias in that direction.) There’s a pretty good argument to be made that if your data is in the cloud, your applications should be there too, and vice versa.

The best infrastructure for such a hosting environment (in the opinion of a lot of hosting providers, VirtualQube included) is a Microsoft/Citrix-powered environment. Currently, the most commonly deployed infrastructure is Windows Server 2008 R2 with Citrix XenApp v6.5. Microsoft and Citrix both have Service Provider License Agreements available so you can pay them monthly as your user count goes up. However, once you’ve signed those agreements, you’re still going to need some kind of hosting infrastructure.

Citrix can help you there as well. Once you’ve signed up with them, you can access their recommended “best practice” reference architecture for Citrix Service Providers. That architecture looks something like this:
[Figure: Citrix Service Provider reference architecture]

When you’ve become familiar enough with the architectural model to jump into the deep end of the pool and start building servers, your next task is to find some servers to build. Broadly speaking, your choices are:

  1. Buy several tens of thousands of dollars (at least) of server hardware, storage systems, switches, etc., secure some space in a co-location facility, rack up the equipment, and start building servers. Repeat in a second location, if geo-redundancy is desired. Then sweat bullets hoping that you can sign enough customers to not only pay for the equipment you bought, but make enough profit that you can afford to refresh that hardware in three or four years.
  2. Rent hardware from someone like Rackspace, and build on their platform. Again, if you want geo-redundancy, you’re going to need to pay for hardware in at least two separate Rackspace facilities to ensure that you have something to fail over to if you ever need to fail over.
  3. Rent VMs from someone like Amazon or Azure. Citrix has been talking a lot about this lately, and has even produced some helpful pricing tools that will allow you to estimate your cost/user/month on these platforms.
  4. Partner with someone who has already built it, so you can start small and “pay as you grow.”

Now, in all fairness, the reference architecture above is what you would build if you wanted to scale your hosting service to several thousand customers. A wiser approach for a typical VAR or MSP would be to start much smaller. Still, you will need at least two beefy virtualization hosts – preferably three, so that if you lose one your infrastructure is still redundant – plus a SAN with redundancy built in, a switching infrastructure, a perimeter firewall, and something like a Citrix NetScaler (or NetScaler VPX) for SSL connections into your cloud.

Both VMware and Hyper-V require server-based management tools (vCenter and System Center, respectively), so if you’ve chosen one of those products as your virtualization platform, don’t forget to allocate resources for the management servers. Also, if you’re running Hyper-V, you will need at least one physical Domain Controller (for technical reasons that are beyond the scope of this article). Depending on how much storage you want to provision, and whose SAN you choose, you’re conservatively looking at $80,000 – $100,000. Again, if geo-redundancy is desired, double the numbers, and don’t forget to factor in the cost of one or more co-location facilities.

Finally, you should assume at least 120 – 150 hours of work effort (per facility) to get everything put together and tested before you should even think of putting a paying customer on that infrastructure.
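To rough out your own build-it-yourself budget, a simple estimator helps. This is only a sketch: the hardware and labor-hour ranges below are the ones quoted above, but the hourly rate and monthly co-location fee are hypothetical placeholders you’d replace with real quotes:

```python
def buildout_estimate(facilities=1,
                      hardware=(80_000, 100_000),  # per-facility range quoted above
                      labor_hours=(120, 150),      # build-and-test effort per facility
                      hourly_rate=150.0,           # hypothetical engineering rate
                      colo_monthly=2_000.0,        # hypothetical co-lo fee per facility
                      months=36):
    """Return a (low, high) three-year cost range across all facilities."""
    colo = colo_monthly * months
    low = facilities * (hardware[0] + labor_hours[0] * hourly_rate + colo)
    high = facilities * (hardware[1] + labor_hours[1] * hourly_rate + colo)
    return low, high

print(buildout_estimate())              # one facility
print(buildout_estimate(facilities=2))  # geo-redundant pair, per the doubling advice above
```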

If you’re not put off by the prospect of purchasing the necessary equipment, securing the co-lo space, and putting in the required work effort to build the infrastructure, you should also begin planning the work required to successfully sell your services: Creating marketing materials, training materials, and contracts will take considerable work, and creating repeatable onboarding and customer data migration processes will be critical to creating a manageable and scalable solution. If, on the other hand, this doesn’t strike you as a good way to invest your time and money, let’s move on to other options.

Once you’ve created your list of equipment for option #1, it’s easy enough to take that list to someone like Rackspace and obtain a quote for renting it so you can get a feeling for option #2. The second part of this series will take a closer look at the next option.

How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough that I’ve seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. Didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In all of its incarnations, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their Operating Systems to run it on and their software to manage it. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDNet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July, 2008. Their northern Virginia data center was affected by a lightning strike in July of 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December of 2009.
  • Gmail experienced a system-wide outage for a period of time in August, 2008, then was down again for over 1 ½ hours in September, 2009.
  • The Microsoft/Danger outage in October, 2009, caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January, 2010, failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.