Tag Archives: Business Practices

What You Need to Know About the Heartbleed Bug

You may have heard that a critical new vulnerability has been identified that affects many Internet Web servers – specifically those that use certain versions of OpenSSL to encrypt user sessions. We have inspected all VirtualQube.com Web sites and verified that none of our sites have this vulnerability. However, it is possible that other Web sites you use on a regular basis are, or were, vulnerable. You can find a list of the top 1000 Web sites and their status (“vulnerable” / “not vulnerable”) as of roughly 12:00 UTC yesterday at https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt. Many of the sites listed as “vulnerable” at the time may have since patched their servers. However, if you have accounts on any of these sites – and the “vulnerable” list includes some high-profile sites such as yahoo.com, flickr.com, okcupid.com, slate.com, and eventbrite.com – you should change your passwords immediately.

There is also a useful tool available at http://filippo.io/Heartbleed/ that will let you check a specific Web site if you are unsure whether it is vulnerable.

For the more technical in the crowd who are wondering how this vulnerability affects Web security: it allows an attacker to extract data from the memory of a Web server in chunks of up to 64K. That may not sound like much, but if enough 64K chunks are extracted, useful information can be reconstructed, including username/password combinations and even the private encryption key of the server itself. The post at http://www.mysqlperformanceblog.com/2014/04/08/openssl-heartbleed-cve-2014-0160/ contains a list of the specific versions of OpenSSL that are vulnerable to this exploit.
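For illustration, here is a minimal Python sketch of the two ideas above: checking a version string against the vulnerable range, and constructing the kind of mismatched heartbeat record that triggers the over-read. This builds only the record bytes for study – it does not speak TLS or probe any server – and the version set is a simplification of the list in the linked post.

```python
import struct

# Versions identified as vulnerable: 1.0.1 through 1.0.1f (fixed in 1.0.1g).
# Modeled here as a simple lookup set.
VULNERABLE_VERSIONS = {"1.0.1"} | {"1.0.1" + c for c in "abcdef"}

def is_vulnerable(openssl_version: str) -> bool:
    """Return True if a bare OpenSSL version string falls in the vulnerable range."""
    return openssl_version in VULNERABLE_VERSIONS

def malicious_heartbeat_record() -> bytes:
    """Build the inner bytes of a TLS heartbeat request whose claimed payload
    length far exceeds the payload actually sent.

    A patched server rejects the length mismatch; an unpatched one echoes back
    the claimed number of bytes, reading past the real payload into whatever
    happens to sit in its memory.
    """
    hb_type = 0x01        # heartbeat_request
    claimed_len = 0xFFFF  # claim ~64K of payload...
    payload = b""         # ...while actually sending none
    return struct.pack(">BH", hb_type, claimed_len) + payload
```

The record is just three bytes on the wire – the entire 64K the attacker receives comes from the server's memory, not from anything the attacker sent.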

Fujitsu Ultrabook First Impressions


Last night I had the opportunity to configure a new Fujitsu U Series Ultrabook. I like my computers the way I like my cars – sleek and fast – and this U904 notebook delivers that and more! This system is carved out of a single slab of titanium and is ultra-light and ultra-thin. The multi-touch 14” screen is a blast to use and creates a whole new experience with Windows 8.1. This is hands down one of the sleekest systems I have used, and it has plenty of power for demanding applications. I would highly recommend you give Fujitsu a look the next time you are in the market for a quality notebook.

So You Want to Be a Hosting Provider? (Part 3)

In Part 1 of this series, we discussed the options available to aspiring hosting providers:

  1. Buy hardware and build it yourself.
  2. Rent hardware and build it yourself.
  3. Rent VMs (e.g., Amazon, Azure) and build it yourself.
  4. Partner with someone who has already built it.

We went on to address the costs and other considerations of buying or renting hardware.

Then, in Part 2, we discussed using the Amazon EC2 cloud, with cost estimates based on the pricing tool that Citrix provides as part of the Citrix Service Provider program. We stressed that Amazon provides a great platform on which to build a hosting infrastructure for thousands of users – provided that you’ve got the cash up front to pay for reserved instances, and that your VMs only need to run for an average of 14 hours per day.

Our approach is a little different.

First, we believe that VARs and MSPs need a platform that will do an excellent job for their smaller customers – particularly those who do not have a large staff of IT professionals, or those who are using what AMI Partners, in a study they did on behalf of Microsoft, referred to as an “Involuntary IT Manager” (IITM). These are the people who end up managing their organizations’ IT infrastructures because they have an interest in technology, or perhaps because they just happen to be better at it than anyone else in the organization, but who have other job responsibilities unrelated to IT. Often these individuals are senior managers, partners, or owners, and in nearly all cases they could bring more value to the organization if they could spend 100% of their time doing what they were originally hired to do. Getting rid of on-site servers and moving data and applications to a private hosted cloud will allow these people to regain that lost productivity.

Second, we believe that most of these customers are going to need access to their cloud infrastructure on a 24/7 basis. Smaller companies tend to be headed by entrepreneurial people who don’t work traditional hours, and who tend to hire managers who also don’t work traditional hours. Turning their systems off for 10 hours per day to save on run-time costs simply isn’t going to be acceptable.
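To put rough numbers on that point, here is a sketch of the duty-cycle arithmetic. The $0.20/hour rate is purely hypothetical – it is not an actual provider price – but the ratio holds at any rate.

```python
def monthly_vm_cost(hourly_rate: float, hours_per_day: float, days: int = 30) -> float:
    """Monthly compute cost for one VM billed only while it is running."""
    return hourly_rate * hours_per_day * days

HYPOTHETICAL_RATE = 0.20  # $/hour -- illustrative only, not a real quote

part_time = monthly_vm_cost(HYPOTHETICAL_RATE, 14)  # the 14-hour/day assumption
always_on = monthly_vm_cost(HYPOTHETICAL_RATE, 24)  # what 24/7 access requires
```

Whatever the actual hourly rate, running 24/7 costs about 71% more than the 14-hour-per-day assumption (24/14 ≈ 1.71) – which is why that duty-cycle assumption matters so much when a pricing tool quotes a cost per user per month.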

Third, we believe that the best mix of security and cost-effectiveness for most customers is to have a multi-tenant Active Directory, Exchange, and SharePoint infrastructure, but to dedicate one or more XenApp server(s) to each customer, along with a file server and whatever other application servers they may require (e.g., SQL Server, accounting server, etc.). This is done not only for security reasons, but to avoid “noisy neighbor” problems from poorly behaved applications (or users).

In VirtualQube’s multi-tenant hosting infrastructure, each customer is a separate Organizational Unit (OU) in our Active Directory. Each customer’s servers are in a separate OU, and are isolated on a customer-specific VLAN. Access from the public Internet is secured with a common WatchGuard perimeter firewall and a Citrix NetScaler SSL/VPN appliance. Multi-tenant customers who need a permanent VPN connection to one or more office locations can have their own Internet port and their own firewall.

We also learned early on that some customers prefer not to participate in any kind of multi-tenant infrastructure, and others are prevented from doing so by security and compliance regulations. To accommodate these customers, we provision completely isolated environments with their own Domain Controllers, Exchange Servers, etc. A customer that does not participate in our multi-tenant infrastructure always gets a customer-specific firewall and NetScaler, and customer-specific Domain Controllers. At their option, they can still use our multi-tenant Exchange Server, or have their own.

Finally, we believe that many VARs and MSPs will benefit from prescriptive guidance for not just how to build a hosting infrastructure, but how to sell it. That’s why our partners have access to a document template library that covers how to do the necessary discovery to properly scope a cloud project, how to determine what cloud resources will be required and how to price out a customized private hosted cloud environment, how to position the solution to the customer, how to write the final proposal, how to handle customer data migration, and much, much more.

We believe that partnering with VirtualQube makes sense for VARs and MSPs because that’s the world we came from. Our hosting platform was built by a VAR/MSP for VARs/MSPs, and we used every bit of the experience we gained from twenty years of working with Citrix technology. That’s the VirtualQube difference.

So You Want to Be a Hosting Provider? (Part 1)

If you’re a VAR or MSP, you’ve been hearing voices from all quarters telling you that you’ve got to get into cloud services:

  • The 451 Research Group has estimated that, by 2015, the market for all kinds of “virtual desktops” will be as large as $5.6 Billion. IDC estimates that the portion of these virtual desktops sourced solely from the cloud could be over $600 Million by 2016, growing at more than 84% annually.
  • John Ross, technology consultant and former CTO of GreenPages Technology Solutions, was quoted in a crn.com article as saying, “This is the last time we are going to see hardware purchases through resellers for many, many years.” He predicts that 50% of the current crop of resellers will either be gone or have changed to a service provider model by 2018.
  • The same article cited research by UBM Tech Channel (the parent company of CRN) which indicated that “vintage VARs” that stay with the current on-premises model will have to add at least 50% more customers in the next few years to derive the same amount of sales, which will require them to increase their marketing budgets by an order of magnitude.
  • Dave Rice, co-founder and CTO of TrueCloud in Tempe, AZ, predicted in the same article that fewer than 20% of the current crop of solution providers will be able to make the transition to the cloud model. He compares the shift to cloud computing to the kind of transformational change that took place when PCs were first introduced to the enterprise back in the 1980s.

If you place any credence at all in these predictions, it’s pretty clear that you need to develop a cloud strategy. But how do you do it?

First of all, let’s be clear that, in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for some customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, back up, and repair when it breaks. Moreover, you are giving up a great deal of account control, and account “stickiness,” when you sell Office 365.

In our opinion, a cloud strategy should include the ability to make your customers’ servers go away entirely, move all of their data and applications into the cloud, and provide them with a Windows desktop, delivered from the cloud, that the user can access any time, from any location where Internet access is available. (Full disclosure: That’s precisely what we do here at VirtualQube, so we have an obvious bias in that direction.) There’s a pretty good argument to be made that if your data is in the cloud, your applications should be there too, and vice versa.

The best infrastructure for such a hosting environment (in the opinion of a lot of hosting providers, VirtualQube included) is a Microsoft/Citrix-powered environment. Currently, the most commonly deployed infrastructure is Windows Server 2008 R2 with Citrix XenApp v6.5. Microsoft and Citrix both have Service Provider License Agreements available so you can pay them monthly as your user count goes up. However, once you’ve signed those agreements, you’re still going to need some kind of hosting infrastructure.

Citrix can help you there as well. Once you’ve signed up with them, you can access their recommended “best practice” reference architecture for Citrix Service Providers.

When you’ve become familiar enough with the architectural model to jump into the deep end of the pool and start building servers, your next task is to find some servers to build. Broadly speaking, your choices are:

  1. Buy several tens of thousands of dollars’ worth (at least) of server hardware, storage systems, switches, etc., secure some space in a co-location facility, rack up the equipment, and start building servers. Repeat in a second location, if geo-redundancy is desired. Then sweat bullets hoping that you can sign enough customers to not only pay for the equipment you bought, but make enough profit that you can afford to refresh that hardware in three or four years.
  2. Rent hardware from someone like Rackspace, and build on their platform. Again, if you want geo-redundancy, you’re going to need to pay for hardware in at least two separate Rackspace facilities to ensure that you have something to fail over to if you ever need to fail over.
  3. Rent VMs from someone like Amazon or Azure. Citrix has been talking a lot about this lately, and has even produced some helpful pricing tools that will allow you to estimate your cost/user/month on these platforms.
  4. Partner with someone who has already built it, so you can start small and “pay as you grow.”

Now, in all fairness, the reference architecture above is what you would build if you wanted to scale your hosting service to several thousand customers. A wiser approach for a typical VAR or MSP would be to start much smaller. Still, you will need at least two beefy virtualization hosts – preferably three so if you lose one, your infrastructure is still redundant – a SAN with redundancy built in, a switching infrastructure, a perimeter firewall, and something like a Citrix NetScaler (or NetScaler VPX) for SSL connections into your cloud.

Both VMware and Hyper-V require server-based management tools (vCenter and System Center, respectively), so if you’ve chosen one of those products as your virtualization platform, don’t forget to allocate resources for the management servers. Also, if you’re running Hyper-V, you will need at least one physical Domain Controller (for technical reasons that are beyond the scope of this article). Depending on how much storage you want to provision, and whose SAN you choose, you’re conservatively looking at $80,000 – $100,000. Again, if geo-redundancy is desired, double the numbers, and don’t forget to factor in the cost of one or more co-location facilities.
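As a sketch of how a budget like that might break down, here is a toy estimator. The line items and figures are hypothetical – they are not vendor quotes – and are chosen simply to sum to the low end of the range above; the point is how quickly geo-redundancy doubles everything.

```python
# Hypothetical line items for one facility -- illustrative only, not quotes.
SINGLE_SITE_ITEMS = {
    "virtualization hosts (x3)": 30_000,
    "redundant SAN": 35_000,
    "switching infrastructure": 8_000,
    "perimeter firewall": 4_000,
    "NetScaler VPX": 3_000,
}

def site_cost(items: dict) -> int:
    """Hardware cost for a single facility."""
    return sum(items.values())

def total_cost(items: dict, sites: int = 1, colo_per_site: int = 0) -> int:
    """Geo-redundancy means repeating the whole build (and any co-location
    fees) once per site, so cost scales linearly with the site count."""
    return (site_cost(items) + colo_per_site) * sites
```

With these placeholder figures, one site lands at $80,000, and a geo-redundant pair – before co-location fees – is already $160,000.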

Finally, you should assume at least 120 – 150 hours of work effort (per facility) to get everything put together and tested before you should even think of putting a paying customer on that infrastructure.

If you’re not put off by the prospect of purchasing the necessary equipment, securing the co-lo space, and putting in the required work effort to build the infrastructure, you should also begin planning the work required to successfully sell your services: Creating marketing materials, training materials, and contracts will take considerable work, and creating repeatable onboarding and customer data migration processes will be critical to creating a manageable and scalable solution. If, on the other hand, this doesn’t strike you as a good way to invest your time and money, let’s move on to other options.

Once you’ve created your list of equipment for option #1, it’s easy enough to take that list to someone like Rackspace and obtain a quote for renting it so you can get a feeling for option #2. The second part of this series will take a closer look at the next option.

Adventures with Windows 8 (Part 1)

I’ve been holding back on doing any testing with Windows 8, mostly because I didn’t have a suitable system that I was willing to risk screwing up by putting a pre-release OS on it. But now that Win8 has reached RTM and the bits are out there on MSDN, the Microsoft Partner site, etc., I decided to take the plunge. I downloaded the bits and our internal-use license key via the Microsoft Partner site, and on Saturday, I decided to upgrade my Motion Computing LE1700 tablet to Windows 8.

The LE1700 has been my primary computing system now for at least four years. It’s got an Intel Core2 L7400 CPU (1.5 GHz) and 4 GB of RAM. It came with Vista pre-installed, but when Win7 was released, I was able to upgrade it with a minimum of driver hassles. The LE1700 has a completely detachable keyboard, and I have a docking station in the office and a docking station at home with full-size keyboards and monitors in each location, so the ability to move back and forth has been great.

The only downside is that it only has a 70 GB hard disk. As time has gone by, that’s become more and more difficult to live with – so I finally bought a 32 GB SD card (fortunately, it does have an SD memory slot) and moved a lot of infrequently-accessed files off the hard disk. This also made it difficult to do the Win8 upgrade, in that I had to move a bunch of additional data off the hard disk to free up enough space, then upgrade, then get rid of the resulting Windows.old folder, then move stuff back.

Other than that, the upgrade went pretty smoothly. There were a couple of older apps that I needed to uninstall before I could upgrade, but they weren’t apps that I particularly cared about. One surprise, though, was that it suggested that I uninstall iTunes. I did so, as I may be the only person left on the planet who has not purchased any music through iTunes – I installed it only so I could load music onto my infrequently-used iPod nano – so there was no downside for me in doing the uninstall.

One oddity had to do with the license key. Based on what I had read, I expected to be prompted to enter a license key as part of the installation – but I wasn’t. Then, once the installation was complete, I couldn’t find any way to install a license key so I could activate the OS. Ultimately, I had to go to a command prompt and use the “slmgr” (Software License Manager) utility. The syntax is “slmgr /ipk [your product key]” – that’s “ipk” as in “install product key.” Once that was done, the system activated just fine. I do not know whether this is an anomaly that is specific to the MS Partner internal-use version of the product, or whether it will crop up in other volume license versions.

As I said, the upgrade went smoothly. Even though I was not connected to the Moose Logic network when I did the upgrade, it did not disrupt the domain membership, and I was able to authenticate with my domain credentials when I was done. So far, everything I’ve tried to run has run fine. As far as I can tell, even my AVG anti-virus is still functional.

I am a bit annoyed that Microsoft dropped the “Aero Glass” interface, but I guess I’ll get used to that. I’m also annoyed at the absence of a “Start” button on the desktop task bar, but I found a solution for that: the good folks at Stardock have a utility called “Start8” that puts the Start button back, and gives you both a “Run” and a “Shutdown” option if you right-click on it. (At your option, it can also take you straight to a desktop when you log on.) The version of Start8 that is currently available for download was designed for the Consumer Preview of Win8, but appears to install and run just fine on the released version as well. I’m sure that Stardock will release an update for it soon.

I was also very pleased to discover that my two favorite Win7 utilities, “Fences” (also by Stardock) and “Display Fusion,” also still functioned within the Win8 desktop. In particular, the Fences utility eases some of the inconvenience of having to look for applications that aren’t on the new Win8 Start screen. Since I had used Fences to group application icons on my Win7 desktop for my most frequently-used apps, all I have to do is jump to a desktop, and those icons are still right there.

I suspect that, for the foreseeable future, I will still do most of what I do within the context of a traditional desktop, which raises the question of why I should have upgraded in the first place. One reason, of course, is so I can write posts like this one. Another is that, as a Microsoft Partner, I felt that I needed to be familiar with the new OS. Also, my LE1700 is touch-capable, although it requires the use of a stylus, so I’m curious to see how well things will work when I undock the system and actually use it as a tablet. Finally, I’ve got my sights set on a Surface Pro tablet when they become available (I’m due for a system upgrade anyway), so the more exposure I get to Win8 the more prepared I’ll be.

I’ll be writing more about my adventures with Windows 8 as time goes on…