
Hyperconvergence and the Advent of Software-Defined Everything (Part 2)

As cravings go, the craving for the perfect morning cup of tea in jolly old England rivals that of the most highly caffeinated Pacific Northwest latte addict. So, in the late 1800s, some inventive folks started thinking about what was actually required to get the working man (or woman) out of bed in the morning. An alarm clock, certainly. A lamp of some kind during the darker parts of the year (England being at roughly the same latitude as the State of Washington). And, most importantly, that morning cup of tea. A few patent filings later, the “Teasmade” was born. According to Wikipedia, Teasmades reached their peak of popularity in the 1960s and 1970s, and they are now enjoying something of a revival, partly as novelty items. You can buy one on eBay for under $50.

The Teasmade, ladies and gentlemen, is an example of a converged appliance. It integrates multiple components – an alarm clock, a lamp, a teapot – into a pre-engineered solution. And, for its time, a pretty clever one, if you don’t mind waking up with a pot of boiling water inches away from your head. The Leatherman multi-tool is another example of a converged appliance. You get pliers, wire cutters, knife blades, Phillips-head and flat-head screwdrivers, a can/bottle opener, and, depending on the model, an awl, a file, a saw blade, and so on, all in one handy pocket-sized tool. It’s a great invention, and I keep one on my belt constantly when I’m out camping, although it would be of limited use if I had to work on my car.

How does this relate to our IT world? Well, in traditional IT, we have silos of systems and operations management. We typically have separate admin groups for storage, servers, and networking, and each group maintains the architecture and the vendor relationships, and handles purchasing and provisioning for the stuff that group is responsible for. Unfortunately, these groups do not always play nicely together, which can lead to delays in getting new services provisioned at a time when agility is increasingly important to business success.

Converged systems attempt to address this by combining two or more of these components into a pre-engineered solution…components that are chosen and engineered to work well together. One example is the “VCE” system, so called because it bundles VMware software, Cisco UCS hardware, and EMC storage.

A “hyperconverged” system takes this concept a step further. It is a modular system from a single vendor that integrates all of these functions, with a management overlay that allows all the components to be managed via a “single pane of glass.” Hyperconverged systems are designed to scale simply by adding more modules, and they can typically be managed by one team or, in some cases, one person.

VMware’s EVO:RAIL system, introduced in August of last year, is perhaps the first example of a truly hyperconverged system. VMware has arrangements with several hardware vendors, including Dell, HP, Fujitsu, and even Supermicro, to build EVO:RAIL on their respective hardware. Every vendor’s appliance includes four dual-processor compute nodes with 192 GB of RAM each, one 400 GB SSD per node (used for caching), and three 1.2 TB hot-plug disk drives per node, all in a 2U rack-mount chassis with dual hot-plug redundant power supplies. The hardware is bundled with VMware’s virtualization software, as well as its Virtual SAN. The concept is appealing – you plug it in, turn it on, and you’re 15 minutes away from building your first VM. EVO:RAIL can be scaled out to four appliances today, with plans to increase that number in the future.
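
A quick bit of arithmetic puts those per-node figures in perspective. The sketch below (in Python, using only the numbers quoted above) tallies the raw resources in a single four-node appliance; usable capacity will be lower once Virtual SAN replication and other overhead are taken into account.

```python
# Rough per-appliance resource tally for a four-node EVO:RAIL-style unit,
# using the per-node figures quoted above. Raw capacity only; VSAN
# replication, caching, and overhead reduce what is actually usable.

NODES_PER_APPLIANCE = 4
RAM_PER_NODE_GB = 192
SSD_CACHE_PER_NODE_GB = 400
HDDS_PER_NODE = 3
HDD_SIZE_TB = 1.2

total_ram_gb = NODES_PER_APPLIANCE * RAM_PER_NODE_GB              # 768 GB
total_ssd_cache_gb = NODES_PER_APPLIANCE * SSD_CACHE_PER_NODE_GB  # 1600 GB
raw_hdd_tb = NODES_PER_APPLIANCE * HDDS_PER_NODE * HDD_SIZE_TB    # 14.4 TB

print(f"RAM per appliance:        {total_ram_gb} GB")
print(f"SSD cache per appliance:  {total_ssd_cache_gb} GB")
print(f"Raw HDD per appliance:    {raw_hdd_tb:.1f} TB")
```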

The good news is that it’s fast and simple, it has a small footprint (meaning it enables high-density computing), and places lower demands on power and cooling. Todd Knapp, writing for searchvirtualdesktop.techtarget.com, says, “Hyperconverged infrastructure is a good fit for companies with branch locations or collocated facilities, as well as small organizations with big infrastructure requirements.”

Andy Warfield (from whom I borrowed the Teasmade example), writing in his blog at www.cohodata.com, is a bit more specific: “…converged architectures solve a very real and completely niche problem: at small scales, with fairly narrow use cases, converged architectures afford a degree of simplicity that makes a lot of sense. For example, if you have a branch office that needs to run 10 – 20 VMs and that has little or no IT support, it seems like a good idea to keep that hardware install as simple as possible. If you can do everything in a single server appliance, go for it!”

But Andy also points out some not-so-good news:

However, as soon as you move beyond this very small scale of deployment, you enter a situation where rigid convergence makes little or no sense at all. Just as you wouldn’t offer to serve tea to twelve dinner guests by brewing it on your alarm clock, the idea of scaling cookie-cutter converged appliances begs a bit of careful reflection.

If your environment is like many enterprises that I’ve worked with in the past, it has a big mix of server VMs. Some of them are incredibly demanding. Many of them are often idle. All of them consume RAM. The idea that as you scale up these VMs on a single server, that you will simultaneously exhaust memory, CPU, network, and storage capabilities at the exact same time is wishful thinking to the point of clinical delusion…what value is there in an architecture that forces you to scale out, and to replace at end of life, all of your resources in equal proportion?
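
To put a rough number on that point about lock-step scaling, consider the purely illustrative calculation below. The appliance and workload figures are invented for the sake of the example, not taken from any vendor’s spec sheet; the point is simply that buying enough identical modules to satisfy the scarcest resource over-provisions everything else.

```python
# Illustrative only: scaling identical appliances to satisfy one resource
# (RAM, in this example) over-provisions the others. None of these
# figures come from a real product or workload.

appliance = {"ram_gb": 768, "cpu_cores": 48, "storage_tb": 14.4}
workload  = {"ram_gb": 3000, "cpu_cores": 60, "storage_tb": 20.0}

# Appliances needed to cover each resource independently (ceiling division).
needed = {k: -(-workload[k] // appliance[k]) for k in appliance}
units = int(max(needed.values()))  # you must buy enough for the scarcest resource

print(f"Appliances required: {units}")
for k in appliance:
    provisioned = appliance[k] * units
    print(f"{k:12s} provisioned {provisioned:8.1f} "
          f"for a requirement of {workload[k]:8.1f} "
          f"({provisioned / workload[k]:.1f}x)")
```

In this made-up case, meeting the RAM requirement leaves you paying for roughly three times the CPU and storage the workload actually needs.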

Moreover, hyperconverged systems are, at the moment, pretty darned expensive. An EVO:RAIL system will cost you well over six figures and lock you into a single vendor. Unlike most stand-alone SAN products, VMware’s Virtual SAN won’t provision storage to physical servers. And EVO:RAIL is, by definition, VMware-only, whereas many enterprises have a mixture of hypervisors in their environment. (According to Todd Knapp, saying “We’re a __________ shop” is just another way of saying “We’re more interested in maintaining homogeneity in the network than in taking advantage of innovations in technology.”) Not to mention the internal political problems: which of those groups we discussed earlier is going to manage the hyperconverged infrastructure? Does it fall under servers, storage, or networking? Are you going to create a new group of admins? Consolidate the groups you have? It could get ugly.

So where does this leave us? Is convergence, or hyperconvergence, a good thing or not? The answer, as it often is in our industry, is “It depends.” In the author’s opinion, Andy Warfield is exactly right that today’s hyperconverged appliances address fairly narrow use cases. On the other hand, the hardware platforms that have been developed to run these hyperconverged systems, such as the Fujitsu CX400, have broader applicability. Just think for a moment about the things you could do with a 2U rack-mount system that contained four dual-processor server modules with up to 256 GB of RAM each, and up to 24 hot-plug disk drives (six per server module).

We’ve built a number of SMB virtualization infrastructures with two rack-mount virtualization hosts and two DataCore SAN nodes, each of which was a separate 2U server with its own power supplies. Now we can do it in ¼ the rack space with a fraction of the power consumption. Or how about two separate Stratus everRun fault-tolerant server pairs in a single 2U package?

Innovation is nearly always a good thing…but it’s amazing how often the best applications turn out not to be the ones the innovators had in mind.

Hyperconvergence and the Advent of Software-Defined Everything (Part 1)

The IT industry is rife with “buzzwords” – convergence, hyperconvergence, software-defined this and that, and so on. It can be a challenge for even the most dedicated IT professionals to keep up with all the new trends in technology, not to mention the new terms invented by marketers who want you to think that the shiny new product they just announced is on the leading edge of what’s new and cool…when in fact it’s merely repackaged existing technology.

What does it really mean to have “software-defined storage” or “software-defined networking”…or even a “software-defined data center”? What’s the difference between “converged” and “hyperconverged”? And why should you care? This series of articles will suggest some answers that we hope will be helpful.

First, does “software-defined” simply mean “virtualized?”

No, not as the term is generally used. If you think about it, every piece of equipment in your data center these days has a hardware component and a software component (even if that software component is hard-coded into specialized integrated circuit chips or implemented in firmware). Virtualization is, fundamentally, the abstraction of software and functionality from the underlying hardware. Virtualization enables “software-defined,” but, as the term is generally used, “software-defined” implies more than just virtualization – it implies things like policy-driven automation and a simplified management infrastructure.

An efficient IT infrastructure must be balanced properly between compute resources, storage resources, and networking resources. Most readers are familiar with the leading players in server virtualization, with the “big three” being VMware, Microsoft, and Citrix. Each has its own control plane to manage the virtualization hosts, but some cross-platform management is available. vCenter can manage Hyper-V hosts. System Center can manage vSphere and XenServer hosts. It may not be completely transparent yet, but it’s getting there.

What about storage? Enterprise storage is becoming a challenge for businesses of all sizes, due to the sheer volume of new information being created – by some estimates, as much as 15 petabytes of new information worldwide every day. (That’s 15 million billion bytes.) The total amount of digital data that needs to be stored somewhere doubles roughly every two years, yet storage budgets are increasing by only 1 to 5 percent annually. Hence the interest in being able to scale up and out using lower-cost commodity storage systems.
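
Those two growth rates diverge quickly. The illustrative calculation below assumes a starting point of 100 TB, a two-year doubling time, and a budget that grows at the generous end of that range (5% per year); the starting figures are arbitrary, but the widening gap is not.

```python
# Illustrative compounding: data doubling every two years vs. a storage
# budget growing 5% per year. The starting figures are assumptions made
# for the sake of the arithmetic, not measurements.

data_tb = 100.0                    # assumed starting data footprint
budget = 100.0                     # starting budget, indexed to 100
DATA_GROWTH_PER_YEAR = 2 ** 0.5    # doubling every two years, ~41%/year
BUDGET_GROWTH_PER_YEAR = 1.05      # the optimistic end of 1-5%

print(f"{'year':>4} {'data (TB)':>10} {'budget':>8} {'TB per budget unit':>20}")
for year in range(0, 11, 2):
    d = data_tb * DATA_GROWTH_PER_YEAR ** year
    b = budget * BUDGET_GROWTH_PER_YEAR ** year
    print(f"{year:>4} {d:>10.0f} {b:>8.0f} {d / b:>20.2f}")
```

After a decade under those assumptions, each budget unit has to cover roughly twenty times as much data as it did at the start.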

But the problem is often compounded by vendor lock-in. If you have invested in Vendor A’s enterprise SAN product, and now want to bring in an enterprise SAN product from Vendor B because it’s faster/better/less costly, you will probably find that they don’t talk to one another. Want to move Vendor A’s SAN into your Disaster Recovery site, use Vendor B’s SAN in production, and replicate data from one to the other? Sorry…in most cases that’s not going to work.

Part of the promise of software-defined storage is the ability to not only manage the storage hardware from one vendor via your SDS control plane, but also pull in all of the “foreign” storage you may have and manage it all transparently. DataCore, to cite just one example, allows you to do just that. Because the DataCore SAN software is running on a Windows Server platform, it’s capable of aggregating any and all storage that the underlying Windows OS can see into a single storage pool. And if you want to move your aging EMC array into your DR site, and have your shiny, new Compellent production array replicate data to the EMC array (or vice versa), just put DataCore’s SANsymphony-V in front of each of them, and let the DataCore software handle the replication. Want to bring in an all-flash array to handle the most demanding workloads? Great! Bring it in, present it to DataCore, and let DataCore’s auto-tiering feature dynamically move the most frequently-accessed blocks of data to the storage tier that offers the highest performance.
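
To make the auto-tiering idea concrete, here is a deliberately simplified sketch of the principle: count how often each block is accessed, fill the fastest tier with the hottest blocks, and let everything else cascade down. This is not DataCore’s implementation (the block names, tier sizes, and access counts are invented); it simply illustrates the concept.

```python
# Conceptual sketch of storage auto-tiering: place the most frequently
# accessed blocks on the fastest tier. A real SDS product does this
# continuously and far more cleverly; this only illustrates the idea.

from collections import Counter

# Tiers in order of descending performance, with capacity in blocks (assumed).
tiers = [("flash", 2), ("sas_15k", 4), ("nearline", 100)]

# Hypothetical access counts per block.
access_counts = Counter({
    "block_a": 950, "block_b": 720, "block_c": 310,
    "block_d": 45,  "block_e": 12,  "block_f": 3,
})

# Hottest blocks first, then fill tiers from fastest to slowest.
placement = {}
hot_blocks = [block for block, _ in access_counts.most_common()]
for tier_name, capacity in tiers:
    for block in hot_blocks[:capacity]:
        placement[block] = tier_name
    hot_blocks = hot_blocks[capacity:]

for block, tier in placement.items():
    print(f"{block} -> {tier}")
```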

What about software-defined networking? Believe it or not, in 2013 we reached the tipping point: there are now more virtual switch ports in the world than physical ones. Virtual switching technology is built into every major hypervisor, and major players in the network appliance market are making their technology available in virtual appliance form. For example, WatchGuard’s virtual firewall appliances can be deployed on both VMware and Hyper-V, and Citrix’s NetScaler VPX appliances can be deployed on VMware, Hyper-V, or XenServer. But again, “software-defined networking” implies the ability to automate changes to the network based on some kind of policy engine.
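
That policy-engine piece is the key distinction, and in spirit it looks something like the toy sketch below: declarative rules are evaluated against a workload’s attributes, and the network settings fall out of the rules rather than being hand-edited on each switch. The rule format and attributes here are invented for illustration and do not correspond to any particular SDN product.

```python
# Toy illustration of policy-driven networking: declarative rules are
# evaluated against a workload's attributes to derive its network
# settings automatically. The rules and attributes are invented and do
# not reflect any specific SDN controller.

POLICIES = [
    # (match attribute, match value, settings applied when it matches)
    ("role", "web",      {"vlan": 10, "firewall_profile": "dmz"}),
    ("role", "database", {"vlan": 20, "firewall_profile": "internal-only"}),
    ("tier", "dev",      {"vlan": 99, "firewall_profile": "sandbox"}),
]

def network_settings(vm_attributes):
    """Return the settings from the first policy that matches the VM."""
    for attr, value, settings in POLICIES:
        if vm_attributes.get(attr) == value:
            return settings
    return {"vlan": 1, "firewall_profile": "default"}

print(network_settings({"name": "web01", "role": "web"}))
print(network_settings({"name": "build7", "tier": "dev"}))
```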

If you put all of these pieces together, vendor-agnostic virtualization + policy-driven automation + simplified management = software-defined data center. Does the SDDC exist today? Arguably, yes – one could certainly make the case that the VMware vCloud Automation Center, Microsoft’s Azure Pack, Citrix’s CloudStack, and the open-source OpenStack all have many of the characteristics of a software-defined data center.

Whether the SDDC makes business sense today is not as clear. Techtarget.com quotes Brad Maltz of Lumenate as saying, “It will take about three years for companies to learn about the software-defined data center concept, and about five to ten years for them to understand and implement it.” Certainly some large enterprises may have the resources – both financial and skill-related – to begin reaping the benefits of this technology sooner, but it will be a challenge for small and medium-sized enterprises to get their arms around it. That, in part, is what is driving vendors to introduce converged and hyperconverged products, and that will be the subject of Part 2 of this series.

Does “Shared Nothing” Migration Mean the Death of the SAN?

You’ve probably heard that Hyper-V in Windows Server 2012 supports what Microsoft calls “Shared Nothing” live migration. You can see a demo in a video that was posted on a TechNet blog back in July.

Now don’t get me wrong - the ability to live migrate a running VM from one virtualization host to another across the network with no shared storage behind it is pretty cool. But if you read through the blog post, you’ll also see that it took 8 minutes and 40 seconds to migrate a 16 GB VM. (And I don’t know about you, but many of our customers have VMs that are substantially larger than that!) On the other hand, it took only 11 seconds to live migrate that same VM, running on the same hardware, when it was in a cluster with shared storage.
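
It’s worth turning those timings into rough numbers. The back-of-the-envelope estimate below assumes the full 16 GB crossed the wire and ignores compression and dirty-page retransfers, so treat it as an order-of-magnitude figure only.

```python
# Back-of-the-envelope throughput and scaling estimate from the timings
# quoted above. Assumes the full VM size crossed the network, which is a
# simplification; actual behavior depends on compression, dirty-page
# retransfers, and so on.

vm_size_gb = 16
shared_nothing_seconds = 8 * 60 + 40   # 8 minutes 40 seconds
shared_storage_seconds = 11

rate_gb_per_s = vm_size_gb / shared_nothing_seconds
print(f"Effective rate: {rate_gb_per_s * 1024:.0f} MB/s "
      f"(~{rate_gb_per_s * 8:.2f} Gbit/s)")

# Extrapolate (linearly, which is optimistic) to larger VMs.
for size_gb in (100, 500):
    minutes = size_gb / rate_gb_per_s / 60
    print(f"A {size_gb} GB VM at that rate: roughly {minutes:.0f} minutes")
```

At roughly 30 MB/s, a 500 GB VM would take the better part of an afternoon to move, which is exactly why the 11-second shared-storage figure above is so striking.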

So I will submit that the answer to the question posed in the title of this post is “No” - clearly, having shared storage behind your virtualization hosts brings a level of resilience and agility far beyond what Shared Nothing migration offers. Still, for an SMB with a small virtualization infrastructure of only two or three hosts and no shared storage, it’s a significant improvement over what they’ve historically had to go through to move a VM from one host to another. That has typically meant shutting the VM down, exporting it to a storage repository that the other host can access (e.g., an external USB or network-attached hard drive), importing it into the other host’s local storage, and then booting it up…a process that can easily take an hour or more, during which time the VM is shut down and unavailable.

So Shared Nothing migration is pretty cool, but, as Rob Waggoner writes in the TechNet post linked above, don’t throw your SANs out just yet.