You Can’t Afford to Ignore Windows 10 Anymore

Over the last several months, I’ve been watching the headlines about Windows 10 go by without really paying a lot of attention to them. Perhaps you have as well. Win10 fell into the category of “Things I Need to Look More Closely At When I Have Time.” After all, it hasn’t been that long since I upgraded to Windows 8.1. Then the news broke that “general availability” will be July 29, leading to one of those “Wait…what!?” moments. With the release now less than two months away, I realized I needed to make time.

So I bought another 8 GB of RAM for my 64-bit Fujitsu laptop (blowing it out to a total of 12 GB, woo hoo!), installed Client Hyper-V (which was amazingly easy to do in Windows 8.1 Enterprise), signed up for the Microsoft Insider Preview program, downloaded the Win10 ISO image, and built myself a Win10 VM.

My initial reaction is that it looks pretty good. The current preview build (10074) seems stable, runs everything that I’ve thrown at it, and my complaints are pretty minor. I can’t really test multimedia performance, as the preview build doesn’t have drivers that will allow audio pass-through from my Win10 VM to my host PC, but that’s not surprising at this point.

The Start menu is definitely a step in the right direction, but still doesn’t have that one piece of functionality that drove me to install Stardock Software’s Start8 utility: I love being able to click on the Start button, mouse up to, say, the Word or Excel icon, and immediately see the last several documents/spreadsheets I’ve opened, so I can jump directly to them. In Win10, if I pin, say, Word to my taskbar, I can right-click on the Word icon and see a list of recent files – but my personal preference is to reserve my taskbar for programs I’m actually running rather than taking up space with icons for programs I might want to run. Instead, I use the QuickLaunch toolbar for quick access to programs. (What – you didn’t know you could have a QuickLaunch toolbar in Windows 8.x? You can, and it works in the Win10 build I’m running as well, but that’s a subject for another post.) So, when Stardock releases a version for Win10, I’ll probably upgrade to it.

Speaking of upgrades, you’ve probably also heard that users who are running Windows 7, 8, or 8.1 will get a free upgrade to Windows 10. That’s true, depending on what version you’re currently running. Microsoft has published an upgrade matrix that tells you, for the Home and Pro editions, which edition of Win10 you’ll get. And if you’re running Win7 SP1 or Win8.1 Update, you can get the upgrade pushed to you via the Windows Update function.

Windows Enterprise users will not get free upgrades…apparently the rationale is that most Windows Enterprise users are part of, well, large enterprises that typically have a corporate license agreement with Microsoft that entitles them to OS upgrades anyway, and these enterprises also want to have tighter control over who gets what upgrade and when.

There are also a few other caveats to bear in mind. First, if you’re running Win7 SP1 or later, the chances are pretty good that your system will run Win10 without any problems…but “pretty good” doesn’t mean “guaranteed.” There’s a helpful article over on ZDNet that will walk you through how to find Microsoft’s compatibility-checking utility.

You may also be surprised at the things Windows 10 will remove from your system as part of the Win10 upgrade.

And bear in mind that if you just happily accept the automatic upgrade to Win10, you’re also opting in for all new features, security updates, and other fixes to the operating system for “the supported lifetime” of your PC. These will all be free, but you won’t have a choice as to which updates you do or don’t get – they’ll all be pushed to you via Windows Update. Businesses, whether running Windows Pro or Enterprise, will have more control over how and when new features and fixes roll out to their users, as Mary Jo Foley explains over on ZDNet.

Finally, Ed Bott is maintaining a great Win10 FAQ over on ZDNet that he’s been updating regularly as more information becomes available. You might want to bookmark that one and come back to it occasionally.

I confess that I’m kind of excited about the new release, and I’ll probably upgrade to it as soon as the Win10 Enterprise bits show up on our Microsoft Partner portal. It will be interesting to see how these major changes in how the Windows OS will be distributed and updated will play out over time. How about you? Feel free to share your thoughts in the comments below…

The Case for Office 365

Update – May 7, 2015
In the original post below, we talked about the 20,000 “item” limit in OneDrive for Business. It turns out that even our old friend and Office 365 evangelist Harry Brelsford, founder of SMB Nation, and, more recently, O365 Nation, has now run afoul of this obstacle, as he describes in his blog post from May 5.

Turns out there’s another quirk with OneDrive for Business that Harry didn’t touch on in his blog (nor did we in our original post below) – OneDrive for Business is really just a front end for a Microsoft-hosted SharePoint server. “So what?” you say. Well, it turns out that there are several characters that are perfectly acceptable for you to use in a Windows file or folder name that are not acceptable in a file or folder name on a SharePoint server. (Microsoft publishes the definitive list of what’s not acceptable.) And if you’re trying to sync thousands of files with your OneDrive for Business account and a few of them have illegal characters in their names, the sync operation will fail and you will get to play the “find-the-file-with-the-illegal-file-name” game, which can provide you with hours of fun…
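A short script can take the fun out of that game by flagging offending names before you sync. This is just a sketch; the character set below is an assumption based on Microsoft’s published restrictions at the time, and the rules have been relaxed since, so verify against the current documentation:

```python
import os

# Characters that SharePoint Online rejected in file/folder names
# circa 2015 (an assumed set -- check Microsoft's current list).
ILLEGAL_CHARS = set('~"#%&*:<>?/\\{|}')

def find_illegal_names(root):
    """Walk a folder tree and return every file or folder whose name
    contains a character that would make the sync operation fail."""
    offenders = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if any(ch in ILLEGAL_CHARS for ch in name):
                offenders.append(os.path.join(dirpath, name))
    return offenders
```

Run it against a folder before pointing OneDrive for Business at it, and you get a one-line report instead of a failed sync.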

Original Post Follows
A year ago, in a blog post targeted at prospective hosting providers, we said, “…in our opinion, selling Office 365 to your customers is not a cloud strategy. Office 365 may be a great fit for customers, but it still assumes that most computing will be done on a PC (or laptop) at the client endpoint, and your customer will still, in most cases, have at least one server to manage, backup, and repair when it breaks.”

About the same time, we wrote about the concept of “Data Gravity” – that, just as objects with physical mass exhibit inertia and attract one another in accordance with the law of gravity, large chunks of data also exhibit a kind of inertia and tend to attract other related data and the applications required to manipulate that data. This is due in part to the fact that (according to former Microsoft researcher Jim Gray) the most expensive part of computing is the cost of moving data around. It therefore makes sense that you should be running your applications wherever your data resides: if your data is in the Cloud, it can be argued that you should be running your applications there as well – especially apps that frequently have to access a shared set of back-end data.

Although these are still valid points, they do not imply that Office 365 can’t bring significant value to organizations of all sizes. There is a case to be made for Office 365, so let’s take a closer look at it:

First, Office 365 is, in most cases, the most cost-effective way to license the Office applications, especially if you have fewer than 300 users (which is the cut-off point between the “Business” and “Enterprise” O365 license plans). Consider that a volume license for Office 2013 Pro Plus without Software Assurance under the “Open Business” license plan costs roughly $500. The Office 365 Business plan – which gets you just the Office apps without the on-line services – costs $8.25/month. If you do the math, you’ll see that $500 would cover the subscription cost for five years.

But wait – that’s really not an apples-to-apples comparison, because with O365 you always have access to the latest version of Office. So we should really be comparing the O365 subscription cost to the volume license price of Office with Software Assurance, which, under the Open Business plan, is roughly $800 for the initial purchase, which includes two years of S.A., and $295 every two years after that to keep the S.A. in place. Total four-year cost under Open Business: $1,095. Total four-year cost under the Office 365 Business plan: $396. Heck, even the Enterprise E3 plan (at $20/month) is only $960 over four years.
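The arithmetic is easy to sanity-check (all prices are the 2015 figures quoted in this post, not current ones):

```python
# Four-year, single-user licensing cost comparison.
MONTHS = 48

# Open Business volume license with Software Assurance:
# $800 up front (includes 2 years of SA), then one $295
# renewal at year 2 to cover years 3 and 4.
open_business = 800 + 295

# Office 365 Business plan: $8.25 per user per month.
o365_business = 8.25 * MONTHS

# Office 365 Enterprise E3 plan: $20 per user per month.
o365_e3 = 20 * MONTHS

print(open_business, o365_business, o365_e3)  # 1095 396.0 960
```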

But (at the risk of sounding like a late-night cable TV commercial) that’s still not all! Office 365 allows each user to install the Office applications on up to five different PCs or Macs and up to five tablets and five smart phones. This is the closest Microsoft has ever come to per-user licensing for desktop applications, and in our increasingly mobile world where nearly everyone has multiple client devices, it’s an extremely attractive license model.

Second, at a price point that is still less than comparable volume licensing over a four-year period, you can also get Microsoft Hosted Exchange, Hosted SharePoint, OneDrive for Business, Hosted Lync for secure instant messaging and Web conferencing, and (depending on the plan) unlimited email archiving and eDiscovery tools such as the ability to put users and/or SharePoint document libraries on discovery hold and conduct global searches across your entire organization for relevant Exchange, Lync, and SharePoint data. This can make the value proposition even more compelling.

So what’s not to like?

Well, for one thing, email retention in Office 365 is not easy and intuitive. As we discussed in our recent blog series on eDiscovery, when an Outlook user empties the Deleted Items folder, or deletes a single item from it, or uses Shift+Delete on an item in another folder (which bypasses the Deleted Items folder), that item gets moved to the “Deletions” subfolder in a hidden “Recoverable Items” folder on the Exchange server. As the blog series explains, these items can still be retrieved by the user as long as they haven’t been purged. By default, they will be purged after two weeks. Microsoft’s Hosted Exchange service allows you to extend that period (the “Deleted Items Retention Period”), but only to a maximum of 30 days – whereas if you are running your own Exchange server, you can extend the period to several years.

But the same tools that allow a user to retrieve items from the Deletions subfolder will also allow a user to permanently purge items from that subfolder. And once an item is purged from the Deletions subfolder – whether explicitly by the user or by the expiration of the Deleted Items Retention Period – that item is gone forever. The only way to prevent this from happening is to put the user on Discovery Hold (assuming you’ve subscribed to a plan which allows you to put users on Discovery Hold), and, unfortunately, there is currently no way to do a bulk operation in O365 to put multiple users on Discovery Hold – you must laboriously do it one user at a time. And if you forget to do it when you create a new user, you run the risk of having that user’s email messages permanently deleted (whether accidentally or deliberately) with no ability to recover them if, Heaven forbid, you ever find yourself embroiled in an eDiscovery action.

One way around this is to couple your Office 365 plan with a third-party archiving tool, such as Mimecast. Although this obviously adds expense, it also adds another layer of malware filtering, an unlimited archive that the user cannot alter, a search function that integrates gracefully into Outlook, and an email continuity function that allows you to send/receive email directly via a Mimecast Web interface if the Office 365 Hosted Exchange service is ever unavailable. You can also use a tool like eFolder’s CloudFinder to back up your entire suite of Office 365 data – documents as well as email messages.

And then there’s OneDrive. You might be able, with a whole lot of business process re-engineering, to figure out how to move all of your file storage into Office 365’s Hosted SharePoint offering. Of course, there would then be no way to access those files unless you’re on-line. Hence the explosive growth in the business-class cloud file synchronization market – where you have a local folder (or multiple local folders) that automatically synchronizes with a cloud file repository, giving you the ability to work off-line and, provided you’ve saved your files in the right folder, synchronize those files to the cloud repository the next time you connect to the Internet. Microsoft’s entry in this field is OneDrive for Business…but there is a rather serious limitation in OneDrive for Business as it exists today.

O365’s 1 TB of Cloud Storage per user sounds like more than you would ever need. But what you may not know is that there is a limit of 20,000 “items” per user (both a folder and a file within that folder are “items”). You’d be surprised at how fast you can reach that limit. For example, there are three folders on my laptop where all of my important work-related files are stored. One of those folders contains files that also need to be accessible by several other people in the organization. The aggregate storage consumed by those three folders is only about 5 GB – but there are 18,333 files and subfolders in those three folders. If I were trying to use OneDrive for Business to synchronize all those files to the Cloud, I would probably be less than six months away from exceeding the 20,000-item limit.
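Counting “items” the way the service does – every file and every subfolder counts as one – takes only a few lines, so you can check your own folders before committing to a sync. A minimal sketch:

```python
import os

ONEDRIVE_ITEM_LIMIT = 20000  # per-user limit described above

def count_items(root):
    """Count every file and subfolder under root -- each one counts
    as a separate 'item' against the OneDrive for Business quota."""
    total = 0
    for _, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total

def headroom(root):
    """How many items you can still add before hitting the limit."""
    return ONEDRIVE_ITEM_LIMIT - count_items(root)
```

If the total for the folders you plan to sync is anywhere near 20,000, the sync is living on borrowed time.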

Could I go through those folders and delete a lot of stuff I no longer need, or archive them off to, say, a USB drive? Sure I could – and I try to do that periodically. I dare say that you probably also have a lot of files hanging around on your systems that you no longer need. But it takes time to do that grooming – and what’s the most precious resource that most of us never have enough of? Yep, time. My solution is to use Citrix ShareFile to synchronize all three of those folders to a Cloud repository. We also offer Anchor Works (now owned by eFolder) for business-class Cloud file synchronization. (And there are good reasons why you might choose one over the other, but they’re beyond the scope of this article.)

The bottom line is that, while Office 365 still may not be a complete solution that will let you move your business entirely to the cloud and get out of the business of supporting on-prem servers, it can be a valuable component of a complete solution. As with so many things in IT, there is not necessarily a single “right” way to do anything. There are multiple approaches, each with pros and cons, and the challenge is to select the right combination of services for a particular business need. We believe that part of the value we can bring to the table is to help our clients select that right combination of services – whether it be a VirtualQube hosted private cloud, a private cloud on your own premises, in your own co-lo, or in a public infrastructure such as Amazon or Azure, or a public/private hybrid cloud deployment – and to help our clients determine whether one of the Office 365 plans should be part of that solution. And if you use the Office Suite at all, the answer to that is probably “yes” – it’s just a matter of which plan to choose.

Hyperconvergence and the Advent of Software Defined Everything (Part 2)

As cravings go, the craving for the perfect morning cup of tea in jolly old England rivals that of the most highly-caffeinated Pacific Northwest latte-addict. So, in the late 1800s, some inventive folks started thinking about what was actually required to get the working man (or woman) out of bed in the morning. An alarm clock, certainly. A lamp of some kind during the darker parts of the year (England being at roughly the same latitude as the State of Washington). And, most importantly, that morning cup of tea. A few patent filings later, the “Teasmade” was born. According to Wikipedia, they reached their peak of popularity in the 1960s and 1970s…although they are now seeing an increase in popularity again, partly as a novelty item. You can buy one on eBay for under $50.


The Teasmade, ladies and gentlemen, is an example of a converged appliance. It integrates multiple components – an alarm clock, a lamp, a teapot – into a pre-engineered solution. And, for its time, a pretty clever one, if you don’t mind waking up with a pot of boiling water inches away from your head. The Leatherman multi-tool is another example of a converged appliance. You get pliers, wire cutters, knife blades, Phillips-head and flat-head screwdrivers, a can/bottle opener, and, depending on the model, an awl, a file, a saw blade, etc., etc., all in one handy pocket-sized tool. It’s a great invention, and I keep one on my belt constantly when I’m out camping, although it would be of limited use if I had to work on my car.

How does this relate to our IT world? Well, in traditional IT, we have silos of systems and operations management. We typically have separate admin groups for storage, servers, and networking, and each group maintains the architecture and the vendor relationships, and handles purchasing and provisioning for the stuff that group is responsible for. Unfortunately, these groups do not always play nicely together, which can lead to delays in getting new services provisioned at a time when agility is increasingly important to business success.

Converged systems attempt to address this by combining two or more of these components as a pre-engineered solution…components that are chosen and engineered to work well together. One example is the “VCE” system, so called because it is a bundle of VMware, Cisco UCS hardware, and EMC storage.

A “hyperconverged” system takes this concept a step further. It is a modular system from a single vendor that integrates all functions, with a management overlay that allows all the components to be managed via a “single pane of glass.” They are designed to scale by simply adding more modules. They can typically be managed by one team, or, in some cases, one person.

VMware’s EVO:RAIL system, introduced in August of last year, is perhaps the first example of a truly hyperconverged system. VMware has arrangements with several hardware vendors, including Dell, HP, Fujitsu, and even SuperMicro, to build EVO:RAIL on their respective hardware. All vendors’ products include four dual-processor compute nodes with 192 GB RAM each, one 400 GB SSD per node (used for caching), and three 1.2 TB hot-plug disk drives per node, all in a 2U rack-mount chassis with dual hot-plug redundant power supplies.
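Those per-node figures add up quickly. A back-of-the-envelope tally of a single appliance, using the original specs above:

```python
# Per-appliance totals for the original EVO:RAIL configuration
# (four compute nodes in one 2U chassis).
NODES = 4

ram_per_node_gb = 192
ssd_per_node_gb = 400        # one caching SSD per node
hdds_per_node = 3
hdd_capacity_tb = 1.2

total_ram_gb = NODES * ram_per_node_gb                  # 768 GB of RAM
total_ssd_gb = NODES * ssd_per_node_gb                  # 1,600 GB of cache
total_hdd_tb = NODES * hdds_per_node * hdd_capacity_tb  # ~14.4 TB raw

print(total_ram_gb, total_ssd_gb, total_hdd_tb)
```

That’s a respectable amount of compute and storage in two rack units.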

Update – June 10, 2015
VMware has now given hardware vendors more flexibility in configuring the appliances: They can now include dual six-, eight-, ten-, or 12-core Intel Haswell or Ivy Bridge CPUs per node, 128 GB to 512 GB of RAM per node, and an alternate storage configuration of one 800 GB SSD and five 1.5 TB HDDs per node.

The hardware is bundled with VMware’s virtualization software, as well as their virtual SAN. The concept is appealing – you plug it in, turn it on, and you’re 15 minutes away from building your first VM. EVO:RAIL can be scaled out to four appliances (today), with plans to increase the number of nodes in the future.

The good news is that it’s fast and simple, it has a small footprint (meaning it enables high-density computing), and it places lower demands on power and cooling. As Todd Knapp puts it, “Hyperconverged infrastructure is a good fit for companies with branch locations or collocated facilities, as well as small organizations with big infrastructure requirements.”

Andy Warfield (from whom I borrowed the Teasmade example), writing in his blog, is a bit more specific: “…converged architectures solve a very real and completely niche problem: at small scales, with fairly narrow use cases, converged architectures afford a degree of simplicity that makes a lot of sense. For example, if you have a branch office that needs to run 10 – 20 VMs and that has little or no IT support, it seems like a good idea to keep that hardware install as simple as possible. If you can do everything in a single server appliance, go for it!”

But Andy also points out some not-so-good news:

However, as soon as you move beyond this very small scale of deployment, you enter a situation where rigid convergence makes little or no sense at all. Just as you wouldn’t offer to serve tea to twelve dinner guests by brewing it on your alarm clock, the idea of scaling cookie-cutter converged appliances begs a bit of careful reflection.

If your environment is like many enterprises that I’ve worked with in the past, it has a big mix of server VMs. Some of them are incredibly demanding. Many of them are often idle. All of them consume RAM. The idea that as you scale up these VMs on a single server, that you will simultaneously exhaust memory, CPU, network, and storage capabilities at the exact same time is wishful thinking to the point of clinical delusion…what value is there in an architecture that forces you to scale out, and to replace at end of life, all of your resources in equal proportion?

Moreover, hyperconverged systems are, at the moment, pretty darned expensive. An EVO:RAIL system will cost you well over six figures, and locks you into a single vendor. Unlike most stand-alone SAN products, VMware’s virtual SAN won’t provision storage to physical servers. And EVO:RAIL is, by definition, VMware only, whereas many enterprises have a mixture of hypervisors in their environment. (According to Todd Knapp, saying “We’re a __________ shop” is just another way of saying “We’re more interested in maintaining homogeneity in the network than in taking advantage of innovations in technology.”) Not to mention the internal political problems: Which of those groups we discussed earlier is going to manage the hyperconverged infrastructure? Does it fall under servers, storage, or networking? Are you going to create a new group of admins? Consolidate the groups you have? It could get ugly.

So where does this leave us? Is convergence, or hyperconvergence, a good thing or not? The answer, as it often is in our industry, is “It depends.” In the author’s opinion, Andy Warfield is exactly right in that today’s hyperconverged appliances address fairly narrow use cases. On the other hand, the hardware platforms that have been developed to run these hyperconverged systems, such as the Fujitsu CX400, have broader applicability. Just think for a moment about the things you could do with a 2U rack-mount system that contained four dual-processor server modules with up to 256 GB of RAM each, and up to 24 hot-plug disk drives (6 per server module).

We’ve built a number of SMB virtualization infrastructures with two rack-mount virtualization hosts and two DataCore SAN nodes, each of which was a separate 2U server with its own power supplies. Now we can do it in ¼ the rack space with a fraction of the power consumption. Or how about two separate Stratus everRun fault-tolerant server pairs in a single 2U package?

Innovation is nearly always a good thing…but it’s amazing how often the best applications turn out not to be the ones the innovators had in mind.

Hyperconvergence and the Advent of Software-Defined Everything (Part 1)

The IT industry is one of those industries that is rife with “buzz words” – convergence, hyperconvergence, software-defined this and that, etc., etc. It can be a challenge for even the most dedicated IT professionals to keep up on all the new trends in technology, not to mention the new terms invented by marketeers who want you to think that the shiny new product they just announced is on the leading edge of what’s new and cool…when in fact it’s merely repackaged existing technology.

What does it really mean to have “software-defined storage” or “software-defined networking”…or even a “software-defined data center”? What’s the difference between “converged” and “hyperconverged”? And why should you care? This series of articles will suggest some answers that we hope will be helpful.

First, does “software-defined” simply mean “virtualized?”

No, not as the term is generally used. If you think about it, every piece of equipment in your data center these days has a hardware component and a software component (even if that software component is hard-coded into specialized integrated circuit chips or implemented in firmware). Virtualization is, fundamentally, the abstraction of software and functionality from the underlying hardware. Virtualization enables “software-defined,” but, as the term is generally used, “software defined” implies more than just virtualization – it implies things like policy-driven automation and a simplified management infrastructure.

An efficient IT infrastructure must be balanced properly between compute resources, storage resources, and networking resources. Most readers are familiar with the leading players in server virtualization, with the “big three” being VMware, Microsoft, and Citrix. Each has its own control plane to manage the virtualization hosts, but some cross-platform management is available. vCenter can manage Hyper-V hosts. System Center can manage vSphere and XenServer hosts. It may not be completely transparent yet, but it’s getting there.

What about storage? Enterprise storage is becoming a challenge for businesses of all sizes, due to the sheer volume of new information that is being created – according to some estimates, as much as 15 petabytes of new information world-wide every day. (That’s 15 million billion bytes.) The total amount of digital data that needs to be stored somewhere doubles roughly every two years, yet storage budgets are increasing only 1% – 5% annually. Hence the interest in being able to scale up and out using lower-cost commodity storage systems.
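The mismatch between those two growth rates compounds brutally. A quick projection makes the point (assuming, for simplicity, that the budget buys capacity at a constant price and grows at the optimistic end of the range):

```python
# Data volume doubles every two years; the storage budget grows
# 5% per year (the optimistic end of the 1%-5% range above).
data = 1.0     # relative data volume at year 0
budget = 1.0   # relative storage budget at year 0

for year in range(10):
    data *= 2 ** 0.5   # doubling every two years = ~41% per year
    budget *= 1.05

# After a decade, data has grown 32x while the budget has grown
# only about 1.6x -- a gap of roughly 19-20x.
print(round(data, 1), round(budget, 2), round(data / budget, 1))
```

Even under the rosiest budget assumption, capacity demand outruns spending by an order of magnitude within ten years – which is exactly why commodity scale-out storage is so attractive.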

But the problem is often compounded by vendor lock-in. If you have invested in Vendor A’s enterprise SAN product, and now want to bring in an enterprise SAN product from Vendor B because it’s faster/better/less costly, you will probably find that they don’t talk to one another. Want to move Vendor A’s SAN into your Disaster Recovery site, use Vendor B’s SAN in production, and replicate data from one to the other? Sorry…in most cases that’s not going to work.

Part of the promise of software-defined storage is the ability to not only manage the storage hardware from one vendor via your SDS control plane, but also pull in all of the “foreign” storage you may have and manage it all transparently. DataCore, to cite just one example, allows you to do just that. Because the DataCore SAN software is running on a Windows Server platform, it’s capable of aggregating any and all storage that the underlying Windows OS can see into a single storage pool. And if you want to move your aging EMC array into your DR site, and have your shiny, new Compellent production array replicate data to the EMC array (or vice versa), just put DataCore’s SANsymphony-V in front of each of them, and let the DataCore software handle the replication. Want to bring in an all-flash array to handle the most demanding workloads? Great! Bring it in, present it to DataCore, and let DataCore’s auto-tiering feature dynamically move the most frequently-accessed blocks of data to the storage tier that offers the highest performance.

What about software-defined networking? Believe it or not, in 2013 we reached the tipping point where there are now more virtual switch ports than physical ports in the world. Virtual switching technology is built into every major hypervisor. Major players in the network appliance market are making their technology available in virtual appliance form. For example, WatchGuard’s virtual firewall appliances can be deployed on both VMware and Hyper-V, and Citrix’s NetScaler VPX appliances can be deployed on VMware, Hyper-V, or XenServer. But again, “software-defined networking” implies the ability to automate changes to the network based on some kind of policy engine.

If you put all of these pieces together, vendor-agnostic virtualization + policy-driven automation + simplified management = software-defined data center. Does the SDDC exist today? Arguably, yes – one could certainly make the case that the VMware vCloud Automation Center, Microsoft’s Azure Pack, Citrix’s CloudStack, and the open-source OpenStack all have many of the characteristics of a software-defined data center.

Whether the SDDC makes business sense today is not as clear. Brad Maltz of Lumenate has been quoted as saying, “It will take about three years for companies to learn about the software-defined data center concept, and about five to ten years for them to understand and implement it.” Certainly some large enterprises may have the resources – both financial and skill-related – to begin reaping the benefits of this technology sooner, but it will be a challenge for small and medium-sized enterprises to get their arms around it. That, in part, is what is driving vendors to introduce converged and hyperconverged products, and that will be the subject of Part 2 of this series.

Windows Server 2003 – Four Months and Counting

Unless you’ve been living in a cave in the mountains for the last several months, you’re probably aware that Windows Server 2003 hits End of Life on July 14, 2015 – roughly four months from now. That means Microsoft will no longer develop or release security patches or fixes for the OS. You will no longer be able to call Microsoft for support if you have a problem with your 2003 server. Yet, astoundingly, only a few weeks ago Microsoft was estimating that there were still over 8 million 2003 servers in production.

Are some of them yours? If so, consider this: As Mike Boyle pointed out in his blog last October, you’re running a server OS that was released the year Facebook creator Mark Zuckerberg entered college; the year Wikipedia was launched; the year Myspace (remember them?) was founded; the year the Tampa Bay Buccaneers won the Super Bowl. Yes, it was that long ago.

Do you have to deal with HIPAA or PCI compliance? What would it mean to your organization if you didn’t pass your next audit? Because you probably won’t if you’re still running 2003 servers. And even if HIPAA or PCI aren’t an issue, what happens when (not if) the next big vulnerability is discovered and you have no way to patch for it?

Yes, I am trying to scare you – because this really is serious stuff, and if you don’t have a migration plan yet, you don’t have much time to assemble one. Please, let’s not allow this to become another “you can have it when you pry it from my cold dead hands” scenario like Windows XP. There really is too much at stake here. You can upgrade. You can move to the cloud. Or you can put your business at risk. It’s your call.

Seven Security Risks from Consumer-Grade File Sync Services

[The following is courtesy of Anchor – an eFolder company and a VirtualQube partner.]

Consumer-grade file sync solutions (referred to hereafter as “CGFS solutions” to conserve electrons) pose many challenges to businesses that care about control and visibility over company data. You may think that you have nothing to worry about in this area, but the odds are that if you have not provided your employees with an approved business-grade solution, you have multiple people using multiple file sync solutions that you don’t even know about. Here’s why that’s a problem:

  1. Data theft – Most of the problems with CGFS solutions emanate from a lack of oversight. Business owners are not privy to when an instance is installed, and are unable to control which employee devices can or cannot sync with a corporate PC. Use of CGFS solutions can open the door to company data being synced (without approval) across personal devices. These personal devices, which accompany employees on public transit, at coffee shops, and with friends, exponentially increase the chance of data being stolen or shared with the wrong parties.
  2. Data loss – Lacking visibility into the movement of files and file versions across endpoints, CGFS solutions may improperly back up (or fail to back up at all) files that were modified on an employee device. If an endpoint is compromised or lost, this lack of visibility can result in the inability to restore the most current version of a file…or any version, for that matter.
  3. Corrupted data – In a study by CERN, silent data corruption was observed in 1 out of every 1500 files. While many businesses trust their cloud solution providers to make sure that stored data maintains its integrity year after year, most CGFS solutions don’t implement data integrity assurance systems to ensure that any bit-rot or corrupted data is replaced with a redundant copy of the original.
  4. Lawsuits – CGFS solutions give end-users carte blanche to permanently delete and share files. This can result in the permanent loss of critical business documents, as well as the sharing of confidential information that can break privacy agreements in place with clients and third parties.
  5. Compliance violations – Since CGFS solutions have loose (or non-existent) file retention and file access controls, you could be setting yourself up for a compliance violation. Many compliance policies require that files be held for a specific duration and only be accessed by certain people; in these cases, it is imperative to employ strict controls over how long files are kept and who can access them.
  6. Loss of accountability – Without detailed reports and alerts over system-level activity, CGFS solutions can result in loss of accountability over changes to user accounts, organizations, passwords, and other entities. If a malicious admin gains access to the system, hundreds of hours of configuration time can be undone if no alerting system is in place to notify other admins of these changes.
  7. Loss of file access – Consumer-grade solutions don’t track which users and machines touched a file and at which times. This can be a big problem if you’re trying to determine the events leading up to a file’s creation, modification, or deletion. Additionally, many solutions track only a small set of file events, which can result in a broken audit trail if, for example, a file is renamed.
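The data-integrity assurance mentioned in item 3 can be as simple as recording a cryptographic hash of each file at backup time and re-checking it later; any mismatch signals silent corruption and tells the system to restore from a redundant copy. Here’s a minimal sketch in Python (the function names and manifest structure are illustrative, not any particular vendor’s implementation):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def build_manifest(paths):
    """At backup time, record a hash for each file."""
    return {path: sha256_of(path) for path in paths}

def find_corrupted(manifest):
    """Later, re-hash each file; any mismatch indicates silent corruption."""
    return [path for path, recorded in manifest.items()
            if sha256_of(path) != recorded]
```

A production system would layer redundancy on top of this: when `find_corrupted` flags a file, the damaged copy is replaced from a replica whose hash still matches.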

In short: allowing employees to use CGFS solutions for company data can lead to massive data leaks and security breaches.

Many companies have formal policies that discourage (or outright forbid) employees from using their own accounts. But while blacklisting common CGFS solutions may curtail the security risks in the short term, employees will ultimately find ways around company firewalls and restrictive policies that they feel interfere with their productivity.

The best way for a business to handle this is to deploy a company-approved application that allows IT to control the data, yet grants employees the access and functionality they feel they need to be productive.

The Great Superfishing Expedition of 2015

In a move that will probably end up in the top ten technology blunders of the year, Lenovo decided, starting in September 2014, to pre-install Superfish VisualDiscovery software on some of their PCs. (Fortunately for most of the readers of this blog, it appears that it was primarily the consumer products that were affected, not the business products.) The “visual search” concept behind Superfish is interesting – the intent is that a user could hover over a picture in their browser, and Superfish would pop up links to shopping sites that sell the item in the picture. I could see where that would be some pretty cool functionality…if the user wanted that functionality, if the user intentionally installed the software, and if the user could easily turn the functionality on and off as desired. But that’s not what happened – and here’s why it’s a big problem.

In order to perform this function when a user has an SSL-encrypted connection to a Web site, Superfish has to insert itself into the middle of that encrypted connection. It has to intercept the data coming from the shopping site, decrypt it, and then re-encrypt it before sending it on to the browser. Security geeks have a term for this – it’s called a “man-in-the-middle attack,” and it’s not something you want to willingly allow on your PC. In order to do this, Superfish installs a self-signed trusted root certificate on the PC. That means Superfish has the same level of trust as, say, the VeriSign trusted root certificate that Microsoft bakes into your Operating System so you can safely interact with all the Web sites out there that have VeriSign certificates on them…for example, your banking institution, as most financial institutions I’ve seen use VeriSign certificates on their Web banking sites. (Are you frightened yet?)

But that’s not all. Superfish installs the same root certificate on every PC that it gets installed on. And it turns out that it’s not technically difficult to recover the private encryption key from the Superfish software. That means that an attacker could generate an SSL certificate for any Web site that would be trusted by any system that has the Superfish software installed. In other words, you could be lured to a Web site that impersonated your bank, or a favorite shopping site, and you would get no security warning from your browser. You try to authenticate, and now the bad guys have your user credentials. (How about now?)
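To see why a single shared, extractable root key is so dangerous, consider this toy model of certificate trust. It is deliberately simplified: a real trust store holds only the root’s public key, and real certificates use asymmetric X.509 signatures rather than hashes. All names and keys below are made up for illustration. But the logic of the failure is the same: once the Superfish root is trusted and its private key is public knowledge, anyone can mint a certificate for any site.

```python
import hashlib

def sign(private_key: str, data: str) -> str:
    # Toy "signature": a hash of key + data (stands in for real RSA signing).
    return hashlib.sha256((private_key + data).encode()).hexdigest()

def make_cert(subject: str, issuer: str, issuer_key: str) -> dict:
    body = f"{subject}|{issuer}"
    return {"subject": subject, "issuer": issuer, "sig": sign(issuer_key, body)}

def is_trusted(cert: dict, trusted_roots: dict) -> bool:
    # A cert is trusted if it was signed by a root in our trust store.
    issuer_key = trusted_roots.get(cert["issuer"])
    if issuer_key is None:
        return False
    body = f"{cert['subject']}|{cert['issuer']}"
    return cert["sig"] == sign(issuer_key, body)

# Normal state: the OS ships trusting VeriSign's root.
roots = {"VeriSign": "verisign-private-key"}
bank_cert = make_cert("mybank.com", "VeriSign", "verisign-private-key")
assert is_trusted(bank_cert, roots)

# A forged cert from an unknown issuer is correctly rejected...
forged = make_cert("mybank.com", "Superfish", "superfish-private-key")
assert not is_trusted(forged, roots)

# ...until Superfish installs its self-signed root on the PC. Because the
# SAME private key shipped on every machine, an attacker who extracts it
# can mint a trusted cert for ANY site, and the browser raises no warning.
roots["Superfish"] = "superfish-private-key"
assert is_trusted(forged, roots)
```

The fix is exactly what the security community demanded: remove the Superfish root from the trust store, at which point the `is_trusted` lookup fails again and forged certificates are rejected.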

Hopefully, you’re at least frightened enough to check whether your system is one of the ones that Lenovo shipped with Superfish pre-installed. Lenovo has published the list of affected models on its Web site. Again, it appears that the majority of the Lenovo systems on the list were consumer models, not business models. If you are one of the unlucky ones, Lenovo has also published an uninstall tool.

You should also note that security experts are divided as to whether simply running uninstall tools and deleting the root certificate are sufficient. Some have recommended a new, clean installation of Windows as the safest thing to do. Unfortunately, this may require you to purchase a new copy of Windows if you don’t have one lying around…as just re-installing from whatever recovery media may have come with your new PC will probably also re-install Superfish.

Meanwhile, Lenovo has stopped pre-installing Superfish, and is doing its best to control the damage to its brand. We wish them the best of luck with that – from what we’ve seen, they make some great products…and at least one really bad decision…

Choosing the Right IT Provider

A few months ago, we wrote about how business leaders could determine when it was time to use an outside IT vendor. (See “When Should an IT Leader Use a Vendor, Part 1” and Part 2.) Once the decision has been made to seek outside help, the logical next question is how to choose the right IT vendor. Before you begin that selection process, you need to assess your organization’s needs:

  • Do you have an in-house IT staff and just need a consultant for specialty work? Or do you need to outsource a broader spectrum of services, such as comprehensive help desk support, fixed fee monitoring and support services for your workstations and/or servers, and consulting services to help you establish future technology direction? A consultant may have different pricing approaches for different types of IT projects, while the broader spectrum of services is probably best handled via a fixed-fee monthly support contract.
  • What, exactly, are you looking for? Do you need a single project completed? Are you looking for design services, deployment services, post-deployment support, or some combination of the three? Do you want your vendor to provide a complete package consisting of hardware, software, and services, or only part of the solution? Will the project be built on premise, or do you want to go to the Cloud? IT providers frequently specialize in different aspects of the IT world, so make sure you have a talk with any company you are considering to determine if they can fulfill all of your needs, or if you will need multiple providers to achieve your end goal.

After you’ve determined your needs, you will want to identify IT providers that offer the services that you need. Some providers are very specialized, and others have broad offerings. You will want to do your due diligence by checking out the provider’s own Web site as well as supporting sites such as LinkedIn, Facebook, Twitter, etc. But don’t stop there – dig deeper and examine their credentials. Look for case studies, testimonials, and references. Ask if you can actually speak to the customers who are profiled in these case studies, testimonials, and references. If you’re looking for a comprehensive support agreement, ask to review the contract to make sure all of your needs are covered and that the proposed Service Level Agreement (“SLA”) meets your requirements. Some of the questions you’ll want to answer are:

  • How qualified is the provider’s staff? Are they certified with the vendors whose products they will be working on in your environment?
  • How big is the provider’s company? Size and reach matter – you don’t want to have a service emergency and discover that the only person who knows how to work on your systems is gone on vacation. On the other hand, if your organization is small, your business may be less important to a very large provider and you may get more attentive service from a smaller one.
  • What geographical areas does the provider cover? This is obviously important if your own organization operates in more than one area, but will also be important if you’re considering a potential move or business expansion.
  • Does the SLA include a guaranteed response time? More importantly, does that guaranteed response time meet the needs of your business? It might be nice to have a one hour guaranteed response time, but shorter guaranteed response times are likely to be more expensive…so if your business really doesn’t need that SLA, why pay for it?
  • If you’re signing a support contract, what services are covered, what is excluded, and what will you pay for items that are excluded from coverage?

Did we miss anything that you have found to be important? Let us know in the comments.

The Year of Mobile Computing: BYOD Trends to Expect in 2015

Guest post by Jennifer Birch

Bring Your Own Everything

Photo Credit: Dennis Callahan via Compfight cc

As people become more reliant on mobile devices, the “bring your own device” (BYOD) trend becomes more common in today’s highly technology-dependent world. In fact, Gartner research predicted that, by 2017, 50 percent of companies will require their staff to use their own devices for work purposes. “The benefits of BYOD include creating new mobile workforce opportunities, increasing employee satisfaction, and reducing or avoiding costs,” according to Gartner vice president David Willis.

With the continuous demand for mobile computing in the business sector, it’s important to know what’s coming next. Here are the top BYOD trends to watch out for this year.

More Mobile Security Apps
Security will remain the main concern slowing the widespread growth of mobile computing in the office. However, as the famous saying goes, “there’s an app for that.” A mobile security application is one of the most important apps any device owner should acquire. For companies, a major concern is the safety of their servers and of crucial business information, since employee devices can easily be stolen, or accessed remotely by an attacker. It’s best to follow some of the common tips for mobile data security, such as installing security apps, clearing cache and history, and enabling the device’s PIN lock. [Editor’s note: Mobile Device Management systems such as Citrix XenMobile can offer organizations ways to enforce security policies, even on employee-owned devices.]

Rise of Wearables
Some of the most anticipated devices this year are wearables, particularly smart headsets such as Google Glass. With their potential to deliver augmented- and virtual-reality technologies, these headsets give workers in various industries the opportunity to work remotely, take advantage of innovative solutions, and view real-time data right before their eyes. “It [smartglasses] could provide access to repair manuals and larger schematics, helping engineers, technicians and architects to make more informed, quicker decisions,” Steve Pluta wrote in the news section of O2. As smartwatches have become powerful as well (with their ability to operate as standalone devices), it would not be surprising if these gadgets were also included in the next wave of BYOD technologies.

High mTech Demands by Employees
As stated previously, there will be an increase in the number of companies requiring their employees to use their own smartphones and tablets to work remotely. However, demand coming from their staff will also be apparent, such as the following:

  • The option to choose their own type of gadget.
  • Demand for a 4G connection.
  • Free access to work-related apps.
  • Pre-installed Cloud apps (such as Dropbox or iCloud), access to company Web site, and more.

Tracking Tools to Monitor Mobile Usage
Since mobile devices will be widely adopted in the office, businesses will have to control and monitor their usage. With the help of analytics tools, companies can gain concrete insight into the content their employees are accessing. Some may regard this as a way to control employees, limiting the activities they can engage in on their own devices. However, experts say that the introduction of a mobile monitoring tool should be discussed openly with employees to avoid friction in the process.

BYOD has transformed the business sector with its many advantages, from faster workflows to reduced hardware costs. Although security will remain the utmost concern for most companies making the shift to mobile computing, BYOD will continue to grow as more devices are produced that are focused on making work more efficient and cost-effective. What trends are you expecting to see in BYOD this year?

Exclusive for VirtualQube

NOTE: VirtualQube welcomes the submission of guest posts on topics related to our own subject matter. The opinions expressed by the authors of guest posts are their own and do not necessarily represent the opinions of VirtualQube. VirtualQube also reserves the right to decline to publish submissions that we feel are not appropriate for our site.