Tag Archives: Business Practices

How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times the number of megabytes (MB) in your storage. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records. If that limit is exceeded, the oldest records in the recycle bin are deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to recover the purged records using the Data Loader tool, but the window of time is very brief. Exactly how long you have is not well documented, but research indicates it’s around 24–48 hours.
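The recycle bin arithmetic above is simple enough to sanity-check. A minimal sketch, using the 25-records-per-MB multiplier quoted from the Salesforce.com help site:

```python
def recycle_bin_limit(storage_mb: int, records_per_mb: int = 25) -> int:
    """Approximate Salesforce recycle bin capacity: 25 records per MB of storage."""
    return storage_mb * records_per_mb

# The example from the text: 500 MB of storage allows 12,500 records.
print(recycle_bin_limit(500))  # 12500
```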

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
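To give a sense of what that manual restore involves, here is a sketch of the filtering step: pulling just the records you want out of a full export. It assumes the export has an “Id” column, as Salesforce exports normally do; the file names are hypothetical.

```python
import csv

def extract_records(export_path: str, restore_path: str, wanted_ids: set) -> int:
    """Copy only the rows whose Id matches wanted_ids into a new, smaller CSV.

    Assumes the export includes an 'Id' column. Returns the number of
    records written, so you can confirm you found what you expected.
    """
    with open(export_path, newline="", encoding="utf-8") as src, \
         open(restore_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        count = 0
        for row in reader:
            if row["Id"] in wanted_ids:
                writer.writerow(row)
                count += 1
        return count
```

Even with the filtering automated, you would still have to re-import the result and rebuild any parent/child relationships by hand, which is why the third-party route below is usually less painful.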

The other alternative is to use a third-party service to back up your Salesforce.com data, and there are several to choose from. A third-party tool offers several advantages: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at VirtualQube are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact sales@virtualqube.com, or just click the “Request a Quote” button on this page.

eDiscovery Part 1 – Lifecycle of an Email Message

Last Friday, September 26, VirtualQube was invited to present at the O365 Nation fall conference in Redmond on the subject of eDiscovery and Organizational Search in Microsoft Office. O365 Nation is a new organization created by our long-time friend Harry Brelsford, the founder of SMB Nation, and, as you might expect, most of the content at the conference was related to Office 365. However, since the eDiscovery and Search tools in question are built into Exchange, SharePoint, and Lync, the subject matter of our presentation is equally applicable to on-premises deployments of these products.

This is the first of a series of blog posts on this topic, which will include video excerpts from the presentation.

It is important to note that the Microsoft tools discussed here only cover a portion of the Electronically Stored Information (“ESI”) that an organization may be required to produce as part of a discovery action. ESI can include Web content, social media content, videos, voice mails, etc., in addition to the information contained in email and Lync messages, and SharePoint content. The primary purpose of these tools is to enable you to preserve email, Lync, and SharePoint content in its original form, perform integrated searches across all three platforms (plus file shares that are being indexed by SharePoint), and export the results in an industry-standard format that can be ingested into third-party eDiscovery tools for further processing.

Since, by sheer volume, email is likely to be the largest component an organization will have to deal with, this series will begin with a discussion of the lifecycle of an email message in Microsoft Exchange – specifically, what happens to an email message when the user’s “Deleted Items” folder is emptied, and how we can ensure that if a user attempts to modify an existing message, a copy of that message in its original form is preserved.

Cloud-Based VDI vs. DaaS – Is There a Difference?

With nearly all new technologies in the IT space comes confusion over terminology. Some of the confusion is simply because the technology is new, and we’re all trying to understand how it works and how – or whether – it fits the needs of our businesses. Unfortunately, some of the confusion is often caused by technology vendors who want to find a way to label their products in a way that associates them with whatever is perceived as new, cool, innovative, cutting-edge, etc. Today, we’re seeing that happen with terms like “cloud,” “DaaS,” and “VDI.”

“VDI” stands for Virtual Desktop Infrastructure. Taken literally, it’s an infrastructure that delivers virtual desktops to users. What is a virtual desktop? It is a (usually Windows) desktop computing environment where the user interface is abstracted and delivered to a remote user over a network using some kind of remote display protocol such as ICA, RDP, or PCoIP. That desktop computing environment is most often virtualized using a platform such as VMware, Hyper-V, or XenServer, but could also be a blade PC or even an ordinary desktop PC. If the virtual desktop is delivered by a service provider (such as VirtualQube) for a monthly subscription fee, it is often referred to as “Desktop as a Service,” or “DaaS.”

There are a number of ways to deliver a virtual desktop to a user:

  • Run multiple, individual instances of a desktop operating system (e.g., Windows 7 or Windows 8) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. Citrix XenDesktop, VMware View, and Citrix VDI-in-a-Box are all products that enable this model.
  • Run multiple, individual instances of a server operating system (e.g., Windows Server 2008 R2 or 2012 R2) on a virtualization host that’s running a hypervisor such as VMware, Hyper-V, or XenServer. In such a case, a policy pack can be applied that will make the 2008 R2 desktop look like Windows 7, and the 2012 R2 desktop look like Windows 8. In a moment we’ll discuss why you might want to do that.
  • Run multiple, individual desktops on a single, shared server operating system, using Microsoft Remote Desktop Services (with or without added functionality from products such as Citrix XenApp). This “remote session host,” to use the Microsoft term, can be a virtual server or a physical server. Once again, the desktop can be made to look like a Windows 7 or Windows 8 desktop even though it’s really a server OS.
  • Use a brokering service such as XenDesktop to allow remote users to connect to blade PCs in a data center, or even to connect to their own desktop PCs when they’re out of the office.
  • Use client-side virtualization to deliver a company-managed desktop OS instance that will run inside a virtualized “sandbox” on a client PC, such as is the case with Citrix XenClient, or the Citrix Desktop Player for Macintosh. In this case, the virtual desktop can be cached on the local device’s hard disk so it can continue to be accessed after the client device is disconnected from the network.

Although any of the above approaches could be lumped into the “VDI” category, the common usage that seems to be emerging is to use the term “VDI” to refer specifically to approaches that deliver an individual operating system instance (desktop or server) to each user. From a service provider perspective, we would characterize that as cloud-based VDI. So, to answer the question we posed in the title of this post, cloud-based VDI is one variant of DaaS, but not all DaaS is delivered using cloud-based VDI – and for a good reason.

Microsoft has chosen not to put its desktop operating systems on the Service Provider License Agreement (“SPLA”). That means there is no legal way for a service provider such as VirtualQube to provide a customer with a true Windows 7 or Windows 8 desktop and charge by the month for it. The only way that can be done is for the customer to purchase all the licenses that would be required for their own on-site VDI deployment (and we’ve written extensively about what licenses those are), and provide those licenses to the service provider, which must then provision dedicated hardware for that customer. That hardware cannot be used to provide any services to any other customer. (Anyone who tells you that there’s any other way to do this is either not telling you the truth, or is violating the Microsoft SPLA!)

Unfortunately, the requirement for dedicated hardware will, in many cases, make the solution unaffordable. Citrix recently published the results of a survey of Citrix Service Providers. They received responses from 718 service providers in 25 countries. 70% of them said that their average customer had fewer than 100 employees. 40% said their average customer had fewer than 50 employees. It is simply not cost-effective for a service provider to dedicate hardware to a customer of that size, and unlikely that it could be done at a price the customer would be willing to pay.

On the other hand, both Microsoft and Citrix have clean, easy-to-understand license models for Remote Desktop Services and XenApp, which is the primary reason why nearly all service providers, including VirtualQube, use server-hosted desktops as their primary DaaS delivery method. We all leverage the policy packs that can make a Server 2008 R2 desktop look like a Windows 7 desktop, and a 2012 R2 desktop look like a Windows 8 desktop, but you’re really not getting Windows 7 or Windows 8, and Microsoft is starting to crack down on service providers who fail to make that clear.

Unfortunately, there are still some applications out there that will not run well – or will not run at all – in a remote session hosted environment. There are a number of reasons for this:

  • Some applications check for the OS version as part of their installation routines, and simply abort the installation if you’re trying to install them on a server OS.
  • Some applications will not run on a 64-bit platform – and Server 2008 R2 and 2012 R2 are both exclusively 64-bit platforms.
  • Some applications do not follow proper programming conventions, and insist on doing things like writing temp files to a hard-coded path like C:\temp. If you have multiple users running that application on the same server via Remote Desktop Services, and each instance of the application is trying to write to the same temp file, serious issues will result. Sometimes we can use application isolation techniques to redirect the writes to a user-specific path, but sometimes we can’t.
  • Some applications are so demanding in terms of processor and RAM requirements that anyone else trying to run applications on the same server will experience degraded performance.
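The hard-coded temp path problem in the third bullet is worth illustrating. A minimal sketch of the redirection idea, assuming the writes can be intercepted at all (which, as noted above, isn’t always possible): instead of every session writing to a shared C:\temp, each user gets a scratch directory of their own.

```python
import getpass
import os
import tempfile

def per_user_temp_path(filename: str) -> str:
    """Build a user-specific scratch path instead of a shared, hard-coded C:\\temp.

    tempfile.gettempdir() resolves to a per-user (and, on a Remote Desktop
    Services host, typically per-session) location, so two users running the
    same application no longer collide on the same temp file.
    """
    user_dir = os.path.join(tempfile.gettempdir(), getpass.getuser())
    os.makedirs(user_dir, exist_ok=True)
    return os.path.join(user_dir, filename)
```

Application isolation/virtualization tools apply the same idea transparently, redirecting the application’s hard-coded writes without modifying the application itself.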

There’s not much that a service provider can do to address the first two of these issues, short of going the dedicated-hardware route (for those customers who are large enough to afford it) and provisioning true Windows 7 or Windows 8 desktops. But there is a creative solution for the third and fourth issues, and that’s to use VDI technology to provision individual instances of Server 2008 R2 or Server 2012 R2 per user. From the licensing perspective, it’s no different than supporting multiple users on a remote session host. Once the service provider has licensed a virtualization host for Windows Datacenter edition, there is no limit to the number of Windows Server instances that can be run on that host – you can keep spinning them up until you don’t like the performance anymore. And the Citrix and Microsoft user licensing is the same whether the user has his/her own private server instance, or is sharing the server OS instance with several other users via Remote Desktop Services.

On the positive side, this allows an individual user to be guaranteed a specified amount of CPU and RAM to handle those resource-intensive applications, avoids “noisy neighbor” issues where a single user impacts the performance of other users who happen to be sharing the same Remote Desktop Server, and allows support of applications that just don’t want to run in a multi-user environment. It’s even possible to give the user the ability to install his/her own applications – this may be risky in that the user could break his/her own virtual server instance, but at least the user can’t affect anyone else.

On the negative side, this is a more expensive alternative simply because it is a less efficient way to use the underlying virtualization host. Our tests indicate that we can probably support an average of 75 individual virtual instances of Server 2008 or Server 2012 for VDI on a dual-processor virtualization host with, say, 320 GB or so of RAM. We can support 200–300 concurrent users on the same hardware by running multiple XenApp server instances on it rather than an OS instance per user.
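The density difference translates directly into per-user cost. A back-of-envelope sketch using the figures above (the monthly host cost is a hypothetical placeholder, not a VirtualQube price):

```python
def monthly_cost_per_user(host_cost_per_month: float, users_per_host: int) -> float:
    """Amortize one virtualization host's monthly cost across the users it supports."""
    return host_cost_per_month / users_per_host

host_cost = 3000.0  # hypothetical monthly cost of a dual-CPU host with ~320 GB RAM
vdi = monthly_cost_per_user(host_cost, 75)    # one server OS instance per user
rds = monthly_cost_per_user(host_cost, 250)   # midpoint of 200-300 session users
print(f"VDI: ${vdi:.2f}/user, RDS: ${rds:.2f}/user")  # VDI: $40.00/user, RDS: $12.00/user
```

Whatever the actual host cost, the ratio holds: at these densities, a dedicated OS instance per user costs roughly three times as much in host capacity as a shared session host.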

That said, we believe there are times when the positives of cloud-based VDI are worth the extra money, which is why we offer both cloud-based VDI and remote-session-hosted DaaS powered by Remote Desktop Services and XenApp.

Countdown to July 14, 2015

In case you haven’t heard, Microsoft will end support for Windows Server 2003 on July 14, 2015. A quick glance at the calendar will confirm that this is now less than a year away. So this is your friendly reminder that if you are still running 2003 servers in production, and you haven’t yet begun planning how you’re going to replace them, you darn well better start soon. Here are a few questions to get you started:

  • Are those 2003 servers already virtualized, or do you still have physical servers that will need to be retired/replaced?
  • If you have physical 2003 servers, do you have a virtualized infrastructure that you can use for their replacements? (If not, this is a great opportunity to virtualize. If so, do you have enough available capacity on your virtualization hosts? How about storage capacity on your SAN?)
  • Can the application workloads on those 2003 servers be moved to 2008 or 2012 servers? If not, what are your options for upgrading those applications to something that will run on a later server version?
  • What impact will all this have on your 2015 budget? Have you already budgeted for this? If not, do you still have time to get this into your next budget?
  • Would it make more sense from a budget perspective to move those application workloads to the cloud instead of purchasing server upgrades? (Maybe a monthly operating expense will be easier to deal with than the capital expenditure of purchasing the upgrades.)
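As a starting point for the first question, here is a sketch of how you might inventory remaining 2003 machines. The LDAP filter uses Active Directory’s standard `operatingSystem` computer attribute, but verify it against your own directory before relying on it; the helper function just shows the matching logic on data you’ve already pulled.

```python
def server2003_ldap_filter() -> str:
    """LDAP filter matching computer accounts still reporting a Server 2003 OS.

    Usable with any directory query tool (ldifde, PowerShell, or an LDAP
    library) against Active Directory's 'operatingSystem' attribute.
    """
    return "(&(objectCategory=computer)(operatingSystem=Windows Server 2003*))"

def flag_2003_servers(inventory):
    """Given (hostname, operating_system) pairs, return the hosts needing migration."""
    return [host for host, os_name in inventory
            if os_name.startswith("Windows Server 2003")]

# Example against a hand-entered inventory list:
servers = [("FILESRV01", "Windows Server 2003"),
           ("APPSRV02", "Windows Server 2003 R2"),
           ("DC01", "Windows Server 2012 R2")]
print(flag_2003_servers(servers))  # ['FILESRV01', 'APPSRV02']
```

Note that this only finds domain-joined machines; standalone or workgroup 2003 boxes will need to be tracked down separately.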

According to Microsoft, there are more than 9 million 2003 servers still in production worldwide…and the clock is ticking. How many of the 9 million are yours?

A Brief Respite from CryptoLocker

A couple of days ago (June 2), the UK’s National Crime Agency announced that law enforcement agencies have effectively disabled key nodes of the GOZeuS network, which provided a key delivery mechanism for CryptoLocker’s ransom malware. They’ve also identified a person believed to be the leader of the criminal enterprise behind GOZeuS, and international officials say that other arrests are “in progress.”

While this is good news, it’s unlikely to be a permanent solution to the ransomware problem, given the distributed nature of Internet-based malware. It does, however, give us some breathing room – perhaps as much as a couple of weeks – to think about how to protect against it.

In case you’re not familiar with what CryptoLocker is, it is a particularly nasty form of malware that first appeared in the fall of 2013, and is typically spread by tricking a user into clicking on a disguised executable. Disguised executables are, in part, enabled by the unfortunate design choice Microsoft made in Windows XP that continued through Windows 7, which was to “Hide extensions for known file types” by default. (Personally, this always annoyed me, and one of the first things I always did when setting up a new PC was to deselect that option. It does appear that it is no longer selected by default in Windows 8 and 8.1.)
[Screenshot: the “Hide extensions for known file types” option in Windows Explorer]
This meant that, for example, a Word document that was called “My Important Customer Proposal.docx” would display in Windows Explorer (and elsewhere within the OS) as, simply, “My Important Customer Proposal.” That also meant that if someone sent you an email with a file attachment called MalwareDesignedToStealYourMoney.pdf.exe, it would display in Windows as, simply, MalwareDesignedToStealYourMoney.pdf. An unsophisticated or careless user – or someone who perhaps was just exhausted from a long day and not thinking clearly – might look at the file name and think it was an ordinary Adobe PDF file, and double-click on it to open it up…not realizing that the “.exe” that was hidden from them meant that it was really an executable that was designed to install malware on their system.
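The double-extension trick is mechanical enough that you can scan for it. A sketch of the detection logic (the extension lists are illustrative assumptions, not an exhaustive set):

```python
import os

# Extensions that commonly disguise a payload (assumption: extend as needed)
DOCUMENT_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".jpg", ".txt"}
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".com", ".pif", ".cpl"}

def looks_disguised(filename: str) -> bool:
    """Flag names like 'invoice.pdf.exe': a document extension hiding an executable one."""
    root, last_ext = os.path.splitext(filename.lower())
    _, inner_ext = os.path.splitext(root)
    return last_ext in EXECUTABLE_EXTENSIONS and inner_ext in DOCUMENT_EXTENSIONS

print(looks_disguised("MalwareDesignedToStealYourMoney.pdf.exe"))  # True
print(looks_disguised("My Important Customer Proposal.docx"))      # False
```

This is essentially one of the rules that tools like CryptoPrevent (discussed below) enforce at the system level, blocking such files from executing rather than merely flagging them.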

“But why,” you might ask, “wouldn’t my anti-virus software protect me against this?” The answer is that some anti-virus products might protect you, depending on how the options are set. But many, if not most, users have local administrator rights to their PCs. (Yes, arguably they shouldn’t, but every IT admin that’s ever tried to take those rights away has had to deal with the howls of protest when users – often top executives – suddenly can’t install iTunes or some other equally essential utility on their PCs.) So unless your AV product is set to scan files whenever they are accessed – a setting that often isn’t enabled even on products that are capable of doing it because it can slow your system down – you won’t know that you’re installing something bad until it’s too late. Local administrators, by definition, have the authority to install software. You launched the installation program, you’re a local administrator, so it’s going to get installed.

[Screenshot: CryptoLocker ransom demand screen]
Once installed, CryptoLocker checks in with a server on the Internet that assigns a public/private key pair to that PC, and CryptoLocker then happily goes to work using the public key to encrypt all the documents, spreadsheets, pictures, etc., on your system. The latest variants will even encrypt files on network drives if they’re mapped using a drive letter. (So far, it doesn’t appear that CryptoLocker knows how to navigate across UNC paths.) There is even some evidence that the latest variants may wait up to two weeks before locking you out of your files, in the hopes that you will move through a full cycle of backups during that time, meaning that all your backups will also be encrypted and therefore useless to you. Once it’s done its dirty work, you will suddenly be unable to access any of your files, and will be presented with a screen that tells you that you have, typically, 72 hours to submit payment – typically via untraceable money cards or bitcoin – in order to obtain the private key that will decrypt your files. Otherwise, the private key will be automatically destroyed, and your files will be forever out of your reach.

If the thought of having to cough up the equivalent of $300 US or lose all your data leaves you with cold chills (as it does me), what can/should you do?

  • First and foremost, educate your users. One of the most basic rules of computer safety is that you simply don’t open email attachments from people you don’t know – and, for that matter, don’t open them from people you do know unless you were expecting them and know what they are. Remember that it’s not that tough to impersonate someone’s email address. At the moment, most CryptoLocker payloads are disguised as invoices from financial institutions, messages from shipping companies, notices from law enforcement agencies, etc., often with scary messages about account closures, final notices, and amounts due. Also beware of zip file attachments. Make sure your users are aware of these common tricks, so they don’t reflexively click to see what a file attachment is.
  • If you’re still running Windows 7 or earlier, deselect the “Hide extensions for known file types” option. This will at least make it slightly more likely that someone will notice that there’s something not quite right about the file they’re about to click on.
  • Keep your anti-virus products up to date.
  • Restrict permissions on shared folders.
  • Consider removing local admin rights from users.
  • Consider using a prevention tool like “CryptoPrevent” from the folks at Foolish IT, LLC. This is a tool that is free for both private and commercial use – although there is a paid version that will automatically update itself and offers additional features like email alerts when applications are blocked. When installed, it will, silently and automatically, lock down a Windows system by, among other things, preventing executables with double extensions (like “something.pdf.exe”) from running, and preventing executables from running if they’re located in folders where you wouldn’t expect legitimate programs to be located. It implements over 200 rules that will help protect you from other forms of malware as well as CryptoLocker.

    It should be noted that, if you’re running a Professional version of Windows that is joined to a Windows domain, all of these rules could be set via group policies, and there are even pre-packaged prevention kits, such as CryptolockerPreventionKit.zip, available at www.thirdtier.net/downloads that will make it easier to set those group policies. But if you’re not comfortable with the whole concept of group policies and/or you’re not in a Windows domain or you’re running a home version of Windows, CryptoPrevent is a fast and easy way to deal with the issue.

Please do not assume that the latest law enforcement announcements mean that we don’t have to worry about CryptoLocker anymore. It’s estimated that CryptoLocker raked in as much as $30 million just in the first 100 days after it appeared in the wild. With that much money in play, it – or something else like it – will inevitably reappear sooner or later.