Tag Archives: Backup Strategy

How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times the number of megabytes in your storage. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records (the arithmetic is sketched just after this list). If that limit is exceeded, the oldest records in the recycle bin get deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to recover the purged records using the Data Loader tool, but the window of time is very brief. Exactly how long you have is not well documented, but research indicates it’s around 24 – 48 hours.
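
To put the first caveat in concrete terms, here is a minimal sketch of the arithmetic. The 25-records-per-megabyte multiplier is the figure from the Salesforce.com help site mentioned above; the storage sizes are made-up examples, not any particular org’s numbers.

```python
# Rough estimate of Salesforce.com recycle bin capacity, using the
# "25 records per MB of storage" rule of thumb described above.
RECORDS_PER_MB = 25

def recycle_bin_limit(storage_mb):
    """Approximate number of records the recycle bin can hold."""
    return storage_mb * RECORDS_PER_MB

print(recycle_bin_limit(500))    # the 500 MB example above -> 12,500 records
print(recycle_bin_limit(1024))   # a 1 GB org -> 25,600 records
```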

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
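
If you do go the manual-export route, the painful part - isolating the records you want to put back - usually happens outside of Salesforce.com entirely. Here’s a minimal sketch of that step, assuming a full export named accounts_export.csv that includes Salesforce’s standard Id column; the file names and record IDs are purely illustrative.

```python
import csv

# Hypothetical example: carve a handful of lost records out of a full
# Salesforce.com .CSV export so that only those records get re-imported.
ids_to_restore = {"0015000000AaAaA", "0015000000BbBbB"}  # made-up record IDs

with open("accounts_export.csv", newline="") as full_export, \
     open("records_to_restore.csv", "w", newline="") as restore_file:
    reader = csv.DictReader(full_export)
    writer = csv.DictWriter(restore_file, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Id"] in ids_to_restore:  # keep only the records that were lost
            writer.writerow(row)
```

The filtered file can then be re-imported with the Data Loader or the import wizard - exactly the kind of manual effort that a scheduled, searchable backup spares you.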

The other alternative is to use a third-party backup service, of which there are several, to back up your Salesforce.com data. There are several advantages to using a third-party tool: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at VirtualQube are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact sales@virtualqube.com, or just click the “Request a Quote” button on this page.

High Availability vs. Fault Tolerance

Many times, terms like “High Availability” and “Fault Tolerance” get thrown around as though they were the same thing. In fact, the term “fault tolerant” can mean different things to different people - and, much like the terms “portal” or “cloud,” it’s important to be clear about exactly what someone means when they use it.

As part of our continuing efforts to guide you through the jargon jungle, we would like to discuss redundancy, fault tolerance, failover, and high availability, and we’d like to add one more term: continuous availability.

Our friends at Marathon Technologies shared the following graphic, which shows how IDC classifies the levels of availability:

[Graphic: The Availability Pyramid - IDC’s levels of availability]

Redundancy is simply a way of saying that you are duplicating critical components in an attempt to eliminate single points of failure. Multiple power supplies, hot-plug disk drive arrays, multi-pathing with additional switches, and even duplicate servers are all part of building redundant systems.

Unfortunately, there are some failures, particularly if we’re talking about server hardware, that can take a system down regardless of how much you’ve tried to make it redundant. You can build a server with redundant hot-plug power supplies and redundant hot-plug disk drives, and still have the system go down if the motherboard fails - not likely, but still possible. And if it does happen, the server is down. That’s why IDC classifies this as “Availability Level 1” (“AL1” on the graphic)…just one level above no protection at all.

The next step up is some kind of failover solution. If a server experiences a catastrophic failure, the workloads are “failed over” to a system that is capable of supporting them. Depending on those workloads, and what kind of failover solution you have, that process can take anywhere from minutes to hours. If you’re at “AL2,” and you’ve replicated your data using, say, SAN replication or some kind of server-to-server replication, it could take a considerable amount of time to actually get things running again. If your servers are virtualized, with multiple virtualization hosts running against a shared storage repository, you may be able to configure your virtualization infrastructure to automatically restart a critical workload on a surviving host if the host it was running on experiences a catastrophic failure - meaning that your critical system is back up and on-line in the amount of time it takes the system to reboot - typically 5 to 10 minutes.

If you’re using clustering technology, your cluster may be able to fail over in a matter of seconds (“AL3” on the graphic). Microsoft server clustering is a classic example of this. Of course, it means that your application has to be cluster-aware, you have to be running Windows Enterprise Edition, and you may have to purchase multiple licenses for your application as well. And managing a cluster is not trivial, particularly when you’ve fixed whatever failed and it’s time to unwind all the stuff that happened when you failed over. And your application is still unavailable during whatever interval of time is required for the cluster to detect the failure and complete the failover process.

You could argue that a failover of 5 minutes or less equals a highly available system, and indeed there are probably many cases where you wouldn’t need anything better than that. But it is not truly fault tolerant. It’s probably not good enough if you are, say, running a security application that controls smart-card access to secured areas in an airport, or a video surveillance system that is sufficiently critical that you can’t afford a 5-minute gap in your video record, or a process control system where a five-minute halt means you’ve lost the integrity of your work in process and potentially have to discard thousands of dollars’ worth of raw material - and lose thousands more in productivity while you clean out your assembly line and restart it.

That brings us to the concept of continuous availability. This is the highest level of availability, and what we consider to be true fault tolerance. Instead of simply failing workloads over, this level allows for continuous processing without disruption of access to those workloads. Since there is no disruption in service, there is no data loss, no loss of productivity, and no waiting for your systems to restart your workloads.

So all this leads to the question of what your business needs.

Do you have applications that are critical to your organization? If those applications go down, how long could you afford to be without access to them? How much data could you afford to lose - the last 5 minutes’ worth? The last hour’s worth? And, most importantly, what does it cost you if that application is unavailable for a period of time? Do you know, or can you calculate it?

This is another way of asking what the requirements are for your “RTO” (“Recovery Time Objective” - i.e., how long you have, once a system goes down, before you must be back up) and your “RPO” (“Recovery Point Objective” - i.e., once you do get the system back up, how much data it is OK to have lost in the process). We’ve discussed these concepts in previous posts. These are questions that only you can answer, and the answers are significantly different depending on your business model. If you’re a small business, and your accounting server goes down, and all it means is that you have to wait until tomorrow to enter today’s transactions, your situation is far different from that of a major bank processing millions of dollars in credit card transactions.
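
If you’ve never put a number on that cost question, even a back-of-the-envelope estimate beats a guess. Here’s a minimal sketch; every figure in it is a hypothetical placeholder for your own numbers.

```python
# Back-of-the-envelope cost of downtime; all inputs are hypothetical.
revenue_per_hour = 2_000.0    # revenue that stops when the system is down
employees_idled = 15          # people who can't work without the system
cost_per_idle_hour = 45.0     # loaded hourly cost of an idled employee

def outage_cost(hours_down):
    lost_revenue = revenue_per_hour * hours_down
    lost_productivity = employees_idled * cost_per_idle_hour * hours_down
    return lost_revenue + lost_productivity

for hours in (0.5, 4, 24):
    print(f"{hours:>4} hours down ~= ${outage_cost(hours):,.0f}")
```

Comparing that figure against the price of an AL2, AL3, or continuous-availability solution is what turns “how much availability do we need?” from a gut feeling into a business decision.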

If you can satisfy your business needs by deploying one of the lower levels of availability, great! Just don’t settle for an AL1 or even an AL3 solution if what your business truly demands is continuous availability.

BC, DR, BIA - What does it mean???

Most companies instinctively know that they need to be prepared for an event that will compromise business operations, but it’s often difficult to know where to begin.  We hear a lot of acronyms: “BC” (Business Continuity), “DR” (Disaster Recovery), “BIA” (Business Impact Analysis), “RA” (Risk Assessment), but not a lot of guidance on exactly what those things are, or how to figure out what is right for any particular business.

Many companies we meet with today are not really sure what components to implement or what to prioritize.  So what is the default reaction?  “Back up my Servers!  Just get the stuff off-site and I will be OK.”   Unfortunately, this can leave you with a false sense of security.  So let’s stop and take a moment to understand these acronyms that are tossed out at us.

BIA (Business Impact Analysis)
BIA is the process through which a business gains an understanding, from a financial perspective, of how and what to recover once a disruptive business event occurs. This is one of the more critical steps and should be done early on, as it directly impacts your BC and DR planning. If you’re not sure how to get started, get out a blank sheet of paper and start listing everything you can think of that could possibly disrupt your business. Once you have your list, rank each item on a scale of 1 - 3 for how likely it is to happen and how severely it would impact your business if it did. This will give you some idea of what you need to worry about first (the items that were ranked #1 in both categories). Congratulations! You just performed a Risk Assessment!
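
The back of that sheet of paper is all you really need, but the same ranking is easy to capture in a spreadsheet or a few lines of code so you can revisit it later. Here’s a minimal sketch; the events and their scores are invented for illustration, with 1 meaning most likely or most severe.

```python
# Toy risk assessment: 1 = most likely / most severe, 3 = least.
# Every event and score below is invented for illustration.
risks = [
    ("Extended power outage",  1, 2),
    ("Building fire",          3, 1),
    ("Ransomware infection",   1, 1),
    ("Regional earthquake",    3, 1),
    ("Key server failure",     2, 2),
]

# Sort so the items ranked #1 in both categories float to the top of the worry list.
for event, likelihood, impact in sorted(risks, key=lambda r: (r[1] + r[2], r[1])):
    print(f"{event:<25} likelihood={likelihood}  impact={impact}")
```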

Now, before we go much further, you need to think about two more acronyms: “RTO” and “RPO.” RTO is the “Recovery Time Objective.” If one of those disruptive events occurs, how much time can pass before you have to be up and running again? An hour? A half day? A couple of days? It depends on your business, doesn’t it? I can’t tell you what’s right for you - only you can decide. RPO is the “Recovery Point Objective.” Once you’re back up, how much data is it OK to have lost in the recovery process? If you have to roll back to last night’s backup, is that OK? How about last Friday’s backup? Of course, if you’re Bank of America and you’re processing millions of dollars’ worth of credit card transactions, the answer to both RTO and RPO is “zero!” You can’t afford to be down at all, nor can you afford to lose any data in the recovery process. But, once again, most of our businesses don’t need quite that level of protection. Just be aware that the closer to zero you need those numbers to be, the more complex and expensive the solution is going to be!
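
One way to sanity-check an RPO is to compare it against the worst-case data loss your current backup schedule allows. Here’s a minimal sketch; the schedules and the 4-hour target are made up for illustration.

```python
# Worst-case data loss (in hours) implied by a few common backup schedules.
# Both the schedules and the RPO target below are illustrative.
schedules_hours = {
    "Weekly full backup only": 7 * 24,
    "Nightly backup": 24,
    "Hourly snapshot + nightly backup": 1,
    "Continuous replication": 0,
}

rpo_target_hours = 4  # hypothetical target from your planning exercise

for schedule, worst_case in schedules_hours.items():
    verdict = "meets" if worst_case <= rpo_target_hours else "misses"
    print(f"{schedule:<33} worst case {worst_case:>3} h -> {verdict} a {rpo_target_hours} h RPO")
```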

BC (Business Continuity)
Business Continuity planning is the process through which a business develops a specific plan to assure survivability in the event of a disruptive business event: fire, earthquake, terrorist events, etc.  Ideally, that plan should encompass everything on the list you created - but if that’s too daunting, start with a plan that addresses the top-ranked items. Then revise the plan as time and resources allow to include items that were, say, ranked #1 in one category and #2 in the other, and so forth. Your plan should detail specifically how you are going to meet the RTO and RPO you decided on earlier.

And don’t forget the human factor. You can put together a great plan for how you’re going to replicate data off to another site where you can have critical systems up and running within a couple of hours of your primary facility turning into a smoking hole in the ground. But where are your employees going to report for work? Where will key management team members convene to deal with the crisis and its aftermath? How are they going to get there if transportation systems are disrupted, and how will they communicate if telephone lines are jammed?

DR (Disaster Recovery)
Disaster recovery is the process or action a business takes to bring the business back to a basic functioning entity after a disruptive business event. Note that BC and DR are complementary: BC addresses how you’re going to continue to operate in the face of a disruptive event; DR addresses how you get back to normal operation again.

Most small businesses think of disasters as events that are not likely to affect them.  Their concept of a “disaster” is a rare act of God or a terrorist attack.  But in reality, many other things qualify as a “disruptive business event”: fire, long-term power loss, a network security breach, a swine flu pandemic, and, in the case of one of my clients, a fire in the power vault of a building that crippled the building for three days.  It is imperative not to overlook some of the simpler events that can stop us from conducting our business.

Finally, it is important to actually budget some money for these activities. Don’t try to justify this with a classic Return on Investment calculation, because you can’t. Something bad may never happen to your business…or it could happen tomorrow. If it never happens, then the only return you’ll get on your investment is peace of mind (or regulatory compliance, if you’re in a business that is required to have these plans in place). Instead, think of the expense the way you think of an insurance premium, because, just like an insurance premium, it’s money you’re paying to protect against a possible future loss.

A Better Way to Back Up Your Data

Moose Logic has been building and supporting networks for a long time. And during most of that time we’ve had a real love-hate relationship with most of the backup technologies we’ve implemented and/or recommended.

Tape backups - although they are arguably the best technology for long-term archival storage - are a pain to manage. Tapes wear out. Tape drives get dirty. People just don’t do test restores as often as they should. As a result, all too often, the first time you realize that you’ve got a problem with your backups is when you have a data loss, try to restore from your backups, and find out that they’re no good.

Add to that the astronomical growth in storage capacity, meaning that all the data you need to back up often won’t fit on one tape any more. So, unless you have someone working the night shift who can swap out the tape when it gets full, you’re faced with…

  • Buying multiple tape drives, which typically means you’re going to spend more on your backup software. And if your servers are virtualized, where are you going to install those tape drives?
  • Buying a tape library (a.k.a. autoloader), which can also get expensive.
  • Changing the tape when you come in the next morning, which means that your network performance suffers because you’re trying to finish the backup job(s) while people are trying to get work done.

Then there’s the issue of getting a copy of your data out of the building. Typically, that’s done by having multiple sets of tapes, and a designated employee who takes one set home every Friday and brings the other set in. If s/he remembers. Or isn’t sick or on vacation.

Backing up to external hard drives is a reasonable alternative for some. It solves the capacity issue in most cases. But over the years, we’ve seen reliability issues with some manufacturers’ units. We’ve uncovered nagging little issues like some units that don’t automatically come back on line after a power interruption. And they’re not necessarily the best for long-term archival storage, unless you keep them powered on - or at least power them on once in a while - because hard disks that just sit for long periods of time may develop issues with the lubrication in their bearings and not want to spin back up.

But we’ve finally found an approach that we really, really like. One that, as one of our engineers said in an internal email thread, we actually enjoy managing. In fact, we like it so much we built a backup appliance around it. It’s Microsoft’s System Center Data Protection Manager (SCDPM).

In this installment of the Moose Logic Video Series, our own Scott Gorcester gives you a quick overview of SCDPM 2010:

[Video: Scott Gorcester’s overview of System Center Data Protection Manager 2010]

For more detail on how it works, check out the description of our MooseSentryTM backup appliance.

Why You Need Good Backups

A few days ago, in the post entitled “Seven things you need to do to keep your data safe,” we were talking primarily about some simple things that individuals can do to protect their data, even if (or especially if) they’re not IT professionals. In this post, we’re talking to you, Mr. Small Business Owner.

You might think that it’s intuitively obvious why you would need good backups, but according to an HP White Paper I recently discovered (which you should definitely download and read), as many as 40% of small and medium-sized businesses don’t back up their data at all.

The White Paper is entitled Impact on U.S. Small Business of Natural and Man-Made Disasters. What kinds of disasters are we talking about? The White Paper cites statistics from a presentation to the 2007 National Hurricane Conference in New Orleans by Robert P. Hartwig of the Insurance Information Institute. According to Hartwig, over the 20-year period of 1986 through 2005, catastrophic losses broke down like this:

  • Hurricanes and tropical storms - 47.5%
  • Tornado losses - 24.5%
  • Winter storms - 7.8%
  • Terrorism - 7.7%
  • Earthquakes and other geologic events - 6.7%
  • Wind/hail/flood - 2.8%
  • Fire - 2.3%
  • Civil disorders, water damage, and utility services disruption - less than 1%

If you’re in Moose Logic’s back yard here in the great State of Washington, you probably went down that list and told yourself, with a sigh of relief, that you didn’t have to worry about almost three-quarters of the disasters, because we typically don’t have to deal with hurricanes and tornadoes. But you might be surprised, as I was, to learn that we are nevertheless in the top twenty states in terms of the number of major disasters, with 40 disasters declared in the period of 1955 - 2007. We’re tied with West Virginia for 15th place.

Sometimes, disasters come at you from completely unexpected directions. Witness the “Great Chicago Flood” of 1992. Quoting from the White Paper:

In 1899 the city of Chicago started work on a series of interconnecting tunnels located approximately forty feet beneath street level. This series of tunnels ran below the Chicago River and underneath the Chicago business district, known as The Loop. The tunnels housed a series of railroad tracks that were used to haul coal and to remove ashes from the many office buildings in the downtown area. The underground system fell into disuse in the 1940’s and was officially abandoned in 1959 and the tunnels were largely forgotten until April 13th, 1992.

Rehabilitation work on the Kinzie Street bridge crossing the Chicago River required new pilings and a work crew apparently drove one of those pilings through the roof of one of those long abandoned tunnels. The water flooded the basements of Loop office buildings and retail stores and an underground shopping district. More than 250 million gallons of water quickly began flooding the basements and electrical controls of over 300 buildings throughout the downtown area. At its height, some buildings had 40 feet of water in their lower levels. Recovery efforts lasted for over four weeks and, according to the City of Chicago, cost businesses and residents an estimated $1.95 billion. Some buildings remained closed for weeks. In those buildings were hundreds of small and medium businesses suddenly cut off from their data and records and all that it took to conduct business. The underground flood of Chicago proved to be one of the worst business disasters ever.

Or how about the disaster that hit Tessco Technologies, outside of Baltimore, in October of 2002? A faulty fire hydrant outside its Hunt Valley data center failed, and “several hundred thousand gallons of water blasted through a concrete wall leaving the company’s primary data center under several feet of water and left some 1400 hard drives and 400 SAN disks soaking wet and caked with mud and debris.”

How could you have possibly seen those coming?

And as if these disasters aren’t bad enough, other studies show that as much as 50% of data loss is caused by user error - and we all have users!

One problem, of course, as we’ve observed before, is that it’s difficult to build an ROI justification around the bad thing that didn’t happen. Unforeseen disasters are, well, unforeseen. There’s no guarantee that the big investment you make in backup and disaster recovery planning is going to give you any return in the next 12 - 24 months. It’s only going to pay off if, God forbid, you actually have a disaster to recover from. So it’s no surprise that, when a business owner is faced with the choice between making that investment and making some other kind of business investment that will have a higher likelihood of a short-term payback (or perhaps taking that dream vacation that the spouse has been bugging you about for the last five years), the backup / disaster recovery expenditure drops, once again, to the bottom of the priority list.

One solution is to shift your perspective, and view the expense as insurance. Heck, if it helps you can even take out a lease to cover the cost - then you can pretend the lease payment is an insurance premium! You wouldn’t run your business without business liability insurance - because without it you could literally lose everything. You shouldn’t run your business without a solid backup and disaster-recovery plan, either, and for precisely the same reason.

Please. Download the HP White Paper, read it, then work through the following exercise:

  • List all of the things that you can imagine that would possibly have an impact on your business. I mean everything - from the obvious things like flood, fire, and earthquake, to less obvious things, like a police action that restricts access to the building your office is in, or the pandemic that everyone keeps telling us is just around the corner.
  • For each item on your list, make your best judgment call, on a scale of 1 to 3, of
    • How likely it is to happen, and
    • How severely it would affect your business if it did happen.

You now have the beginnings of a priority list. The items that you rated “3” in both columns (meaning not likely to happen, and not likely to have a severe effect on your business even if they did) you can push to the bottom of the priority list. The items that you rated “1” in both columns need to be addressed yesterday. The others fall somewhere in between, and you’re going to have to use your best judgment in how to prioritize them - but at least you now have some rationale behind your decisions.

The one thing you can’t afford to do is to keep putting it off. Hope is not a strategy, nor is it a DR plan.