How Do You Back Up Your Cloud Services?

I recently came across a post on spiceworks.com that, although it’s a couple of years old, makes a great point: “IT professionals would never run on-premise systems without adequate backup and recovery capabilities, so it’s hard to imagine why so many pros adopt cloud solutions without ensuring the same level of protection.”

This is not a trivial issue. According to some articles I’ve read, over 100,000 companies are now using Salesforce.com as their CRM system. Microsoft doesn’t reveal how many Office 365 subscribers they have, but they do reveal their annual revenue run-rate. If you make some basic assumptions about the average monthly fee, you can make an educated guess as to how many subscribers they have, and most estimates place it at over 16 million (users, not companies). Google Apps subscriptions are also somewhere in the millions (they don’t reveal their specific numbers either). If your organization subscribes to one or more of these services, have you thought about backing up that data? Or are you just trusting your cloud service provider to do it for you?

Let’s take Salesforce.com as a specific example. Deleted records normally go into a recycle bin, and are retained and recoverable for 15 days. But there are some caveats there:

  • Your recycle bin can only hold a limited number of records. That limit is 25 times the number of megabytes in your storage. (According to the Salesforce.com “help” site, this usually translates to roughly 5,000 records per license.) For example, if you have 500 MB of storage, your record limit is 12,500 records. If that limit is exceeded, the oldest records in the recycle bin are deleted, provided they’ve been there for at least two hours.
  • If a “child” record – like a contact or an opportunity – is deleted, and its parent record is subsequently deleted, the child record is permanently deleted and is not recoverable.
  • If the recycle bin has been explicitly purged (which requires “Modify All Data” permissions), you may still be able to recover the purged records using the Data Loader tool, but the window of time is very brief. Exactly how long you have is not well documented, but research indicates it’s around 24 – 48 hours.

A quick Internet search will turn up horror stories of organizations where a disgruntled employee deleted a large number of records, then purged the recycle bin before walking out the door. If this happens to you on a Friday afternoon, it’s likely that by Monday morning your only option will be to contact Salesforce.com to request their help in recovering your data. The Salesforce.com help site mentions that this help is available, and notes that there is a “fee associated” with it. It doesn’t mention that the fee starts at $10,000.

You can, of course, periodically export all of your Salesforce.com data as a (very large) .CSV file. Restoring a particular record or group of records will then involve deleting everything in the .CSV file except the records you want to restore, and then importing them back into Salesforce.com. If that sounds painful to you, you’re right.
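
If you go that route, here’s a minimal sketch of how crude the restore step can be. The file name and record Id prefix are hypothetical, and a real export has quoted, comma-containing fields, so a proper CSV-aware tool (or the Data Loader itself) is safer than grep:

    # Keep the header row, then keep only rows whose record Id starts with a
    # given prefix (Account Ids typically start with "001"). Everything else
    # is dropped before re-importing the file into Salesforce.com.
    head -n 1 salesforce_export.csv > records_to_restore.csv
    grep '^"001' salesforce_export.csv >> records_to_restore.csv

Even with a helper like that, you still have to map fields, preserve record Ids and relationships, and hope nothing referenced the deleted records in the meantime.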

The other alternative is to use one of the several third-party backup services available to back up your Salesforce.com data. There are real advantages to using a third-party tool: backups can be scheduled and automated, it’s easier to search for the specific record(s) you want to restore, and you can roll back to any one of multiple restore points. One such tool is Cloudfinder, which was recently acquired by eFolder. Cloudfinder will back up data from Salesforce.com, Office 365, Google Apps, and Box. I expect that list of supported cloud services to grow now that they’re owned by eFolder.

We at VirtualQube are excited about this acquisition because we are an eFolder partner, which means that we are now a Cloudfinder partner as well. For more information on Cloudfinder, or any eFolder product, contact sales@virtualqube.com, or just click the “Request a Quote” button on this page.

Scott’s Book Arrived!


We are pleased to announce that Scott’s books have arrived! ‘The Business Owner’s Essential Guide to I.T.’ is 217 pages packed full of pertinent information.

For those of you who pre-purchased your books, thank you! Your books have already been signed and shipped. You should receive them shortly, and we hope you enjoy them as much as Scott enjoyed writing them for you.

If you haven’t purchased your copy yet, click here to purchase a signed copy from us; all proceeds will be donated to the WA chapter of Mothers Against Drunk Driving (MADD).

My First Trip to a Data Center

On Friday, the technology leadership of VirtualQube (and I) descended upon Austin, Texas, to meet with our data center vendor. The meeting was long overdue: we had been doing business together for almost four years, but this was the first face-to-face meeting for the entire team.

Our vendor did their homework and took us out to dinner at Bob’s Steakhouse on Lavaca in downtown Austin the night before. It was a GREAT meal, and we had a blast checking out a number of watering holes in the area. According to our hosts, we apparently stopped the festivities just before entering the “seedy” part of the city. I feel like that was the perfect amount of fun to have, especially since we had a four-hour meeting starting at 9 a.m. the next day.

We started with a tour of the facility. Our vendor is in a CyrusOne Type Four Level II data center which, for the uninitiated, means it’s the best of the best: fully redundant everything, generally with another safety valve or failover on top of that, and the majority of those failovers are tested MONTHLY. Whoa, that’s impressive. We even saw the four huge gas-powered generators outside that would support the entire building in case of a loss of electricity. Looking inside them (which we probably weren’t supposed to do) was awe-inspiring: basically a V-12 design, with a filter on each cylinder due to its size. I didn’t get the specs, and I would have gotten a photo, but security showed up right as I grabbed for my phone. Just believe me that this building had thought of everything that could go wrong.

I broke the rules and took a picture of all the blinking lights. Kinda looks like my home theater, only more expensive (which is tough to do!).


After the tour we talked about ways to work together for the coming years and both teams came away with a list of action items to make our collective futures brighter. And I’m off to get started on one of those projects now!

Why Not Amazon Web Services?

Thinking about moving to Amazon Web Services? Address these 4 Concerns FIRST

We’ve been hearing a lot of debate recently about using Amazon Web Services for all or part of a cloud infrastructure. Many people sing their praises wholeheartedly, and we here at VirtualQube have even explored their offerings to see if there was an opportunity to bend our own cost curve. But the reality is a mixed bag of benefits and features. How do you know if the move is right for you? We’ve narrowed it down to 4 concerns you should address in light of your own circumstances before making the move.


1. Business impact

First of all, let’s analyze the business model for AWS. Amazon rents out virtual machines for a reasonable price per desktop. But in order to get their best price, you have to pony up 36 months of service fees in advance to rent the space. If you’re an enterprise with three years of IT budget available, this is a great deal. If not, take a closer look.

The pricing from AWS also assumes that virtual machines will be spun down 40% of the day. If your workforce mostly logs in within an 8:30am – 6:00pm time frame, you will benefit greatly from this pricing. If your employees have much more flexibility in their schedules (due to travel, seasonal workload spikes, or shifting hours for coverage), then you may need to look at another provider.

AWS also lets data flow into their cloud for free and charges only for the outflow of data. This is great for storage that you access sparingly, or only in the case of a disaster, but the charges can add up quickly if you need to pull your data back out frequently. While this may not seem to be a concern now, as businesses exchange more and larger files, the cost of this pricing model could quickly outweigh the benefits.
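
To make that concrete, here’s a back-of-the-envelope sketch of the outbound-transfer math, written as a quick shell snippet. The monthly volume and per-gigabyte rate below are made-up placeholders, not AWS’s published pricing, so plug in the current rates for your region:

    # Hypothetical figures -- substitute your own volume and AWS's current rate.
    GB_OUT_PER_MONTH=500     # data pulled back out of the cloud each month
    CENTS_PER_GB=9           # assumed egress price, in cents per GB
    echo "Estimated egress cost: \$$(( GB_OUT_PER_MONTH * CENTS_PER_GB / 100 )) per month"

Run those numbers for a light month and a heavy month; the spread between the two is usually what surprises people.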

2. Operations impact

The operational capabilities of Amazon truly are world-class. However, to achieve scale and offer its best price at the lowest cost of operations, AWS has one set way of operating, and its customers are required to interact with AWS in that one way alone. So AWS may not offer the flexibility that would make it easy for you to add services to your existing operations.

If your firm fits AWS’s standard use case, the transition could be easy; but if you have unique requirements, the friction within your organization could quickly lead to discord, operational changes, and many other business costs as you try to fit the mold AWS promotes.

3. Technology impact

The technology benefit of AWS is really second to none. Their infrastructure has the best hardware and capabilities offered by any of the cloud vendors, and their economies of scale mean you can get access to best-of-breed hardware faster than you could otherwise. The only caveat is that this could give you a false sense of security.

Our approach to business is to think of all the things that CAN go wrong, because many of them eventually do. We coach our customers to prepare for the times when technology will fail. And fail it will. We have consistently seen multi-million-dollar technology fail unpredictably, even in hundred-million-dollar installations. These failures are NOT supposed to happen, and may not happen frequently, but they will happen. And if the failure impacts your business, it doesn’t matter how expensive the underlying technology is. When the technology does fail, will you be able to get a senior engineer on the phone to immediately address your concerns?

4. Flexibility impact

AWS’s ability to match your business needs during hyper-growth and/or significant volatility could make the business case all by itself. With AWS’s web interface, your internal technology leader can order additional computing capability and have it ready as soon as you hit “Enter.” The days of placing hardware into a room, hooking up cables, and creating and testing images are truly over for all cloud users, and AWS shortens the provisioning timeline from minutes to seconds. Companies that are doubling or tripling in size year after year are a perfect match for AWS. No question.
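
As an illustration of how short that ordering step really is, here’s a minimal sketch using the AWS command-line interface rather than the web console. The image ID, instance type, and key name are placeholders, not recommendations:

    # Launch one additional virtual machine from an existing machine image.
    # "ami-12345678" and "my-keypair" are placeholder values, not real ones.
    aws ec2 run-instances \
        --image-id ami-12345678 \
        --instance-type t2.micro \
        --count 1 \
        --key-name my-keypair

A few seconds later the instance is booting, and the same command drops neatly into a script when you need ten of them instead of one.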

Your final decision…

To sum it all up: AWS works well for you if:

  • You operate at a scale of tens of thousands of users
  • You have three years of IT spend on the balance sheet that can be invested today
  • You are a typical player in your industry, and you fit AWS’s definition of that industry
  • Your IT needs to meet business demands that fluctuate exponentially, immediately, and unpredictably
  • You can take advantage of some of the advanced features for business continuity

For a more in-depth discussion of this topic, check out this: LINK. For an in-depth cost analysis, check out this: LINK. Please note that you will have to be a Citrix Service Partner to access the cost analysis.

 

The Red Cross Wants to Help You with DR Planning

Red Cross Ready Rating Program

A few days ago, I spotted a headline in the local morning paper: “SBA Partners with the Red Cross to Promote Disaster Planning.” We’ve written some posts in the past that dealt with the importance of DR planning, and how to go about it, so this piqued my curiosity enough that I visited the Red Cross “Ready Rating” Web site. I was sufficiently impressed with what I found there that I wanted to share it with you.

Membership in the Ready Rating program is free. All you have to do to become a member is to sign up and take the on-line self-assessment, which will help you determine your current level of preparedness. And I’m talking about overall business preparedness, not just IT preparedness. The assessment rates you on your responses to questions dealing with things like:

  • Have you conducted a “hazard vulnerability assessment,” including identifying appropriate emergency responders (e.g., police, fire, etc.) in your area and, if necessary, obtaining agreements with them?
  • Have you developed a written emergency response plan?
  • Has that plan been communicated to employees, families, clients, media representatives, etc.?
  • Have you developed a “continuity of operations plan?”
  • Have you trained your people on what to do in an emergency?
  • Do you conduct regular drills and exercises?

That last point is more important than you might think. It’s not easy to think clearly when you’re in the middle of an earthquake, or when you’re trying to find the exit in a building that’s on fire with smoke everywhere. The best way to ensure that everyone does what they’re supposed to do is to drill until the response is automatic. That’s why we had fire drills when we were in elementary school. It’s still effective now that we’re all grown up.

Once you become a member, your membership will automatically renew from year to year, as long as you take the self-assessment annually and can show that your score has improved from the prior year. (Once your score reaches a certain threshold, you’re only required to maintain that level to retain your membership.)

So, why should you be concerned about this? It’s hard to imagine that, after the tsunami in Japan and the flooding and tornadoes here at home, there’s anyone out there who still doesn’t get it. But, just in case, consider these points taken from the “Emergency Fast Facts” document in the members’ area:

  • Only 2 in 10 Americans feel prepared for a catastrophic event.
  • Close to 60% of Americans are wholly unprepared for a disaster of any kind.
  • 54% of Americans don’t prepare because they believe a disaster will not affect them – although 51% of Americans have experienced at least one emergency situation where they lost utilities for at least three days, had to evacuate and could not return home, could not communicate with family members, or had to provide first aid to others.
  • 94% of small business owners believe that a disaster could seriously disrupt their business within the next two years.
  • 15 – 40% of small businesses fail following a natural or man-made disaster.

If you’re not certain how to even get started, they can help there as well. Here’s a screen capture showing a partial list of the resources available in the members’ area:

Member Resources

And speaking of getting started, check this out: Just about everything I’ve ever read about disaster preparedness talks about the importance of having a “72-hour kit” – something that you can quickly grab and take with you that contains everything you need to survive for three days. Well, for those of you who haven’t got the time to scrounge up all of the recommended items and pack them up, you may find the solution at your local Costco. Here’s what I spotted on my most recent trip:

Pre-Packaged 3-day Survival Kit

Yep, it’s a pre-packaged 3-day survival kit. The cost at my local store (in Woodinville, WA, if you’re curious) was $69.95. That, in my opinion, is a pretty good deal.

So, if you haven’t started planning yet, consider this your call to action. Don’t end up as a statistic. You can do this.

High Availability vs. Fault Tolerance

Many times, terms like “High Availability” and “Fault Tolerance” get thrown around as though they were the same thing. In fact, the term “fault tolerant” can mean different things to different people – and, much like the terms “portal” or “cloud,” it’s important to be clear about exactly what someone means by it.

As part of our continuing efforts to guide you through the jargon jungle, we would like to discuss redundancy, fault tolerance, failover, and high availability, and we’d like to add one more term: continuous availability.

Our friends at Marathon Technologies shared the following graphic, which shows how IDC classifies the levels of availability:

The Availability Pyramid

Redundancy is simply a way of saying that you are duplicating critical components in an attempt to eliminate single points of failure. Multiple power supplies, hot-plug disk drive arrays, multi-pathing with additional switches, and even duplicate servers are all part of building redundant systems.

Unfortunately, there are some failures, particularly if we’re talking about server hardware, that can take a system down regardless of how much you’ve tried to make it redundant. You can build a server with redundant hot-plug power supplies and redundant hot-plug disk drives, and still have the system go down if the motherboard fails – not likely, but still possible. And if it does happen, the server is down. That’s why IDC classifies this as “Availability Level 1” (“AL1” on the graphic)…just one level above no protection at all.

The next step up is some kind of failover solution. If a server experiences a catastrophic failure, the workloads are “failed over” to a system that is capable of supporting them. Depending on those workloads, and on what kind of failover solution you have, that process can take anywhere from minutes to hours. If you’re at “AL2,” and you’ve replicated your data using, say, SAN replication or some kind of server-to-server replication, it could take a considerable amount of time to actually get things running again. If your servers are virtualized, with multiple virtualization hosts running against a shared storage repository, you may be able to configure your virtualization infrastructure to automatically restart a critical workload on a surviving host if the host it was running on experiences a catastrophic failure – meaning that your critical system is back up and on-line in the amount of time it takes the system to reboot – typically 5 to 10 minutes.

If you’re using clustering technology, your cluster may be able to fail over in a matter of seconds (“AL3” on the graphic). Microsoft server clustering is a classic example of this. Of course, it means that your application has to be cluster-aware, you have to be running Windows Enterprise Edition, and you may have to purchase multiple licenses for your application as well. And managing a cluster is not trivial, particularly when you’ve fixed whatever failed and it’s time to unwind all the stuff that happened when you failed over. And your application is still unavailable during whatever interval of time is required for the cluster to detect the failure and complete the failover process.

You could argue that a failover of 5 minutes or less equals a highly available system, and indeed there are probably many cases where you wouldn’t need anything better than that. But it is not truly fault tolerant. It’s probably not good enough if you are, say, running a security application that controls smart-card access to secured areas in an airport, or a video surveillance system that is sufficiently critical that you can’t afford a 5-minute gap in your video record, or a process control system where a five-minute halt means you’ve lost the integrity of your work in process and potentially have to discard thousands of dollars’ worth of raw material – and lose thousands more in productivity while you clean out your assembly line and restart it.

That brings us to the concept of continuous availability. This is the highest level of availability, and it’s what we consider to be true fault tolerance. Instead of simply failing workloads over, this level allows for continuous processing without disruption of access to those workloads. Since there is no disruption in service, there is no data loss, no loss of productivity, and no waiting for your systems to restart your workloads.

So all this leads to the question of what your business needs.

Do you have applications that are critical to your organization? If those applications go down, how long could you afford to be without access to them? If they go down, how much data can you afford to lose? Five minutes’ worth? An hour’s worth? And, most importantly, what does it cost you if that application is unavailable for a period of time? Do you know, or can you calculate it?
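
If you’ve never put a number on it, here’s a deliberately simple sketch of the arithmetic, with placeholder figures rather than benchmarks; substitute your own revenue and recovery estimates:

    # Hypothetical numbers -- replace with your own estimates.
    REVENUE_PER_HOUR=2500     # revenue or productivity lost per hour of downtime
    HOURS_TO_RECOVER=4        # realistic worst-case recovery time, in hours
    echo "Estimated cost of one outage: \$$(( REVENUE_PER_HOUR * HOURS_TO_RECOVER ))"

Even a rough figure like that makes the conversation about which availability level to pay for much easier.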

This is another way to ask what the requirements are for your “RTO” (“Recovery Time Objective” – i.e., how long, when a system goes down, do you have before you must be back up) and “RPO” (“Recovery Point Objective” – i.e., when you do get the system back up, how much data it is OK to have lost in the process). We’ve discussed these concepts in previous posts. These are questions that only you can answer, and the answers are significantly different depending on your business model. If you’re a small business, and your accounting server goes down, and all it means is that you have to wait until tomorrow to enter today’s transactions, it’s a far different situation from a major bank that is processing millions of dollars in credit card transactions.

If you can satisfy your business needs by deploying one of the lower levels of availability, great! Just don’t settle for an AL1 or even an AL3 solution if what your business truly demands is continuous availability.

How’s That “Cloud” Thing Working For You?

Color me skeptical when it comes to the “cloud computing” craze. Well, OK, maybe my skepticism isn’t so much about cloud computing per se as it is about the way people seem to think it is the ultimate answer to Life, the Universe, and Everything (shameless Douglas Adams reference). In part, that’s because I’ve been around IT long enough that I’ve seen previous incarnations of this concept come and go. Application Service Providers were supposed to take the world by storm a decade ago. Didn’t happen. The idea came back around as “Software as a Service” (or, as Microsoft preferred to frame it, “Software + Services”). Now it’s cloud computing. In all of its incarnations, the bottom line is that you’re putting your critical applications and data on someone else’s hardware, and sometimes even renting their operating systems to run it on and their software to manage it. And whenever you do that, there is an associated risk – as several users of Amazon’s EC2 service discovered just last week.

I have no doubt that the forensic analysis of what happened and why will drag on for a long time. Justin Santa Barbara had an interesting blog post last Thursday (April 21) that discussed how the design of Amazon Web Services (AWS), and its segmentation into Regions and Availability Zones, is supposed to protect you against precisely the kind of failure that occurred last week…except that it didn’t.

Phil Wainewright has an interesting post over at ZDnet.com on the “Seven lessons to learn from Amazon’s outage.” The first two points he makes are particularly important: First, “Read your cloud provider’s SLA very carefully” – because it appears that, despite the considerable pain some of Amazon’s customers were feeling, the SLA was not breached, legally speaking. Second, “Don’t take your provider’s assurances for granted” – for reasons that should be obvious.

Wainewright’s final point, though, may be the most disturbing, because it focuses on Amazon’s “lack of transparency.” He quotes BigDoor CEO Keith Smith as saying, “If Amazon had been more forthcoming with what they are experiencing, we would have been able to restore our systems sooner.” This was echoed in Santa Barbara’s blog post where, in discussing customers’ options for failing over to a different cloud, he observes, “Perhaps they would have started that process had AWS communicated at the start that it would have been such a big outage, but AWS communication is – frankly – abysmal other than their PR.” The transparency issue was also echoed by Andrew Hickey in an article posted April 26 on CRN.com.

CRN also wrote about “lessons learned,” although they came up with 10 of them. Their first point is that “Cloud outages are going to happen…and if you can’t stand the outage, get out of the cloud.” They go on to talk about not putting “Blind Trust” in the cloud, and to point out that management and maintenance are still required – “it’s not a ‘set it and forget it’ environment.”

And it’s not like this is the first time people have been affected by a failure in the cloud:

  • Amazon had a significant outage of their S3 online storage service back in July, 2008. Their northern Virginia data center was affected by a lightning strike in July of 2009, and another power issue affected “some instances in its US-EAST-1 availability zone” in December of 2009.
  • Gmail experienced a system-wide outage for a period of time in August, 2008, then was down again for over 1 ½ hours in September, 2009.
  • The Microsoft/Danger outage in October, 2009, caused a lot of T-Mobile customers to lose personal information that was stored on their Sidekick devices, including contacts, calendar entries, to-do lists, and photos.
  • In January, 2010, failure of a UPS took several hundred servers offline for hours at a Rackspace data center in London. (Rackspace also had a couple of service-affecting failures in their Dallas area data center in 2009.)
  • Salesforce.com users have suffered repeatedly from service outages over the last several years.

This takes me back to a comment made by one of our former customers, who was the CIO of a local insurance company, and who later joined our engineering team for a while. Speaking of the ASPs of a decade ago, he stated, “I wouldn’t trust my critical data to any of them – because I don’t believe that any of them care as much about my data as I do. And until they can convince me that they do, and show me the processes and procedures they have in place to protect it, they’re not getting my data!”

Don’t get me wrong – the “Cloud” (however you choose to define it…and that’s part of the problem) has its place. Cloud services are becoming more affordable, and more reliable. But, as one solution provider quoted in the CRN “lessons learned” article put it, “Just because I can move it into the cloud, that doesn’t mean I can ignore it. It still needs to be managed. It still needs to be maintained.” Never forget that it’s your data, and no one cares about it as much as you do, no matter what they tell you. Forrester analyst Rachel Dines may have said it best in her blog entry from last week: “ASSUME NOTHING. Your cloud provider isn’t in charge of your disaster recovery plan, YOU ARE!” (She also lists several really good questions you should ask your cloud provider.)

Cloud technologies can solve specific problems for you, and can provide some additional, and valuable, tools for your IT toolbox. But you dare not assume that all of your problems will automagically disappear just because you put all your stuff in the cloud. It’s still your stuff, and ultimately your responsibility.

SuperGRUB to the Rescue!

This post requires two major disclaimers:

  1. I am not an engineer. I am a relatively technical sales & marketing guy. I have my own Small Business Server-based network at home, and I know enough about Microsoft Operating Systems to be able to muddle through most of what gets thrown at me. And, although I’ve done my share of friends-and-family-tech-support, you do not want me working on your critical business systems.
  2. I am not, by any stretch of the imagination, a Linux guru. However, I’ve come to appreciate the “LAMP” (Linux/Apache/MySQL/PHP) platform for Web hosting. With apologies to my Microsoft friends, there are some things that are quite easy to do on a LAMP platform that are not easy at all on a Windows Web server. (Just try, for example, to create a file called “.htaccess” on a Windows file system.)

Some months ago, I got my hands on an old Dell PowerEdge SC420. It happened to be a twin of the system I’m running SBS on, but didn’t have quite as much RAM or as much disk space. I decided to install CentOS v5.4 on it, turn it into a LAMP server, and move the four or five Web sites I was running on my Small Business Server over to my new LAMP server instead. I even found an open source utility called “ISP Config” that is a reasonable alternative – at least for my limited needs – to the Parallels Plesk control panel that most commercial Web hosts offer.

Things went along swimmingly until last weekend, when I noticed a strange, rhythmic clicking and beeping coming from my Web server. Everything seemed to be working – Web sites were all up – I logged on and didn’t see anything odd in the system log files (aside from the fact that a number of people out there seemed to be trying to use FTP to hack my administrative password). So I decided to restart the system, on the off chance that it would clear whatever error was occurring.

Those of you who are Linux gurus probably just did a double facepalm…because, in retrospect, I should have checked the health of my disk array before shutting down. The server didn’t have a hardware RAID controller, so I had built my system with a software RAID1 array – which several sources suggest is both safer and better performing than the “fake RAID” that’s built into the motherboard. Turns out that the first disk in my array (/dev/sda for those who know the lingo) had died, and for some reason, the system wouldn’t boot from the other drive.

This is the point where I did a double facepalm, and muttered a few choice words under my breath. Not that it was a tragedy – all that server did was host my Web sites, and my Web site data was backed up in a couple of places. So I wouldn’t have lost any data if I had rebuilt the server…just several hours of my life that I didn’t really have to spare. So I did what any of you would have done in my place – I started searching the Web.

The first advice I found suggested that I should completely remove the bad drive from the system, and connect the good drive as drive “0.” Tried it, no change. The next advice I found suggested that I boot my system from the Linux CD or DVD, and try the “Linux rescue” function. That sounded like a good idea, so I tried it – but when the rescue utility examined my disk, it claimed that there were no Linux partitions present, despite evidence to the contrary: I could run fdisk -l and see that there were two Linux partitions on the disk, one of which was marked as a boot partition, but the rescue utility still couldn’t detect them, and the system still wouldn’t boot.

I finally stumbled across a reference to something called “SuperGRUB.” “GRUB,” for those of you who know as much about Linux as I did before this happened to me, is the “GNU GRand Unified Bootloader,” from the GNU Project. It’s apparently the bootloader that CentOS uses, and it was apparently missing from the disk I was trying to boot from. But that’s precisely the problem that SuperGRUB was designed to fix!

And fix it, it did! I downloaded the SuperGRUB ISO, burned it to a CD, booted my Linux server from it, navigated through a quite intuitive menu structure, told it what partition I wanted to fix, and PRESTO! My disk was now bootable, and my Web server was back (albeit running on only one disk). But that could be fixed as well. I found a new 80 GB SATA drive (which was all the space I needed) on eBay for $25, installed it, and cruised a couple of Linux forums to learn how to (1) use sfdisk to copy the partition structure of my existing disk to the new disk, and (2) use mdadm to add the new disk to my RAID1 array. About 15 minutes later, my array was rebuilt and my Web server was healthy again.
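
For anyone facing the same rebuild, here’s roughly what those two steps look like. The device and array names are examples from my setup, not a prescription – confirm which disk is which with fdisk -l before running anything:

    # Copy the partition layout from the surviving disk (/dev/sdb here) onto
    # the new, empty disk (/dev/sda here). Double-check the device names first!
    sfdisk -d /dev/sdb | sfdisk /dev/sda

    # Add the new partition back into the RAID1 array; mdadm starts rebuilding
    # automatically, and you can watch its progress in /proc/mdstat.
    mdadm --manage /dev/md0 --add /dev/sda1
    cat /proc/mdstat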

There are two takeaways from this story:

First, the Internet is a wonderful thing, with amazing resources that can help even a neophyte like me to find enough information to pull my ample backside out of the fire and get my system running again.

Second, all those folks out there whom we sometimes make fun of and accuse of not having a life are actually producing some amazing stuff. I don’t know the guys behind the SuperGRUB project. They may or may not be stereotypical geeks. I don’t know how many late hours were burned, nor how many Twinkies or Diet Cokes were consumed (if any) in the production of the SuperGRUB utility. I do know that it was magical, and saved me many hours of work, and for that, I am grateful. (I’d even ship them a case of Twinkies if I knew who to send it to.) If you ever find yourself in a similar situation, it may save your, um, bacon as well.

BC, DR, BIA – What does it mean???

Most companies instinctively know that they need to be prepared for an event that will compromise business operations, but it’s often difficult to know where to begin.  We hear a lot of acronyms: “BC” (Business Continuity), “DR” (Disaster Recovery), “BIA” (Business Impact Analysis), “RA” (Risk Assessment), but not a lot of guidance on exactly what those things are, or how to figure out what is right for any particular business.

Many companies we meet with today are not really sure what components to implement or what to prioritize.  So what is the default reaction?  “Back up my Servers!  Just get the stuff off-site and I will be OK.”   Unfortunately, this can leave you with a false sense of security.  So let’s stop and take a moment to understand these acronyms that are tossed out at us.

BIA (Business Impact Analysis)
BIA is a process through which a business gains an understanding, from a financial perspective, of what to recover and how once a disruptive business event occurs. This is one of the more critical steps and should be done early on, as it directly impacts BC and DR. If you’re not sure how to get started, get out a blank sheet of paper and start listing everything you can think of that could possibly disrupt your business. Once you have your list, rank each item on a scale of 1 – 3 for how likely it is to happen, and again for how severely it would impact your business if it did. This will give you some idea of what you need to worry about first (the items that were ranked #1 in both categories). Congratulations! You just performed a Risk Assessment!
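
If a spreadsheet feels like overkill, even a plain text file and a one-liner will surface the items to worry about first. This is just a sketch – the file name, column layout, and scoring convention below are made up:

    # risks.csv holds one line per risk: description,likelihood,impact
    # (1 = most likely / most severe, 3 = least). Sorting on both scores
    # floats the "1,1" items -- the ones to tackle first -- to the top.
    sort -t, -k2,2n -k3,3n risks.csv | head -n 5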

Now, before we go much further, you need to think about two more acronyms: “RTO” and “RPO.” RTO is the “Recovery Time Objective.” If one of those disruptive events occurs, how much time can pass before you have to be up and running again? An hour? A half day? A couple of days? It depends on your business, doesn’t it? I can’t tell you what’s right for you – only you can decide. RPO is the “Recovery Point Objective.” Once you’re back up, how much data is it OK to have lost in the recovery process? If you have to roll back to last night’s backup, is that OK? How about last Friday’s backup? Of course, if you’re Bank of America and you’re processing millions of dollars’ worth of credit card transactions, the answer to both RTO and RPO is “zero!” You can’t afford to be down at all, nor can you afford to lose any data in the recovery process. But, once again, most of our businesses don’t need quite that level of protection. Just be aware that the closer to zero you need those numbers to be, the more complex and expensive the solution is going to be!
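
One quick sanity check on the RPO side: your worst-case data loss is roughly the interval between successful backups. A trivial sketch, with a made-up schedule:

    # If you back up once a night, your worst-case RPO is about 24 hours of work.
    BACKUPS_PER_DAY=1        # hypothetical schedule -- adjust to match yours
    echo "Worst-case RPO: roughly $(( 24 / BACKUPS_PER_DAY )) hours of lost data"

If that number is bigger than the RPO you just decided on, the backup schedule (or the technology behind it) has to change.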

BC (Business Continuity)
Business Continuity planning is the process through which a business develops a specific plan to assure survivability when a disruptive business event occurs: fire, earthquake, terrorist attack, etc.  Ideally, that plan should encompass everything on the list you created – but if that’s too daunting, start with a plan that addresses the top-ranked items. Then revise the plan as time and resources allow to include items that were, say, ranked #1 in one category and #2 in the other, and so forth. Your plan should detail specifically how you are going to meet the RTO and RPO you decided on earlier.

And don’t forget the human factor. You can put together a great plan for how you’re going to replicate data off to another site where you can have critical systems up and running within a couple of hours of your primary facility turning into a smoking hole in the ground. But where are your employees going to report for work? Where will key management team members convene to deal with the crisis and its aftermath? How are they going to get there if transportation systems are disrupted, and how will they communicate if telephone lines are jammed?

DR (Disaster Recovery)
Disaster recovery is the process or action a business takes to bring the business back to a basic functioning entity after a disruptive business event. Note that BC and DR are complementary: BC addresses how you’re going to continue to operate in the face of a disruptive event; DR addresses how you get back to normal operation again.

Most small businesses think of disasters as events that are not likely to affect them.  Their concept of a “disaster” is a rare act of God or a terrorist attack.  But in reality, there are many other things that qualify as a “disruptive business event”: fire, long-term power loss, a network security breach, a swine flu pandemic, and, in the case of one of my clients, a fire in the power vault of a building that crippled the building for three days.  It is imperative not to overlook some of the simpler events that can stop us from conducting our business.

Finally, it is important to actually budget some money for these activities. Don’t try to justify this with a classic Return on Investment calculation, because you can’t. Something bad may never happen to your business…or it could happen tomorrow. If it never happens, then the only return you’ll get on your investment is peace of mind (or regulatory compliance, if you’re in a business that is required to have these plans in place). Instead, think of the expense the way you think of an insurance premium, because, just like an insurance premium, it’s money you’re paying to protect against a possible future loss.

Hosted Exchange – Putting Your Email In the Cloud

These days, it seems everybody is talking about “cloud computing,” even if they don’t completely understand what it is. If you’re among those who are wondering what the “cloud” is all about and what it can do for you, maybe you should investigate moving your email to the cloud. You’ll find that there are several hosted Exchange providers (including ourselves) who would be very happy to help you do it.

Why switch to hosted Exchange?  Well,  it is fair to say that for most SMBs, email has become a predominant tool in our arsenal of communications.  The need for fast, efficient, and cost effective collaboration, as well as integration with our corporate environment and mobile devices, has become the baseline of operations – an absolute requirement for our workplace today.

So why not just get an Exchange Server or Small Business Server?  You can, but managing that environment may not be the best use of your resources.  Here are a few things to consider:

Low and Predictable Costs:
Hosted Exchange has become a low-cost enterprise service without the enterprise price tag. If you own the server and have it deployed on your own premises, it becomes your responsibility to prepare for a disruptive business event: fire, earthquake, flood, and, in the Puget Sound area, a dusting of snow. And it isn’t just an event in your own office space that you have to worry about:

  • A few years ago, there was a fire in a cable vault in downtown Seattle that caused some nearby businesses to lose connectivity for as long as four days.
  • Last year, wildfires in Eastern Washington interrupted power to the facility of one of our customers, and the recovery from the event was delayed because their employees were not allowed to cross the fire line to get to the facility.
  • If you are in a building that’s shared with other tenants, a fire or police action in a part of the building that’s unrelated to your own office space could still block access to the building and prevent your employees from getting to work.
  • Finally, even though it may be a cliche, you’re still at the mercy of a backhoe-in-the-parking-lot event.

The sheer cost of trying to protect yourself against all of these possibilities can be daunting, and many businesses would rather spend their cash on things that generate revenue instead.

Depending on features and needs, hosted Exchange plans can be as low as $5 per month per user – although to get the features most users want, you’re probably looking at $10 or so – and if you choose your hosting provider carefully, you’ll find that they have already made the required investments for high availability. Plus you’ll always have the latest version available to you without having to pay for hardware or software upgrades.
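
For a rough sense of scale, here’s the kind of back-of-the-napkin math we walk customers through. The head count and per-user price below are placeholders – use the numbers from the quote you actually receive:

    # Hypothetical plan: 25 users at $10 per user per month.
    USERS=25
    PRICE_PER_USER_PER_MONTH=10
    echo "Hosted Exchange: \$$(( USERS * PRICE_PER_USER_PER_MONTH * 12 )) per year"

Compare that annual figure against what you’d spend on server hardware, licensing, backup, and the time to administer it all, and the trade-off becomes much easier to see.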

Simplified Administration:
For many small businesses, part of the turn-off of going to SBS or a full-blown Exchange server is the technical competency and cost associated with managing and maintaining the environment.  While there are some advantages to having your own deployed environment, most customers I talk to today would rather not deal with the extra costs of administering backups, managing server licensing (and periodic upgrade costs), hardware refreshes, security, etc.  With a good hosted Exchange provider, you will enjoy all the benefits of an enterprise environment, with a simple management console.

Uptime:
Quality hosted Exchange providers will provide an SLA (“Service Level Agreement”) and uptime guarantees – and they have the manpower and infrastructure in place to assure uptime for their hundreds or thousands of users.

For deployed Exchange, you’ll need to invest in a robust server environment, power protection (e.g., an Uninterruptible Power Supply, or UPS, that can keep your server running long enough for a graceful shutdown – and maybe even a generator if you can’t afford to wait until your local utility restores power), data backup and recovery hardware and software, and the time required to test your backups.  (Important side note here: if you never do a test restore, you only think you have your data backed up. Far too often, the first time users find out that they have a problem is when they suffer a data loss and find that they are unable to successfully restore from their backup.) The cost/benefit ratio for a small business is simply not in favor of a deployed environment.

Simple Deployment:
Properly setting up and configuring an Exchange environment and not leaving any security holes can be a daunting task for the non-IT Professional.  Most SMBs will need to hire someone like us to set up and manage the environment, and, although we love it when you hire us, and although the total cost of hiring us may be less than it would cost you to try to do it yourself (especially if something goes wrong), it is still a cost.

With a hosted environment, there is no complicated hardware and software setup.  In some cases, hosting providers have created a tool that you execute locally on your PC that will even configure the Outlook client for you.

A few questions to ask yourself:

  • Do we have the staff and technical competency to deploy and maintain our own Exchange environment?
  • What is the opportunity cost/gain by deploying our own?
  • What are the costs of upgrades/migration in a normal life-cycle refresh?
  • Is there a specific business driver that requires us to deploy?
  • What are the additional costs we will incur?  (Security, archiving, competency, patch management, encryption, licensing, etc.)

This is not to say that some businesses won’t benefit from a deployed environment, but for many – and perhaps most – businesses, hosted Exchange will provide a strong, reliable service that will let you communicate effectively while having the peace of mind that your stuff is secure and available from any location where you have Internet access. Even if the ultimate bad thing happens and your office is reduced to a smoking crater, your people can still get to their email if they have Internet access at home or at the coffee shop down the street. If you’re as dependent on email as most of us are, there’s a definite value in that.