Where Did My Document Go?

It is axiomatic that many of us (perhaps most of us) don’t worry about backing up our PCs until we have a hard drive crash and lose valuable information. This is typically more of a problem with personal PCs than it is with business systems, because businesses usually go to great lengths to make sure that critical data is being backed up. (You are doing that, right? RIGHT? Of course you are. And, of course, you also have a plan for getting a copy of your most critical business data out of your office to a secure off-site location for disaster recovery purposes. Enough said about that.)

So, with business systems, the biggest challenge is making sure that users are saving files to the right place, so the backup routines can back them up. If users are saving things to their “My Documents” folder, and you’re not redirecting “My Documents” to a network folder on a server, you’ve got a big potential problem brewing. Ditto if people are saving things to their Windows Desktop, which is possibly the worst place to save anything you care about keeping.

But there’s an even more fundamental thing to remember, and to communicate to our users: The best, most comprehensive backup strategy in the world won’t save you if you forget to save your work in the first place! Even in our Hosted Private Cloud environment, where we go to great lengths to back up your data and replicate it between geo-redundant data centers, there’s not much we can do if you don’t save it.

Just as many of us have learned a painful lesson about backups by losing data, many of us have also had that sinking feeling of accidentally closing a document without saving it, or watching the PC shut down from a power interruption, and realizing that we just lost hours of work.

Microsoft has built an AutoRecover option into the Office apps in an attempt to save us from ourselves. In Word, for example, go to “File / Options / Save,” and you should see this:

Word Autorecover Settings

That’s where you set how often your working document will be automatically saved, as well as the location. But be aware that AutoRecover works really well…until it doesn’t. A Google search on the string “Word autorecovery didn’t save” returned roughly 21,000 results. That doesn’t mean you shouldn’t leverage AutoRecover – you certainly should. But take a look at the Word “Help” entry on AutoRecover:

Word Autorecover Help

Notice the text that I’ve circled in red? It says “IMPORTANT The Save button is still your best friend. To be sure you don’t lose your latest work, click Save (or press Ctrl+S) often.” Bottom line: AutoRecover may save your backside at some point…or it may not. And corporate backup routines certainly won’t rescue you if you don’t save your work. So save early and often.

And if you’re a mobile user who frequently works while disconnected from the corporate network, it’s a good idea to save your files in multiple locations. Both Microsoft (OneDrive) and Google (Google Drive) will give you 15 GB of free online storage. And if it’s too much trouble to remember to manually save (or copy) your files to more than one location, there are a variety of ways – including VirtualQube’s “follow-me data” service – to set up a folder on your PC or laptop that automatically synchronizes with a folder in the cloud whenever you’re connected to the Internet. You just have to remember to save things to that folder.
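
If you’re curious what that kind of folder synchronization looks like under the hood, here’s a minimal sketch using the open-source rclone utility – my choice of tool purely for illustration, not necessarily what any particular “follow-me data” service uses. rclone can treat OneDrive, Google Drive, and many other services as a remote destination:

    # one-time setup: walks you through connecting a cloud account and naming it
    # ("mycloud" below is just an example name)
    rclone config

    # push everything in a local folder up to its cloud copy
    rclone sync ~/Documents/KeepSafe mycloud:KeepSafe

    # schedule that command (Task Scheduler on Windows, cron elsewhere) and the
    # folder stays in sync whenever you have an Internet connection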

You just have to remember to save things, period. Did we mention saving your work early and often? Yeah. Save early and often. It’s the best habit you can develop to protect yourself against data loss.

Part One: When Should an IT Leader Use a Vendor?

Build vs. Buy Decisions in IT Organizations

Many of our clients are challenged by a constantly growing number of technologies they must understand and manage in order to support the needs of the business. Recent advances in both hardware and software have only multiplied the product options available to IT. Coupled with dynamic, rapidly changing business opportunities, IT Leaders have a lot of bets to place – including where to build capability in-house versus buy it as a service. You could call the latter “EXaaS,” or “Expertise as a Service.”

The breadth of technology needs for SMBs is not that different from the breadth of technology needs for large enterprises. Many in the IT Organization get caught up in trying to be the one-stop shop for all of their firm’s needs. Often the list of technology skills required to run the organization (not to mention grow it) gets longer every year while budgets get tighter and tighter. One of the most common struggles is trying to get the skills required for an ever-expanding set of technologies out of the current IT staff. Sending engineers away for a week to learn additional skills takes away from the capacity to manage and monitor the technical services the business requires. IT Leaders constantly have to juggle the time to train versus the time to fight. But how much time should they allocate to each? And which fighting methods (software and hardware) should they commit to mastering, and which should they let others handle for them?

Once an IT Leader sets a training ratio, the organization needs to figure out which technologies it will continue to deliver itself, and which it will source. There are a number of ways to think about this, but here are two methods for identifying which technologies deserve your internal resources. If you plot the skills (or OEMs, or topics) against their annual frequency, some surprising insights emerge. The first is that a handful of technologies come up constantly, and then the frequency drops off quickly. In Marketing and Statistics, this is called “The Long Tail.” In an IT Organization, it shows up as sending an engineer to training to implement the newest version of a product that only IT uses – SDS solutions like DataCore, say, or a Citrix XenApp farm migration. The critical assumption behind the graph is that the less frequently a technology is used or referenced, the less knowledge an engineer will retain about it. I spoke Spanish (Catalan, actually) while I lived in Madrid, but within two years of arriving back in the US I could barely carry on a conversation with the guy working in a Mexican restaurant. If you don’t use something regularly, you lose the capability rather quickly. And investing the resources for an engineer to learn a technology, only to use that skill once, is a poor return on the investment. It’s also higher risk, because your organization just became the guinea pig for your engineer to practice on – not a great scenario all around.
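
To find your own long tail, you don’t need anything fancy. Here’s a minimal sketch, assuming you can export one line per engagement or ticket (just the technology name) into a file called skills.log – the file name and the counts shown are made up purely for illustration:

    # tally how many times each technology came up over the year, most frequent first
    sort skills.log | uniq -c | sort -rn

    # illustrative output -- the steep drop-off after the first few rows is the long tail
    #   412 Exchange
    #   305 Active Directory
    #    12 DataCore
    #     1 XenApp farm migration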

The example I use is car maintenance. The activities that come up often and are low skill (changing the oil, refilling windshield wiper fluid, topping up brake fluid) you can and should do yourself. The activities that happen very infrequently, and that you may not have the right tools for (head gasket replacement, control arm replacement, trans-axle replacement), you should find a mechanic to take care of. It’s the activities between those extremes (e.g. spark plug replacement, brake pad replacement) where you have to decide whether to develop the talent to do the work yourself. One of my close friends has restored Mustangs for years, and has personally done just about everything to service all of his various vehicles for the 20 years I have known him. Yet he will not replace brake pads or touch any part of the braking system himself on any car. He simply doesn’t want the responsibility.

Relating this to your IT Organization, you should determine what is needed to run the organization, and then build a framework for choosing which technologies you will invest the time and resources to master versus which technologies you will “rent” the skills for. When you build your own graph for the IT Organization, it might look something like this:

chart

On the far left are the areas where you want to have skills in-house. On the far right are the skills and technologies you will want to rent. Once you plot the needs of the organization like this, the obvious cases will jump out at you. The trickier part will be choosing where to draw the line. In metaphorical terms, you will have to call everything black or white. There will be obvious colors that are easy to see, but there will be many shades of grey that you will have to choose a home for. Don’t worry – it may take two or three tries to get this right, but if you make the effort consistently, your abilities will improve.

And the critical last step is to develop the budget for the skills you will build internally, as well as the costs to source the rest, so the CFO has all the data required to understand the cost of running the business as well as growing it.

Karl Burns is the Chief Strategy Officer at VirtualQube. He can be reached at karl.burns@virtualqube.com.

Scott’s Book Arrived!

We are pleased to announce that Scott’s books have arrived! ‘The Business Owner’s Essential Guide to I.T.’ is 217 pages packed full of pertinent information.

For those of you who pre-purchased your books, thank you! Your books have already been signed and shipped; you should receive them shortly, and we hope you enjoy them as much as Scott enjoyed writing for you.

If you haven’t purchased your copy, click here to order a signed copy from us – all proceeds will be donated to the WA chapter of Mothers Against Drunk Driving (MADD).

Great Windows 8.1 Experience!

Today I had a truly great Windows 8.1 experience! I know some might be skeptical, and I for one felt Microsoft faced some challenges with user acceptance of Windows 8. But I am a big fan of Windows 8, primarily because it provides a multi-computer experience in one device. My “truly great Windows 8.1 experience” came while setting up a new laptop. We all dread setting up or refreshing a laptop, because historically it’s been difficult and time-consuming to transfer files and settings. But it’s a new day for Windows: transferring all of my settings, Metro apps, and data was as simple as logging into my Microsoft Live account and answering a few questions. The first question, after I provided my Live credentials, was to enter my wireless security code; the second was “we found this computer on your network that belongs to you – do you want to copy the settings to this computer?” And BAM – all of my settings and data began streaming to my new PC. This was a truly great Windows 8.1 experience!

Yet Another Phishing Example

Today, we’re going to play “What’s Wrong with This Picture.” First of all, take a look at the following screen capture. (You can view it full-sized by clicking on it.)

Phishing Email from Aug, 2011

Now let’s see if you can list all the things that are wrong with this email. Here’s what I came up with:

  • There is no such thing as “Microsoft ServicePack update v6.7.8.”
  • The Microsoft Windows Update Center will never, ever send you a direct email message like this.
  • Spelling errors in the body of the email: “This update is avelable…” “…new futures were added…” (instead of “features”) and “Microsoft Udates” (OK, that last one is not visible in my screen cap, so it doesn’t count).
  • Problems with the hyperlink. Take a look at the little window that popped up when I hovered my mouse over the link: the actual link goes to an IP address (85.214.70.156), not to microsoft.com, as the anchor text would have you believe. Furthermore, the directory path that finally takes you to the executable (“bilder/detail/windowsupdate…”) is not what I would expect to see in the structure of a Microsoft Web site. (There’s also a quick way to check this without hovering – see the sketch just below this list.)
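
If you’d rather not rely on a mouse hover, you can make the same check against the raw message. Here’s a minimal sketch, assuming you’ve saved the suspicious email to a file called suspicious.eml (the file name is just an example, and this assumes the message body isn’t base64-encoded); it lists the real destination of every link and the text the links claim to be:

    # the real destinations behind each link
    grep -Eoi 'href="[^"]+"' suspicious.eml | sort -u

    # the visible anchor text -- look for names like microsoft.com
    # sitting on top of a bare IP address
    grep -Eoi '<a [^>]*>[^<]*</a>' suspicious.eml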

If you want to know what sp-update.v678.exe would do if you downloaded and executed it, take a look at the description on the McAfee Web site (click on the “Virus Characteristics” tab). Suffice it to say that this is not something you want on your PC.

Sad to say, I suspect that thousands of people have clicked through on it because it has the Windows logo at the top with a cute little “Windows Update Center” graphic.

Would you have spotted it as a phishing attempt? Did you spot other giveaways in addition to the ones I listed above? Let us know in the comments.

It’s Been a Cloud-y Week

No, I’m not talking about the weather here in San Francisco – that’s actually been pretty good. It’s just that everywhere you look here at the Citrix Summit / Synergy conference, the talk is all about clouds – public clouds, private clouds, even personal clouds, which, according to Mark Templeton’s keynote on Wednesday, refers to all your personal stuff:

  • My Devices – of which we have an increasing number
  • My Preferences – which we want to be persistent across all of our devices
  • My Data – which we want to get to from wherever we happen to be
  • My Life – which increasingly overlaps with…
  • My Work – which I want to use My Devices to perform, which I want to reflect My Preferences, and which produces Work Data that is often all jumbled up with My Data (and that can open up a whole new world of problems, from the security of business-proprietary information to regulatory compliance).

These five things overlap in very fluid and complex ways, and although I’ve never heard them referred to as a “personal cloud” before, we do need to think about all of them and all of the ways they interact with each other. So if creating yet another cloud definition helps us do that, I guess I’m OK with that, as long as nobody asks me to build one.

But lest I be accused of inconsistency, let me quickly recap the cloud concerns that I shared in a post about a month ago, hard on the heels of the big Amazon EC2 outage:

  1. We have to be clear in our definition of terms. If “cloud” can simply mean anything you want it to mean, then it means nothing.
  2. I’m worried that too many people are running to embrace the public cloud computing model while not doing enough due diligence first:
    1. What, exactly, does your cloud provider’s SLA say?
    2. What is their track record in living up to it?
    3. How well will they communicate with you if problems crop up?
    4. How are you ensuring that your data is protected in the event that the unthinkable happens, there’s a cloud outage, and you can’t get to it?
    5. What is your business continuity plan in the event of a cloud outage? Have you planned ahead and designed resiliency into the way you use the cloud?
    6. Never forget that, no matter what they tell you, nobody cares as much about your stuff as you do. It’s your stuff. It’s your responsibility to take care of it. You can’t just throw it into the cloud and never think about it again.

Having said that, and in an attempt to adhere to point #1 above, I will henceforth stick to the definitions of cloud computing set forth in the draft document (#800-145) released by the National Institute of Standards and Technology in January of this year, and I promise to tell you if and when I deviate from those definitions. The following are the essential characteristics of cloud computing as defined in that draft document:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
  • Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

If you read through those points a couple of times and give them a moment’s thought, a couple of things should become obvious.

First, most of the chunks of infrastructure that are being called “private clouds” aren’t – at least by the definition above. Standing up a XenApp or XenDesktop infrastructure, or even a mixed environment of both, does not mean that you have a private cloud, even if you access it from the Internet. Virtualizing a majority, or even all, of your servers doesn’t mean you have a private cloud.

Second, very few Small & Medium Enterprises can actually justify the investment required to build a true private cloud as defined above, although some of the technologies that are used to build public and private clouds (such as virtualization, support for broad network access, and some level of user self-service provisioning) will certainly trickle down into SME data centers. Instead, some will find that it makes sense to move some services into public clouds, or to leverage public clouds to scale out or scale in to address their elasticity needs. And some will decide that they simply don’t want to be in the IT infrastructure business anymore, and move all of their computing into a public cloud. And that’s not a bad thing, as long as they pay attention to my point #2 above. If that’s the way you feel, we want to help you do it safely, and in a way that meets your business needs. That’s one reason why I’ve been here all week.

So stay tuned, because we’ll definitely be writing more about the things we’ve learned here, and how you can apply them to make your business better.

SuperGRUB to the Rescue!

This post requires two major disclaimers:

  1. I am not an engineer. I am a relatively technical sales & marketing guy. I have my own Small Business Server-based network at home, and I know enough about Microsoft Operating Systems to be able to muddle through most of what gets thrown at me. And, although I’ve done my share of friends-and-family-tech-support, you do not want me working on your critical business systems.
  2. I am not, by any stretch of the imagination, a Linux guru. However, I’ve come to appreciate the “LAMP” (Linux/Apache/MySQL/PHP) platform for Web hosting. With apologies to my Microsoft friends, there are some things that are quite easy to do on a LAMP platform that are not easy at all on a Windows Web server. (Just try, for example, to create a file called “.htaccess” on a Windows file system.)

Some months ago, I got my hands on an old Dell PowerEdge SC420. It happened to be a twin of the system I’m running SBS on, but didn’t have quite as much RAM or as much disk space. I decided to install CentOS v5.4 on it, turn it into a LAMP server, and move the four or five Web sites I was running on my Small Business Server over to my new LAMP server instead. I even found an open source utility called “ISPConfig” that is a reasonable alternative – at least for my limited needs – to the Parallels Plesk control panel that most commercial Web hosts offer.

Things went along swimmingly until last weekend, when I noticed a strange, rhythmic clicking and beeping coming from my Web server. Everything seemed to be working – Web sites were all up – I logged on and didn’t see anything odd in the system log files (aside from the fact that a number of people out there seemed to be trying to use FTP to hack my administrative password). So I decided to restart the system, on the off chance that it would clear whatever error was occurring.

Those of you who are Linux gurus probably just did a double facepalm…because, in retrospect, I should have checked the health of my disk array before shutting down. The server didn’t have a hardware RAID controller, so I had built my system with a software RAID1 array – which several sources suggest is both safer and better performing than the “fake RAID” that’s built into the motherboard. Turns out that the first disk in my array (/dev/sda for those who know the lingo) had died, and for some reason, the system wouldn’t boot from the other drive.
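
For the record, checking the health of a Linux software RAID array takes only a couple of commands. A minimal sketch, assuming the array is /dev/md0 (yours may be numbered differently):

    # a healthy two-disk RAID1 line in /proc/mdstat ends in [UU]; [U_] means one disk is gone
    cat /proc/mdstat

    # more detail: look for "State : clean" and any member marked faulty or removed
    mdadm --detail /dev/md0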

This is the point where I did a double facepalm, and muttered a few choice words under my breath. Not that it was a tragedy – all that server did was host my Web sites, and my Web site data was backed up in a couple of places. So I wouldn’t have lost any data if I had rebuilt the server…just several hours of my life that I didn’t really have to spare. So I did what any of you would have done in my place – I started searching the Web.

The first advice I found suggested that I should completely remove the bad drive from the system, and connect the good drive as drive “0.” Tried it, no change. The next advice I found suggested that I boot my system from the Linux CD or DVD, and try the “Linux rescue” function. That sounded like a good idea, so I tried it – but when the rescue utility examined my disk, it claimed that there were no Linux partitions present, despite evidence to the contrary: I could run fdisk -l and see that there were two Linux partitions on the disk, one of which was marked as a boot partition, but the rescue utility still couldn’t detect them, and the system still wouldn’t boot.

I finally stumbled across a reference to something called “SuperGRUB.” “GRUB,” for those of you who know as much about Linux as I did before this happened to me, is the “GNU GRand Unified Bootloader,” from the GNU Project. It’s apparently the bootloader that CentOS uses, and it was apparently missing from the disk I was trying to boot from. But that’s precisely the problem that SuperGRUB was designed to fix!

And fix it it did! I downloaded the SuperGRUB ISO, burned it to a CD, booted my Linux server from it, navigated through a quite intuitive menu structure, told it what partition I wanted to fix, and PRESTO! My disk was now bootable, and my Web server was back (albeit running on only one disk). But that can be fixed as well. I found a new 80 GB SATA drive (which was all the space I needed) on eBay for $25, installed it, cruised a couple of Linux forums to learn how to (1) use sfdisk to copy the partition structure of my existing disk to the new disk, and (2) use mdadm to add the new disk to my RAID1 array, and about 15 minutes later, my array was rebuilt and my Web server was healthy again.
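
For anyone facing the same rebuild, the commands involved look roughly like this. Treat it as a sketch rather than a recipe – the device and array names are examples, so confirm yours with fdisk -l and /proc/mdstat before running anything:

    # copy the partition layout from the surviving disk (sda) to the new, empty disk (sdb)
    sfdisk -d /dev/sda | sfdisk /dev/sdb

    # add the new partitions back into the mirror (repeat for each md device you have)
    mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm --manage /dev/md1 --add /dev/sdb2

    # watch the resync progress
    cat /proc/mdstat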

There are two takeaways from this story:

First, the Internet is a wonderful thing, with amazing resources that can help even a neophyte like me to find enough information to pull my ample backside out of the fire and get my system running again.

Second, all those folks out there whom we sometimes make fun of and accuse of not having a life are actually producing some amazing stuff. I don’t know the guys behind the SuperGRUB project. They may or may not be stereotypical geeks. I don’t know how many late hours were burned, nor how many Twinkies or Diet Cokes were consumed (if any) in the production of the SuperGRUB utility. I do know that it was magical, and saved me many hours of work, and for that, I am grateful. (I’d even ship them a case of Twinkies if I knew who to send it to.) If you ever find yourself in a similar situation, it may save your, um, bacon as well.

Hosted Exchange – Putting Your Email In the Cloud

These days, it seems everybody is talking about “cloud computing,” even if they don’t completely understand what it is. If you’re among those who are wondering what the “cloud” is all about and what it can do for you, maybe you should investigate moving your email to the cloud. You’ll find that there are several hosted Exchange providers (including ourselves) who would be very happy to help you do it.

Why switch to hosted Exchange? Well, it is fair to say that for most SMBs, email has become the predominant tool in our arsenal of communications. The need for fast, efficient, and cost-effective collaboration, as well as integration with our corporate environment and mobile devices, has become the baseline of operations – an absolute requirement for today’s workplace.

So why not just get an Exchange Server or Small Business Server? You can, but managing that environment may not be the best use of your resources. Here are a few things to consider:

Low and Predictable Costs:
Hosted Exchange has become a low-cost way to get enterprise-class service without the enterprise price tag. If you own the server and have it deployed on your own premises, it becomes your responsibility to prepare for a disruptive business event: fire, earthquake, flood, and, in the Puget Sound area, a dusting of snow. And it isn’t just an event in your own office space that you have to worry about:

  • A few years ago, there was a fire in a cable vault in downtown Seattle that caused some nearby businesses to lose connectivity for as long as four days.
  • Last year, wildfires in Eastern Washington interrupted power to the facility of one of our customers, and the recovery from the event was delayed because their employees were not allowed to cross the fire line to get to the facility.
  • If you are in a building that’s shared with other tenants, a fire or police action in a part of the building that’s unrelated to your own office space could still block access to the building and prevent your employees from getting to work.
  • Finally, even though it may be a cliché, you’re still at the mercy of a backhoe-in-the-parking-lot event.

The sheer cost of trying to protect yourself against all of these possibilities can be daunting, and many businesses would rather spend their cash on things that generate revenue instead.

Depending on features and needs, hosted Exchange plans can be as low as $5 per month per user – although to get the features most users want, you’re probably looking at $10 or so – and if you choose your hosting provider carefully, you’ll find that they have already made the required investments for high availability. Plus you’ll always have the latest version available to you without having to pay for hardware or software upgrades.

Simplified Administration:
For many small businesses, part of the turn-off of going to SBS or a full-blown Exchange server is the technical competency and cost associated with managing and maintaining the environment. While there are some advantages to having your own deployed environment, most customers I talk to today would rather not deal with the extra costs of administering backups and managing server licensing (and periodic upgrade costs), hardware refresh, security, etc. With a good hosted Exchange provider, you will enjoy all the benefits of an enterprise environment, with a simple management console.

Uptime:
Quality hosted Exchange providers will offer an SLA (“Service Level Agreement”) with uptime guarantees – and they have the manpower and infrastructure in place to ensure uptime for the many thousands of users they serve.

For deployed Exchange, you’ll need to invest in a robust server environment, power protection (e.g., an Uninterruptible Power Supply, or UPS, that can keep your server running long enough for a graceful shutdown – and maybe even a generator if you can’t afford to wait until your local utility restores power), data backup and recovery hardware and software, and the time required to test your backups. (Important side note here: If you never do a test restore, you only think you have your data backed up. Far too often, the first time users find out they have a problem is when they suffer a data loss and find that they are unable to restore from their backup.) The cost/benefit ratio for a small business is simply not in favor of a deployed environment.

Simple Deployment:
Properly setting up and configuring an Exchange environment without leaving any security holes can be a daunting task for the non-IT professional. Most SMBs will need to hire someone like us to set up and manage the environment, and, although we love it when you hire us – and although the total cost of hiring us may be less than it would cost you to try to do it yourself (especially if something goes wrong) – it is still a cost.

With a hosted environment, there is no complicated hardware and software setup. In some cases, hosting providers have created a tool that you run locally on your PC to configure the Outlook client for you.

A few questions to ask yourself:

  • Do we have the staff and technical competency to deploy and maintain our own Exchange environment?
  • What is the opportunity cost/gain of deploying our own?
  • What are the costs of upgrades/migration in a normal life-cycle refresh?
  • Is there a specific business driver that requires us to deploy?
  • What are the additional costs we will incur? (Security, archiving, competency, patch management, encryption, licensing, etc.)

This is not to say that some businesses won’t benefit from a deployed environment, but for many – and perhaps most – businesses, hosted Exchange will provide a strong, reliable service that lets you communicate effectively, with the peace of mind that your stuff is secure and available from any location where you have Internet access. Even if the ultimate bad thing happens and your office is reduced to a smoking crater, your people can still get to their email if they have Internet access at home or at the coffee shop down the street. If you’re as dependent on email as most of us are, there’s a definite value in that.

More Facebook Phishing

We’ve talked before about how the Internet threat landscape has changed over the past few years. Increasingly, malware is being distributed not by sending you an infected email attachment, but by trying to entice you to visit a Web site that will drop the malware onto your computer. Given the explosive growth of Facebook, and the fact that its fastest-growing user segments are people who are not “power users” and who probably don’t know much about Internet security, it should be no surprise that these users are obvious targets for the bad guys.

Here’s a classic “phishing” example – one that recently showed up in my email. Let’s break it down and look at the things that are not quite right about it, and perhaps it will help you spot similar attempts in the future. As you read through this post, you may want to open the images in separate windows, so you can easily see what we’ll be discussing here.

If you’ve got a presence on Facebook, you’ve no doubt received one or more email messages that look like this (I’ve blanked out stuff that might identify the specific Facebook friend who sent me the message):

Legitimate Facebook Notification

There are some things that are consistent across all of the legitimate notification messages that I’ve received:

  • The subject line contains the name of the person who sent me the message (“so-and-so sent you a message on Facebook”).
  • The first line in the message itself also contains the name (“so-and-so sent you a message”).
  • The name is repeated yet a third time next to the sender’s profile pic, along with the time stamp of when the message was sent.
  • The text of the message is included in the email.
  • The hyperlink that’s provided (“To reply to this message, follow the link below”) contains the email address that’s associated with my Facebook account.
  • The footer repeats my email address (“This message was intended for…”), and the big, long, cryptic number that’s provided in the unsubscribe link is the same big, long, cryptic number that was in the reply link.

Now, let’s look at the phishing message:

Phishing Message

First of all, although this isn’t obvious by looking at the message, this email was sent to my personal email address, which is not the address that’s associated with my Facebook account. That was my first clue that something wasn’t right. But let’s look at all the other discrepancies:

  • The subject line just says “You have 1 unread message(s)…” with no indication of who may have sent the message to me.
  • In the body of the message, instead of the sender’s name, it just says “Facebook” sent you a message.
  • There is no time stamp provided.
  • The text of the message itself is not included – because, of course, the sender wants me to click on the link provided to see what it is.
  • The hyperlink provided does not include my email address.
  • The hyperlink is “cloaked,” that is, it doesn’t go to the location it claims to go to. As you can see, when I hovered my mouse over the link, the pop-up window showed that the hyperlink actually went to a totally different destination that had nothing to do with Facebook.
  • The footer does not contain the “This message was intended for” text with my email address.
  • The unsubscribe link simply says “click here” rather than being specifically associated with the message ID.

Now that I’ve pointed out all of the differences, it’s probably pretty obvious that this isn’t a legitimate message – but taken one by one, the differences are all pretty subtle. Would you have spotted them if I hadn’t pointed them out? All in all, this is a relatively well-crafted phishing email, and I have no doubt that lots of recipients would click on the link provided without even thinking about it. And here’s what would have happened:

Malware Site

According to Google’s “Safe Browsing” diagnostics, 10 different pages within this domain were designed to drop malware on the visitor’s PC without their knowledge or consent: five scripting exploits, two other exploits, and one trojan.

The moral of the story is that you should always be suspicious of links that are sent to you by email. I used to own a motorcycle, and I always tried to drum into my kids the concept that, in order to survive as a biker, you have to ride with a certain amount of paranoia: you must assume that you’re invisible, and the other motorists can’t see you…and those who can see you are out to get you. Unfortunately, we’re at the point where the same kind of paranoia is required to stay safe on the Internet. Yes, in most cases, there are subtle clues that you can spot if you know what to look for. But you’re probably better off to simply assume that any message you receive is a phishing attempt unless/until you can determine otherwise.

And if there’s ever any question in your mind, don’t click on the link. You can always open a browser, type in Facebook’s URL manually, and check to see if you actually do have any messages instead of clicking on a link in an email. Same with email messages that purport to come from your bank.

Remember: just because you’re paranoid doesn’t mean that they aren’t out to get you!

DNS Security Extensions and Why You Should Care

Tomorrow (May 5), at 17:00 GMT, all 13 root DNS servers on the Internet will begin using DNSSEC (Domain Name System Security Extensions) to reply to user requests. Here’s why you might care about this.

As most of our readers know, DNS is what translates the URL you type into your browser (like “www.mooselogic.com”) into an IP address (like “216.9.9.164”) that your computer can actually use to send packets of data across the Internet. If you have a Windows Server-based network, one (or more) of your Windows Servers is probably providing DNS services to the users on your network. But the DNS server on your network doesn’t automatically know where everything is. If it needs to resolve an address that doesn’t happen to already be in its local cache, it has to ask some other DNS server out on the Internet. Sometimes those queries go all the way to one of the root servers.
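
If you want to watch that translation happen, the dig utility (mentioned again below) or nslookup, which ships with Windows, will show it to you:

    # ask your DNS server for the address behind a name
    dig +short www.mooselogic.com    # returns an IP address, e.g. 216.9.9.164 in the example above
    nslookup www.mooselogic.com      # the Windows equivalent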

It’s been recognized for quite some time that the existing protocol used for DNS queries isn’t entirely secure. Therefore, the international standards bodies have been working on a more secure standard, which is DNSSEC. DNSSEC uses digital signatures to authenticate DNS responses, so your computer knows the response actually came from an authoritative DNS server.

So what’s the problem? The potential problem is that those DNS responses will arrive in significantly larger data packets than before. Specifically, rather than using UDP packets that are smaller than 512 bytes, the responses will not only be longer, but may be broken into multiple TCP packets. Some routers and firewalls specifically inspect DNS traffic to look for anomalies, and if you have older equipment that doesn’t know about the DNSSEC standard, these changes may very well look like anomalies, and be blocked. That would mean that your DNS clients or DNS server would not be able to communicate with the public root DNS servers, and that would mean that you would start having problems resolving DNS.

These problems may be intermittent in nature at first, because some DNS requests may be able to be resolved by using locally cached information…but DNS records typically have a “time to live” built into them, so eventually the cached information will expire and have to be refreshed. So if you do have a problem, it’s likely to get worse with time.

There are some tools available to help you determine whether you’re likely to have a problem. If you’re comfortable using a DNS query tool like dig (a command-line tool available on most Unix or Linux systems), you can find instructions on using it at https://www.dns-oarc.net/oarc/services/replysizetest. If you don’t have access to a Unix or Linux host, or don’t feel comfortable using such a tool, you can download a Java utility from http://labs.ripe.net/content/testing-your-resolver-dns-reply-size-issues and run it on any system with the Java run-time installed (which includes most Windows systems). Just download and save the file, then double-click it.
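
If you go the dig route, the test at that first link boils down to a single query against DNS-OARC’s test server (the server name below is theirs – check the page above for the current syntax):

    # asks the test server, through your normal resolver, how large a DNS reply
    # actually made it back to you; the TXT answer reports the measured size
    dig +short rs.dns-oarc.net txt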

WatchGuard customers should note that if you have a WatchGuard Firebox or XTM appliance with current firmware, you should not have any issues with these new DNSSEC packets.