Category Archives: Computer Basics

Part One: When Should an IT Leader Use a Vendor?

Build vs. Buy Decisioning in IT Organizations

Many of our clients are constantly challenged by a growing number of technologies they must manage and understand in order to support the needs of the business. Recent advances in both hardware and software have multiplied the product options available to IT, and coupled with dynamic, rapidly changing business opportunities, IT Leaders have a lot to manage. That includes placing bets on where to build capability versus where to buy it as a service, which could simply be described as “EXaaS,” or “Expertise as a Service.”

The breadth of technology needs for SMBs is not that different from that of large enterprises. Many in the IT Organization get caught up in trying to be the one-stop shop for all of their firm’s needs. The list of technology skills required to run the organization (never mind grow it) often gets longer every year while budgets get tighter and tighter. One of the most common struggles is trying to get the skills required for an ever-expanding set of technologies out of the current IT staff. Sending engineers away for a week to learn additional skills takes away from the capacity to manage and monitor the technical services the business requires. IT Leaders have to constantly juggle time to train versus time to fight. But how much time should they allocate to each? And which fighting methods (software and hardware) will the organization commit to mastering, and which will it leave to others?

Once an IT Leader sets a training ratio, the organization needs to figure out which technologies it will continue to deliver itself, and which it will source. There are a number of ways to think about this, but here are two methods for identifying which technologies you need to focus on with internal resources. If you plot the skills (or OEMs, or topics) against their annual frequency of use, some surprising insights emerge. A handful of technologies come up very frequently, followed by a sharp drop-off; in Marketing and Statistics, this is called “The Long Tail.” In an IT Organization, it shows up as needing to send an engineer to training to implement the newest version of a software product that is only used by IT. Examples might be an SDS solution like DataCore, or a Citrix XenApp farm migration. The critical assumption behind this graph is that the less frequently a technology is used or referenced, the less knowledge an engineer will retain about it. I spoke Spanish (Catalan, actually) while I lived in Madrid, but within two years of arriving back in the US I could barely carry a conversation with the guy working in a Mexican restaurant. We all know that if you don’t use something regularly, you lose the capability rather quickly. And investing the resources for an engineer to learn a technology, only to use that skill once for your organization, yields very low ROI. It’s also higher risk, because your organization just became the guinea pig for your engineer to practice on; not at all a great scenario all around.
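
The long-tail ranking described above is easy to sketch in code. Here is a minimal example; the skill names, annual frequencies, and build/rent threshold below are all hypothetical placeholders, not data from any real organization:

```python
# Sketch of the build-vs-buy long-tail analysis described above.
# Skill names and annual usage frequencies are hypothetical examples.

skills = {
    "Windows Server": 120, "Networking": 95, "Backup/Restore": 60,
    "SQL Server": 30, "Citrix XenApp": 4, "DataCore SDS": 1,
}

# Rank skills from most- to least-frequently used (the long-tail curve).
ranked = sorted(skills.items(), key=lambda kv: kv[1], reverse=True)

# Pick a threshold: build in-house above it, rent the skill below it.
# Where this line sits is a judgment call, not a magic number.
THRESHOLD = 10  # uses per year

for name, freq in ranked:
    decision = "build in-house" if freq >= THRESHOLD else "rent/outsource"
    print(f"{name:16s} {freq:4d}/yr  -> {decision}")
```

The interesting part is not the code but the argument over where `THRESHOLD` belongs, which is exactly the Black/White/Grey exercise described below.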

The example I use is car maintenance. The activities you have to do very often and are low skill (change oil, refill windshield wiper fluid, refill brake fluids) you can and should do yourself. The activities that happen very infrequently and you may not have the right tools to do (head gasket replacement, control arm replacement, trans-axle replacement) you should find a car mechanic to take care of. It’s the activities in the middle of those extremes (e.g. spark plug replacement, brake pad replacement) that you will need to decide if you want to develop the talent to deliver those services. One of my close friends has restored Mustangs for years, and has personally done just about everything to service all of his various vehicles for the 20 years I have known him. Yet he will not replace brake pads or touch any part of the braking system himself on any car. He simply doesn’t want the responsibility.

Relating this to your IT Organization, you should determine what is needed to run the organization, and then build a framework for choosing which technologies you will invest the time and resources to master versus which technologies you will “rent.” When you build your own graph for the IT Organization, it might look something like this:


On the far left are the areas where you want to have skills in-house. On the far right are the skills and technologies you will want to rent. With the needs of the organization plotted like this, the obvious choices become clear quickly. The trickier part will be choosing where that dividing line should sit. In metaphorical terms, you will have to call everything Black or White. There will be obvious colors that are easy to see, but there will be many shades of Grey that you will have to choose a home for. Don’t worry; it may take 2-3 tries to get this right, but if you make the effort consistently, your abilities will improve.

And the critical last step is to develop the budget for all of these internally developed skills, as well as the costs to source the rest, so the CFO has all the data required to understand the cost of running the business as well as growing it.

Karl Burns is the Chief Strategy Officer at VirtualQube. He can be reached at

Scott’s Book Arrived!


We are pleased to announce that Scott’s books have arrived! ‘The Business Owner’s Essential Guide to I.T.’ is 217 pages packed full of pertinent information.

For those of you who pre-purchased your books: thank you! Your books have already been signed and shipped. You should receive them shortly, and we hope you enjoy them as much as Scott enjoyed writing for you.

If you haven’t purchased your copy, click here to purchase a signed copy from us; all proceeds will be donated to the WA chapter of Mothers Against Drunk Driving (MADD).

Great Windows 8.1 Experience!


Today I had a truly great Windows 8.1 experience! I know some might be skeptical, and I for one felt Microsoft faced some challenges with user acceptance of Windows 8. But I am a big fan of Windows 8, primarily because it provides a multi-computer experience in one device. My “truly great Windows 8.1 experience” came while setting up a new laptop. We all dread setting up or refreshing a laptop, because historically it’s been difficult and time consuming to transfer files and settings. But it’s a new day for Windows, and transferring all of my settings, metro apps, and data was as simple as logging into my Microsoft Live account and answering a few questions. The first question after providing my Live credentials was to enter my wireless security code; the second was “we found this computer on your network that belongs to you, do you want to copy the settings to this computer?” And BAM, all of my settings and data began streaming to my new PC. This was a truly great Windows 8.1 experience!

Yet Another Phishing Example

Today, we’re going to play “What’s Wrong with This Picture.” First of all, take a look at the following screen capture. (You can view it full-sized by clicking on it.)

Phishing Email from Aug, 2011


Now let’s see if you can list all the things that are wrong with this email. Here’s what I came up with:

  • There is no such thing as “Microsoft ServicePack update v6.7.8.”
  • The Microsoft Windows Update Center will never, ever send you a direct email message like this.
  • Spelling errors in the body of the email: “This update is avelable…” “…new futures were added…” (instead of “features”) and “Microsoft Udates” (OK, that last one is not visible in my screen cap, so it doesn’t count).
  • Problems with the hyperlink. Take a look at the little window that popped up when I hovered my mouse over the link: the actual link goes to a raw IP address, not to a Microsoft domain as the anchor text would have you believe. Furthermore, the directory path that finally takes you to the executable (“bilder/detail/windowsupdate…”) is not what I would expect to see in the structure of a Microsoft Web site.
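
The hyperlink check in that last bullet can even be roughly automated: compare any domain named in a link’s visible text against the host its href actually points to. Here is a minimal sketch; the sample anchor text, IP address, and path are illustrative placeholders, not taken from the actual message:

```python
# Flag links whose visible text claims one domain but whose href
# points somewhere else -- a classic phishing giveaway.
import re
from urllib.parse import urlparse

def suspicious_link(anchor_text: str, href: str) -> bool:
    """Return True if the anchor text names a domain the href host doesn't match."""
    claimed = re.search(r"([a-z0-9-]+\.)+[a-z]{2,}", anchor_text.lower())
    if not claimed:
        return False  # anchor text names no domain; nothing to compare
    actual_host = urlparse(href).hostname or ""
    return not actual_host.endswith(claimed.group(0))

# Illustrative example: the text claims a microsoft.com address,
# but the link actually goes to a raw IP address.
print(suspicious_link("update.microsoft.com",
                      "http://203.0.113.7/fake/path/sp-update.exe"))
```

A real mail filter does far more than this, but even this crude comparison would have flagged the message above.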

If you want to know what sp-update.v678.exe would do if you downloaded and executed it, take a look at the description on the McAfee Web site (click on the “Virus Characteristics” tab). Suffice it to say that this is not something you want on your PC.

Sad to say, I suspect that thousands of people have clicked through on it because it has the Windows logo at the top with a cute little “Windows Update Center” graphic.

Would you have spotted it as a phishing attempt? Did you spot other giveaways in addition to the ones I listed above? Let us know in the comments.

It’s Been a Cloud-y Week

No, I’m not talking about the weather here in San Francisco – that’s actually been pretty good. It’s just that everywhere you look here at the Citrix Summit / Synergy conference, the talk is all about clouds – public clouds, private clouds, even personal clouds, which, according to Mark Templeton’s keynote on Wednesday, refers to all your personal stuff:

  • My Devices – of which we have an increasing number
  • My Preferences – which we want to be persistent across all of our devices
  • My Data – which we want to get to from wherever we happen to be
  • My Life – which increasingly overlaps with…
  • My Work – which I want to use My Devices to perform, and which I want to reflect My Preferences, and which produces Work Data that is often all jumbled up with My Data (and that can open up a whole new world of problems, from security of business-proprietary information to regulatory compliance).

These five things overlap in very fluid and complex ways, and although I’ve never heard them referred to as a “personal cloud” before, we do need to think about all of them and all of the ways they interact with each other. So if creating yet another cloud definition helps us do that, I guess I’m OK with that, as long as nobody asks me to build one.

But lest I be accused of inconsistency, let me quickly recap the cloud concerns that I shared in a post about a month ago, hard on the heels of the big Amazon EC2 outage:

  1. We have to be clear in our definition of terms. If “cloud” can simply mean anything you want it to mean, then it means nothing.
  2. I’m worried that too many people are running to embrace the public cloud computing model while not doing enough due diligence first:
    1. What, exactly, does your cloud provider’s SLA say?
    2. What is their track record in living up to it?
    3. How well will they communicate with you if problems crop up?
    4. How are you ensuring that your data is protected in the event that the unthinkable happens, there’s a cloud outage, and you can’t get to it?
    5. What is your business continuity plan in the event of a cloud outage? Have you planned ahead and designed resiliency into the way you use the cloud?
    6. Never forget that, no matter what they tell you, nobody cares as much about your stuff as you do. It’s your stuff. It’s your responsibility to take care of it. You can’t just throw it into the cloud and never think about it again.

Having said that, and in an attempt to adhere to point #1 above, I will henceforth stick to the definitions of cloud computing set forth in the draft document (#800-145) released by the National Institute of Standards and Technology in January of this year, and I promise to tell you if and when I deviate from those definitions. The following are the essential characteristics of cloud computing as defined in that draft document:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
  • Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

If you’ll read through those points a couple of times and give them a moment’s thought, a few things should become obvious.

First, most of the chunks of infrastructure that are being called “private clouds” aren’t – at least by the definition above. Standing up a XenApp or XenDesktop infrastructure, or even a mixed environment of both, does not mean that you have a private cloud, even if you access it from the Internet. Virtualizing a majority, or even all, of your servers doesn’t mean you have a private cloud.

Second, very few Small & Medium Enterprises can actually justify the investment required to build a true private cloud as defined above, although some of the technologies that are used to build public and private clouds (such as virtualization, support for broad network access, and some level of user self-service provisioning) will certainly trickle down into SME data centers. Instead, some will find that it makes sense to move some services into public clouds, or to leverage public clouds to scale out or scale in to address their elasticity needs. And some will decide that they simply don’t want to be in the IT infrastructure business anymore, and move all of their computing into a public cloud. And that’s not a bad thing, as long as they pay attention to my point #2 above. If that’s the way you feel, we want to help you do it safely, and in a way that meets your business needs. That’s one reason why I’ve been here all week.

So stay tuned, because we’ll definitely be writing more about the things we’ve learned here, and how you can apply them to make your business better.