Tag Archives: Encryption

What You Need to Know About the Heartbleed Bug

You may have heard that a new critical vulnerability has been identified that affects many Internet Web servers – specifically those that use certain versions of OpenSSL as a means of encrypting user sessions. We have inspected all VirtualQube.com Web sites, and verified that none of our sites have this vulnerability. However, it is possible that other Web sites you use on a regular basis are, or were, vulnerable. You can find a list of the top 1000 Web sites and their status (“vulnerable” / “not vulnerable”) as of roughly 12:00 UTC yesterday at https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt. It is possible that many of the sites listed as “vulnerable” at the time have since patched their servers. However, if you have accounts on any of these sites – and the “vulnerable” list includes some high-profile sites such as yahoo.com, flickr.com, okcupid.com, slate.com, and eventbrite.com – you should immediately change your passwords.

There is also a useful tool available at http://filippo.io/Heartbleed/ that will allow you to check out a Web site if you are unsure whether or not it is vulnerable.

For the more technical in the crowd who are wondering how this vulnerability affects Web security, it allows an attacker to extract data from the memory of a Web server in up to 64K chunks. That may not sound like much, but if enough 64K chunks are extracted, useful information can be reconstructed, including username/password combinations, and even the private encryption key of the server itself. http://www.mysqlperformanceblog.com/2014/04/08/openssl-heartbleed-cve-2014-0160/ contains a list of the specific versions of OpenSSL that are vulnerable to this exploit.
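If you administer your own servers, you can compare an installed OpenSSL version against the vulnerable range yourself. Here is a rough sketch in Python; the function name is mine, and the version logic follows the CVE-2014-0160 advisory (1.0.1 through 1.0.1f are vulnerable, 1.0.1g is fixed, and the 0.9.8 and 1.0.0 branches never contained the heartbeat bug):

```python
import re

def openssl_is_vulnerable(version_string):
    """Return True if an `openssl version` string matches a
    Heartbleed-vulnerable release (CVE-2014-0160).

    Vulnerable: OpenSSL 1.0.1 through 1.0.1f, plus the 1.0.2 betas.
    Fixed:      1.0.1g and later; 0.9.8 and 1.0.0 were never affected.
    """
    # 1.0.1 with no letter, or a letter a-f, but not g or later:
    if re.search(r"\b1\.0\.1[a-f]?(?![a-z])", version_string):
        return True
    if re.search(r"\b1\.0\.2-beta", version_string):
        return True
    return False

# Typical strings as printed by `openssl version`:
print(openssl_is_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True: in the vulnerable range
print(openssl_is_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False: patched release
print(openssl_is_vulnerable("OpenSSL 0.9.8y 5 Feb 2013"))   # False: branch never had the bug
```

Remember that patching the library is only half the job: because private keys may already have been leaked, affected sites also need to reissue their certificates.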

Citrix Formally Announces XenClient and XenVault

Yesterday (August 25), Citrix formally announced XenDesktop 4 Feature Pack 2. It’s expected to be available by the end of September, and, of course, will be available at no charge to existing XenDesktop customers whose Subscription Advantage is current. The big news in this Feature Pack is the incorporation of XenClient and XenVault.

We’ve talked a lot about XenClient here, but haven’t said much about XenVault. It’s high time we did, because it’s a pretty cool piece of technology in its own right.

If you’ve used Citrix products in the past, you know that we have administrative control over whether, for example, users who are running applications on a XenApp server are able to save data back to a disk drive on their client device. With the advent of Smart Access (enabled by Access Gateway Enterprise policies), we can get even more granular: we might allow a user to save data to a client drive if they’re connecting from within the protected network, or connecting from a corporate-owned laptop, but deny that same user the ability to do so if they’re connecting from a personal device or public location like a hotel business center.

Unfortunately, once the data is on a client device, you now have a security risk. It could potentially be copied to a USB drive. The corporate laptop could be lost or stolen. (For some of the more high-profile examples, check out the “laptop losers hall of shame.”) Nevertheless, it’s often viewed as a risk we have to take so that our mobile users can be productive.

XenVault, which was first previewed at the Synergy event last May, is designed to address this risk. XenVault is a new plug-in for the Citrix Receiver. As such, its deployment and configuration are controlled through the Citrix Merchandising Server. To quickly review, Merchandising Server is the preferred tool Citrix has provided for installing and configuring client software. The first time a user authenticates to the Merchandising Server (through a simple browser interface), the Citrix Receiver will be pushed down and installed on the client device, together with whatever plug-ins and configuration details the administrator has defined for that user. Subsequently, the Citrix Receiver will check back with the Merchandising Server behind the scenes, and receive any configuration updates that may be available.

The XenVault plug-in creates a secure, encrypted (256-bit AES) storage area on the client hard disk. Typically, any application that is running remotely on a XenApp server or XenDesktop virtual PC will only be able to store data in the secure, encrypted location, if it is allowed to store data on the client drive at all. Same for an application that has been streamed via XenApp for local execution on the client (regardless of whether it was packaged with the Citrix streaming tools or with App-V). While the user will be able to use Windows Explorer to look at the secure location and see what files are there, the user will not be able to copy files from the secure location to a non-secured area of the hard disk, nor open the files with applications other than those specified by the administrator. For a deeper explanation of how this works, see Joe Nord’s blog post on the subject.

If the laptop is lost or stolen, the administrator can issue a “kill pill” that will cause the secure, encrypted area to be locked or deleted the next time the Receiver checks in with the Merchandising Server. Pretty cool.

If you can’t wait until the end of September to try it out, and you have a mycitrix login, you can download the XenVault technology preview now. And keep watching this space, because I’ve got a feeling that this will be a good subject for a future video blog.

SSL and Certificates - Part 3 of 3

Part 1 and Part 2 of this series covered the basic cryptographic concepts behind SSL certificates, and looked at how an SSL certificate is constructed and how it is validated. This installment will discuss what different kinds of certificates exist, some things to watch out for, and two big takeaways that will save you time, money, and aggravation.

Traditionally, an SSL server certificate, such as the Wells Fargo Bank certificate that we discussed in Part 2, was issued for the Fully Qualified Domain Name (“FQDN”) of the server the certificate is intended to secure. The certificate we discussed was specifically issued to “www.wellsfargo.com.” This is called the “Common Name” (abbreviated as “CN”), and is specified in the “Subject” field of the certificate:

The Common Name Field

That means that if there were other DNS entries, such as “remote.wellsfargo.com” or “email.wellsfargo.com” that happened to resolve to the IP address of the same physical server, and you pointed your browser at one of those, you would get a certificate error – because the certificate was issued to “www.wellsfargo.com,” not to one of those other entries, and the browser won’t be happy unless the host name in the address bar exactly matches the Common Name listed in the Subject field.

In recent years, a couple of new kinds of certificates have been introduced. One is the Multiple Domain, or “UCC” (Unified Communications Certificate) certificate. [Note: Yes, I realize that saying “UCC certificate” is inherently redundant – like saying “PIN number.” If, by chance, my former English professor is reading this, I apologize. But I’m going to do it anyway.] A UCC certificate contains an extra field called the “Subject Alternative Names” field, which can be used to list multiple subdomains that the certificate can be used to secure. For example, a UCC certificate could be used to secure “remote.mooselogic.com,” “email.mooselogic.com,” “extranet.mooselogic.com,” and so forth, provided that all of those subdomains are explicitly listed in the Subject Alternative Names field. That means that you must specify what subdomains you want listed when you purchase the certificate, and if you want to add or delete one, the certificate must be regenerated by the issuer (which will generally cost you more money).

In addition to the Subject Alternative Names field, a UCC certificate still has a “Common Name” listed in the “Subject” field. However, according to the X.509 certificate standard, if the Subject Alternative Names field is present, the client browser is supposed to ignore the contents of the Common Name field (although not all of them do). Therefore, if the common name is “www.mooselogic.com,” but that common name is not repeated as one of the Subject Alternative Names, a browser that strictly adhered to the standard would end up with a certificate error if it tried to connect to “www.mooselogic.com.” This interaction between Common Name and Subject Alternative Names has some implications for mobile devices that we’ll come back to in a bit.

The other new kind of certificate is the “Wildcard” certificate. A Wildcard certificate could be issued for, say, “*.mooselogic.com,” and used to secure any and all first level subdomains. (E.g., email.mooselogic.com is a first level subdomain; email.seattle.mooselogic.com is not a first level subdomain, and could not be secured with a Wildcard certificate.) A Wildcard certificate does not contain a Subject Alternative Names field – instead, the Wildcard (“*.mooselogic.com”) is actually listed as the Common Name in the Subject field.

If you are running a browser that was released anytime since 2003, it should support Subject Alternative Names, and probably Wildcard certificates as well. In that case, your browser will be happy if one of the following three conditions is true:

  1. The host name in the address bar of the browser exactly matches the Common Name of the certificate. (Unless the cert is a UCC cert, in which case the browser is supposed to ignore the Common Name.)
  2. The Common Name is a Wildcard, and the host name in the address bar matches the Wildcard.
  3. The cert is a UCC cert, and the host name in the address bar exactly matches one of the names listed in the Subject Alternative Names field.
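As a rough sketch, those three conditions can be modeled in a few lines of Python. This is a simplified illustration – the function name is my own, and real browsers add rules (case folding, internationalized names, public-suffix restrictions) that are omitted here:

```python
def hostname_matches(hostname, common_name, subject_alt_names=None):
    """Simplified model of the browser-side name check described above.

    - If the certificate carries Subject Alternative Names, only those
      are consulted (per X.509, the Common Name is then ignored).
    - A wildcard such as *.mooselogic.com matches exactly one label:
      email.mooselogic.com, but not email.seattle.mooselogic.com.
    """
    def matches(pattern):
        if pattern.startswith("*."):
            suffix = pattern[1:]               # ".mooselogic.com"
            prefix = hostname[: -len(suffix)]  # the part the * must cover
            return hostname.endswith(suffix) and prefix != "" and "." not in prefix
        return hostname == pattern

    if subject_alt_names:                      # rule 3: UCC / SAN certificate
        return any(matches(name) for name in subject_alt_names)
    return matches(common_name)                # rules 1 and 2

print(hostname_matches("email.mooselogic.com", "*.mooselogic.com"))          # True
print(hostname_matches("email.seattle.mooselogic.com", "*.mooselogic.com"))  # False: two labels deep
print(hostname_matches("www.mooselogic.com", "www.mooselogic.com",
                       ["remote.mooselogic.com"]))                           # False: CN is ignored
```

The third call is the standards-compliance trap described above: the host name matches the Common Name, but because a Subject Alternative Names list is present, a strictly conforming client never looks at the Common Name.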

However, if you are using a browser on a mobile device of some kind, it’s a different story. Windows Mobile 6.x devices support both Subject Alternative Names and Wildcards. Windows Mobile 5 devices support Subject Alternative Names, but do not support Wildcards. If you’re not running a Windows Mobile 5 or 6 device, you’re going to have to check with the vendor of your mobile device. Some support both Subject Alternative Names and Wildcards, some only support one of them, some support neither.

So what? Well, if you’re trying to use a mobile device to synchronize e-mail with an Exchange Server, it’s usually done by pointing the device at the same URL that you’re using for Outlook Web Access (“OWA”). If you’re using a Wildcard certificate to secure your OWA site, and your mobile device doesn’t support Wildcards, you’re out of luck – it’s simply not going to work. However, if you’re using a UCC certificate to secure your OWA site, and the URL of the OWA site is also the Common Name of that UCC certificate, your mobile device will be happy even if it doesn’t support UCC certificates…because it will simply look in the Common Name field and find a match.

So here’s big takeaway #1: If you’re going to use a UCC certificate to secure multiple URLs, and one of those URLs happens to be the URL you’re going to use to synch email to mobile devices, make sure that URL is the Common Name of the certificate in addition to being listed as one of the Subject Alternative Names.

Another common “gotcha” involving SSL and mobile devices involves intermediate certificates. Remember that “chain of trust” discussion from the second post in this series? It is increasingly common to find that the certificate you have purchased to secure your Web site is not chained directly to the CA’s trusted root. Instead, there is at least one intermediate certificate in the chain between the trusted root and the certificate you purchased. This isn’t a problem for “big Windows,” because the browser is smart enough to sense that the certificate the server is presenting is not chained directly to the trusted root that it knows about, and to request the intermediate certificate(s) so it can validate the complete chain of trust. Mobile devices, including Windows Mobile devices, are not that smart.

Mobile devices depend on the server to present the entire certificate chain, including any intermediate certificates, at the time of connection. And the server won’t do that unless all of the intermediate certificates are present in that server’s own local computer certificate store. Installing the certificates into IIS for use in securing the OWA Web site does not automatically put them in the local computer certificate store – you must explicitly import them.

But why purchase a commercial certificate at all? Can’t you be your own Certificate Authority if you’re running a Windows Active Directory Domain? Yes, you can…if you don’t care about supporting connections from any PCs other than ones that have been joined to the domain, and you don’t care about supporting mobile devices. For example, when you set up a Windows Small Business Server, the wizard that configures that server for OWA automatically secures it with a self-issued certificate. That’s not a problem for any PC or laptop that has been joined to your SBS domain, because the very act of joining a computer to a domain inserts the domain’s own self-issued root certificate into the computer’s trusted root certificates store. But if you then try to connect from your home PC, or your mother-in-law’s PC, or any other PC that isn’t a member of your domain, you get a certificate error. At least with a PC, you have the opportunity to override the error and connect to the Web site anyway…but a mobile device will simply fail to connect, while typically giving you very little information about what the problem is.

You may or may not be able to manually import a certificate into the trusted root certificate store of your mobile device. Some mobile operators give their subscribers that level of “management access” to their mobile devices and some don’t. Some mobile operators provide special certificate installation utilities for their smart phones, some don’t. Sometimes there are workarounds, sometimes there aren’t. To our knowledge, there is no definitive list available of which mobile devices have their certificate stores locked down and which don’t. So the question is: How much is your time worth? The first time you (or we, on your behalf) spend a half day trying to make a mobile device work with an SSL certificate that wasn’t built into the phone, you will have spent more money – not to mention the time and aggravation – than it would have cost to go to a public CA and purchase a certificate that’s already supported.

So big takeaway #2 is: If you’re going to synch e-mail to mobile devices, do yourself a favor and decide in advance what mobile devices you’re going to support, then buy an SSL certificate from a public CA whose trusted root is already supported by those mobile devices. You’ll save money in the long run, and probably keep your blood pressure lower as well.

For more information on certificates and mobile devices, including a list of the trusted root certificates that ship with Windows Mobile 5 and Windows Mobile 6 devices, download the Moose Logic Technical Bulletin entitled Recommended Best Practices for Exchange Synchronization with Mobile Devices.

SSL and Certificates - Part 2 of 3

In Part 1, we discussed basic cryptography, and worked our way up to symmetrical encryption systems such as AES, which accepts key lengths as long as 256 bits. We also discussed why key length was important to a cryptosystem, and alluded to the fact that there are also asymmetrical systems. An asymmetrical system uses a key pair, such that anything that is encrypted with one key can only be decrypted with the other. The math behind such a system is way beyond the scope of this humble blog, so we ask that you simply take our word for it that such systems exist.

In most systems that make use of key pairs, one of the keys is made public, and the other is kept secret. This is generically called a Public Key Infrastructure, or PKI. Consider the following use cases:

  • If you know my public key, you can use it to encrypt a message that only I can decrypt - assuming, of course, that I’ve kept my private key truly private.
  • I can encrypt something using my private key that can then be decrypted by anyone who has my public key. And since the public key is, well, public, that means pretty much anyone.

The first use case has obvious benefits. But what good is encrypting a message that anyone could theoretically decrypt? Well, if you know my public key – and you have some way of knowing that it’s really mine – then you can be pretty sure that any message that you successfully decrypt with it must have been sent by me (again, assuming that I’ve kept my private key safe). That makes for a pretty good digital signature.
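To make the two use cases concrete, here is a toy RSA key pair in Python, using the classic textbook primes. Real keys are thousands of bits long, and real systems never encrypt raw messages this way – this is purely to illustrate the asymmetry:

```python
# Toy RSA key pair using the classic textbook primes p=61, q=53.
p, q = 61, 53
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (2753); needs Python 3.8+

message = 65                   # messages are encoded as small integers here

# Use case 1: anyone encrypts with the public key (e, n);
# only the private-key holder can decrypt.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Use case 2: the private-key holder "encrypts" (signs) with d;
# anyone holding the public key can verify.
signature = pow(message, d, n)
print(pow(signature, e, n) == message)  # True: the signature checks out
```

Note how the two directions are mirror images of each other: the same modular-exponentiation operation provides confidentiality in one direction and authenticity in the other.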

So, the question becomes this: to use some kind of PKI, how can I securely transmit my public key to the people who might want to communicate with me (or authenticate my communication with them), in such a way that they are confident that it’s really my public key? One way, certainly, to get it to you would be to physically hand it to you on some kind of storage medium – which could be a USB flash drive, a CD, or even a piece of paper. If that’s not feasible, perhaps I could give it to someone you trust, who could then give it to you and vouch for its authenticity. That, by the way, is the concept behind “PGP” (which stands for “Pretty Good Privacy”) – you establish a circle of trust, and when Bob sends you Jane’s public key, it’s up to you to decide how much you really trust Bob.

That might be acceptable for exchanging secure e-mail with your friends, but it’s probably not good enough if we’re talking about securing access to your on-line banking system. So how do you know, when you point your browser at, say, www.wellsfargo.com, that the server you end up talking to, which is asking you for stuff like your social security number and password, is really a server that belongs to Wells Fargo Bank? Enter the SSL Certificate.

The next time you’re on your favorite banking or shopping site, click on the little padlock symbol (exactly where that symbol is will depend on what browser and version you’re running…IE8 displays it right at the end of the address field where the URL is displayed). I’m going to stick with Wells Fargo Bank for now, and break down what you should see if you choose “View Certificate.”

First, you’ll see something like this:

SSL Certificate Screen Capture

It tells you briefly the purpose for which the certificate was issued (to ensure the identity of a remote computer). It tells you that the certificate was issued to www.wellsfargo.com, and that it is valid through June 5, 2010. But how do you know you can trust it? Where did the certificate come from?

Well, if you click on the “Certification Path” tab, it will tell you where it came from:

SSL Chain of Trust

This shows the “chain of trust,” and tells you that the “root” of that chain is the VeriSign Class 3 Public Primary Certificate Authority (“CA”). VeriSign is one of the public CAs that companies like Wells Fargo can go to and purchase certificates for purposes such as this. VeriSign is, in fact, probably the best known (and most expensive). But how do we really know that the certificate is valid? Just because it says it came from VeriSign, how do we know it really did?

Hold that thought, and let’s click on the “Details” tab:

Certificate Details Tab

Now we’re getting into the nitty-gritty of the certificate. Note the two fields called “Signature algorithm” and “Signature hash algorithm.” Remember those - we’ll come back to them later. For now, scroll down on the upper portion of the certificate, and highlight “Public key:”

The Public Key

You can now see the actual public key for this certificate displayed in the lower part of the window. Basically, the server is trying to tell you, “Here’s my public key. Use it to encrypt anything that you want to securely send to me.” But, still, how do we know it’s real? How do we know it hasn’t been tampered with?

Scroll down to the bottom, and you’ll see a reference to something called a “thumbprint:”

The Thumbprint

You will also see what algorithm was used to generate the thumbprint - in this case, it’s an algorithm called “sha1.” You probably don’t know what the “sha1” algorithm is, but your PC does. It’s used to generate a “hash value” on the contents of the certificate (remember, to your PC the entire certificate is just one long binary number). This is a one-way computation - in other words, it is not possible to look at the hash value, even knowing the algorithm, and work backwards to determine the original value that was used to generate the hash.

So the certificate is signed by running the sha1 algorithm on its contents, and then encrypting the resulting hash using the private key of the next-higher certificate in the chain of trust. That encrypted hash travels with the certificate as its digital signature. (Strictly speaking, the “thumbprint” Windows displays is a hash your own PC computes over the certificate, while the issuer’s encrypted hash lives in the signature field – but the verification logic is the same either way.) Your computer takes the contents of the certificate, runs the sha1 algorithm on it, decrypts the transmitted signature using the public key of the next-higher certificate in the chain of trust, and compares the two hash values. If they exactly match, you can be confident that the certificate hasn’t been tampered with – because it would be, for all practical purposes, impossible to alter the contents of the certificate without changing the hash value.
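You can see the tamper-evidence property for yourself in a few lines of Python, using an arbitrary byte string as a stand-in for a certificate’s binary contents:

```python
import hashlib

# Stand-in for the binary contents of a certificate:
cert_bytes = b"CN=www.wellsfargo.com, valid through 2010-06-05"

original_hash = hashlib.sha1(cert_bytes).hexdigest()

# Flip a single bit, as a tamperer would:
tampered = bytearray(cert_bytes)
tampered[0] ^= 0x01
tampered_hash = hashlib.sha1(bytes(tampered)).hexdigest()

# Even a one-bit change produces a completely different hash, so the
# signed hash no longer matches and the tampering is exposed.
print(original_hash == tampered_hash)  # False
```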

Now, back to the basic question of trust. Each certificate in the chain of trust is, in this fashion, digitally signed using the private key of the certificate above it in the chain. You can, in fact, click on any certificate in the chain, click the button labeled “View Certificate,” and examine the details of that certificate, just as we examined the details in the example above. Ultimately, we find ourselves at the VeriSign “root” certificate, and the ultimate question becomes how do we really know that the public key presented in that root certificate, and which we are supposed to use to validate the signature of the next lower certificate in the chain, is valid - since we’ve now come to the end of the chain, and have no higher authority to use to validate the signature of the root?

The answer (which may surprise you) is that the manufacturer of your browser software made a deal with VeriSign to build their root certificate into the browser. If you’re running Internet Explorer, go to the “Tools” menu, and choose “Internet Options.” Click on the “Content” tab, and then click the “Certificates” button. Then click the tab that says “Trusted Root Certification Authorities,” and scroll down. Guess what?

Trusted Root Certificates

There it is. Ever notice that from time to time Microsoft’s update service pushes out “root certificate updates?” Now you know what that’s all about - they’re either adding certificates to the trusted root certificate store, or replacing ones that are about to expire.

So, now that we know we can trust the certificate that the Wells Fargo Web server presented to us, what do we do with it? Well, we’ve learned two things from the certificate: First, we know that the server we’re communicating with is indeed what it claims to be – part of the www.wellsfargo.com server farm; second, we know that server’s public key. Since we know its public key, we know that we can send information to it securely. However, we don’t want to use the public/private key pair for our entire session, because an asymmetrical encryption scheme requires considerably more processing effort than a symmetrical scheme. But we don’t want to just use the same symmetrical key every time, either, because one of the basic precepts of cryptography is that the more encrypted data you have to work with, the easier it is to break the encryption. Therefore, your PC will use the server’s public key to securely negotiate a session key that will be used to symmetrically encrypt and decrypt just this banking session. Tomorrow, or next week, when you log on again, your PC will negotiate a totally different session key.
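Here is a sketch of that hybrid arrangement in Python. Small textbook RSA numbers stand in for the server’s real key pair, and a SHA-1-derived XOR keystream stands in for a real symmetric cipher such as AES; an actual TLS handshake is considerably more involved, but the division of labor is the same:

```python
import hashlib
import secrets

# Toy server key pair (textbook values); a real handshake uses
# full-size keys and a negotiated cipher suite.
n, e, d = 3233, 17, 2753

# 1. The client picks a fresh random session key for this visit only.
session_key = secrets.randbelow(n - 2) + 2

# 2. It travels to the server under the server's public key...
transported = pow(session_key, e, n)
# ...and only the server's private key can recover it.
assert pow(transported, d, n) == session_key

# 3. Both sides now encrypt traffic symmetrically, which is much
# cheaper per byte than public-key operations.
def keystream_encrypt(key, data):
    """XOR the data against a hash-derived keystream (stand-in for AES)."""
    stream = hashlib.sha1(str(key).encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

ciphertext = keystream_encrypt(session_key, b"transfer $100 to checking")
print(keystream_encrypt(session_key, ciphertext))  # XOR twice restores the plaintext
```

Because the session key is random and discarded afterward, an eavesdropper who records today’s traffic gains nothing useful for attacking tomorrow’s session.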

In the next post, we’ll talk more about the different kinds of certificates, what they’re used for, and some of the pitfalls of using certificates to secure communications in your own networks.

SSL and Certificates - Part 1 of 3

We’ve seen a lot of confusion regarding what SSL certificates are all about – what they are, what they do, how you use them to secure a Web site, what the “gotchas” are when you’re trying to set up mobile devices to synchronize with an Exchange server, etc. So we’re going to attempt, over a few posts, to explain in layman’s terms (OK, a fairly technical layman) what it’s all about. However, before you can really understand what SSL is all about, you need to understand a little bit about cryptography.

When we were all kids, we probably all played around at one time or another with a simple substitution cipher – where each letter of the alphabet was substituted for another letter, and the same substitution was used for the entire message. It may have been done by simply reversing the alphabet (e.g., Z=A, Y=B, etc.), by shifting all the letters “x” letters to the right or left, or by using your Little Orphan Annie Decoder Ring. (The one-letter-to-the-left substitution cipher was famously used by Arthur C. Clarke in 2001: A Space Odyssey to turn “IBM” into “HAL” - the computer that ran the spaceship.)

The problem with such a simple cipher is that it may fool your average six-year-old, but that’s about it – because (among other things) it does nothing to conceal frequency patterns. The letter “e” is, by far, the most frequently used letter in the English language, followed by “t,” “a,” “o,” etc. (If you want the full list, you can find it at http://en.wikipedia.org/wiki/Letter_frequency.) So whichever letter shows up most frequently in your encoded message is likely to represent the letter “e,” and so forth…and the longer the message is, the more obvious these patterns become. It would be nice to have a system that used a different substitution method for each letter of the message so that the frequency patterns are also concealed.
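A few lines of Python make both points – the shift cipher itself, and the frequency patterns it fails to hide. The function name is mine, and the message is just an illustration:

```python
from collections import Counter

def shift(text, k):
    """Caesar-style substitution: shift each letter k places (A-Z only)."""
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in text)

print(shift("IBM", -1))  # the one-letter-to-the-left cipher: HAL

# Why it's weak: the same substitution everywhere preserves letter
# frequencies, so the most common ciphertext letter probably stands
# for the most common plaintext letter ("e" in English).
plaintext = "MEET ME AT THE USUAL PLACE BEFORE THE EVENT".replace(" ", "")
ciphertext = shift(plaintext, 3)
print(Counter(ciphertext).most_common(3))  # "H" (shifted "E") dominates
```

Run a frequency count like this on any reasonably long message and the cipher unravels in minutes, decoder ring or no decoder ring.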

One approach to this is the so-called “one-time pad,” which is provably unbreakable if it is properly implemented. This is constructed by selecting letters at random, for example, drawing them from a hopper similar to that used for a bingo game. A letter is drawn, it’s written down, then it goes back into the hopper, which is again shuffled, and another letter is drawn. This process is continued until you have enough random letters written down to encode the longest message you might care about. Two copies are then made: one which will be used to encode a message, and the other which will be used to decode it. After they are used once, they are destroyed (hence the “one-time” portion of the name). One-time pads were commonly used in World War II to encrypt the most sensitive messages.

To use a one-time pad, you take the first letter of your message and assign it a numerical value of 1 to 26 (1=A, 26=Z). Then you add to that numerical value the numerical value of the first letter of the pad. That gives you the numerical value of the first letter of your cyphertext. If the sum is greater than 26, you subtract 26 from it. This kind of arithmetic is called “modulo 26,” and while you may not have heard that term, we do these kinds of calculations all the time: If it’s 10:00 am, and you’re asked what time it will be in five hours, you know without even thinking hard that it will be 3:00 pm. Effectively, you’re doing modulo 12 arithmetic: 10 + 5 = 15, but 15 is more than 12, so we have to subtract 12 from it to yield 3:00. (Unless you’re in the military, in which case 15:00 is a perfectly legitimate time.) So as we work through the following example, it might be helpful to visualize a clock that, instead of having the numbers 1 – 12 on the face, has the letters A – Z…and when the hand comes around to “Z,” it then starts over at “A.”

Let’s say that your message is, “Hello world.” Let’s further assume that the first ten characters of your one-time pad are: DKZII MIAVR. (By the way, I came up with these by going to www.random.org, and using their on-line random number generator to generate ten random numbers between 1 and 26.) So let’s write out our message – I’ll put the numerical value of each letter next to it in parentheses – then write the characters from the one-time pad below them, and then do the math:

  H(8)  E(5)  L(12) L(12) O(15) W(23) O(15) R(18) L(12) D(4)
+ D(4)  K(11) Z(26) I(9)  I(9)  M(13) I(9)  A(1)  V(22) R(18)

= L(12) P(16) L(12) U(21) X(24) J(10) X(24) S(19) H(8)  V(22)

So our cyphertext is: LPLUX JXSHV. Note that, in the addition above, there were three times (L + Z, W + M, and L + V) when the sum exceeded 26, so we had to subtract 26 from that sum to come up with a number that we could actually map to a letter. Our recipient, who presumably has a copy of the pad, simply reverses the calculation by subtracting the pad from the cyphertext to yield the original message.
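The same modulo-26 arithmetic in Python reproduces the example above (the helper names are mine):

```python
def val(c):
    """A=1 ... Z=26, as in the text."""
    return ord(c) - 64

def letter(v):
    return chr(v + 64)

def otp(text, pad, decrypt=False):
    """Add (or, to decrypt, subtract) the pad letter-by-letter, modulo 26."""
    out = []
    for m, k in zip(text, pad):
        v = val(m) - val(k) if decrypt else val(m) + val(k)
        out.append(letter((v - 1) % 26 + 1))  # wrap back into the 1..26 range
    return "".join(out)

cipher = otp("HELLOWORLD", "DKZIIMIAVR")
print(cipher)                                   # LPLUXJXSHV
print(otp(cipher, "DKZIIMIAVR", decrypt=True))  # HELLOWORLD
```

Note that the `(v - 1) % 26 + 1` expression is exactly the “subtract 26 if the sum exceeds 26” rule from the text, and it also handles the subtraction going negative during decryption.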

While one-time pads are very secure, you do have the logistical problem of getting a copy of the pad to the intended recipient of the message. So this approach doesn’t help us much when we’re trying to secure computer communications – where often you don’t know in advance exactly who you will need to communicate with, e.g., a banking site or a typical Internet e-commerce site. Instead, we need something that lends itself to automated coding and decoding.

During World War II, the Germans had a machine that the Allies referred to by the code name “Enigma.” This machine had a series of wheels and gears that operated in such a way that each time a letter was typed, the wheels would rotate into a new position, which would determine how the next letter would be encoded. The first Enigma machine had spaces for three wheels; a later model had spaces for four. All the recipient needed to know was which wheels to use (they generally had more wheels to choose from than the machine had spaces for) and how to set the initial positions of the wheels, and the message could be decoded. In modern terms, we would call this information the “key.”

One of the major turning points in the war occurred when the British were able to come up with a mathematical model (or “algorithm”) of how the Enigma machine worked. Alan Turing (yes, that Alan Turing) was a key player in that effort, and the roots of modern digital computing trace back to Bletchley Park and that code-breaking effort. (For a very entertaining read, I highly recommend Cryptonomicon by Neal Stephenson, in which Bletchley Park and the code breakers play a leading role.)

Today, we have computers that can perform complex mathematical algorithms very quickly, and the commonly used encryption algorithms are generally made public, specifically so that researchers will attack and attempt to break them. That way, the weak ones get weeded out pretty quickly. But they all work by performing some kind of mathematical manipulation of the numbers that represent the text (and to a computer, all text consists of numbers anyway), and they all require some kind of key, or “seed value,” to get the computation going. Therefore, since the encryption algorithm itself is public knowledge, the security of the system depends entirely on the key.

One such system is the “Advanced Encryption Standard” (“AES”), which happens to be the one adopted by the U. S. government. AES allows for keys that are 128 bits, 192 bits, or 256 bits long. Assuming there isn’t some kind of structural weakness in the AES algorithm – in which case it would presumably have been weeded out before anyone who was serious about security started using it – the logical way to attack it is to sequentially try all possible keys until you find the one that decodes the message. This is called a “brute force” attack. Of course, with a key length of n bits, there are 2^n possible keys. So every bit that’s added to the length of the key doubles the number of possible keys.

It is generally accepted that the computing power required to try all possible 128-bit keys will be out of reach for the foreseeable future, unless some unanticipated breakthrough in technology occurs that dramatically increases processing power. Of course, such a breakthrough is entirely possible, which is why AES also allows for 192-bit and 256-bit keys – and remember, a 256-bit key isn’t just twice as hard to break as a 128-bit key, it’s 2^128 times as hard. (And 2^128 is roughly equal to the digit “3” followed by 38 zeros.) Therefore the government requires 192- or 256-bit keys for “highly sensitive” data.
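The arithmetic behind those claims is easy to check for yourself:

```python
# Every additional key bit doubles the search space:
assert 2**129 == 2 * 2**128

# A 256-bit key space is 2**128 times larger than a 128-bit one,
# not merely twice as large:
assert 2**256 // 2**128 == 2**128

# And 2**128 is indeed a 39-digit number starting with 3:
print(2**128)            # 340282366920938463463374607431768211456
print(len(str(2**128)))  # 39
```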

AES uses a symmetrical key, meaning that the same key is used both to encrypt and decrypt the message, just as was the case with the old Enigma machine. In the next post of this series, we’ll talk about asymmetrical encryption systems, and try to work our way around to talking about SSL certificates.