
Cryptographically Secure Passphrases In d-note

A couple of nights ago, while coming home from work, I started thinking about the button you press in the d-note web application (an instance is running at https://secrets.xmission.com) to generate a passphrase used to encrypt your note. Each passphrase is 22 characters drawn from a base-64 alphabet. Initially, I was using the following JavaScript:

function make_key() {
    var text = "";
    var possible =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    for(var i = 22; i--;) {
        text += possible.charAt(Math.floor(Math.random() * possible.length));
    }
    return text;
}

Simple, functional, works. However, using Math.random() for each character isn't cryptographically strong. The d-note web application is known for going over the top on security engineering, and I knew I could do better. So, I did a little digging and learned about the "Web Crypto API" that many modern browsers support, despite the specification being a "working draft". I figured I could use that for my code. As such, the code morphed into the following:

function make_key() {
    var text = "";
    var possible =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    var random_array = new Uint32Array(22);

    // Make some attempt at preferring a strong CSPRNG first
    if (window.crypto && window.crypto.getRandomValues) {
        // Desktop Chrome 11.0, Firefox 21.0, Opera 15.0, Safari 3.1
        // Mobile Chrome 23, Firefox 21.0, iOS 6
        window.crypto.getRandomValues(random_array);
    }
    else if (window.msCrypto && window.msCrypto.getRandomValues) {
        // IE 11
        window.msCrypto.getRandomValues(random_array);
    }
    else {
        // Android browser, IE Mobile, Opera Mobile, older desktop browsers
        for(var i = 22; i--;) {
            random_array[i] = Math.floor(Math.random() * Math.pow(2, 32));
        }
    }

    for(var i = 22; i--;) {
        text += possible.charAt(Math.floor(random_array[i] % possible.length));
    }

    return text;
}

The Web Crypto API ensures that the browser is using a cryptographically secure, non-blocking PRNG from the operating system, such as /dev/urandom on Linux. While this works, it means that browsers that don't support the Web Crypto API are stuck with non-cryptographic passphrases. That certainly wouldn't do, so I set out to fix it.

Enter Blum Blum Shub (this sounds like something out of a Dr. Seuss book). Blum Blum Shub is a cryptographically secure PRNG, provided a few criteria are met:

  • The primes 'p' and 'q' can be static or chosen pseudorandomly.
  • The primes 'p' and 'q' should be congruent to 3 modulo 4.
  • The seed 'x0' should be chosen pseudorandomly to start the process.
  • The seed 'x0' should not be '0' or '1'.
  • The seed 'x0' should be coprime to the product of the primes 'p' and 'q'.
  • The GCD of phi(p-1) and phi(q-1) should be small (subjective?), where phi is Euler's totient function.

Unfortunately, as the product of the primes 'p' and 'q' grows, the Blum Blum Shub algorithm slows to a crawl. Fortunately and unfortunately, the largest integer JavaScript can store exactly is 53 bits, or 2^53. This means the product of 'p' and 'q' cannot exceed that value, which leaves us with roughly '2^26' and '2^27' as the upper bounds for the primes. However, this is okay, as even 'p = 11' and 'q = 19' generate a cycle that is long enough for our 22-character passphrase.
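
To make that last claim concrete, here is a tiny worked example in Python (purely illustrative; the production code below stays in JavaScript) showing the square-and-reduce walk with those small primes:

# Tiny worked example of Blum Blum Shub with the small primes mentioned above.
# p = 11 and q = 19 are both congruent to 3 mod 4, and the seed 3 is coprime to 209.
p, q = 11, 19
M = p * q           # 209
x = 3               # seed x0
for _ in range(6):
    x = (x * x) % M
    print(x)        # prints 9, 81, 82, 36, 42, 92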

Thus, the fallback branch of the code changed to:

    else {
        // Android browser, IE Mobile, Opera Mobile, other browsers
        random_array = bbs(22);
    }

This new "bbs(n)" function is the bread and butter for generating cryptographically secure pseudorandom numbers. The function takes an integer as an argument, to know how many numbers need to be generated, and returns an unsigned 32-bit integer array with that many random numbers. The code is as follows:

function gcd(x, y) {
    if(!y) return x;
    return gcd(y, x%y);
}

function seed() {
    var s = 2*Math.floor(Math.random() * Math.pow(2,31))-1; //odd
    if(s < 2) {
        return seed();
    } else {
        return s;
    }
}

function bbs(n) {
    // Blum Blum Shub cryptographically secure PRNG
    // See https://en.wikipedia.org/wiki/Blum_Blum_Shub
    var a = new Uint32Array(n);
    // Max int = 2^53 == (2^26)*(2^27) -> (2^p1)*(2^p2)
    var p1 = Math.floor(Math.random()*2)+25; // first power, 25 or 26
    var p2 = 51-p1; // second power
    var p = random_prime(2*Math.floor(Math.random() * Math.pow(2,p1))-1);
    var q = random_prime(2*Math.floor(Math.random() * Math.pow(2,p2))-1);
    var s = seed();

    // Ensure each quadratic residue has one square root which is also quadratic
    // residue. Also, gcd(totient(p-1),totient(q-1)) should be small to ensure a
    // large cycle length.
    while(p%4 != 3 || q%4 != 3 || gcd(totient(p-1),totient(q-1)) >= 5) {
        p = random_prime(2*Math.floor(Math.random() * Math.pow(2,p1))-1);
        q = random_prime(2*Math.floor(Math.random() * Math.pow(2,p2))-1);
    }

    // s should be coprime to p*q
    while(gcd(p*q, s) != 1) {
        s = seed();
    }

    for(var i = n; i--;) {
        s = Math.pow(s,2)%(p*q);
        a[i] = s;
    }

    return a;
}

The key to this function is generating primes 'p' and 'q' that meet the requirements outlined earlier in the post. The primes can be static, with only the seed pseudorandomly generated, or the primes can also be pseudorandomly generated. I went with the latter. So, the only trick to getting these values is testing for primality.

The standard way to test for primality is trial division. Unfortunately, it's slow. Sure, there are a few optimization techniques you can apply to speed things up, but by and large, it's not an efficient way of attacking the problem. Instead, I applied the Miller-Rabin test for composites. Miller-Rabin uses "witnesses" that "testify" about the compositeness of a positive integer. The more witnesses there are testifying that your integer is composite, the more certain we are that it is. If no witness is found for a positive integer, we can only say that the integer "may or may not be prime". However, after testing enough witness candidates (for integers below known bounds), the result actually becomes deterministic.

I'll stop going over the algorithm there, and let you read up on the Wikipedia article about it. Suffice it to say, the Miller-Rabin primality test runs in O(k*log(n)^3), where 'k' is the accuracy that is sought. This is good news for Blum Blum Shub, as it's slow enough already. Code below:

function modProd(a,b,n){
    if(b==0) return 0;
    if(b==1) return a%n;
    return (modProd(a,(b-b%10)/10,n)*10+(b%10)*a)%n;
}

function modPow(a,b,n){
    if(b==0) return 1;
    if(b==1) return a%n;
    if(b%2==0){
        var c=modPow(a,b/2,n);
        return modProd(c,c,n);
    }
    return modProd(a,modPow(a,b-1,n),n);
}

function isPrime(n){
    // Miller-Rabin primality test taken from
    // http://rosettacode.org/wiki/Miller-Rabin_primality_test#JavaScript
    // O(k*log(n)^3) worst case, given k-accuracy
    if(n==2||n==3||n==5) return true;
    if(n%2==0||n%3==0||n%5==0) return false;
    if(n<25) return true;
    for(var a=[2,3,5,7,11,13,17,19],b=n-1,d,t,i,x;b%2==0;b/=2);
    for(i=0;i<a.length;i++) {
        x=modPow(a[i],b,n);
        if(x==1||x==n-1) continue;
        for(t=true,d=b;t&&d<n-1;d*=2){
              x=modProd(x,x,n); if(x==n-1) t=false;
        }
        if(t) return false;
    }
    return true;
}

function random_prime(n) {
    while(!isPrime(n)) n -= 2;
    return n;
}

The primes 'p' and 'q' must then pass some rigorous tests to ensure that the algorithm has a long cycle, and that the seed doesn't fall into a trap where it degenerates and the algorithm cannot recover. Part of these tests uses Euler's totient function. The totient function does nothing more than count what are called "totatives". A "totative" is a number that is coprime to a given number 'n'; in other words, the totative and 'n' share no common divisor other than '1'. Naively, every positive integer from 1 through 'n' must be tested, which is slow. However, Euler's product formula shows that we can arrive at the same result in O(sqrt(n)/3) time. Again, because Blum Blum Shub is slow, we need to keep our processor time down.

function totient(n) {
    // compute Euler's totient function
    // O(sqrt(n)/3) worst case
    // Taken from:
    // https://en.wikipedia.org/wiki/Talk:Euler%27s_totient_function#C.2B.2B_Example
    if(n < 2) return n;
    var phi = n;
    if (n % 2 == 0) {
        phi /= 2;
        n /= 2;
        while(n % 2 == 0) n /= 2;
    }
    if (n % 3 == 0) {
        phi -= phi/3;
        n /= 3;
        while(n % 3 == 0) n /= 3;
    }
    for(var p = 5; p * p <= n;) {
        if(n % p == 0) {
            phi -= phi/p;
            n /= p;
            while(n % p == 0) n /= p;
        }
        p += 2;
        if(p * p > n) break;
        if(n % p == 0) {
            phi -= phi/p;
            n /= p;
            while(n % p == 0) n /= p;
        }
    }
    if(n > 1) phi -= phi/n;
    return phi;
}

Putting everything together, the Blum Blum Shub algorithm on my workstation can produce approximately 1 million random numbers per second for a 53-bit space. We only need 22 numbers, so that should be sufficient, even for weaker devices. I was able to test the code successfully on a number of browsers that do not support the Web Crypto API, and the key generation is near instantaneous, even on my phone. Interacting with the console, here is the output of one call to the algorithm:

> bbs(22);
[3143943020, 475278844, 386457630, 124718623, 280175014, 2600881459,
127152064, 749398749, 2269393658, 692609408, 1218408987, 1523732228,
1265360812, 1641372390, 2500929554, 2223592103, 2462017186, 310616491,
426752821, 2973180471, 2248877527, 574751875]

Those numbers are cryptographically strong, because they are computed modulo the product of two secret primes, which means using the sequence to recover the primes is at least as difficult as the factoring problem. It also turns out that predicting the output may be as difficult as computing modular square roots, which is at least as difficult as the factoring problem.

So, even though most users visiting a site hosting d-note will likely have a browser with the Web Crypto API, some will not. For them, should they choose to have the browser generate the passphrase used to encrypt their note server-side, they can rest assured that the result is cryptographically strong.

Officially Announcing d-note Version 1.0

I've been looking forward to this post. Finally, on my birthday, it's here. My Python Flask web application for encrypted self-destructing notes is stable, and ready for production use.

History
Around 2011 or so, I started thinking about a way to send data privately and securely to friends, family and coworkers, without requiring them to spend a great deal of time setting up PGP keys or learning a great deal about encryption and security. Basically, I wanted a way to send them an encrypted note that could be decrypted transparently with very little work, or at most by providing a passphrase. I knew this would need to be a web application, but I wasn't sure of the implementation details. Eventually, that got worked out, and d-note was born in early January of this year.

Shortly after that release, I started focusing on a script that would generate random ASCII art with OpenPGP keys. There will be some additional news about that project I'm excited about, but will announce at a later date. As a result of that OpenPGP project, d-note development took the back seat for a bit. However, a recent pull request from Alan Dawson brought focus back to the code.

Changes And Increased Security
The changes that Alan Dawson introduced switched the cipher from Blowfish in ECB mode to AES-128 in CBC mode with HMAC-SHA1. This is something that I initially wanted to support, but didn't for two reasons:

  • Without thinking about PBKDF2, AES keys must be 16, 24 or 32 bytes in size. This prevented users from entering personal passwords to encrypt the note, rather than the server-side key.
  • Also, without thinking about PBKDF2, Blowfish was intentionally designed to make the key setup slow, so brute forcing the password out of the encrypted file would be more difficult.

However, Blowfish uses a 64-bit block size for its internal operations, while AES uses a 128-bit block. This might be of little consequence for a standard plaintext note, but for encrypting files, which I would like to support in the long term, it could mean the difference between repeated ciphertext blocks with Blowfish and none with AES.

Since the changes introduced by Alan, I've increased the security of the application to AES-256 in CTR mode with HMAC-SHA512. The application relies on the Python Cryptography Toolkit (PyCrypto) version 2.6. Version 2.7 introduces new authenticated block cipher modes, such as GCM. With GCM, a separate HMAC-SHA512 tag will no longer be needed. However, switching modes will break previously encrypted notes that have not yet been decrypted, so this will need to be handled with care. Also, when SHA3 is standardized, I would like to switch to it if it is introduced into PyCrypto, rather than use SHA512, even though there have been no serious security concerns over SHA2.

The Encryption Process Visualized Without A Custom Passphrase
I wanted to at least show what the encryption process looks like from top-to-bottom visually, so you wouldn't need to piece together the code and figure it out.

First, the application creates three static salts at random. Each salt is 16 bytes in size and should be different from the other two, although this isn't a requirement. The salts remain static as long as you wish; they are the only random data that is not deleted, but instead saved on the server. Changing the salts means any previously encrypted notes still on the server can no longer be decrypted. As such, once generated, they should be left alone. However, if no encrypted notes remain on the server to be decrypted, the web server could be stopped, the salts changed, and the web server started back up. The only reason to do this is if you feel the salts have been compromised in some way.

3-static-salts

When a browser renders the main index.html page, we need to create a unique and random URL to post to. As such, a one-way function is used to generate three random keys, from which we build everything else. First, we generate a random 16-byte nonce. This nonce is the starting input to the one-way function that builds everything needed for encrypting and decrypting the note.

random-nonce

All of the security of this application rests on the ability to generate a cryptographically secure random nonce on every page view. We've taken the steps necessary to ensure that not only is that nonce cryptographically strong, but all the building blocks derived from it follow industry best practices, and are over-engineered at that. Our one-way function starts by feeding the salts along with our nonce into PBKDF2. PBKDF2 is a password-based key derivation function that can derive a pseudorandom key of any size, given a secret and a salt. As such, we use our three salts and the nonce to generate a 16-byte file key, a 32-byte AES key, and a 64-byte HMAC key. Notice that we use each salt only once.

pbkdf2

Now that we have a file key, we can base-64 encode it, and that becomes the file name our encrypted data is saved to. This is different from the initial release, where the URL was also the file name. Now the URL can produce the file name, but the reverse is not true.

filename

Finally, our nonce is base-64 encoded, and becomes the URL that we post to, and the URL that we give to the recipient.

url-creation
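
To make the derivation chain concrete, here is a minimal Python sketch of the idea (this is not the actual d-note code; it uses hashlib's PBKDF2, the iteration count is an assumption, and the names are mine):

import base64
import hashlib
import os

# Illustrative 16-byte salts; the real application generates its own once and keeps them.
salt_file, salt_aes, salt_hmac = (os.urandom(16) for _ in range(3))

nonce = os.urandom(16)    # fresh 16-byte nonce for every index.html view

def derive(salt, length):
    # PBKDF2 as the one-way function (HMAC-SHA512 and 100,000 rounds are assumptions)
    return hashlib.pbkdf2_hmac('sha512', nonce, salt, 100000, dklen=length)

file_key = derive(salt_file, 16)    # 16-byte file key
aes_key  = derive(salt_aes,  32)    # 32-byte AES key
hmac_key = derive(salt_hmac, 64)    # 64-byte HMAC key

file_name = base64.urlsafe_b64encode(file_key).rstrip(b'=')   # on-disk file name
url_path  = base64.urlsafe_b64encode(nonce).rstrip(b'=')      # URL handed to the sender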

At this point, the user copies their data into the form. The user has the option of either using a personal passphrase to encrypt the note, or letting the server use its own derived key. It's important to note that, thanks to the nonce above, the same AES key is never used to encrypt multiple notes, unless the same passphrase is deliberately chosen for each. Because this is symmetric encryption server-side, we don't want a single derived AES key to expose multiple encrypted files if it is ever recovered. So the random nonce generation at each index.html view is critical.

When the note is posted to the server, it is first compressed using the zlib compression library. This removes structure from the plaintext and increases the entropy density of the data before the encrypted note is finally saved to disk. It should be mentioned that the note is never written to disk until it is fully encrypted.

plaintext-compression

We are now ready to ship the compressed plaintext off to AES for encryption. First, we need to generate a random 12-byte initial value for our counter. We need to do this because we will be encrypting the note with AES-256 in CTR mode, and I would like to protect the end users from backups. CTR mode uses a counter, typically starting at "1", to ensure that identical plaintext blocks do not encrypt to identical ciphertext blocks. However, if the same AES key and counter sequence are shared between two plaintexts, then an XOR of the two ciphertexts reveals the XOR of the two plaintexts. As such, for every encryption we use that random 12-byte initial value to practically guarantee that the starting point of the AES counter is always different, even if the same AES key is used for multiple plaintexts. The initial value is 12 bytes in size, rather than 16, to ensure that we still have plenty of counting space during the encryption process.

We also need our pseudorandom 32-byte AES key that we generated from our earlier PBKDF2 function. Using this 32-byte key, our 12-byte initial value, and our compressed plaintext, we encrypt the data. This encryption process gives us a preliminary ciphertext. It is preliminary in that we have additional work to do before we are ready to store it on disk.

aes-encryption

Because we used an initial value to start our AES-256 CTR counter, we will also require that initial value for decryption, so we need to store it with the preliminary ciphertext. The initial value is therefore prepended to the ciphertext. Because the initial value is random data, and the ciphertext should appear as random data, prepending it should not reveal any data boundaries or leak any information about the stored contents. Prepending the initial value gives us an intermediate ciphertext 12 bytes larger.

iv-prepend

The final step in the encryption process is to ensure data integrity and provide authentication by using HMAC-SHA512. The reason for this is that non-authenticated cipher modes can suffer from practical malleability attacks, where tampering with the ciphertext manipulates or reveals the plaintext. Thus, the intermediate ciphertext is run through HMAC-SHA512. This is known as "encrypt-then-MAC" (EtM), and is the preferred way of applying MAC tags.

HMAC allows you to choose from a number of cryptographic hashing algorithms; in our case, SHA512 is used, because we can. HMAC requires both a key and a message. In our case, because the user is not providing a passphrase to the application, the key is the 64-byte pseudorandom HMAC key that we generated with PBKDF2 earlier. The intermediate ciphertext and this 64-byte key are fed to the HMAC-SHA512 function to generate a 64-byte SHA512 tag.

hmac-sha512

This 64-byte SHA512 tag is prepended to our intermediate ciphertext to produce our final ciphertext document, which is then written to disk in binary form. Initially, I base-64 encoded the ciphertext, as is common practice, but this just increases the disk space used by roughly a third and adds code, with no real benefit other than being able to view the ciphertext in a text editor or on your terminal. So, the raw binary is stored instead.

hmac-prepend
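
Putting the whole pipeline together, here is a rough sketch of the steps just described, using the PyCrypto primitives the post mentions (this is my own condensation, not the d-note source; names and layout follow the description above):

import hashlib
import hmac
import os
import zlib
from Crypto.Cipher import AES
from Crypto.Util import Counter

def encrypt_note(plaintext, aes_key, hmac_key):
    # 1. Compress the plaintext before encryption
    compressed = zlib.compress(plaintext)
    # 2. AES-256 in CTR mode, with a random 12-byte counter prefix
    #    (leaves 4 bytes of counting space in the 16-byte counter block)
    iv = os.urandom(12)
    ctr = Counter.new(32, prefix=iv)
    ciphertext = AES.new(aes_key, AES.MODE_CTR, counter=ctr).encrypt(compressed)
    # 3. Prepend the initial value, giving the intermediate ciphertext
    body = iv + ciphertext
    # 4. Encrypt-then-MAC: HMAC-SHA512 over the intermediate ciphertext
    tag = hmac.new(hmac_key, body, hashlib.sha512).digest()
    # 5. Prepend the 64-byte tag; this final blob is written to disk as raw binary
    return tag + body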

The Encryption Process Visualized With A Custom Passphrase
When encrypting the note with a user-supplied passphrase, the passphrase is used in place of our cryptographic nonce to generate the AES key and the HMAC key. In other words, we still use PBKDF2, combined with the appropriate salt, to generate our AES key and our HMAC key. Everything else about the encryption process is the same.

user-passphrase
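
A sketch of what that substitution looks like, continuing the illustrative hashlib example from earlier (the real parameters may differ):

import hashlib

passphrase = b"Oliver"                         # example passphrase used later in the post
salt_aes, salt_hmac = b"B" * 16, b"C" * 16     # stand-ins for the real 16-byte salts

# The passphrase replaces the nonce as the PBKDF2 secret; the salts and
# output sizes stay the same as in the server-key case.
aes_key  = hashlib.pbkdf2_hmac('sha512', passphrase, salt_aes, 100000, dklen=32)
hmac_key = hashlib.pbkdf2_hmac('sha512', passphrase, salt_hmac, 100000, dklen=64)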

So the application knows whether or not a user supplied a passphrase to encrypt the note, an empty file with a ".key" extension is created. The user-supplied passphrase itself is not stored on disk; the empty file exists only to tell the application not to use PBKDF2 with the nonce to generate the AES key, but to ask the client for the passphrase. However, if a duress key for Big Brother is supplied, then that key is indeed stored on disk. That key will not decrypt the note, but will instead return random sentences from The Zen of Python and destroy the original note.

To show the effectiveness of the 12-byte initial value discussed earlier, I'll encrypt the text "Hello Central, give me Doctor Jazz!" twice, both times with the same passphrase "Oliver". If the AES counter started with the same value in both instances, then the ciphertexts would be identical, as expected for a deterministic cipher. By starting with a random initial value for our counter, the ciphertexts should be different. Let's take a look.

Encrypting it twice, I ended up with two files "qZ-cloUfLnFYVYZImgYWmQ" and "hxlUK9GxbNSMgJZDUgEsMw". Let's look at their content:

$ ls data
hashcash.db  qZ-cloUfLnFYVYZImgYWmQ  qZ-cloUfLnFYVYZImgYWmQ.key hxlUK9GxbNSMgJZDUgEsMw  hxlUK9GxbNSMgJZDUgEsMw.key
$ cat data/qZ-cloUfLnFYVYZImgYWmQ | base64 -
CgHOJ4NTTMSNWOVbC1SduXg7jqij92o3ZHhOcCumu8DKmDKI3cyZ5Ne9dB6w7TrnxMaR8EyDf5w9
eMpMM+VLkztyfAieUWFYVFvFqJ0cY7+fgX1tzyaLs71rHywjydE9bobAfof0Bqo6j87sTJEgJTu+
BFXko1w=
$ cat data/hxlUK9GxbNSMgJZDUgEsMw | base64 -
D/fLA+UgghHQMGadJxmATifaL+JTXybRNROUDvSBzgTV6EWX7Dau9Z9zI1KpEuMbDeFNp4oZk4SN
hlVGm2k8vWwKJWhys78U6FFd1jGmKWDC61VKH7+zXEGhNf/fo/igEEEG+Jge1awQ9A0cJbsLSmoh
zgj6XH8=

This should not be surprising, knowing how CTR mode works with block ciphers:

ctr-encryption

As long as that IV is different, the ciphertexts should be different, and one should not be able to extract the secret key that encrypted the plaintext, even if the key is reused across multiple plaintexts. Let's see what the IV was for each, using Python:

>>> msg = """CgHOJ4NTTMSNWOVbC1SduXg7jqij92o3ZHhOcCumu8DKmDKI3cyZ5Ne9dB6w7TrnxMaR8EyDf5w9
... eMpMM+VLkztyfAieUWFYVFvFqJ0cY7+fgX1tzyaLs71rHywjydE9bobAfof0Bqo6j87sTJEgJTu+
... BFXko1w="""
>>> msg = msg.decode('base64')
>>> data = msg[64:] # remember, the first 64-bytes are the SHA512 tag
>>> iv = data[:12]
>>> long(iv.encode('hex'),16)
18398018855321261569467991464L
>>> msg = """D/fLA+UgghHQMGadJxmATifaL+JTXybRNROUDvSBzgTV6EWX7Dau9Z9zI1KpEuMbDeFNp4oZk4SN
... hlVGm2k8vWwKJWhys78U6FFd1jGmKWDC61VKH7+zXEGhNf/fo/igEEEG+Jge1awQ9A0cJbsLSmoh
... zgj6XH8="""
>>> msg = msg.decode('base64')
>>> data = msg[64:]
>>> iv = data[:12]
>>> long(iv.encode('hex'),16)
21167414188217590509613634382L

Decryption
Not much really needs to be said about decrypting the notes, other than that because we are using authenticated encryption with HMAC-SHA512, when the client supplies the URL to decrypt a note, a SHA512 tag is generated dynamically from the encrypted file and compared to the tag actually stored in it. If the tags match, the plaintext is returned. If the tags do not match, something went wrong, and the client is redirected to a standard 404 error. Other than that, the code for decrypting the notes should be self-explanatory.
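
For illustration, the verify-then-decrypt flow can be sketched like this (my own reconstruction from the layout described above, not the d-note source; names are illustrative):

import hashlib
import hmac
import zlib
from Crypto.Cipher import AES
from Crypto.Util import Counter

def decrypt_note(blob, aes_key, hmac_key):
    # Layout described above: tag(64 bytes) || iv(12 bytes) || ciphertext
    tag, body = blob[:64], blob[64:]
    expected = hmac.new(hmac_key, body, hashlib.sha512).digest()
    if not hmac.compare_digest(tag, expected):
        return None                     # caller turns this into a 404
    iv, ciphertext = body[:12], body[12:]
    ctr = Counter.new(32, prefix=iv)    # same 12-byte prefix counter as encryption
    compressed = AES.new(aes_key, AES.MODE_CTR, counter=ctr).decrypt(ciphertext)
    return zlib.decompress(compressed)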

Conclusion
I hope you enjoy the self-destructing encrypted notes web application! If there are any issues or concerns, please be sure to let me know.

Going Google Free

On June 25, 2004 I received the following email:

To: Aaron Toponce
From: Gmail Team
Subject: Gmail is different. Here's what you need to know.
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

First off, welcome. And thanks for agreeing to help us test Gmail. By now you probably know the key ways in which Gmail differs from traditional webmail services. Searching instead of filing. A free gigabyte of storage. Messages displayed in context as conversations.

So what else is new?

Gmail has many other special features that will become apparent as you use your account. You’ll find answers to most of your questions in our searchable help section, which includes a Getting Started guide. You'll find information there on such topics as:

  • How to use address auto-complete
  • Setting up filters for incoming mail
  • Using advanced search options

You may also have noticed some text ads or related links to the right of this message. They're placed there in the same way that ads are placed alongside Google search results and, through our AdSense program, on content pages across the web. The matching of ads to content in your Gmail messages is performed entirely by computers; never by people. Because the ads and links are matched to information that is of interest to you, we hope you'll find them relevant and useful.

You're one of the very first people to use Gmail. Your input will help determine how it evolves, so we encourage you to send your feedback, suggestions and questions to us. But mostly, we hope you'll enjoy experimenting with Google's approach to email.

Speedy Delivery,

The Gmail Team

p.s. You can sign in to your account any time by visiting http://gmail.google.com

On June 25, 2014, I'll be closing my Google account for good. It seems fitting to end it on that date, exactly 10 years after first receiving it. I've already migrated all my documents, my contacts and my calendars to my personal Owncloud instance. When Google shut down Reader, I moved to TTRSS. Last week, I replaced the ROM on my phone with Cyanogenmod, and have not installed or enabled any Google accounts or applications, other than using K-9 for my email. I've already migrated my chat to my own personal XMPP server. The only thing that remains is email.

I will miss the community at Google+. I've made a great many friends there, and have had some amazing discussions. Sadly, it's time to say goodbye. You can find me on Twitter @AaronToponce, and I've usually cross-posted my articles from Plus on Twitter also. We can still have the amazing discussions there. Or, you can find me on IRC under the nick 'eightyeight' on the Freenode, OFTC and XMission IRC networks.

There are many reasons why I'm leaving the Google-verse. They include:

  • Cooperation with NSA spying.
  • Amassing vast amounts of data collection from hundreds of products.
  • Support for proprietary software.
  • A large monopolistic monoculture.

I appreciate that they will allow me to export my data, and delete it off of their servers. Their Takeout service isn't entirely evil. And their Google Summer of Code is to be commended. I appreciate that they have built Android, and submit back to the Linux kernel, and I love that they financially support Free Software conferences. However, despite all the good they have done, their toes are touching the "Do No Evil" line, and it's creepy.

I am going to self-host my data. I want as much control over my data as humanly possible. Using Google's services, means losing some of that control, and I just can't tolerate it any longer. This means work and some growing pains, but I'm ready for the trials.

Goodbye Google.

OpenPGP Key Random Art, Now With ANSI Color Support

I just recently committed ANSI color support to my OpenPGP key random art Python script. The idea is to create a "heat map" of which squares the drunken bishop has traversed during his dizzying travels. So not only can you see what your key "looks" like, but now you can sense what your key "feels" like.

The idea is simple: squares that have not been walked have no color applied to them. Squares on the floor with fewer visits are "cold" blue, while squares that have been visited many times are "hot" red, approaching white hot for even higher frequencies. There is a direct one-to-one relationship between the coin and the color. The point isn't to introduce additional entropy or to reduce collisions, but merely entertainment value. For PGP keysigning parties, standard plain text output should be the default, preventing anyone from being pigeonholed into specific software (browsers, PDF readers, etc.).

Right now, the Python script is under very active development, so there are many things missing, such as switches to turn on foreground and background colors, or to enable and disable ANSI entirely. For the time being, ANSI is the default, and the script will fall back to plain text if the TTY or STDIN does not support ANSI escape sequences. The next step is to start working on command line switches. I haven't decided if ANSI color should be the default or not.

Screenshots of the heat map, with a key chosen at random from my keyring, are below for both foreground and background colors. Note: the commands lack switches to differentiate between foreground and background display; I changed the code manually between the screenshots. Also, I might adjust the heat map, adding one or two extra pink-to-white colors and bringing red closer to blue, so a larger variety of color can be displayed.

Screenshot showing ANSI foreground color for an OpenPGP key.

Screenshot showing ANSI background color for an OpenPGP key.

Gunnar Glasses Review

I've owned a pair of GUNNAR Optiks for a couple of months now, and figured it would probably be a good time for a review. For those unaware, GUNNAR Optiks are specially designed eyewear for long exposure computer use. The claims on the main site are:

  • Eyes are more moist.
  • Headaches are reduced.
  • Blurry vision is reduced.
  • Eyes are no longer fatigued.

They claim these points are met by the curvature of the eyewear and the unique color of the yellow lens. I'll review each individually.

The curvature of the glasses brings the lens in more closely to the face. The GUNNAR claim is that your eye produces natural moisture that humidifies the immediate air surrounding the eye. By bringing the lens in more closely to the face, that humidity is trapped in the area surrounding the eye, preventing the eye from drying out. As someone who does not wear glasses regularly, I was skeptical of this claim. However, after wearing them for 8 hours per day, every day, I can most certainly feel that my eyes are much more moist than without. People who regularly wear glasses are probably already aware of this. In fact, I've been fighting allergies this season, and my eyes have been abnormally moist. Wearing the GUNNARS has actually complicated the problem, to where I am constantly wiping my eyes to remove the excessive tearing.

The yellow color tint of the lens claims to neutralize the high-intensity blues that artificial light provides, such as the fluorescent and LED bulbs common in computer monitors. This actually didn't surprise me all that much, as the lens of your eye yellows as you get older due to UV radiation from the sun. On the color wheel, yellow is complementary to blue. Complementary colors of light, when combined correctly, produce white light. So, using a yellow tint to neutralize a blue light, providing a more balanced color spectrum, is something photographers have known for a long time.

Due to the yellow tinting of the glasses, the claim is that headaches, fatigued eyes and blurry vision are reduced or eliminated. I was highly skeptical of these claims, but after wearing them for a full day, I have noticed how much more "comfortable" my eyes feel at the end of the day. Indeed, I do not have the typical eye fatigue that I normally have when driving home. I've never encountered headaches working in front of a computer all day, but I do get blurry vision, and the GUNNARS have most certainly reduced it. Even my wife has noticed how much less grumpy and fatigued I am when coming home after a full day of work.

After wearing the GUNNARS for the past couple of months, I've concluded that they do indeed hold up to the claims they make. There is something there to the science behind the eyewear. In fact, I've come to prefer wearing them when watching movies and television as well. Even when sitting in front of this 17" monitor writing this post without wearing the glasses, I am beginning to notice eye strain. I do feel like I look a bit kooky when walking around the office with them on my face, but they work, and they work well.

All isn't positive, however. The GUNNARS I purchased were from a friend who was getting migraines as a result of wearing them. The pair is non-prescription, so, any more than with typical sunglasses or ski goggles, I don't understand how changing the tint of what his eye sees would cause a migraine, but it was certainly the case for him. For myself, my life has been improved, not degraded, by wearing the GUNNARS.

If someone were to ask me if I recommend wearing GUNNARS for gaming or general computing, it would certainly be in the affirmative.

Analysis of RIPEMD-160

Recently on Hacker News, I noticed a table showing the "Life cycles of popular cryptographic hashes" by Valerie Aurora (in this post, I've greatly compressed her HTML for faster page delivery).

Life cycles of popular cryptographic hashes (the "Breakout" chart), by Valerie Aurora. The chart tracks Snefru, MD4, MD5, MD2, RIPEMD, HAVAL-128, SHA-0, SHA-1, RIPEMD-128 [1], RIPEMD-160, the SHA-2 family [2], and SHA-3 (Keccak) from 1990 through 2012, marking each function's status each year as Unbroken, Weakened, Broken, or Deprecated. (The color-coded cells of the original chart do not reproduce in plain text.)
[1] Note that 128-bit hashes are at best 2^64 complexity to break; using a 128-bit hash is irresponsible based on sheer digest length.
[2] In 2007, the NIST launched the SHA-3 competition because "Although there is no specific reason to believe that a practical attack on any of the SHA-2 family of hash functions is imminent, a successful collision attack on an algorithm in the SHA-2 family could have catastrophic effects for digital signatures." One year later the first strength reduction was published.
The Hash Function Lounge has an excellent list of references for most of the dates. Wikipedia now has references to the rest.

I find this table a great resource, and I'm glad she put it online. However, I have one small issue with the table (other than that it's out of date and covers few functions), and that's her calling RIPEMD-160 "deprecated". My first question would be: deprecated by whom exactly? RIPEMD-160 isn't a FIPS standardized cryptographic hash function, so it couldn't be deprecated by NIST. RIPEMD-160 was actually developed in Belgium, and as far as I can tell, the Belgian NBN - Bureau for Standardisation hasn't deprecated it either. It is standardized by CRYPTREC in Japan, and also has not been officially deprecated there, as far as I can tell, although it is on their "monitored list".

It could be considered deprecated if there were known attacks that greatly weaken the algorithm, or if known collisions existed, as with MD5. However, RIPEMD-160 has no known weaknesses or collisions. The simplified versions of RIPEMD do have problems, however, and should be avoided. But as it stands, RIPEMD-160 is still considered "strong" and "cryptographically secure". Given that it was first published in 1996, almost twenty years ago, in my opinion, that's impressive. Compare SHA-1, another 160-bit digest, which was first published in 1995: the first published attack against SHA-1 appeared just 8 years later, in 2003, and attacks have been pouring out since.

In fact, maybe Valerie is calling RIPEMD-160 "deprecated" because it's old, and there are plenty of other functions with larger digests, such as the newly proposed FIPS standard SHA3/Keccak. The only problem with that is that while we may be able to call these secure now, they too could fall victim to various attacks and be severely weakened, while RIPEMD-160 could still be considered strong and cryptographically secure.

Granted, RIPEMD-160 has not received the attention that SHA1 has, so it likely has not gotten the same mathematical scrutiny from the cryptographic community. I understand that. However, RIPEMD-160 is part of the OpenPGP standard, and is available in many cryptographic libraries for many different programming languages. While it's not the running favorite of cryptographers and developers, it also hasn't been overlooked.

The only concern I would have with RIPEMD-160 is the 160-bit output digest size. Is that large enough to withstand a sophisticated brute force search? To answer that question, we can look at the Bitcoin distributed network, likely the largest distributed computing network in the world. The Bitcoin network, at the time of this post, is calculating approximately 60 million billion SHA256 hashes per second, largely using specialized hardware called "ASICs". 60 million billion is 60 quadrillion, or 6x10^16. A 160-bit space contains approximately 1.5x10^48 values. This means it would take the Bitcoin distributed network approximately 2.4x10^31 seconds, or about 7.7x10^23 years, to completely exhaust the RIPEMD-160 digest space. So even at the amazing brute force pace of 60 million billion hashes per second, it's still unreasonable to find legitimate collisions for a 160-bit digest. Even using the Birthday Attack, the likelihood of finding a collision is 50% in 890,362,173,273 years at that pace. RIPEMD-160 still stands strong.
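
For anyone who wants to reproduce the back-of-the-envelope arithmetic, here is a quick check in Python (rates rounded as above; values are approximate):

# Rough check of the exhaustion figures above.
rate = 60e15                             # ~60 million billion hashes per second
space = 2 ** 160                         # ~1.46e48 possible 160-bit digests
seconds = space / rate                   # ~2.4e31 seconds to exhaust the space
years = seconds / (3600 * 24 * 365)      # ~7.7e23 years
print("%.1e seconds, %.1e years" % (seconds, years))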

Now, I'm an amateur cryptographer, and a lousy one at that. But my Internet searching has left me empty handed in trying to find any resource that successfully attacks, weakens, or criticizes RIPEMD-160, let alone calling it "deprecated". From everything I can tell, it's withstood the test of time, and it's still going very, very strong.

SHA3 (Keccak) in Linux

For a long time, I've been waiting to use the newly accepted SHA3 in Linux for file integrity and other uses. Like md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), and sha512sum(1), I was hoping that a similar "sha3-224sum(1)", etc. would be developed and make its way into the standard GNU/Linux tools. Unfortunately, I kept waiting and waiting, until eventually I just stopped worrying about it. Well, to my surprise, it appears that there is a package that ships SHA3 as accepted by NIST: the rhash package (it also does a number of other hashes as well).

Keccak was chosen by NIST as the SHA3 winner due to its performance, security and construction. Keccak uses a sponge construction for creating the cryptographic hash, which truly sets it apart from SHA1 and SHA2. This means any successful attack against SHA1 or SHA2 will likely be ineffective against SHA3. SHA3 claims 12.5 cycles per byte on an Intel Core 2 CPU in a software implementation. Unfortunately, it appears that the SHA3 code in the rhash package still needs some optimization, as SHA2, which requires more cycles per byte due to its construction, can calculate a SHA2-256 hash faster than rhash calculates a SHA3-256 hash. SHA3 support was added to rhash in September 2013.

The implementation of SHA3 in rhash uses the original Keccak function as accepted by NIST. This means it does not append the two padding bits "01" to the message. It should be noted that SHA3 is only a FIPS draft as of the time of this blog post; as such, outputs could change until the standard is formalized.

Below are examples of hashing:

$ echo -n "" | rhash --sha3-224 -
f71837502ba8e10837bdd8d365adb85591895602fc552b48b7390abd  (stdin)
$ echo -n "" | rhash --sha3-256 -
c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470  (stdin)
$ echo -n "" | rhash --sha3-384 -
2c23146a63a29acf99e73b88f8c24eaa7dc60aa771780ccc006afbfa8fe2479b2dd2b21362337441ac12b515911957ff  (stdin)
$ echo -n "" | rhash --sha3-512 -
0eab42de4c3ceb9235fc91acffe746b29c29a8c366b7c60e4e67c466f36a4304c00fa9caf9d87976ba469bcbe06713b435f091ef2769fb160cdab33d3670680e  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-224 -
c59d4eaeac728671c635ff645014e2afa935bebffdb5fbd207ffdeab  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-256 -
578951e24efd62a3d63a86f7cd19aaa53c898fe287d2552133220370240b572d  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-384 -
9ad8e17325408eddb6edee6147f13856ad819bb7532668b605a24a2d958f88bd5c169e56dc4b2f89ffd325f6006d820b  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-512 -
ab7192d2b11f51c7dd744e7b3441febf397ca07bf812cceae122ca4ded6387889064f8db9230f173f6d1ab6e24b6e50f065b039f799f5592360a6558eb52d760  (stdin)

In my limited testing, it appears that the SHA3 implementation in rhash(1) is not quite up to par, and could use some additional performance improvements. I'm sure these will be committed over time. However, it's hardly a poor performer. I've been very happy with the performance results so far.

The Core Problem With Ubuntu Releases - Little QA

The background for this post is a bug that was introduced just over a week ago, ON THE DAY BEFORE ITS RELEASE. The bug allows bypassing the lock screen by just holding down your <Enter> key, letting Unity crash, then restarting without locking the desktop. That's a pretty big bug. What's interesting about this bug, though, isn't the bug itself, but a bigger problem within the Ubuntu and Canonical development teams: the release date is more important than the quality of the release.

I blogged about this six years ago when I was an Ubuntu Member, and my blog was on the Ubuntu Planet (it's weird to think that I've had this blog now for almost 10 years. Heh). I mentioned that I was moving my server away from Ubuntu 8.04 LTS to Debian 5.0. My rationale was clear: Debian is much more interested in making sure that "Debian stable" is actually rock-solid stable. If that means that it takes three years before the next release is ready, then so be it. Stable is stable. Ubuntu could learn a lot from this.

So, what does Debian do so differently than Ubuntu, and how can Ubuntu learn from this? Let's take a look at how packages transition from the "unstable" release to the "testing" release in Debian, and how "testing" becomes "stable".

  1. After the package has been in unstable for a given length of time, it can qualify for migration to testing. This depends on each package, and the urgency of the migration.
  2. The package can only enter testing if no new release critical bugs exist. This means that the package must have fewer release critical bugs than the current package in testing.
  3. All dependencies needed for the package must be satisfiable in the current testing repository. If not, those packages must be brought in at the time the current package is, and they must meet the same criteria related to time in unstable and the number of release critical bugs.
  4. The package migrating to testing must not break any other packages currently in testing.
  5. It must be compiled for all release architectures it claims to support, and all architecture specific packages must be brought in as well, meeting the same criteria as mentioned.

Point #1 gives the package enough time for people in the community to report any immediate or obvious bugs. If nothing is reported, then it's possible that the new package version is an improvement over what is currently in testing, and as such, the package becomes a migration candidate. This doesn't mean the package migrates. Instead, it must now be evaluated in terms of quality, and this is handled by evaluating its release bug count.

Point #2 is all about release bug count. If the release bug count is equal to or higher than what is currently in testing, then the package is not an improvement, and is not going to get migrated into testing. If the release bug count is fewer than what is in testing, then migrating the package from unstable into testing improves the quality of the testing release.

Point #3 makes sure that dependencies are met with the package migration from unstable into testing. All dependencies must meet the criteria of points #1 and #2 to be candidates for migration.

Point #4 ensures that current packages in testing do not break when the new package is migrated. Not only the current package dependencies, but also other packages that might interact with it.

Point #5 brings in CPU architecture support. Debian is a universal operating system, handling many different CPU architectures. Not all CPU architectures have the same packages. But, if the package was previously compiled for a certain architecture, then it must continue to be compiled for that architecture, i.e., no orphaned packages. Points 1-4 must be met for every architecture it claims to support.

If points 1-5 are successfully met, then the package can be migrated from unstable into testing, improving the quality of the testing release. You can track the release critical bug status in the current testing release at https://bugs.debian.org/release-critical/. As of this writing, there are currently 384 release critical bugs that will prevent testing from becoming the new stable release (the green line in this graph). When the release critical bugs near zero, then and only then is a new release of Debian ready for general consumption.

Debian release critical bugs graph.

I know I've highlighted how the Debian release works, but how about other operating system vendors?

  • Windows releases when it's ready.
  • Mac OS X releases when it's ready.
  • RHEL releases when it's ready.
  • CentOS releases when it's ready (following RHEL).
  • SLES releases when it's ready.
  • Slackware releases when it's ready.
  • FreeBSD releases when it's ready.
  • NetBSD releases when it's ready.
  • Solaris releases when it's ready.

In fact, of all the operating system vendors I can find, only Ubuntu and their derivatives (Linux Mint, *buntu, etc.) and OpenBSD stick to a rigid schedule of releasing every six months. Other than Ubuntu, none of those operating system vendors see large scale mission critical server deployment. Debian does. RHEL and CentOS do. Windows certainly does. Even FreeBSD sees enormous success in the datacenter. A large part of that, I would bet, is the fact that stable means stable. "Mission critical" just is not synonymous with Ubuntu servers.

According to https://launchpad.net/ubuntu/trusty/+bugs, as of the time of this writing, almost 2 weeks after 14.04 LTS released, there are:

  • 88 New bugs
  • 301 Open bugs
  • 19 In-progress bugs
  • 10 Critical bugs
  • 78 High importance bugs
  • 4 Incomplete bugs (can expire)

Here's a beauty, reported in December 2013, against 13.10: Reinstallation wipes out all/other partitions. People are losing their Windows partitions. That's lost data. This isn't a laughing matter. And 14.04 STILL released? No way in hell would any other vendor release their OS with a bug of that size. Let's hope it's fixed before 14.10 releases.

Even though I'm critical of Ubuntu's spyware on the desktop, and their CLA and other business practices and development, I would love to see them do things right. Unfortunately, releasing every 6 months regardless is not doing things right. It doesn't inspire a lot of trust in the system administrators and executives who have to run it in their production environments.

The Drunken Bishop For OpenPGP Keys

Almost a year ago, I blogged about the drunken bishop algorithm for OpenSSH key random art. Towards the end of the post, I mentioned that I would be building an OpenPGP implementation. I started doing so in Python, but eventually got sidetracked with other things. Well, I hosted the Scale 12x PGP keysigning party, and after the fact wished that I had finished the OpenPGP drunken bishop implementation. Well, I'll be hosting another PGP keysigning party at OpenWest 2014 next month, and couldn't risk missing another keysigning party where I could introduce the drunken bishop to attendees there.

The OpenPGP implementation follows ALMOST the exact algorithm. The only differences are these:

  1. OpenSSH key fingerprints are MD5 checksums while OpenPGP key fingerprints are SHA1 checksums. As such, a larger board is introduced.
  2. The algorithm for OpenSSH key fingerprints reads the fingerprint in little endian. I felt that this unnecessarily complicates the codebase, so I read the fingerprint in big endian (see the sketch after this list). It turns out this wasn't that big of a deal code-wise, and the reason for the little-endian reading is to "mix" the key, preventing fingerprints that start the same from looking the same.
  3. For the ASCII art output, the PDF paper mentions that as the bishop visits a square more often, the ASCII character should increase in weight gradually. Unfortunately, it appears this wasn't strictly observed. I've changed the characters that are shown in the art to more accurately reflect the frequency of visits to each square. The starting 'S' and the ending 'E' remain the same.
  4. I added an extra label at the bottom of the random art to depict the 8-bit OpenPGP key ID for that art.
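
To illustrate difference #2, here is a small Python sketch of how one fingerprint byte becomes four 2-bit moves when read big-endian (this implementation) versus least-significant-pair-first, which is how I understand the OpenSSH randomart code to read it (the byte value is an arbitrary example):

# One fingerprint byte yields four 2-bit moves (00=NW, 01=NE, 10=SW, 11=SE).
byte = 0xd4    # 0b11010100
big_endian_moves    = [(byte >> shift) & 3 for shift in (6, 4, 2, 0)]   # [3, 1, 1, 0]
little_endian_moves = [(byte >> shift) & 3 for shift in (0, 2, 4, 6)]   # [0, 1, 1, 3]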

Because it's written in Python, it is slower than its C counterpart for OpenSSH. If I were motivated enough to care, I'd write it in C. But, it's plenty fast for what it's doing, and I'm sure there are performance optimizations I can make to speed things up in the Python script.

I took 9 random keys from my GnuPG public keyring, and am showing them here as a screenshot, rather than relying on the horrible rendering of web browsers:

Terminal screenshot showing 9 OpenPGP random art images

As with OpenSSH, these random art images exist to make verifying OpenPGP keys a bit easier. No doubt collisions exist, where more than one key produces the same artwork. When in doubt, verify the hex fingerprint. However, this may be a good alternate method for speeding up keysigning parties, by having each individual verify their own key, then only speaking up if they feel their key is in error. This way, parties can go straight to identification checking, thus eliminating 1/2 to 2/3 of the party. During identification checking, each individual could verify the key random art, fingerprint, etc. while checking photo identification.

A possible handout format for the random art images could look something like this:

+----[DSA 1024]-----+	
|l^  . ^^?^.        |	1) ID: 22EEE0488086060F
|l. . .l:.?         |	UID: Aaron Toponce 
|^ E ...ll          |	Fingerprint: E041 3539 273A 6534
|^. .  ^:.          |	             A3E1 9259 22EE E048
|^ . .  ..          |	Size/Type: 1024/DSA
| . .   . S         |	
|  . .   . .        |	Matches?
|   .     .         |	Fingerprint: ____ ID: ____ Signed: ____
|                   |	
|                   |	
|                   |	
+----[8086060F]-----+

Of course, this is less efficient on paper usage than the traditional table, but it could be improved.

The source code for generating the OpenPGP random art is at https://github.com/atoponce/keyart. It requires an OpenPGP public key file as an argument when executing the script, like so:

$ ./keyart 8086060F.pgp

Hopefully, PGP keysigning party organizers find this useful in streamlining the keysigning process. If not, at least it could be entertaining.

Time Based One Time Passwords - How It Works

Introduction

With all the news about Heartbleed, passwords, and two-factor authentication, I figured I would blog about exactly how two-factor authentication can work; in this case, TOTP, or time-based one-time passwords, as defined by the Initiative for Open Authentication (OATH). TOTP is defined in RFC 6238, and is an open standard, which means anyone can implement it, with no worries about royalty payments or copyright infringement. In fact, TOTP is actually just an extension of HOTP (HMAC-based one-time passwords), which is defined in RFC 4226. I'll describe HOTP a little bit in this post, but focus primarily on TOTP.

What is two-factor authentication?

First, let's describe the problem. The point of two-factor authentication is to prevent attackers from getting access to your account. Two-factor authentication requires that two tokens be provided as proof of ownership of the account. The first token is something that we're all familiar with: a username and a password. The second token is a bit more elusive, however. It should be something you have, and only you. No one else should be able to come into possession of the same token. The same should be true for usernames and passwords, but we've seen how easily broken a single-factor authentication system is.

Think of two-factor authentication as layers on an onion. The first layer is your username and password. After peeling away that layer, the second layer is a secret token. Three-factor authentication is also a thing, where you can provide something that you are, such as a fingerprint or retinal scan. That would be the third layer in our onion. You get the idea.

However, the second token should not be public knowledge. It must be kept secret. As such, a shared secret is generated between the client and the server. For both HOTP and TOTP, this is just a base-32 random number. This random number, along with a message, is turned into an HMAC-SHA1 cryptographic hash (HMAC is defined in RFC 2104, and is also described at Wikipedia).
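
Generating such a secret is trivial; for example, in Python (purely illustrative):

import base64
import os

# 10 random bytes (80 bits) encode to a 16-character base-32 shared secret,
# which both the client and the server store.
secret = base64.b32encode(os.urandom(10))
print(secret)    # a 16-character string over the alphabet A-Z and 2-7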

Importance of Time

Unfortunately, a "shared secret" is a fairly lame form of authentication. First, the user could memorize the secret, no longer making it something the user has. Second, man in the middle attacks on shared secrets are extremely effective. So, we need the ability to prevent the user from memorizing the shared secret, and we need to make man in the middle attacks exceptionally difficult. As such, we turn our shared secret into a moving target.

HOTP, as already mentioned, is the base from which TOTP comes. HOTP uses the same algorithm as described below in this post, except that rather than using time as the moving factor, an 8-byte counter is changed. We need a moving target, because if the token were static, it would be no different than just a second password. Instead, we need the attacker to be constantly guessing as to what the token could be. So, with HOTP, a token is valid until used. Once used, the counter is incremented, and the HMAC-SHA-1 string is recalculated.

TOTP uses the UNIX epoch as its time scale, in seconds. This means that for TOTP, time starts with January 1, 1970 at 00:00:00 UTC, and we count the number of seconds that have elapsed since then. By default, we only look at 30-second intervals, on the minute. This means at the top of the minute (zero seconds past the minute), TOTP is refreshed, and again 30 seconds after the minute. TOTP uses a shared secret between the client and the server, so it's important that both the client and the server clocks are synchronized. However, RFC 6238 does allow for some clock skew and drift. As such, when a code is entered, the server will allow for a 30-second window on either side of the code. This means that the code is actually valid for a maximum of 90 seconds. If using NTP on the server, and a mobile phone for the client, then clock drift isn't a concern, as both will be continuously updated throughout the day to maintain accurate time. If NTP or GSM/CDMA broadcasts are not adjusting the clock, then it should be monitored and adjusted as needed.

Computing the HMAC-SHA-1

Hash-based message authentication codes (HMAC) require two arguments to calculate the hash. First, they require a secret key, and second, they require a message. For TOTP (and HOTP), the secret key is our shared secret between the client and the server. This secret never changes, and is the foundation from which our HMAC is calculated. Our message is what changes. For TOTP, it is the time in seconds since the UNIX epoch, rounded down to the nearest 30-second interval. For HOTP, it is the 8-byte counter. This moving target will change our cryptographic hash. You can see this with OpenSSL on your machine:

$ KEY=$(< /dev/random tr -dc 'A-Z0-9' | head -c 16; echo)
$ echo $KEY
WHDQ9I4W5FZSCCI0
$ echo -n '1397552400' | openssl sha1 -hmac "$KEY"
(stdin)= f7702ad6254a06f33f7dcb952000cbffa8b3c72e
$ echo -n '1397552430' | openssl sha1 -hmac "$KEY" # increment the time by 30 seconds
(stdin)= 70a6492f088785444fc664e1a66189c6f33c2ba4
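
For illustration, the OpenSSL commands above just use the ASCII timestamp as the message. Per the RFCs, the message is actually the count of 30-second steps since the epoch, packed as an 8-byte big-endian counter. Here is a minimal Python sketch of that step, reusing the example key above (a real authenticator would first Base32-decode the shared secret; this sketch skips that for brevity):

import hashlib
import hmac
import struct

def totp_hmac(secret_key, unix_time, step=30):
    # RFC 6238: the message is the number of `step`-second intervals since
    # the UNIX epoch, packed as an 8-byte big-endian counter (as with HOTP).
    counter = int(unix_time) // step
    return hmac.new(secret_key, struct.pack(">Q", counter), hashlib.sha1).hexdigest()

# The example key and timestamps from the OpenSSL session above.
print(totp_hmac(b"WHDQ9I4W5FZSCCI0", 1397552400))
print(totp_hmac(b"WHDQ9I4W5FZSCCI0", 1397552430))  # the next 30-second step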

Suppose that our HMAC-SHA1 string is "0215a7d8c15b492e21116482b6d34fc4e1a9f6ba". We'll use this image of our HMAC-SHA-1 to help us identify a bit more clearly exactly what is happening with our token:

Image clarifying the HMAC-SHA-1 string, by using 20 divisions of 1 byte each (two characters per division).

Dynamic Truncation

Unfortunately, requiring the user to enter a 40-character hexadecimal string in 30 seconds is a bit unrealistic. So, we need a way to convert this string into something a bit more manageable, while still remaining secure. As such, we'll do something called dynamic truncation. Each hexadecimal character occupies 4 bits (16 possible values). So, we'll look at the lower 4 bits of the hash (the last character of our string) to determine a starting point from which we'll do the truncating. In our example, the last 4 bits are the character 'a':

Same 20-division image, highlighting the last character of the HMAC-SHA-1 string- character 'a'.

The hexadecimal character "a" has the same numerical value as the decimal "10". So, we will read the next 31-bits, starting with offset 10. If you think of your HMAC-SHA-1 as 20 individual 1-byte strings, we want to start looking at the strings, starting with the tenth offset, of course using the number "0" as our zeroth offset:

Image clarifying the HMAC-SHA-1 string, by using 20 divisions of 1 byte each (two characters per division).

As such, our dynamically truncated string is "6482b6d3" from our original HMAC-SHA-1 hash of "0215a7d8c15b492e21116482b6d34fc4e1a9f6ba":

Same 20-division image, highlighting the first 31-bits of the HMAC-SHA-1 string, starting with the 10th offset.

The last thing left to do is to take our hexadecimal value and convert it to decimal. We can do this easily enough at the command line:

$ echo "ibase=16; 6482B6D3" | bc
1686288083

All we need now are the last 6 digits of the decimal value, zero-padded if necessary. This is easily accomplished by taking the decimal value modulo 1,000,000. We end up with "288083" as our TOTP code:

TOTP: 288083
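
If you would rather check the truncation arithmetic with code than by hand, here is a small Python sketch that walks the same example HMAC-SHA-1 string through dynamic truncation and reproduces the "288083" code:

import struct

digest = bytes.fromhex("0215a7d8c15b492e21116482b6d34fc4e1a9f6ba")

offset = digest[-1] & 0x0F                          # low 4 bits of 0xba: 10
chunk = struct.unpack(">I", digest[offset:offset + 4])[0]
value = chunk & 0x7FFFFFFF                          # keep 31 bits: 0x6482b6d3
code = str(value % 10 ** 6).zfill(6)                # last 6 digits, zero-padded

print(value)  # 1686288083
print(code)   # 288083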

The user then types this code into the form requesting the token. The server does the exact same calculation, and verifies if the two codes match (actually, the server does this for the previous 30 seconds, the current 30 seconds, and the next 30 seconds for clock drift). If the code provided by the user matches the code calculated by the server, the token is valid, and the user is authenticated.

Conclusion

TOTP, and alternatively HOTP, is a great way to do two-factor authentication. It's based on open standards, and the calculations for the tokens are done entirely in software, not requiring a proprietary hardware or software solution such as those provided by RSA SecurID and Verisign. Also, the calculations are done entirely offline; there is no need to communicate with an external server for authentication handling at all. No calling home to the mothership to report that someone has just logged in. It's platform independent, obviously, and is already implemented in a number of programming language libraries. You can take this implementation further by generating QR codes for mobile devices to scan, making it as trivial as installing a mobile application, scanning the code, and using the tokens as needed.

Two Factor Authentication with OpenSSH

With all the news about Heartbleed, passwords and two-factor authentication, I figured I would finally get two-factor authentication working with my SSH servers. I've known about it in the past, but haven't done anything about it. Now is the time.

To get two-factor authentication working with your OpenSSH server, you need to install the "libpam-google-authenticator" PAM module on your system. Don't let the package name fool you, however. It is developed by Google, but it does not "phone home" to Google servers at all. It also does not require a Google account. Further, the PAM module is Free and Open Source software, licensed under the Apache 2.0 license.

To install the module on Debian-based systems, run the following:

$ sudo aptitude install libpam-google-authenticator

Once installed, run the google-authenticator(1) command, and answer the resulting questions. The questions offer a balance between increased security and convenience of use. You have the opportunity to create an HMAC-Based One-time Password (HOTP), as specified in RFC 4226, or a Time-based One-time Password (TOTP), as specified in RFC 6238. The main difference is that with HOTP, each code is based on an incrementing counter, whereas with TOTP, each code is based on the current time. If you're running NTP on your SSH server and using this with a phone, then TOTP is probably the better bet, as the clocks will stay closely synchronized and are unlikely to fall out of sync.

The command will create an ANSI QR code that you can scan on your phone to set up the codes. Further, it will print 5 backup codes in the event that you lose your phone, or don't have it, but need to log in. Print those backup codes, and store them in your wallet. Just in case you need the codes, they are stored in your ~/.google_authenticator file:

$ cat ~/.google_authenticator 
YXSNFX37ZUZCKVZM
" RATE_LIMIT 3 30
" WINDOW_SIZE 17
" DISALLOW_REUSE
" TOTP_AUTH
74110971
51742064
84348069
78844952
28772212

Now you just need to configure OpenSSH to use OTP as part of the authentication process. There are two configuration files to edit. First, you need to edit the /etc/pam.d/sshd config file, and put "auth required pam_google_authenticator.so" at the bottom of the file:

$ sudo vim /etc/pam.d/sshd
(move to the bottom of the file)
auth required pam_google_authenticator.so

Now change the /etc/ssh/sshd_config file, to allow a challenge and response as part of the authentication process:

$ sudo vim /etc/ssh/sshd_config
ChallengeResponseAuthentication yes

Restart your OpenSSH server, and you should be good to go:

$ sudo service ssh restart

For an OTP application that you can install on your Android phone, I would personally recommend FreeOTP by Red Hat. It's Free and Open Source software, unlike the Google Authenticator app, and it has increased security in that codes are not displayed by default. Instead, you must tap the refresh button for that site to display the code. Also, I like the display options better with FreeOTP than with Authenticator, but that's personal choice. There currently is not an iOS version of the application in the Apple App Store, but my understanding is that one will make it there soon.

Anyway, happy two-factor authentication on your OpenSSH server!

Heartbleed And Your Passwords

Recently it was discovered that OpenSSL contained a pretty massive security hole that allowed simple TLS clients to retrieve plain text information from a TLS-protected server using the TLS Heartbeat. The advisory is CVE-2014-0160. This has to be one of the most dangerous security vulnerabilities to hit the Internet in a decade. More information can be found at https://heartbleed.com/ (ironically enough, using a self-signed certificate as of this writing).

I don't wish to cover all the extensive details of this vulnerability; they are many. However, I do want to address it from the standpoint of passwords, which is something I can handle on this blog. First, let's review the Heartbleed vulnerability:

  1. FACT: This vulnerability was introduced into OpenSSL on December 31, 2011.
  2. FACT: This vulnerability was fixed on April 5, 2014.
  3. FACT: Without the TLS heartbeat bounds check, data is leaked in 64KB chunks.
  4. FACT: A TLS client can expose the server's SSL private key, to decrypt future communication, without any special hardware or software.
  5. FACT: This is not a man-in-the-middle (MITM) attack.
  6. FACT: Armed with this exploit, an attacker can reveal usernames and passwords without anyone knowing.

To demonstrate this, Ronald Prins posted about it on Twitter, and he demonstrates it on his blog, showing a censored example. What does this mean? This means that if you logged into a server using SSL during the past two years, your username and password could already be compromised. This includes your email account, your bank account, your social media accounts, and any others. Of course, if the service takes advantage of two-factor authentication, then that specific account is likely safe. However, if you share passwords between accounts, additional accounts may not be.

I really wish I was joking.

My advice? Time to change passwords to accounts you think you may have used over the past two years. But, before you do so, you need to know if the service is protected against Heartbleed. You can use http://possible.lv/tools/hb/ and http://filippo.io/Heartbleed/ as online Heartbleed testers before logging in with your account credentials.

Protect Against Bit Rot With Parchive

Introduction

Yes, this post was created on April 1. No, it's not an April Fool's joke.

So, I need to begin this post with a story. In 2007, I adopted my daughter, and my wife decided that she wanted to stay home rather than work. In 2008, she quit her job. She was running a website on my web server, which only had a 100 GB IDE (PATA) drive. I was running low on space, so I asked if I could archive her site, and move it to my ext4 Linux MD RAID-10 array. She was fine with that, so off it went. I archived it using GNU tar(1), and compressed the archive with LZMA. Before taking her site offline, I verified that I could restore her site with my compressed archive.

From 2008 to 2012, it just sat on my ext4 RAID array. In 2012, I built a larger ZFS RAID-10, and moved her compressed archive there. Just recently, my wife has decided that she may want to go back to work. As such, she wants her site back online. No problem, I thought. I prepared Apache, got the filesystem in order, and decompressed the archive:

$ unlzma site-backup.tar.lzma
unlzma: Decoder error

Uhm...

$ tar -xf site-backup.tar.lzma
xz: (stdin): Compressed data is corrupt
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

Oh no. Not now. Please, not now. My wife has put in hundreds of hours into this site, and many, many years. PDFs, images, other media, etc. Tons, and tons of content. And the archive is corrupt?! Needless to say, my wife is NOT happy, and neither am I.

I'm doing my best to restore the data, but it seems that all hope is lost. From the GNU tar(1) documentation:

Compressed archives are easily corrupted, because compressed files have little redundancy. The adaptive nature of the compression scheme means that the compression tables are implicitly spread all over the archive. If you lose a few blocks, the dynamic construction of the compression tables becomes unsynchronized, and there is little chance that you could recover later in the archive.

I admit to not knowing a lot about the internal workings of compression algorithms. It's something I've been meaning to understand more fully. However, best I can tell, this archive experienced bit rot in the 4 years it sat on my ext4 RAID array, and as a result, the archive is corrupt. Had I known about Parchive, and had I known that ext4 doesn't offer any sort of self-healing against bit rot, I would have been more careful storing that compressed archive.

Because most general-purpose filesystems do not protect against silent data errors the way Btrfs or ZFS do, there is no way to fix a file that suffers from bit rot, outside of restoring from a snapshot or backup, unless the file format itself has some sort of redundancy. Unfortunately, I learned that compression algorithms leave very little to no redundancy in the final compressed file. It makes sense, as compression algorithms are designed to remove redundant data, using either lossy or lossless techniques. However, I would be willing to grow the compressed file, say 15%, if I knew that I could suffer some damage to the compressed file, and still get my data back.

Parchive

Parchive is a Reed-Solomon error-correcting utility for general files. It does not handle Unicode, and it does not work on directories. Its initial use was to maintain the integrity of Usenet posts against bit rot on the server. It has since been used by anyone interested in maintaining data integrity for any sort of regular file on the filesystem.

It works by creating "parity files" on a source file. Should the source file suffer some damage, up to a certain point, the parity files can rebuild the data, and restore the source file, much like parity in a RAID array when a disk is missing. To see this in action, let's create a 100M file full of random data, and get the SHA256 of that file:

$ dd if=/dev/urandom of=file.img bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 7.76976 s, 13.5 MB/s
$ sha256sum file.img > SUM
$ cat SUM
5007501cb7f7c9749a331d1b1eb9334c91950268871ed11e36ea8cdc5a8012a2  file.img

Should this file suffer any sort of corruption, the resulting SHA256 hash will likely change. As such, we can protect the file against some mild corruption with Parchive (removing newlines and cleaning up the output a bit):

$ sudo aptitude install par2
(...snip...)
$ par2 create file.img.par2 file.img
(...snip...)
Block size: 52440
Source file count: 1
Source block count: 2000
Redundancy: 5%
Recovery block count: 100
Recovery file count: 7
Opening: file.img
Computing Reed Solomon matrix.
Constructing: done.
Wrote 5244000 bytes to disk
Writing recovery packets
Writing verification packets
Done

As shown in the output, the redundancy of the file is 5%. This means that we can suffer up to 5% damage across the source file or the parity files, and still reconstruct the data. This 5% redundancy is the default behavior, and can be changed by passing the "-r" switch.

If we do a listing, we will see the following files:

$ ls -l file.img*
-rw-rw-r-- 1 atoponce atoponce 104857600 Apr  1 08:06 file.img
-rw-rw-r-- 1 atoponce atoponce     40400 Apr  1 08:08 file.img.par2
-rw-rw-r-- 1 atoponce atoponce     92908 Apr  1 08:08 file.img.vol000+01.par2
-rw-rw-r-- 1 atoponce atoponce    185716 Apr  1 08:08 file.img.vol001+02.par2
-rw-rw-r-- 1 atoponce atoponce    331032 Apr  1 08:08 file.img.vol003+04.par2
-rw-rw-r-- 1 atoponce atoponce    581364 Apr  1 08:08 file.img.vol007+08.par2
-rw-rw-r-- 1 atoponce atoponce   1041728 Apr  1 08:08 file.img.vol015+16.par2
-rw-rw-r-- 1 atoponce atoponce   1922156 Apr  1 08:08 file.img.vol031+32.par2
-rw-rw-r-- 1 atoponce atoponce   2184696 Apr  1 08:08 file.img.vol063+37.par2

Let's go ahead and corrupt our "file.img" file, and verify that the SHA256 hash no longer matches:

$ dd seek=5k if=/dev/zero of=file.img bs=1k count=128 conv=notrunc
128+0 records in
128+0 records out
131072 bytes (131 kB) copied, 0.00202105 s, 64.9 MB/s
$ sha256sum -c SUM
file.img: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match

This should be expected. We've corrupted our file through a simulated bit rot. Of course, we could have corrupted up to 5% of the file, or 5 MB worth of data. Instead, we only wrote 128k of bad data.

Now that we know our file is corrupt, let's see if we can recover the data. Parchive to the rescue (again, cleaning up the output):

$ par2 repair -q file.img.par2 
(...snip...)
Loading "file.img.par2".
Loading "file.img.vol031+32.par2".
Loading "file.img.vol063+37.par2".
Loading "file.img.vol000+01.par2".
Loading "file.img.vol001+02.par2".
Loading "file.img.vol003+04.par2".
Loading "file.img.vol007+08.par2".
Loading "file.img.vol015+16.par2".
Target: "file.img" - damaged. Found 1996 of 2000 data blocks.
Repair is required.
Repair is possible.
Verifying repaired files:
Target: "file.img" - found.
Repair complete.

Does the resulting SHA256 hash now match?

$ sha256sum -c SUM 
file.img: OK

Perfect! Parchive was able to fix my "file.img" by loading up the parity files, recalculating the Reed-Solomon error correction, and using the resulting math to fix my data. I could have also done some damage to the parity files to demonstrate that I can still repair the source file, but this should be sufficient to demonstrate my point.

Conclusion

Hindsight is always 20/20. I know that. However, had I known about Parchive in 2008, I would have most certainly used it on all my compressed archives, including my wife's backup (in reality, a backup is not a backup unless you have more than one copy). Thankfully, the rest of the compressed archives did not suffer bit rot like my wife's backup did, and thankfully, all of my backups are now on a 2-node GlusterFS replicated ZFS data store. As such, I have 2 copies of everything, and I have self healing against bit rot with ZFS.

This is the first time in my life that bit rot has reared its ugly head and really caused me a great deal of pain. Parchive can fix that for those who have sensitive data, where data integrity MUST be maintained, on filesystems that do not provide protection against bit rot. Of course, I should have also had multiple copies of the archive to have a true backup. Lesson learned. Again.

Parchive is no substitute for a backup. But it is protection against bit rot.

Creating Strong Passwords Without A Computer, Part III - Off The Grid

Previously, I used entropy as a backdrop for creating strong passwords. It's important that you read that article and fully understand it before moving on with the rest of the series.

So far, I've blogged about generating passwords using systems that your grandma could use. In this case, I have less confidence that my grandma would be willing to give this a go. Not because she's stupid, but because she's impatient. The older I get, the less patience I have for things like this, so I can understand where she's coming from. In this post, we'll be discussing Steve Gibson's paper cipher Off The Grid.

Introduction

Off The Grid is a paper-based cipher for encrypting domain names. The concept is built around the idea of using Latin Squares as a means of creating the cipher. Off The Grid is a 26x26 Latin Square using the English alphabet. In other words, any character appears only once in any given row and column. As a result of the Latin Square, words can be traversed throughout the square, alternating rows and columns. This will be explained further below.

Outside of the grid are numbers and non-alphabetic characters. These are used as an additional resource when creating the passwords for your sites. Because the grid itself is a randomized mixture of lowercase and uppercase letters, the grid itself has an entropy of approximately 5.7-bits per character. The outer border consists of the following characters:

0123456789!"#$%&'()*+,/:;<=>?@[$$^{|}~

As such, if using the border to build your passwords, then you have access to 90 total unique characters, bringing your entropy to approximately 6.49-bits per character.

Because we determined that 80-bits of entropy should be the minimum when generating passwords, if your password consists of only the alphabetic characters in the grid, then you should aim at building at least 15-character passwords. If you are including the outer border in your password building, then a 13-character password should be the target.
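
If you want to check those lengths yourself, the minimum length is just the target entropy divided by the per-character entropy, rounded up. A quick Python sketch:

import math

def chars_needed(charset_size, target_bits=80):
    # Minimum length so that length * log2(charset_size) >= target_bits.
    return math.ceil(target_bits / math.log2(charset_size))

print(chars_needed(52))  # grid letters only (mixed case): 15
print(chars_needed(90))  # grid letters plus the border characters: 13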

The grid and border are randomized, except for 12 characters in the center of the top border, so the resulting password is a truly random password that can be used for your accounts. Thus, the requirement to build a truly random password with at least 80-bits of entropy is met.

Off The Grid

Off The Grid is a finite state machine. This is achieved by traversing the grid from a starting location to an ending location, the rules of which will be described here. After reaching your first ending location, a second traversal is made starting from the first ending location, and ending at a new location. There are exactly 26^2 or 676 states in the Off The Grid system.

In Steve's instructions for building passwords using the grid, you take the first 6 characters of the domain name, and build a resulting password of 12 characters: 2 ciphertext characters for each 1 plaintext character. Unfortunately, as we demonstrated, this doesn't provide us with enough entropy to withstand offline database attacks. As such, I would look at what you want for your target password length. If it's 15 characters, then I would take the first 5 characters of the domain name, and use a ratio of 3:1 rather than 2:1 when building the password. If you want a 16-character password, then you could use the first 4 characters of the domain name and a ratio of 4:1, or you could take the first 8 characters of the domain name and a ratio of 2:1. Keep this in mind, because from here on out, it gets challenging, and you'll need your ratio later.

Off The Grid can be described in the following steps:

  1. You are always using the domain name for building your passwords.
  2. Determine how many characters from the domain name you will need to build your password.
  3. Find the first character of the domain name in the first row.
  4. Find the second character of the domain name in the column below the first character.
  5. Find the third character of the domain name in the row of the previous character.
  6. Alternate rows and columns until you reach the last character of the domain name.
  7. Starting at your new position, find the first character of the domain name in that row.
  8. Overshoot in that row by the ratio you determined before. If your ratio is 2:1, overshoot 2 characters. If your ratio is 4:1, overshoot 4 characters.
  9. Write down the characters you overshot with. These will build your password.
  10. Now at your new overshot position, find the second character of the domain name in the column below the overshot character.
  11. Overshoot in that column by the ratio you determined before.
  12. Write down the characters you overshot with.
  13. Continue alternating rows and columns, overshooting by your ratio, and writing down the overshot characters, until you've traversed the domain name.
  14. You now have your password.

PHEW!!!

Building passwords using Off The Grid is tricky. Once you understand the above steps, and practice them, it will become easy. But the initial understanding of how the system works is a bit of a pain in the behind. In the next section, I'll provide an example.
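
Before the worked example, here is a rough Python sketch of the two phases, just to make the traversal mechanics concrete. It uses a trivial Latin Square (each row is a cyclic shift of the alphabet) instead of a real randomized OTG card, and it ignores the border and the "skip one more" exception described later, so treat it purely as an illustration:

import string

ALPHA = string.ascii_lowercase
# A trivial (and cryptographically useless) 26x26 Latin Square for illustration.
GRID = [[ALPHA[(r + c) % 26] for c in range(26)] for r in range(26)]

def otg_password(domain, ratio):
    row, col, in_row = 0, 0, True
    # Phase 1: walk the domain through the grid, alternating rows and columns.
    for ch in domain:
        if in_row:
            col = GRID[row].index(ch)
        else:
            row = [GRID[r][col] for r in range(26)].index(ch)
        in_row = not in_row
    # Phase 2: walk the domain again, overshooting by `ratio` characters each
    # time (wrapping around the card) and collecting the overshot characters.
    password = []
    for ch in domain:
        line = GRID[row] if in_row else [GRID[r][col] for r in range(26)]
        start = line.index(ch)
        password += [line[(start + i) % 26] for i in range(1, ratio + 1)]
        if in_row:
            col = (start + ratio) % 26
        else:
            row = (start + ratio) % 26
        in_row = not in_row
    return "".join(password)

print(otg_password("examp", ratio=3))  # a 15-character password on this toy grid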

A Simple Example

Visiting his website, I created the following sample grid that we'll use for this post:

A Sample Off The Grid key.

Suppose we wish to set up a password for example.com, and suppose that example.com does not restrict passwords for our account in any way. As such, we are fine building passwords with just alphabetic characters. We'll handle how to add non-alphabetic characters to our password in a minute. So, I need to build a 15-character password. The text "example" is 7 characters. I'll take the first 5 characters "examp" and use a ratio of 3:1 for building my password.

The first step is to locate the letter 'e' in the first row. You can decide to use any row or column for your first letter; however, the goal is to be consistent, as using a different row or column for the same domain will produce different results. In these examples, I'll be starting in the first row. In this example, 'e' is in the 25th column from the left edge. Now I locate the 'x' in the 25th column; it's in the 18th row from the top. Now I look along that row for 'a'. I continue in this manner until I've worked out "examp". On my example card, my path after phase 1 would look like this:

Tracing the text 'examp' on our sample OTG key.

Now I'm in a new location, ready to start creating my password. This is phase 2, the overshoot phase. Because I ended looking for 'p' in the same row as 'a', I'm going to switch to looking for 'e' in the same column as 'p' to start this phase. In other words, I am constantly alternating row, column, row, column, etc., even when switching phases. However, rather than stopping at the letter 'e', I will overshoot by 3 characters, recording the characters I have overshot.

So, I find 'e' then continue moving 3 characters in the column, reaching 'a' in my example grid. The 3 characters that were overshot were 'Mta'. These are the first 3 characters of my domain password for example.com.

I now look for 'x' in the same row as my overshot 'a'. I will again overshoot by 3 characters to the letter 'P'. The overshot characters in that row were 'zhP', and are the second set of characters for my password. Thus, up to this point, my password for example.com has become 'MtazhP'.

I will continue alternating rows and columns, overshooting by 3 characters, writing down the overshot characters as my password, until I overshot every character in "examp". My password is "MtazhPfoYgpJvWy", and my path would have looked like this in phase 2 (the domain path is traced in red, with the overshoot characters traced in blue):

Tracing the text 'examp' again, this time overshooting by 3 characters on our sample OTG key.

Notice that my overshoot needed to wrap around the card. This should be expected in phase 2, and wrapping around the card is appropriate.

Make sure that you fully understand these two phases of creating your password before continuing. You want to make sure you get it right every time for every domain, without error.

Handling non-alphabetic characters

Now it's time to get a bit ugly. Up to this point, the OTG system only generates passwords with alphabetic characters. It does not generate passwords with anything else. This can be problematic for sites that require the use of non-alphabetic characters in the password. Further, some domains may have a '.', '-' or digit as part of the domain. These non-alphabetic characters are not part of the inner OTG grid, so creating a password with them is impossible.

This is where the outer border comes into play. First notice that a digit and non-alphanumeric character are always opposite of each other on the border. This is by design. In our case, opposite of '#' on the top border is a '2' on the bottom border. Opposite of the ',' on the bottom border, is the digit '8' on the top border, and so forth. Let's first discuss how to add non-alphabetic characters to our passwords:

Letting the OTG choose

While traversing the grid in phase 2, building your password, you can incorporate the border into your overshoot. In my previous example of "example.com", we traced out the word "examp", and overshot by 3 characters, all of the form <grid><grid><grid>. Instead, we could have used our 3rd overshoot character as our border character. As such, our pattern would have been <grid><grid><border>. In that case, we would stop at the 2nd overshoot character, rather than the border, to find the rest of our characters in our domain. As a result, for every character in the domain, there will be a non-alphabetic character:

Sample OTG showing a possible path for building a password with non-alphabetic characters.

In this case, the password for example.com would be "Mt.bA1oQ7cZ[IA2"

This example also brings up a good point: when the overshoot character is the same character as the next character that you need to traverse to, then skip one more character. In other words, notice that with "examp", when we travel to "x", our overshoot character is "A". This is the next character that we would be traveling to. So, I overshot one more character to "W", then traveled to my "a" in "examp". If you think that exception sucks, you're right. It does. It's just another rule that can introduce human error into the equation.

Further, notice that I needed to wrap around the card, as we already expressed earlier. Yet, my border character technically came after "c" and before "Z". However, I still put the border character at the end. I am doing this for consistency with the rest of the password, so I don't have to keep yet another rule in my head.

Letting the OTG help

In this case, some sites do not allow non-alphanumeric characters in the password. In other words, it's letters and digits only. Part of the border pattern is that there is exactly one digit in the border for every row and every column. So, rather than overshooting to a non-alphanumeric character, you could just use the number in that row or column instead. This way, our previous password would be "Mt5bA1oQ7cZ7IA2".

Tack something onto the beginning or end

Of course, you could always just tack a special character or two at the beginning or end of the password. Your logic for how this is accomplished is up to you. You could use the beginning row/column border character in phase-2, or the ending row/column border character. Whatever makes the most sense for you.

Handling domains with non-alphabetic characters

Unfortunately, some domains have non-alphabetic characters in their domain name. Although seldom, they do exist, and OTG can't rule out the possibility that they will be encountered. As such, in the center of the top border are 12 characters: the digits 0 through 9, the period, and the dash. If one of these characters is encountered as part of traveling through the OTG with the domain, then travel to the character in the first row immediately below it in the same column. For example, if your domain had a "5" in it, then you would travel to "f" in the first row of our example OTG card. If there are consecutive numbers in your domain, then unfortunately, I am not sure exactly how that is to be handled.

Criticisms, Thoughts and Conclusion

DOUBLE PHEW!!!

If you have made it to this point, you're determined to learn OTG. In my opinion, Steve's OTG paper system has some pretty serious problems:

  • The rules are technical and many, making the entire system very cumbersome to use.
  • Overall, creating your passwords is slow, slow, slow. Hopefully you store them in an encrypted database for retrieval, because recreating them is very slow, and not something you want to be fighting with when under stress.
  • Due to the cumbersome nature of OTG, creating your passwords can be very error prone.
  • The OTG card is too large to fit into your wallet, at least if you want to be able to see the characters on the grid without a magnifying glass.
  • If the card is lost and can be attributed to you, then because you are using domain names to create the passwords, the attacker could easily figure out the passwords to your accounts with the card. As such, you may want to prepend a 1- or 2-character salt onto the domain before beginning phase 1.

These are probably my biggest criticisms of the OTG system. While slick for a paper-based encryption system, it's a lot of visual scanning and lookups, a lot of rules and exceptions to remember, and a lot of areas that can create errors. Some die-hards may use this for their passwords, but I don't think I can recommend it, even to some of the most nerdy, security conscious friends I have.

In my opinion, the PasswordCard is a much more elegant system that doesn't require the user to keep a lot of rules and exceptions in their head, and if the card is lost, the attacker still has a great amount of work to do figuring out what the password is, and to which account it belongs.

To be fair, the OTG system is marked as "Work in Progress", so there may be adjustments to the system in the future that make it more of an elegant system for creating passwords. But I think the whole thing will need to be reworked, as traversing a Latin Square to create passwords is just too cumbersome for practical use.

Creating Strong Passwords Without A Computer, Part II - The PasswordCard

Previously, I used entropy as a backdrop for creating strong passwords. It's important that you read that article and fully understand it before moving on with the rest of the series.

Continuing our series about creating strong passwords without a computer, we look at a method I've blogged about in the past: The PasswordCard. The idea is simple: carry around a card with you, that has all of your passwords written down in plaintext, obfuscated with additional text surrounding the password. Let's look at it in more detail.

The Passwordcard Idea

The PasswordCard is a GPLv3 web application that generates a small credit-card sized card that you can fit into your wallet for password generation. The way you would generate a password is simple. Suppose you wanted to generate a password for an online account, such as an email provider. You could pull out your PasswordCard, determine a starting location, a direction, and a length, and use the resulting characters for your password. Because the PasswordCard is a two-dimensional table of characters, the direction of your password can take any direction, such as left, right, up, down, diagonally, spiral, or any combination of directions. Because the length of your password can theoretically be infinite, so too would be the search space, if someone were to get access to your card.

The PasswordCard uses the following character sets when generating a card on the site:

  • alphanumeric: "23456789abcdefghjkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ"
  • alphanumeric plus symbols: "23456789abcdefghjkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ@#$%&*<>?€+{}[]()/\"
  • numeric: "0123456789"

As such, you can expect the following entropy sizes per character when creating a card:

  • Alphanumeric (55 unique characters): 5.78-bits.
  • Alphanumeric plus symbols (76 unique characters): 6.25-bits.

When checking the box "check this for an area with only digits", it doesn't change the search space for the already existing alphanumeric output that is default when loading the page. It only creates 4 rows of digits for easy PIN generation. Your entropy per character is only increased when checking the box "check this to include symbols".

A Practical PasswordCard

Before beginning, let's use a sample card to work from. This is card 7327e6c258cb3d52, in case you also want to generate it for following along:

Sample PasswordCard

To generate a password, you would need to pick a starting location for your password. You'll notice that there are 29 columns and 8 rows. The columns are identified by symbols, while the rows are identified both by a color and a number. For each account, all you need to remember are 3 things:

  • The starting location (should be different for every account).
  • The direction (should be the same for each account).
  • The length (should be the same for each account).

This should be easier to remember than "NyFScbUJ>X7j?", but your brain might be wired differently than mine.

Because this PasswordCard uses both alphanumeric and non-alphanumeric characters, our entropy per character is approximately 6.25-bits. And because we established that we need a minimum of 80-bits of entropy in our passwords, we should generate at least a 13-character password off of the card.

Some PasswordCard Direction Ideas

Most of those who are using the PasswordCard to generate their passwords are probably just reading the passwords left-to-right. I would not recommend this, as it's the lowest-hanging fruit for an attacker who has access to a hashed password database and your password card. Instead, I would recommend doing something non-standard for the password direction. Let's consider a few examples, each of them starting at ⊙4 on the PasswordCard:

Wall Bouncing

When you hit a wall, whether travelling diagonally or otherwise, you could "bounce" off the wall, much like a pool ball does when playing billiards:

Sample PasswordCard showing a path bouncing off of the walls.

In this case, the password would be "f+zwWYB[5\C<u".

Wall Bending

Rather than bouncing off the wall like pool balls, you could "bend" along the wall, much like turning a corner when driving on a street:

Sample PasswordCard showing a path bending along the wall.

In this case, the password would be "f%efcEBnNQk7\".

Pac-Man Walls

Another alternative would be to treat the card as though it wraps around at the edges, much like the Pac-Man playfield. In other words, when you come to the edge of the card, you travel out the other side on the same row, or the same column, continuing in your direction:

Sample PasswordCard showing a path Pac-Man might take when traveling to the edge of our wall.

In this case, the password would be "f+zwrpt9n2B&y".

Spiral paths

My last example shows a spiral path, starting from the same location as the previous examples, this case moving clockwise:

Sample PasswordCard showing a spiral path.

In this case, the password would be "fY+%FqvGYerrz".

As a strong suggestion, whatever direction and length you take for your passwords, you should keep them consistent. If you decide to do a 13-character clockwise spiral for one account, you should do a 13-character clockwise spiral for all of your accounts. The only thing that changes is the starting location of the password. This will greatly simplify identifying each password for each account. If you change up the lengths and directions, as well as the starting location, for each account, you run the risk of having a very difficult time finding the correct password for that account. If your brain has that mental capacity, then more power to you. Otherwise, I would keep it consistent.

A Couple Thoughts

The nice thing about the PasswordCard, is that all of your passwords are already written down for you in plaintext. However, if a password cracker were to get access to your card, they would need to know which starting location belongs to which account, the direction the password takes, as well as the length of the password. This is too many variables for an attacker to make efficient use of his time. His time would be better spent taking the character sets off of the card, and building an incremental brute-force search. Provided your password has sufficient entropy, you will likely thwart the attack.

There are a couple disadvantages with the PasswordCard, however. The first is that this is not well suited for the blind. Unlike Diceware, which can be easily adapted for the blind, this is a bit more of a challenge. While I'm not asserting that it's impossible, it certainly seems difficult to practically reproduce. The second disadvantage is the use of the "€" euro symbol. The PasswordCard is developed by a Java developer in the Euro Zone. While it makes sense for him to include the character, it alienates those that don't easily have access to it on their keyboards, such as those using the basic ASCII character set. As such, you may want to refresh your browser, generating random cards until you find one without the "€" character in its output.

Lastly, you will definitely want to print a second card, and keep it somewhere safe as a backup, should you lose the original. Keeping the card in your wallet or purse makes the most sense, as your wallet or purse is likely the most protected object in your possession, next to your phone and keys. But, should you lose the card, you will want to get access to your passwords, which will mean getting access to your backup copy.

Conclusion

I personally like the PasswordCard. It's simple, small, and doesn't require me to carry a lot of items with me, such as dice and a large word list. My only concern is being able to choose a new starting location for each account. I'm not as random as I would like to think when finding a starting location, so I wrote a script to handle that for me. But it's clean, out of the way, and works really well. When I don't have an account password memorized, I can pull out the card, remember where it starts, and just start typing. Generation is quick, and remembering password locations is easy. Highly recommended.
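
For what it's worth, a script for picking a starting location doesn't need to be anything fancy. Here is a minimal sketch of such a helper, assuming the 29-column by 8-row layout described above; it prints numeric indexes rather than the card's column symbols, since those vary from card to card:

import secrets

# Pick a uniformly random starting location on a 29-column by 8-row card,
# using a CSPRNG rather than trusting myself to pick "randomly".
column = secrets.randbelow(29) + 1
row = secrets.randbelow(8) + 1
print("Start at column {}, row {}".format(column, row))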