
How do you pronounce "daemon"?

As a Linux system administrator, I am familiar with the definition of the term "daemon" as a process running on a system, usually backgrounded, that provides some sort of service, like a web server or file store. When I first started my career, I pronounced the "ae" with the long "a" sound, as "day-mon". It seemed everyone I interacted with agreed with me.

Then I got a new job at a startup, where my boss heard me use the term. He pulled me into his office to correct my behavior:

"Daemon" is pronounced with a long "e", similar to how you pronounce "Caesar". When you see "ae" together, it's always with a long "e". The term itself goes back to Greek mythology. The spelling is the older spelling of "demon" and they are pronounced the same.

This was around 2011. What he said resonated with me and I found other words with "ae" that had the long "e" sound, even if they were spelled archaically:

  • Archaeology
  • Caesar
  • Encyclopaedia
  • Haemoglobin
  • Orthopaedic
  • ...

I was a convert. So much so that I was giving a presentation on systemd administration to other system administrators. At one point while discussing service units, I mentioned that "oh, by the way, the word spelled d-a-e-m-o-n is pronounced dee-mon, not day-mon. Think Caesar, encyclopaedia, archaeology, haemoglobin, etc." Well, one attendee in the presentation wasn't satisfied. When I finished and opened the remaining time for questions, he asked:

What do you call a dessert with bananas, ice cream, chocolate syrup, sprinkles, and a cherry?

Of course the answer is a "sundae" and the rest of the attendees got a good chuckle, as I was caught off-guard and didn't have a retort. But that got me thinking: just how many "ae" words exist in English, archaic spelling or modern, and how are they pronounced? An easy regular expression search on my Debian system returns that answer:

$ grep -P "(?\!.*'s$)ae" /usr/share/dict/words

I get 205 results in my search; your mileage may vary. However, when I scan the results, here are some that stand out. Obviously this isn't an exhaustive list, and this is also how I would pronounce them, which follows General American pronunciation. Received Pronunciation or other English dialects may pronounce them differently. Archaic spellings are italicized:

  • Long "a"
    • aerate
    • antennae
    • Gaelic
    • Praetorian
    • pupae
    • reggae
    • sundae
  • Long "e"
    • aeon
    • algae
    • archaeology
    • Caesar
    • encyclopaedia
    • haemoglobin
    • orthopaedic
  • Short "e"
    • aero-(bic, dynamic, nautics, sol, space)
    • aery
    • haemorrhage
  • Short "i"
    • caesarean (first "ae")
    • Michael
    • Rachael
  • Syllable separator
    • caesarean (second "ae")
    • Ishmael
    • Israel
    • Kafkaesque

I think it's fair to say that the pronunciation of "ae" is varied. Is there a single authoritative source we can use that settles the debate? I don't know, but I'm willing to trust Wiktionary on this one:

Etymology 1: A borrowing of Latin daemon ("tutelary deity"), from Ancient Greek δαίμων (daímōn, "dispenser, tutelary deity")

Pronunciation: IPA: /ˈdiː.mən/

Etymology 2: From Maxwell's demon; a derivation from "disk and execution monitor" is generally considered a backronym.

Pronunciation: IPA: /ˈdiːmən/, /ˈdeɪmən/

In the case of "Etymology 1", the pronunciation for our Latin deity in both Received Pronunciation and General American is strongly "dee-mon". However, in the case of "Etymology 2", our pronunciation is both "dee-mon" and "day-mon" according to the IPA.

Wikipedia's article on "daemon (computing software)" agrees:

In modern usage, the word daemon is pronounced /ˈdiːmən/ DEE-mən. In the context of computer software, the original pronunciation /ˈdiːmən/ has drifted to /ˈdeɪmən/ DAY-mən for some speakers.

So, "dee-mon" or "day-mon", you're not wrong.

Solving Wordle with Regular Expressions

It seems like everyone is solving the daily Wordle puzzle these days, myself included. If you're not familiar with Wordle, it's a simple, fun, stress-free game. Your goal is to guess the secret 5-letter word within six tries. The rules are really quite simple:

  1. Gray letters are not in the solution.
  2. Yellow letters are in the solution, but in a different position (or positions).
  3. Green letters are in the solution and in the correct position.

That's it. Let's look at the puzzle for January 1, 2022 from first guess to last to see how it works:

Here we solved the puzzle in 4 guesses:

  1. "STARE": "S", "R", and "E" are in the solution, but in different positions. "T" and "A" are not in the solution.
  2. "EUROS": "E", "U", and "R" are in the solution, but in different positions. "O" is not in the solution. "S" is the last character in the solution.
  3. "URGES": "U", "R", and "E" are in the solution, but in different positions. "G" is not in the solution. "S" is the last character in the solution.
  4. "REBUS" is correctly guessed as the solution.

I'm always curious if I can stretch my skills as a system administrator and software developer, and one of those critical skills is regular expressions. While I'm good with the basics, such as line and word anchors, character sets, quantification, and simple back references, I wanted to see if I could dig deeper using the /usr/share/dict/words file on my Debian laptop.

First off, I already know I can get a list of all 5-character alphabetic words ignoring proper nouns with the following regex:

$ grep -P '^[a-z]{5}$' /usr/share/dict/words

This returns 4,681 unique words on my system. I just need to pick a good starting word that increases my chances of guessing the solution quickly. Knowing about character frequency, I can pick a word that maximizes frequently used characters in English. A solid pick would be something that uses "E", "S", "T", "A", "I", "R", etc. As such, I always start with "STARE".
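To see why, picking a starter this way can be sketched in a few lines of Python. This is just an illustration against the same dictionary file, scoring each 5-letter candidate by the combined frequency of its distinct letters:

import re
from collections import Counter

with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if re.fullmatch(r"[a-z]{5}", w.strip())]

# Letter frequency across all candidate words.
freq = Counter("".join(words))

def score(word):
    # Count each distinct letter once so repeats don't inflate the score.
    return sum(freq[c] for c in set(word))

print(sorted(words, key=score, reverse=True)[:10])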

The clues tell us immediately that "S", "R", and "E" are in the solution, just in different positions. We also learn that "T" and "A" are not in the solution. So my limited knowledge of regular expressions would have me doing something like this:

$ grep -P '^[a-z]{5}$' /usr/share/dict/words | grep 's' | grep 'r' | grep 'e' | grep -v '[ta]'

Surely regular expressions can handle this logic without unnecessary pipes. The obvious question is whether or not PCRE supports logical AND. Basically, "match words that have 'S' and 'R' and 'E' and not 'T' and not 'A'". The answer is "yes" using lookarounds.

Regular expression lookarounds come in four flavors:

  1. (?=foo): Look ahead. "foo" immediately follows the current position in the string.
  2. (?<=bar): Look behind. "bar" immediately precedes the current position in the string.
  3. (?!baz): Negative look ahead. What immediately follows the current position in the string is not "baz".
  4. (?<!qux): Negative look behind. What immediately precedes the current position in the string is not "qux".

I want "S", and "R" and "E" to appear in the string. So I can use look aheads to make that possible. Because "foo" can be any regular expression pattern, then I can do:

$ grep -P '^(?=.*s.*)(?=.*r.*)(?=.*e.*)[a-z]{5}$' /usr/share/dict/words | grep -v '[ta]'

Further, I know that "T" and "A" are not in the solution, and further I know that "S" will not be in the 1st position, "R" will not be in the 4th, and "E" will not be in the 5th. So, I can make five character classes with negative matches. Our final regular expression looks like:

$ grep -P '^(?=.*s.*)(?=.*r.*)(?=.*e.*)(?=[a-z]{5})[^sta][^ta][^ta][^tar][^tae]$' /usr/share/dict/words

This returns 80 words, but I can fiddle it down a touch further (for the moment) by removing words that have repeated characters. For example, "preps" is returned in that list, but repeats "p". In order to maximize my win ratio, I should probably be taking advantage of words with strictly unique characters. So I'll add the negative look ahead of (?!.*(.).*\1) to my list:

$ grep -P '^(?=.*s.*)(?=.*r.*)(?=.*e.*)(?=[a-z]{5})(?!.*(.).*\1)[^sta][^ta][^ta][^tar][^tae]$' /usr/share/dict/words

This returns 66 words which is easier to parse. I should probably pick something high on the character frequency list. "EUROS" seems like a good pick, so let's try that:

The clues this time tell us that "E", "U", and "R" are in the solution, but in different positions, "O" is not in the solution, and "S" was correctly picked for its position. We can modify our expression above with the new data. We know that for the:

  1. First character: Cannot be "S", "T", "A", "E", or "O".
  2. Second character: Cannot be "T", "A", "U", or "O".
  3. Third character: Cannot be "T", "A", "R", or "O".
  4. Fourth character: Cannot be "T", "A", "R", or "O".
  5. Fifth character: "S".

So we modify our expression:

$ grep -P '^(?=.*r.*)(?=.*e.*)(?=.*u.*)(?=[a-z]{5})(?!.*(.).*\1)[^staeo][^tauo][^taro][^taro]s$' /usr/share/dict/words

We're down to three words. Now we still have that filter that no word can have a repeating character. It might be worthwhile to remove it as our returned list is manageable. Interestingly enough, removing that filter returns the same three words.

$ grep -P '^(?=.*r.*)(?=.*e.*)(?=.*u.*)(?=[a-z]{5})[^staeo][^tauo][^taro][^taro]s$' /usr/share/dict/words

We know the final word is either "GRUES" or "REBUS" at this point, and I have 3 guesses left to win. But let's see if updating the regular expression reveals the solution. Again, given the clues, we know "U", "R", and "E" are in the solution, but in different spots. "G" is not in the solution, and "S" is the final character. Knowing that "GRUES" has "G", then the solution should be "REBUS". Let's see if regex agrees:

$ grep -P '^(?=.*r.*)(?=.*e.*)(?=.*u.*)(?=[a-z]{5})[^staeoug][^tauorg][^tarog][^taroeg]s$' /usr/share/dict/words

Indeed it does. "REBUS" is the only valid word with this dictionary and Wordle confirms it's the correct solution.

Let's look at each piece in this regular expression, just to make sure we know what it's doing:

  • grep -P: Use Perl Compatible Regular Expressions with GNU grep.
  • '^: Anchor the match at the beginning of the line.
  • (?=.*r.*): Match "r" anywhere in the word.
  • (?=.*e.*): Match "e" anywhere in the word.
  • (?=.*u.*): Match "u" anywhere in the word.
  • (?=[a-z]{5}): Match five lowercase alphabetic characters only.
  • (?!.*(.).*\1): Do not match words with repeated characters.
  • [^staeoug]: The first character cannot contain "s", "t", "a", "e", "o", "u", or "g".
  • [^tauorg]: The second character cannot contain "t", "a", "u", "o", "r", or "g".
  • [^tarog]: The third character cannot contain "t", "a", "r", "o", or "g".
  • [^taroeg]: The fourth character cannot contain "t", "a", "r", "o", "e", or "g".
  • s: The fifth character must be "s".
  • $': Anchor the match at the end of the line.

Yes, it's ugly. Yes, it's probably more efficient to pipe several times. Yes, this regular expression is error-prone and could be difficult to debug. BUT, you learned lookarounds, which may come in handy for pattern matching unrelated to Wordle in the future.

The next step is discovering whether or not you can store the gray characters (not in the solution) in a capture group and refer back to them in the character classes. That would help reduce typing and errors. I'll leave that as an exercise for the reader (and author).

Checksums in Passwords? Uh, okay.

Introduction

As most of my readers know, I have a rather extensive yet easy-to-use web-based password generator. I've spent a lot of time doing password research (a couple ideas mine, most not), and have implemented most of these into the project. These include, but are not limited to:

  • Expansive language support
  • Verbal unambiguity
  • Visual unambiguity
  • Memorability
  • Compact density
  • Programmatic prediction
  • Versatility
  • Accommodating complex requirements
  • Entertainment
  • Checksums

That last idea was just recently committed to the project, and I think it might have some value, albeit with some tight controls and possibly little reward for the cost.

Bubble Babble

When I started developing my password generator, Tony Arcieri suggested on IRC that I implement Bubble Babble. I think he meant it mostly as a joke, but already being somewhat familiar with it from SSH key fingerprints, I looked into it. Bubble Babble is an encoding specification designed for SSH fingerprints. The goal is to make the fingerprints pronounceable, such as when comparing host keys on first use. Antti Huima designed the specification, and built in a checksum to detect transmission errors. But for a password generator, I initially ignored implementing the checksum, and just implemented "xVCVC-CVCVC-...-CVCVC-CVCVx", where "C" is a random consonant from the spec and "V" is a random vowel, also from the spec.
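As a rough illustration (not the generator's actual code), the checksum-less form can be sketched in Python using the vowel and consonant alphabets from the Bubble Babble spec:

import secrets

VOWELS = "aeiouy"
CONSONANTS = "bcdfghklmnprstvzx"

def babble_no_checksum(words=5):
    # Build an xVCVC-CVCVC-...-CVCVC-CVCVx string with random letters (no checksum).
    def pick(pattern):
        return "".join(secrets.choice(VOWELS if c == "V" else CONSONANTS) for c in pattern)
    first = "x" + pick("VCVC")
    middle = [pick("CVCVC") for _ in range(words - 2)]
    last = pick("CVCV") + "x"
    return "-".join([first] + middle + [last])

print(babble_no_checksum())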

But then I got to thinking: would it really be that big of a deal to implement Bubble Babble's checksum? Would it impact the character count for similar security margins? Would users even notice or care? It seemed obvious that the answer was to try it and see, so try it I did. For example, here is a Bubble Babble password from before I implemented the checksum: "xuduh-taren-rezyd-gefik-bixux", and here's one after implementing the checksum: "xuval-zoder-cykeh-lyvin-lyrax". The former has approximately 78 bits of security, but its check will fail, while the latter has 72 bits with a valid check. Both are five pronounceable pseudowords.

However, the check isn't easily identifiable, as it's integrated throughout the entire string, and calculating the check is rather involved. One noticeable identifier is whether Bubble Babble is encoding an odd or an even number of bytes. If the number of bytes is even, then the format of the string will have a third "x": "xVCVC-CVCVC-...-CVCVC-CVxVx". That doesn't mean every even-byte-encoded Bubble Babble string with three "x"s has a valid check, but an even number of encoded bytes with a valid check will always produce three "x"s. Regardless, stripping off the checksum isn't really possible, since it's tightly integrated with the full string.

Okay. Now that it's implemented in Bubble Babble, is there any practical value to it? I could only think of one possible scenario.

A Scenario With Tightly Controlled Authentication

Suppose an organization has a credential management system (CMS) that requires employees to use its built-in password manager and password generator. The password generator generates Bubble Babble passwords with valid checksums. When a new employee is hired and their account is set up, the CMS generates a random Bubble Babble password, hashes it, and stores it on disk for authentication. If at any point the employee wants to change their password, the CMS prevents them from supplying their own password, and they must use the built-in generator.

When staff authenticate, all client-side software checks the password for a valid checksum before sending it to the authentication server. If the check fails, the user has entered their password incorrectly, and must try again. If the check succeeds, the software sends the password to the authentication servers for hashing and verification.

The employee could attempt to bypass the client-side check by sending the password to the authentication server directly, but that would be pointless: if the password hash fails to verify at the authentication server, the employee still has to retype their password.

Okay, but why? Well, assuming the organization is using a best-practice password hashing function with an appropriate cost factor, authentication is expensive. People frequently mistype passwords, and that server-side cost can be mitigated with client-side checksum validation.

But shouldn't users just copy-paste their passwords from the password manager? Absolutely yes they should. The most secure password is the one you don't know. However, there may be scenarios where pasting the password out of the CMS isn't practical, such as hooking up a crash cart to an unresponsive server or logging into your workstation when first getting into the office.

More Checksums

So if there is value here, are there other places where I've implemented a binary-to-text encoding scheme as a password generator that has a formally defined checksum in its specification? Yes, Crockford's Base32 and Bitcoin's BIP39.

In the case of Crockford's Base32, the check symbol extends the base-32 character set with five additional symbols, for 37 in total, and the check is the encoded value modulo 37. It's rather trivial.
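Sketched in Python, assuming Crockford's published alphabet and his five extra check symbols:

# Crockford's 32 encoding symbols plus 5 extra check symbols = 37 total.
SYMBOLS = "0123456789ABCDEFGHJKMNPQRSTVWXYZ*~$=U"

def crockford_check_symbol(data):
    # The check symbol is the encoded value modulo 37.
    value = int.from_bytes(data, "big")
    return SYMBOLS[value % 37]

print(crockford_check_symbol(b"hello"))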

In the case of Bitcoin's BIP39, the bytes (which must be a multiple of 4 bytes) are hashed with SHA-256, and the leading bits of the digest are appended to the original entropy bytes to make the final bitstring a multiple of 11 bits, which is then converted to words and presented as a mnemonic, the final word being the "check word".
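A sketch of just that checksum step (no word list handling here), following the entropy-length rule from the spec:

import hashlib

def bip39_bits_with_checksum(entropy):
    # Append the first len(entropy)*8/32 bits of SHA-256(entropy) to the entropy bits.
    if len(entropy) % 4 != 0:
        raise ValueError("entropy must be a multiple of 4 bytes")
    ent_bits = len(entropy) * 8
    checksum_len = ent_bits // 32
    digest = hashlib.sha256(entropy).digest()
    bits = bin(int.from_bytes(entropy + digest, "big"))[2:].zfill(ent_bits + 256)
    return bits[:ent_bits + checksum_len]   # every 11 bits indexes one mnemonic word

For 32 bytes of entropy this yields 264 bits, or 24 words, with the final word carrying the 8 checksum bits.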

Screenshots

Below are some screenshots of the current state of affairs with the Bitcoin, Bubble Babble, and Base32 generators when at least 70 bits of security is required. The styling and such in each container might change as the project matures, such as "Integrated checksum" in the lower right-hand corner, but the checksum will remain.

Prior Work - Letterblock Diceware

While I did start thinking about this independently, there is prior work that should be acknowledged. I just discovered it last night while doing web searches for password and passphrase generators with checksums. It was partly that discovery, actually, that led to the creation of this post.

On August 12, 2020, Arne Babenhauserheide created Letterblock Diceware as an approach to physically and practically carrying Diceware with you. Unfortunately, Diceware ships 7,776 words with indices, and at best that's several pages of printed paper plus 5 dice, which isn't practical to carry around. So he created a 6x6 table of "pronounceable" and "memorable" digits, letters, and bigrams that can fit on a business card. Roll 2d6, one die for the row and the other for the column, and record the intersection for your password character. Four 2d6 rolls create a "block" worth about 20 bits of security. Four blocks produce about 80 bits of security as a result.

Then, as a weak checksum, add the row numbers of two consecutive blocks modulo 4, and insert the resulting character between the two blocks. For example, if you rolled two blocks as {col, row}:

  • {2,1}: A
  • {6,3}: t
  • {2,1}: A
  • {3,5}: U
  • {1,4}: 48
  • {2,1}: A
  • {2,4}: FK
  • {3,3}: N

Then your blocks are "AtAU" and "48AFKN" (or "4AFN" if you prefer). The rows however are "1, 3, 1, 5, 4, 1, 4, & 3". Adding these up modulo 4 returns (1+3+1+5+4+1+4+3) % 4 = 2, which yields "-" for the check. Thus, the resulting password would be "AtAU-4AFN" (I would have done modulo 6 instead. Then the check is uniform, and it could be printed as one more column on the card).
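The separator calculation itself is tiny. Here's a sketch; the mapping from check value to separator character is hypothetical (only index 2, "-", is taken from the worked example above):

SEPARATORS = [".", "+", "-", "="]   # hypothetical mapping of check value -> separator

def block_separator(row_values):
    # row_values: the eight 2d6 row results from two consecutive blocks.
    return SEPARATORS[sum(row_values) % 4]

print(block_separator([1, 3, 1, 5, 4, 1, 4, 3]))   # prints "-"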

He also mentions the same scenario that I did in this post as well (emphasis mine):

Letterblock passwords use 55 letters that are unambiguous in handwriting and safe to use in URLs, grouped in blocks of four letters to make them easier to remember, with separators that work as weak checksum to catch many typing errors before even sending the password to the server, with weak optimization for legibility by creating 8 passwords and choosing the one with bigrams that are closest to regular prose.

It's worth noting that upstream Diceware also ships tables to be used with dice, although they're not designed for memorability.

Conclusion

There you have it. Three password generators in my web-based password generation project now ship with checksums, without reducing security or degrading the end-user experience. I haven't made an actual release yet, as there is a bit more work I want to do prior to that. However, I'm sure there are other scenarios where passwords with checksums have value, as authentication is ubiquitous, and I couldn't possibly list every possible authentication scenario. Play around with it and let me know what you think. I would be very interested in your feedback.

Introducing Deckware - A 224-bit entropy extractor

Introduction

I can't believe that it's been almost 3 years since my last blog post. Interestingly enough, that was on a deterministic card shuffle that I decided to call "Ouroboros". Well, this post is also about a deterministic algorithm with a deck of playing cards, but rather than shuffling the deck, we'll be extracting the entropy out of it.

The algorithm is called Deckware. I would have called it "Pokerware", but it was already taken by Chris Wellons. I could have called it "Solitaireware", and it does have a sort of ring to it, but I didn't want to confuse people with the Solitaire Cipher by Bruce Schneier. I debated calling it "Bridgeware", but I fear that the Bridge card game is a fringe game enjoyed only by old ladies in nursing homes drinking lemonade, and most people wouldn't get it. Ultimately, the randomness extractor is working through the whole deck, so it makes sense to call it "Deckware", even if it does sound a bit like a construction company.

The thing to understand about this algorithm, however, is that it is not a generic passphrase generator like Diceware or Pokerware. As such, there is no word list provided with Deckware. Instead, it's a randomness extractor. It's designed such that you use your deck of playing cards as a random number generator, and the algorithm uniformly returns a 224-bit random number from that shuffle. Once you have those 224 bits of entropy, they're yours to do with as you wish:

  • Use it as a <= 224-bit cryptographic symmetric key.
  • Use it as a seed for a CSPRNG, such as reseeding your kernel RNG.
  • Use it for election auditing, lottery drawing, or randomized drug samples.
  • Convert the hexadecimal to a 14-word Niceware passphrase.

When you need a lot of randomness, Deckware might work, although it's not particularly fast.

Lehmer Code

The basis of Deckware is Lehmer code. Lehmer code is a factoradic algorithm for converting any specific permutation of a set to an integer. To understand how this works, let's first look at the positional number system we're all familiar with.

In decimal, which we use every day, we're all familiar with the "ones" place, the "tens" place, "hundreds" place, "thousands" place, etc. So a number like "3481" is 3*1000 + 4*100 + 8*10 + 1*1, right? Simple enough.

Factoradic systems are a way to represent an integer as the sum of multiples of factorials. Instead of a decimal number system (or binary, octal, hexadecimal, etc.), it's a factorial number system. If I take my previous example of "3481", I know that 6! = 720 and 7! = 5040, so the largest factorial that fits is 6!. Thus, 3481/6! = 4 remainder 601, and 601/5! = 5 remainder 1. Thus, 3481 = 4*6! + 5*5! + 1*1!.
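Here's a small sketch of that conversion in Python, for the curious:

def to_factoradic(n):
    # Digits d_k ... d_1 such that n = d_k*k! + ... + d_1*1!, with 0 <= d_i <= i.
    digits = []
    divisor = 2
    while n:
        n, r = divmod(n, divisor)
        digits.append(r)
        divisor += 1
    return digits[::-1]

print(to_factoradic(3481))   # [4, 5, 0, 0, 0, 1], i.e. 4*6! + 5*5! + 0*4! + 0*3! + 0*2! + 1*1!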

Okay, but how do you do that with a permutation? Let's say we have a box with numbered chits 1, 2, & 3. How many permutations (order matters) are there? Well, we know it's 3! = 6. We could list them all quite easily:

  • 1, 2, 3
  • 1, 3, 2
  • 2, 1, 3
  • 2, 3, 1
  • 3, 1, 2
  • 3, 2, 1

Lehmer code converts each unique sequence to an integer. It does this by starting with the left-most value and counting the values less than it to its right. So, starting with the first permutation of "1, 2, 3", "1" is our left-most value, and there are no values to its right that are less than 1. So its multiplier for this factorial is "0". Next we move to the second value, which is "2". Again, there are no values to its right that are less than 2, so its multiplier is also "0". Finally, the last value has no values to its right, so its multiplier is "0". This is always the case for the right-most value in Lehmer code. So for "1, 2, 3", our Lehmer code would be 0*2! + 0*1! + 0*0! = 0. If we look at our second permutation of "1, 3, 2", applying Lehmer code, we get 0*2! + 1*1! + 0*0! = 1.

Let's complete the list:

  • 1, 2, 3 = 0*2! + 0*1! + 0*0! = 0
  • 1, 3, 2 = 0*2! + 1*1! + 0*0! = 1
  • 2, 1, 3 = 1*2! + 0*1! + 0*0! = 2
  • 2, 3, 1 = 1*2! + 1*1! + 0*0! = 3
  • 3, 1, 2 = 2*2! + 0*1! + 0*0! = 4
  • 3, 2, 1 = 2*2! + 1*1! + 0*0! = 5

Deckware uses Lehmer code, but with 52! permutations instead of 3! like our example above.
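That counting procedure is straightforward to sketch in Python:

from math import factorial

def lehmer_index(perm):
    # Map a permutation to its Lehmer code integer (0-based).
    n = len(perm)
    index = 0
    for i, value in enumerate(perm):
        # Count the values to the right that are smaller than this one.
        smaller = sum(1 for v in perm[i + 1:] if v < value)
        index += smaller * factorial(n - 1 - i)
    return index

print(lehmer_index([1, 2, 3]), lehmer_index([3, 2, 1]))   # 0 5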

Playing Card Permutations

Knowing that there are 52 unique cards in a standard Poker or Bridge deck of playing cards, then we know there are 52! order permutations. 52! has 68 decimal digits. Converting to binary bits yields log2(52!) ~= 225.581. In case you forgot, it would take all the energy from a hypernova captured by a Dyson sphere to count from 0 to ~2^227. In all likelihood, any given sufficiently shuffled deck ordering has never existed before.

But how do you do the math? How do you compare the inequality of the Ace of Spades to the Ten of Diamonds, for example? To do this, we need to make some numerical assignments. We're going to use Bridge order for suits, and treat Ace as low, King as high. As such, we get:

  • Clubs: Ace - King = 1 - 13
  • Diamonds: Ace - King = 14 - 26
  • Hearts: Ace - King = 27 - 39
  • Spades: Ace - King = 40 - 52

Now that we have these numerical assignments, we can trivially do our inequality comparisons to build our Lehmer code. But we have a snag. Because the permutation space is larger than 225 bits but not quite 226 bits, we can't use the full space, or we'll end up with a biased extractor. As such, we need to discard anything larger than 2^225-1 (because we start counting with 0). So, when we compute our Lehmer code, if the value is 2^225 or greater, it's ignored, and the user needs to reshuffle the deck. Otherwise, we return the lower 224 bits of the extracted result to the user.
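Putting that together with the lehmer_index() sketch above, the extraction can be outlined like this (the actual tool is a JavaScript web page, so this is only illustrative):

def deckware_extract(deck):
    # deck: the shuffled deck as 52 distinct numbers, Clubs A-K = 1-13,
    # Diamonds = 14-26, Hearts = 27-39, Spades = 40-52.
    assert sorted(deck) == list(range(1, 53))
    index = lehmer_index(deck)        # an integer in [0, 52!)
    if index >= 2**225:
        return None                   # biased region: reshuffle and try again
    return index & (2**224 - 1)       # the low 224 bits, uniformly distributed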

However, 2^225 is approximately 67% of 52!. This means that on average, you will have to reshuffle the deck about 33% of the time to prevent getting a biased result, or about 1 out of every 3 shuffles will be discarded. It's really unfortunate that it couldn't be better, but it is what it is.

Deckware versus Pokerware: FIGHT!

I think it's worth mentioning how Deckware compares to Pokerware and when you would want to choose one over the other, seeing as though they are both using a deck of playing cards as a source of randomness.

First off, as already mentioned, Deckware does not ship a word list. Technically speaking, Deckware is not a passphrase generator. It's an entropy extractor. This means that you need to bring your own word list to the Deckware table. By comparison, Pokerware provides both formal and slang word lists as part of the project.

Second, Pokerware can be executed trivially without any computing or calculating device. All you need is a deck of cards and a printed off indexed word list. To be fair, I don't think anyone actually keeps a printed off word list of Pokerware, or Diceware for that matter, with them, except for maybe the inventors themselves. I'm guessing most, if not all, are using a computer to generate the Diceware or Pokerware passphrase.

Deckware on the other hand could be executed 100% with a pencil and paper, but it would be painful and incredibly slow. That's something you would make inmates in prison do when they need something to do. I mean, this is essentially what it would take:

  1. Count inequalities for every card placement in the list.
  2. Find the Lehmer code using the factorial number system.
  3. Convert to base 16.

Yeah, no, I'll pass. I'll stick with the tool. However, Deckware has a couple of advantages over Pokerware that might be worth considering.

First, with Pokerware, the deck needs to be reshuffled after every draw. As the shuffling research tells us, that's at least 7 riffle shuffles. That's 7 full deck shuffles for every passphrase word. At 6 words, that's a total of 42 shuffles you've performed on the deck. Deckware only requires 7 shuffles, and odds are 2 out of 3 that you won't need to try again.

Second, for those 42 shuffles with Pokerware, you only returned 74 bits of security. For Deckware's 7 shuffles, you were able to extract 224 bits! That's a significant return for the cost, making it far more efficient.

In summary:

Pokerware:

  • Advantages:
    • Provides two word lists.
    • Simple and clean to execute.
    • Can be executed without a computer.
    • Stands on its own as a unique tool.
  • Disadvantages:
    • Cumbersome shuffling per generated word.
    • More time costly for similar security margins.

Deckware:

  • Advantages:
    • Maximizes deck entropy.
    • Small time commitment.
    • Can be used for security solutions other than passphrases.
  • Disadvantages:
    • Does not provide a word list.
    • Might be difficult to independently audit.
    • Requires a computer.
    • Can be replaced with SHA-224.

I think that last disadvantage actually speaks volumes. In the past, I would shuffle the deck, record the results, and hash it with SHA-224. That's perfectly acceptable, and I won't blame you for that approach. Even though using SHA-224 to hash your deck order is technically biased, the bias isn't significant enough to reduce security in practical terms, and so long as SHA-2 remains secure, you can't identify a biased result from an unbiased one.

Deckware is elegant in that not only is it uniform, it doesn't rely on any cryptographic primitives. It's just factorial math. This means you can trivially audit it for correctness. For example, extract the 224-bit hexadecimal string from an ordered deck, and it should return 0x00000000000000000000000000000000000000000000000000000000. Swap the King of Spades with the Queen of Spades, and it will return 0x00000000000000000000000000000000000000000000000000000001. This sort of vetting isn't accessible for SHA-2, although there is little to no reason to not trust its correctness.

I'm not going to say one is better than the other (Pokerware or Deckware), because as outlined, they have their own strengths and weaknesses. I have personally used Pokerware, and truth be told, I was adding it to my password and passphrase generator (how did I miss it?!), and it got me thinking: "how would I design a playing card algorithm without relying on cryptography?"

Deckware In Action

Here's a couple screenshots of an early release of the tool in action. Here, you can see the unshuffled deck on the "upper table". The suit symbols are emoji provided by OpenMoji. I added the text next to each suit using the DejaVu Serif font in Inkscape.

Here, I've dragged and dropped each card onto the lower table representing the shuffled deck. I'm not very good at JavaScript event listeners, so I shamelessly took the code from W3Schools. No doubt it could use some polish, but it works.

Notice that I've clicked the "Calculate unique deck ID" button to extract the entropy (maybe I should change that button text now that I'm thinking about it). I got "b08bd2f0720ade917b842ee1e721fe1c6ad00429e1155f9201b50d82" returned. This gives me a 14-word Niceware passphrase of "random sporran ironclad tare lifeful cromwell trekked wrigglier imprudence amenable thai hajj affectionately barratry".

After extracting the entropy out of the deck, you should thoroughly reshuffle the deck or place it back in sorted order, to destroy the key so the entropy cannot be re-extracted. You should also reload your web browser for the same reason. The tool is not using any persistent storage, but feel free to run the tool in a private browser window if you're paranoid.

Closing Thoughts

In practice, after shuffling the deck, I was able to record every card in the tool in 153 seconds, or around 2 to 3 minutes. That's not bad with drag-and-drop using the mouse, and I'm sure it can be improved with a keyboard event listener to type the cards in rather than using the mouse. Again though, I'm not proficient with JavaScript event listeners, so maybe someone can help me out here. Regardless, whether using this tool or SHA-224, the bulk of the time is spent recording the cards in the shuffled deck, so from my point of view, it's sixes. Pick your poison. You need a tool, one way or the other.

For the time being, I've got this opened in a tab in my browser. When I need a passphrase, I give the hexadecimal string to Niceware. 14 words is generally overkill for my usual password needs. Even dividing it in half into two 7-word passphrases gives me two 112-bit passphrases. Still overkill. But 5 words yields 80 bits of security, which is right on the money. I can get two 80-bit passphrases and one 64-bit passphrase out of a single shuffle. I'll have to see how this goes.

The Ouroboros Card Shuffle

Introduction

For the most part, I don't play a lot of table games, and I don't play party games. But occasionally, I'll sit down with my family and play a board game or card game. When we play a card game though, I get teased by how I shuffle the deck of cards.

I know that to maximize entropy in the deck, it should be riffle shuffled at least 7 times, but a better security margin would be around 10 shuffles, and after 12, I'm just wasting my time. But I don't always riffle shuffle. I'll also do various deterministic shuffles as well, such as the pile shuffle to separate cards from each other.

I'm familiar with a number of deterministic card shuffles, which I didn't even know had names:

  • Pile shuffle- Separate the cards into piles, one at a time, until exhausted, then collect the piles. I usually just do 4 or 5 piles, and pick up in order.
  • Mongean shuffle- Move the cards from one hand to the other, strictly alternating placing each card on top of and then beneath the previously discarded cards.
  • Mexican spiral shuffle- Discard the top card onto the table, and the second card to the bottom of the deck in your hand. Continue discarding all odd cards to the table and all even cards beneath the deck in hand until exhausted. I never do this shuffle, because it takes too long to execute.

In practice, when I'm playing a card game with my family, I'll do something like 3 riffle shuffles, a pile shuffle, 3 more riffle shuffles, a Mongean shuffle, 3 more riffle shuffles, another pile shuffle, then one last riffle shuffle. I'll get teased about it, of course: "Dad, I'm sure the cards are shuffled just fine. Can we just play now?", but when we play the game, I'll never hear complaints about how poorly the deck was shuffled.

This got me thinking though- there aren't that many simple deterministic blind card shuffles (I say "blind", because any of the playing card ciphers would work, but that requires seeing the cards, which is generally frowned upon when playing competitive card games). I wonder what else is out there. Well, doing some web searches didn't turn up much. In fact, all I could find were variations of the above shuffles, such as random pile discarding and pickup, but nothing new.

So the question then turned into- could I create my own simple deterministic card shuffle? It didn't take me long before I came up with what I call the "Ouroboros shuffle".

The Ouroboros Shuffle

Before going any further, let me state that I very much doubt I'm the first to come up with this idea, but I have searched and couldn't find where anyone else had documented it. If it does in fact exist, let me know, and I'll gladly give credit where credit is due. Until then, however, I'll stick with calling it the "Ouroboros Shuffle", named after the serpent or dragon eating its own tail.

The shuffle is simple:

  1. Holding the deck in your hand, discard the first card from the bottom of the deck to the table.
  2. Discard the top card of the deck to the discard pile on the table.
  3. Repeat steps 1 and 2, strictly alternating bottom and top cards until the deck is exhausted.

If the playing cards are plastic-based, like those from Kem or Copag, then you could "pinch" the top and bottom cards simultaneously, and pull them out of the deck in your hand to the table. If you do this perfectly, you will pinch 2 cards only 26 times. If they're paper-based though, this may or may not work as efficiently, due to cards having a tendency to stick together after heavy use.

If the deck was unshuffled as "1, 2, 3, ..., 50, 51, 52", then the first shuffle would look like this:

Step: 0
Unshuffled: 1, 2, 3, ..., 50, 51, 52
Shuffled: <none>

Step: 1
Unshuffled: 1, 2, 3, ..., 49, 50, 51
Shuffled: 52

Step: 2
Unshuffled: 2, 3, 4, ..., 49, 50, 51
Shuffled: 1, 52

Step: 3
Unshuffled: 2, 3, 4, ..., 48, 49, 50
Shuffled: 51, 1, 52

Step: 4
Unshuffled: 3, 4, 5, ..., 48, 49, 50
Shuffled: 2, 51, 1, 52

Step: 5
Unshuffled: 3, 4, 5, ..., 47, 48, 49
Shuffled: 50, 2, 51, 1, 52

Step: 6
Unshuffled: 4, 5, 6, ..., 47, 48, 49
Shuffled: 3, 50, 2, 51, 1, 52

....

Step 50:
Unshuffled: 26, 27
Shuffled: 25, 28, 24, ..., 51, 1, 52

Step 51:
Unshuffled: 26
Shuffled: 27, 25, 28, ..., 51, 1, 52

Step 52:
Unshuffled: <none>
Shuffled: 26, 27, 25, ..., 51, 1, 52

As you can see, the top and bottom cards are always paired together in the unshuffled deck, and discarded as a pair to the shuffled deck. The top and bottom cards could also be thought of as the head and tail of a list, and thus why I called it the Ouroboros shuffle.

If you execute this algorithm perfectly from an unshuffled deck, it will take 51 rounds before the deck is restored to its unshuffled state.

Observations

Almost immediately, I noticed a bias. It doesn't matter how many times I execute this algorithm, the bottom card will always remain on the bottom. In the above example, the King of Spades (if assigned the value of "52") will stay at the bottom of the deck, due to the nature of the shuffle of discarding the bottom card first. So I recognized that I would need to cut at least 1 card from the top of the deck to the bottom of the deck before the next round of the shuffle, to ensure the bottom card gets mixed in with the rest of the deck.

Other questions started popping up, specifically:

  • How many perfect shuffles will it take to restore the deck to an unshuffled state now?
  • Is there a different bias hidden after cutting the top card to the bottom?
  • What if I cut 2 cards? 3 cards? 51 cards?

Whelp, time to code up some Python, and see what pops out. What I'm looking for is what the state of the deck looks like after each round. In other words, I want to know which card occupies which positions in the deck. For example, does the Seven of Clubs see all possible 52 positions in the deck? Without the cut, we know that's not possible, because the bottom card stubbornly stays in the bottom position.

Typing up a quick script and graphing with Gnuplot gave me the following images. The first image on the left is the Ouroboros shuffle with no cuts, where the right image is the Ouroboros shuffle followed by cutting the top card to the bottom of the deck as the end of the round. Click to enlarge.

What you're looking at is the card position in the deck along the X-axis and the card value along the Y-axis. In the left image, where the Ouroboros shuffle is executed without any following cuts, the 52nd card in the deck is always the face value of 52. But in the right image, where the Ouroboros shuffle is followed by cutting one card from the top of the unshuffled deck to the bottom, every card position sees every face value.

So what would happen if, instead of cutting 1 card off the top to the bottom at the end of each round, I cut 2 cards, 3 cards, etc., all the way to cutting 51 cards off the top to the bottom? Well, more Python scripting, and I generated a total of 52 images showing every possible position a card occupies in the deck until the deck returns to its unshuffled state.

Visualizations of the Ouroboros card shuffle with cuts

Interestingly enough, executing the Ouroboros shuffle followed by cutting 19 cards, leads to a cycle length of 6,090 perfect shuffles before restoring the deck back to its unshuffled state. Awesome! Except, as you can see in the Imgur post above, it's extremely biased.

Every shuffle-and-cut is listed here with its cycle length:

Cut: 0, iter: 51
Cut: 1, iter: 52
Cut: 2, iter: 51
Cut: 3, iter: 272
Cut: 4, iter: 168
Cut: 5, iter: 210
Cut: 6, iter: 217
Cut: 7, iter: 52
Cut: 8, iter: 418
Cut: 9, iter: 52
Cut: 10, iter: 24
Cut: 11, iter: 350
Cut: 12, iter: 387
Cut: 13, iter: 252
Cut: 14, iter: 1020
Cut: 15, iter: 144
Cut: 16, iter: 1972
Cut: 17, iter: 34
Cut: 18, iter: 651
Cut: 19, iter: 6090
Cut: 20, iter: 175
Cut: 21, iter: 90
Cut: 22, iter: 235
Cut: 23, iter: 60
Cut: 24, iter: 2002
Cut: 25, iter: 144
Cut: 26, iter: 12
Cut: 27, iter: 50
Cut: 28, iter: 24
Cut: 29, iter: 10
Cut: 30, iter: 44
Cut: 31, iter: 72
Cut: 32, iter: 297
Cut: 33, iter: 90
Cut: 34, iter: 45
Cut: 35, iter: 132
Cut: 36, iter: 12
Cut: 37, iter: 210
Cut: 38, iter: 207
Cut: 39, iter: 104
Cut: 40, iter: 420
Cut: 41, iter: 348
Cut: 42, iter: 30
Cut: 43, iter: 198
Cut: 44, iter: 35
Cut: 45, iter: 140
Cut: 46, iter: 390
Cut: 47, iter: 246
Cut: 48, iter: 28
Cut: 49, iter: 12
Cut: 50, iter: 36
Cut: 51, iter: 30

The only "shuffle then cut" rounds that are uniform appear to be cutting 1 card, 7 cards, and 9 cards. The other 49 shuffles are biased in one way or another, even if each of them have different cycle lengths.

Here's the Python code I used to create the shuffled lists, each into their own comma-separated file:

#!/usr/bin/python

def step_1(deck):
    # One Ouroboros shuffle: strictly alternate discarding the bottom card,
    # then the top card, onto the discard pile (index 0 is the top of the pile).
    tmp = []
    for card in range(26):
        tmp.insert(0, deck[-1])
        deck.pop(-1)
        tmp.insert(0, deck[0])
        deck.pop(0)
    return tmp

def step_2(deck, cut):
    # Cut "cut" cards from the top of the deck to the bottom.
    return deck[cut:] + deck[:cut]

orig = [_ for _ in range(1, 53)]
deck = [_ for _ in range(1, 53)]

for i in range(52):
    with open("cut_{}.csv".format(i), "w") as f:
        f.write(",".join(map(str, deck)) + "\n")
    deck = step_1(deck)
    deck = step_2(deck, i)
    n = 1
    # Keep shuffling and cutting until the deck returns to its original order,
    # counting the number of rounds (the cycle length).
    while deck != orig:
        with open("cut_{}.csv".format(i), "a") as f:
            f.write(",".join(map(str, deck)) + "\n")
        deck = step_1(deck)
        deck = step_2(deck, i)
        n += 1
    print("Cut: {}, iter: {}".format(i, n))

Conclusion

This was a fun shuffle to play with and one that I'll incorporate into my card playing with my family. Now I could do something like: 3 riffle shuffles, Ouroboros shuffle with 1 card cut, 3 riffle shuffles, Mongean shuffle, 3 riffle shuffles, pile shuffle, 1 riffle shuffle, and I will be happy.

Latin Squares, Mathematics, and Cryptography

Introduction

Recently, I've been studying Latin squares and their role in classical cryptography, including the one-time pad. Latin squares are NxN grids in which no element is duplicated within a row and no element is duplicated within a column. The popular Sudoku game is a puzzle that requires building a Latin square.

As I delved deeper and deeper into the subject, I realized that there is a rich history here that I would like to introduce you to. Granted, this post is not an expansive nor exhaustive discussion on Latin squares. Rather, it's meant to introduce you to the topic, so you can look into it on your own if this interests you.

In each of the sections below, the "A" and "N" characters are highlighted in the table image to demonstrate that the table is indeed a Latin square. Further, you can click on any table image to enlarge.

Tabula Recta

The Tabula Recta is the table most people are probably familiar with, recognizing it as the Vigenère table. However, the table was first used by the German author and monk Johannes Trithemius in 1508, where it formed the basis of his polyalphabetic Trithemius cipher. This was a good 15 years before Blaise de Vigenère was even born, 43 years before Giovan Battista Bellaso wrote about his cipher using the table in his 1553 book "La cifra del. Sig. Giovan Battista Bellaso", and 78 years before Blaise de Vigenère improved upon Bellaso's cipher.

Today, we know it as either the "tabula recta" or the "Vigenère table". Regardless, each row shifts the alphabet one character to the left, creating a series of 26 Caesar cipher shifts. This property of the shifted alphabets turns out to be a weakness of the Vigenère cipher: if a key repeats, we can take advantage of the Caesar shifts to discover the key length, then the key, and finally break the ciphertext.
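To make the construction concrete, here's a small sketch of my own building the table and using it the Vigenère way (row selected by the key letter, column by the plaintext letter):

import string

A = string.ascii_uppercase

# Each row shifts the alphabet one position further left: 26 Caesar shifts.
tabula_recta = [A[i:] + A[:i] for i in range(26)]

def vigenere_encrypt(plaintext, key):
    key_stream = (key * len(plaintext))[:len(plaintext)]
    return "".join(tabula_recta[A.index(k)][A.index(p)] for p, k in zip(plaintext, key_stream))

print(vigenere_encrypt("ATTACKATDAWN", "LEMON"))   # LXFOPVEFRNHR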

Jim Sanborn integrated a keyed tabula recta into his Kryptos sculpture in the 2nd and 4th panels. Even though the first 3 passages in the Kryptos sculpture have been cracked, the 4th passage remains a mystery.

Beaufort Table

More than 250 years later, Rear Admiral Sir Francis Beaufort modified the Vigenère cipher by using a reciprocal alphabet and changing the way messages were encrypted. Messages were still encrypted with a repeating key, similar to the Vigenère cipher, but the plaintext character was located in the first column and the key in the first row. The intersection was the ciphertext. This became the Beaufort cipher.

His reasoning for using a different table and changing the enciphering process isn't clear. It may have been as simple as knowing the concepts behind the Vigenère cipher without knowing the specific details. He may have had other reasons.

One thing to note, however, is that Vigenère-encrypted ciphertexts cannot be decrypted with a Beaufort table and vice versa. Even though the Beaufort cipher suffers from the same cryptanalysis, the Caesar shifts are different, and the calculation if using numbers instead of letters is also different.

The Beaufort table was integrated into a hardware encryption machine called the Hagelin M-209. The M-209 was used by the United States military during WWII and through the Korean War. The machine itself was small and compact, about the size of a lunchbox and weighing only 6 pounds, which was remarkable for the time.

One thing to note is that the Beaufort table has "Z" in the upper-left corner, with the reciprocal alphabet in the first row and first column, as shown in the image above. Any table that claims to be the Beaufort table but is not exactly as shown above is not correct.

NSA's DIANA Reciprocal Table

Of course, the narcissistic NSA needs their own polyalphabetic table! We can't let everyone else be the only ones who have tables! I'm joking of course, as there is a strong argument for using this reciprocal table rather than the Beaufort.

Everyone is familiar with the one-time pad, a proven theoretically unbreakable cipher if used correctly. There are a few ways to use the one-time pad, such as XOR or modular addition and subtraction. Another approach is to use a lookup table. The biggest problem with the tabula recta when using the one-time pad by hand is that it's easy to look up the wrong row or column and introduce mistakes into the enciphering process.

However, due to the reciprocal properties of the "DIANA" table (don't you love all the NSA codenames?), encryption and decryption are identical, which means they require only a single column. A key "row" is no longer needed, and the order of plain, key, and cipher letters doesn't matter (Vigenère vs Beaufort) and may even differ for sender and receiver. Just like with Beaufort, this table is incompatible with Vigenère-encrypted ciphertexts. Further, it's also incompatible with Beaufort-encrypted ciphertexts, especially if it's a one-time pad. The Beaufort table shifts the alphabet to the right, while the DIANA table shifts the alphabet to the left. The tabula recta also shifts left.

Let's make one thing clear here- this table was created strictly for ease of use, not for increased security. When using the one-time pad, the key is at least the length of the message, which means it doesn't repeat. So it doesn't matter that the table is 26 Caesar-shifted alphabets. That property won't show itself in one-time pad ciphertexts.

E.J. Williams' Balanced Tables

Stepping away from cryptography for a moment, and entering the world of mathematics, and in this case, mathematical models applied to farming, we come across E.J. Williams' balanced tables. Note how the "A" and "N" characters are populated throughout the table compared to what we've seen previously.

The paper models chemical treatments applied to crops over a span of time, and how to approach the most efficient means of applying those treatments. The effect of the previous treatment, called the "residual effect", is then analyzed. A method based on a "balanced" Latin square is discussed. It is then applied to multiple farming sites and analyzed.

Now, I know what you're thinking- "Let's use this for a cipher table!". Well, if you did, and your key repeated throughout the message, the ciphertext would not exhibit Caesar-shifted characteristics like Vigenère and Beaufort. However, the table is still deterministic, and as such, knowing how the table is built will give cryptanalysts the edge necessary to still break Williams-encrypted ciphertexts.

Michael Damm's Anti-Symmetric Quasigroups of Order 26

Also in the world of mathematics are quasigroups. These are algebraic structures whose binary operation is total and invertible (every equation has a unique solution), but not necessarily associative. Michael Damm researched quasigroups as the basis for an integrity checksum, such as calculating the last digit of a credit card number. But not only did he research quasigroups, he researched anti-symmetric quasigroups. Anti-symmetry is a set algebra concept: if "(c*x)*y = (c*y)*x" implies "x = y", the quasigroup is anti-symmetric. Put another way, whenever "x != y", then "(c*x)*y != (c*y)*x".

Michael Damm, while researching checksums, introduced us to anti-symmetric quasigroups. One property was required, and that was that the main diagonal was "0", or "A" in our case. The Damm algorithm creates a checksum, such that when verifying the check digit, the result places you on the main diagonal, and thus returns "0". Note that any quasigroup can be represented by a Latin square.

Due to the nature of the Damm algorithm as a checksum, a quasigroup of order 26, as shown above, could be used to verify the integrity of a plaintext message before encrypting it. The sender could calculate the checksum of his plaintext message, and append the final character to the plaintext before encrypting. The recipient, after decrypting the message, could then run the same Damm checksum algorithm against the full plaintext message. If the result is "A", the message wasn't modified.
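The checksum procedure itself is short. Here's a sketch; the 26x26 anti-symmetric quasigroup table is assumed to be supplied (building a valid one is the hard part):

def damm_check_letter(message, table):
    # table: 26x26 anti-symmetric quasigroup with "A" (0) along the main diagonal.
    interim = 0
    for ch in message.upper():
        interim = table[interim][ord(ch) - ord("A")]
    return chr(interim + ord("A"))

def damm_verify(message_with_check, table):
    # A message with a valid check letter appended reduces to "A".
    return damm_check_letter(message_with_check, table) == "A"

The sender would append damm_check_letter(plaintext, table) before encrypting, and the recipient would call damm_verify() after decrypting.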

Notice in my image above, that while "A" rests along the main diagonal, the rest of the alphabets are randomized, or at least shuffled. It really isn't important how the alphabets are created, so long as they meet the requirements of being an anti-symmetric quasigroup.

Random Tables

Finally, we have randomized Latin squares. These are still Latin squares, such that for any element in a row, it is not duplicated in that row, and for any element in a column, it is not duplicated in that column. Other than that, however, there is no relationship between rows, columns, or elements. Their use is interesting in a few areas.

First, suppose I give you a partially filled Latin square as a "public key", with instructions on how to encrypt with it. I could then use my fully filled Latin square as the "private key", of which the public key is a subset. Using this private key, with some other algorithm, I could then decrypt your message. It turns out that filling in a partially filled Latin square is NP-complete, meaning we don't currently know of any polynomial-time algorithm that can complete the task. As such, this builds a good foundation for public key cryptography, as briefly outlined here.

Further, because of the lack of any structure in a randomized Latin square, aside from the requirements of being a Latin square, these make good candidates for symmetric message authentication code (MAC) designs. For example, a question on the cryptography StackExchange asked if there was any humanly-verifiable way to add message authentication to the one-time pad. The best answer suggested using a circular buffer as a signature, which incorporates the key, the plaintext, modular addition, and the Latin square. By having a randomized Latin square as the foundation for a MAC tag, no structure is present in the authenticated signature itself. Note, the table can still be public.

Steve Gibson incorporated Latin squares into a deterministic password manager. Of course, as with all deterministic password managers, there are some fatal flaws in their design. Further, his approach, while "off the grid", is rather cumbersome in execution. But it is creative, and certainly worth mentioning here as a randomized Latin square.

Conclusion

Latin squares have fascinated mathematicians for centuries, and in this post, we have seen their use in cryptography, mathematical modeling, data integrity, message authentication, and even password generation. This only briefly shows their potential.

Getting Up To 8 Possibilities From A Single Coin Toss

Introduction

Lately, I've been interested in pulling up some classical modes of generating randomness. That is, rather than relying on a computer to generate my random numbers for me, which is all too common and easy these days, I wanted to go offline and generate random numbers the classical way- coin flips, dice rolls, card shuffling, roulette wheels, bingo ball cages, paper shredding, etc.

In fact, if randomness interests you, I recently secured the r/RNG subreddit, where we discuss everything from random number generators, to hashing functions, to quantum mechanics and chaos to randomness extraction. I invite you to hang out with us.

Anyway, I was a bit bothered that all I could get out of a single coin toss was 2 outcomes- heads or tails. It seems like there just is no way around it. It's the most basic randomness mechanic, yet it seems so limiting. Then I came across the 18th-century mathematician Georges-Louis Leclerc, Comte de Buffon, who analyzed a common betting game of tossing a coin onto a tiled floor: did the coin land squarely in a tile without crossing any edges, or did it cross a tile edge?

Then it hit me- I can extract more entropy out of a coin toss by printing a grid on a piece of paper. So, I put this up on Github as a simple specification. So far, I have papers you can print in letter, ledger, A3, or A4 sizes, and coins for the United States, Canada, and the European Union.

The theory

Assume a tile has a length of "l" and is square, and assume a coin has a diameter "d" such that "d < l". In other words, the tile is larger than the coin, and there is a place on the tile where the coin can sit without crossing any edges of the tile.

This means that if the edge of the coin is tangent to the edge of the tile, then the center of the coin traces out a smaller square inside our tile. This smaller square tells us that if the center of the coin lands anywhere inside of that smaller square, the edges of the coin will not cross the edges of the tile.

Image showing 4 pennies in the corners of a tile, drawing a smaller center square

Now we know the coin diameter, but we would like to know the tile edge length, so we can draw our grid on paper to toss the coin onto. As such, we need to know the ratio of the area of the tile to the area of the smaller inner square drawn by the coin.

Know that the area of the tile with length "l" is:

A(tile) = l^2

The area of the smaller square inside the tile is determined by both the tile length "l" and the coin diameter "d":

A(inner_square) = (l-d)^2

So the ratio of the two is:

P = (l-d)^2/l^2

I use "P" for our variable, as this is the probability of where the center of the coin lands. We want our outcomes equally likely, so we want the center of the coin to land with 50% probability inside the inner square and 50% probability outside of the inner square.

1/2 = (l-d)^2/l^2

We know the diameter of the coin, so we just need to solve for the tile edge length. This is a simple quadratic equation:

1/2 = (l-d)^2/l^2
l^2 = 2*(l-d)^2
l^2 = 2*(l^2 - 2*l*d + d^2)
l^2 = 2*l^2 - 4*l*d + 2*d^2
0 = l^2 - 4*l*d + 2*d^2

If you remember from your college or high school days, the quadratic formula solution to the quadratic equation is:

x = (-b +/- Sqrt(b^2 - 4*a*c))/(2*a)

Plugging that in, we get:

l = (-(-4*d) +/- Sqrt((4*d)^2 - 4*1*(2*d^2)))/2
l = (4*d +/- Sqrt(16*d^2-8*d^2))/2
l = (4*d +/- Sqrt(8*d^2))/2
l = (4*d +/- 2*d*Sqrt(2))/2
l = d*(2 +/- Sqrt(2))

No surprise, we have 2 solutions. So, which one is the correct one? Well, we set the restriction that "d < l" earlier, and we can't break that. So, we can clearly see that:

l = d*(2 - Sqrt(2))
l = d*(Something less than 1)
l < d

So, this equation would mean that our coin diameter "d" is larger than our tile edge "l", which violates our restriction. As such, the solution to our problem for finding the tile edge length when we know the coin diameter is:

l = d*(2 + Sqrt(2))

Getting 4 outcomes from one coin toss

Now that we have the theory knocked out, let's apply it. I know that a United States penny has a diameter of 1.905 centimeters. As such, my grid edge length needs to be:

l = 1.905*(2+Sqrt(2))
l ~= 6.504 centimeters
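If you want to sanity check both the math and the 50/50 claim, here's a minimal Python sketch that computes the tile size for a penny and estimates the crossing probability by simulation, assuming the coin's center lands uniformly at random within a tile:

import math
import random

d = 1.905                     # US penny diameter in centimeters
l = d * (2 + math.sqrt(2))    # tile edge length, ~6.504 centimeters
print(round(l, 3))

# Drop the coin center uniformly inside one tile and check whether the
# center comes within d/2 of any tile edge (i.e., the coin crosses an edge).
trials = 1_000_000
crossed = 0
for _ in range(trials):
    x = random.uniform(0, l)
    y = random.uniform(0, l)
    if min(x, y, l - x, l - y) < d / 2:
        crossed += 1
print(crossed / trials)       # should hover around 0.5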

This means then that when I flip my coin onto the paper grid, there is now a 50% chance that the coin will cross a grid line and a 50% chance that it won't. But this result is orthogonal to whether the coin lands on heads or tails. As such, I have the following uniformly distributed outcomes:

  1. Tails, does not cross an edge = 00
  2. Tails, crosses any edge(s) = 01
  3. Heads, does not cross an edge = 10
  4. Heads, crosses any edge(s) = 11

If we do a visual check, whenever the coin center lands in the white square of the following image, the edges of the coin will not cross the edge of the square. However, if the coin center lands in the red area, then the edge of the coin will cross an edge or edges.

Grid square showing where the coin center must lie to cross an edge.

You can measure the ratio of the red area to the white area to convince yourself they're equal, although be careful- my image editor didn't allow me to calculate sub-pixel measurements when building this image.

Getting 8 outcomes from one coin toss

Alternatively, rather than treat the grid as a one-dimensional object (does the coin cross any grid edge or not), I can use the grid as an X/Y plane, and treat crossing an x-axis edge differently than crossing a y-axis edge. Combined with heads or tails, however, that only gives me 6 outcomes, and it is possible that the coin will land in a corner, crossing both the x-axis and y-axis simultaneously. So, I need to treat that separately.

Just like we calculated the odds of crossing an edge to be 50% before, the grid alone now gives me 4 outcomes:

  1. Does not cross an edge.
  2. Crosses an x-axis edge only.
  3. Crosses a y-axis edge only.
  4. Crosses both the x-axis and y-axis edges (corner).

As such, each outcome needs to be equally likely, or 25%. Thus, my problem now becomes:

1/4 = (l-d)^2/l^2

Just like we solved the quadratic equation for P=50%, we will do the same here. However, I'll leave the step-by-step as an exercise for the reader. Our solution becomes:

l = 2*d

Simple. The length of the grid edge must be twice the diameter of the coin. No square roots or distributed multiplication. Double the diameter of the coin, and we're good to go. However, this has a benefit and a drawback. The benefit is that the grid is more compact (2*d vs ~3.4*d), which reduces your ability to "aim" for a grid edge or not. The drawback is that you will have to make more ambiguous judgment calls on whether or not the coin crosses an edge (75% of the time vs 50% in the previous approach).

So, like the previous approach of getting 4 outcomes, we have a visual check. If the coin center lands anywhere in the white area of the following image, it won't cross an edge. If the coin center lands anywhere in the green area, the coin will cross an x-axis edge. If the coin center lands anywhere in the blue, it will cross a y-axis edge, and if the coin center lands anywhere in the red, it will cross both an x-axis and y-axis edge simultaneously.

Grid showing the areas the coin center must lie to cross an edge or corner.

As with the previous 4-outcome approach, we can convince ourselves this holds. The area of the white square should equal the area of the blue, as well as the area of the green, as well as the area of the red.

This means now we have 8 uniformly distributed outcomes:

  1. Tails, does not cross an edge = 000
  2. Tails, crosses the x-axis only = 001
  3. Tails, crosses the y-axis only = 010
  4. Tails, crosses both axes (corner) = 011
  5. Heads, does not cross an edge = 100
  6. Heads, crosses the x-axis only = 101
  7. Heads, crosses the y-axis only = 110
  8. Heads, crosses both axes (corner) = 111

How do you like them apples? 8 equally likely outcomes from a single coin toss.
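And if you'd like a numerical check of the whole 3-bit scheme, here's a minimal Python sketch that simulates tosses on the 2*d grid and tallies the 8 outcomes, again assuming the coin center lands uniformly within a tile:

import random
from collections import Counter

d = 1.0              # any coin diameter works; only the ratio matters
l = 2 * d            # grid edge length for the 3-bit scheme
half = d / 2

counts = Counter()
for _ in range(1_000_000):
    heads = random.getrandbits(1)              # the coin flip itself
    x = random.uniform(0, l)                   # coin center within one tile
    y = random.uniform(0, l)
    crosses_h = y < half or y > l - half       # crosses a horizontal edge
    crosses_v = x < half or x > l - half       # crosses a vertical edge
    # 3 bits: heads/tails, horizontal crossing, vertical crossing
    # (which edge direction gets which bit is just a labeling choice)
    counts[(heads << 2) | (crosses_h << 1) | crosses_v] += 1

for outcome in sorted(counts):
    print(f"{outcome:03b}: {counts[outcome]}")  # each bucket ~125,000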

As mentioned, one big advantage, aside from getting 3 bits per coin toss, is the smaller grid printed on paper. You have less opportunity to try to cheat the system and influence the results, but you also may have a more difficult time deciding if the coin crossed a grid edge.

On the left is the paper grid used for 2-bit coin toss extraction, while the paper grid on the right is used for 3-bit coin toss extraction.

Image showing the grid sizes for a 2-bit coin toss extraction. Image showing the grid sizes for a 3-bit coin toss extraction.

Some thoughts

Both fortunately and unfortunately, the coin will bounce when it lands on the paper. If the bounce causes the coin to bounce off the paper, all you can record is a single bit, or one of two outcomes- heads or tails- because it becomes ambiguous whether or not the coin would have crossed a grid edge, and which edge it would have crossed.

You could probably minimize the bouncing by putting the paper on something less hard than a table, such as a felt card table, a towel or rag, or even a carpeted floor. But this may also create unwanted bias in the result. For example, if the paper is placed on something too soft, such as a carpeted floor, the coin could damage the paper when it hits, possibly influencing future flips and, as a result, introducing bias into the system.

Surrounding the paper with "walls", such that the coin cannot pass the boundaries of the grid, may work, but the coin bouncing off the walls will also impact the outcomes. It seems clear that the walls would need to be placed immediately against the outer edge of the grid to avoid introducing any bias.

Additional approaches

This is hardly conclusive on extracting extra randomness from a coin flip. It is known that coins precess when flying through the air, and as part of that precession, the coin may be spinning like a wheel. This means that the coin could be "facing north" or "facing south" in addition to heads or tails. However, it's not clear to me how much spin exists in a standard flip, and if this could be a reliable source of randomness.

Further, in our 8-outcome approach, we used the center of the coin as the basis for our extra 2 bits, in addition to heads and tails. However, we could have made the grid edge length "4d" instead of "2d", and ignored the toss if the coin crosses both edges. This means a larger grid, which could improve your chances of aiming the coin in an attempt to skew the results, and also means sticking with smaller coins, as larger coins just won't leave room for many grid squares on a single sheet of paper.

Other ideas could include adding color to each grid square, so not only do we identify edge crossings, but we also use color as a third dimension to get up to 4 bits of randomness. Maybe the x-axis edges alternate between black and white areas, as do the y-axis edges and the corners, and the center of each grid square alternates between red and blue quadrants. Where the center of the coin lands determines the extracted bits. Of course, this would be a visually busy, and possibly confusing, paper.

None of these have been investigated, but I think each could be an interesting approach for extracting more bits out of a coin toss.

Conclusion

I think this is a refreshing approach to an age-old problem- the coin toss. Extracting 3 bits from a single flip is more entropy than a fair d6 die can produce (~2.58 bits). This means that, practically speaking, the coin is more efficient at entropy extraction than dice. However, you can roll multiple dice simultaneously, whereas it's more difficult to toss multiple coins simultaneously.

Acknowledgments

Thanks to Dr. Markku-Juhani O. Saarinen, Marsh Ray, and JV Roig for the discussions we had on Twitter, and for helping me flesh out the ideas.

Middle Square Weyl Sequence PRNG

Introduction

The very first software algorithm to generate random numbers was supposedly written in 1946 by John von Neumann, and is called the Middle Square Method. It's crazy simple- simple enough that you could execute it with a pencil, paper, and a basic calculator. In this post, I'm going to cover the method, its drawbacks, and a modern improvement called the Middle Square Weyl Sequence.

Middle Square Method

The algorithm is to start with an n-digit seed. The seed is squared, producing a 2n-digit result, zero-padded as necessary. The middle n-digits are then extracted from the result for the next seed. See? Simple. Let's look at an example.

Suppose my seed is 81 (2 digits). 81-squared is 6561 (4 digits). We then take the middle 2 digits out of the result, which is 56. We continue the process:

81^2 = 6561
56^2 = 3136
13^2 = 0169
16^2 = 0256
25^2 = 0625
62^2 = 3844
84^2 = 7056
5^2 = 0025
2^2 = 0004
0^2 = 0000

And we've reached the core problem with the middle square method- it has a tendency to converge, most likely to zero, but possibly to other numbers, and in some cases to a short loop. Of course, John von Neumann was aware of this problem, but he also preferred it that way. When the middle square method fails, it's immediately noticeable. But it's also horribly biased and fails most statistical tests for randomness.
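Here's a minimal Python sketch of the classic middle square method, which reproduces the sequence above:

def middle_square(seed, digits=2):
    """Square the seed, zero-pad to 2n digits, and take the middle n digits."""
    while True:
        yield seed
        squared = str(seed * seed).zfill(2 * digits)
        start = (len(squared) - digits) // 2
        seed = int(squared[start:start + digits])

gen = middle_square(81)
print([next(gen) for _ in range(11)])
# [81, 56, 13, 16, 25, 62, 84, 5, 2, 0, 0] -- converging to zero, as above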

Middle Square Weyl Sequence

A modern approach to this old problem is known as the Middle Square Weyl Sequence, which incorporates a Weyl sequence (named after Hermann Weyl). Basically, an ever-changing number is added to the square, then the middle bits are extracted from the result for the next seed. Let's first look at the C code, then I'll explain it in detail.

#include <stdint.h>

uint64_t x = 0, w = 0;

// Must be odd (least significant bit is "1"), and upper 32 bits non-zero
uint64_t s = 0xb5ad4eceda1ce2a9; // qualifying seed

// return 32-bit number
inline static uint32_t msws() {
    x *= x; // square the number
    w += s; // the weyl sequence
    x += w; // apply to x
    return x = (x>>32) | (x<<32); // return the middle 32-bits
}

Explanation

Okay. Let's dive into the code. This is a 32-bit PRNG using John von Neumann's Middle Square Method, starting with a 64-bit seed "s". As the notes say, it must be an odd number, and the upper 32 bits should be non-zero. It must be odd to ensure that "x" can be both odd and even. Recall- an odd plus an odd equals an even, and an odd plus an even equals an odd.

Note that at the start, "x" is zero, so squaring it is also zero. But that's not a problem, because we are adding "w", which has just had "s" added to it. The "w" variable changes on every iteration, while "s" remains static.

Finally, our return is a 32-bit number (because of the "uint32_t" return type), but we're doing some bit-shifting. Supposedly, this is returning the middle 32-bits of our 64-bit "x", but that's not immediately clear. Let's look at it more closely.

Example

Suppose "x = 0xace983fe671dbd09". Then "x" is a 64-bit number with the following bits:

1010110011101001100000111111111001100111000111011011110100001001

When that number is squared, it becomes the 128-bit number 0x74ca9e5f63b6047f6a65456d9da04a51, or in binary:

01110100110010101001111001011111011000111011011000000100011111110110101001100101010001010110110110011101101000000100101001010001

But remember, "x" is a 64-bit number, so in our C code, only the bottom 64-bits are returned from that 128-bit number. So "x" is really 0x6a65456d9da04a51, or in binary:

0110101001100101010001010110110110011101101000000100101001010001

But the bits "01101010011001010100010101101101" are the 3rd 32-bits of the 128-bit number that was the result of squaring "x" (see above). They are the "middle" 32-bits that we're after. So, we're going to do something rather clever. We're going to swap the upper 32-bits with the lower, then return the lower 32-bits.

Effectively, what we're doing is "ABCD" -> "CDAB", then returning "AB". We do this via bit-shifting. So, starting with:

0110101001100101010001010110110110011101101000000100101001010001

First, we bitshift the 64-bit number right 32-bits:

  0110101001100101010001010110110110011101101000000100101001010001 >> 32
= 0000000000000000000000000000000001101010011001010100010101101101

Then we bitshift "x" left 32-bits:

  0110101001100101010001010110110110011101101000000100101001010001 << 32
= 1001110110100000010010100101000100000000000000000000000000000000

Now we logically "or" them together:

 0000000000000000000000000000000001101010011001010100010101101101 |
 1001110110100000010010100101000100000000000000000000000000000000
|================================================================
 1001110110100000010010100101000101101010011001010100010101101101

See the swap? Now, due to the function's 32-bit return type, we return the lower 32-bits as our random number, which is 01101010011001010100010101101101, or 1785021805 in decimal. We've arrived at our goal.
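If you want to double check the walkthrough, here's a minimal Python sketch of just the square-and-rotate step (it deliberately skips the Weyl addition of "w" and "s", exactly as the walkthrough above does):

MASK64 = (1 << 64) - 1

x = 0xace983fe671dbd09
squared = (x * x) & MASK64                                # keep the lower 64 bits, like uint64_t
rotated = ((squared >> 32) | (squared << 32)) & MASK64    # swap the 32-bit halves

print(hex(squared))          # 0x6a65456d9da04a51
print(rotated & 0xffffffff)  # 1785021805, the "middle" 32 bits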

Conclusion

At the main website, the C source code is provided, along with 25,000 seeds, as well as C source code for the Big Crush randomness tests from TestU01. This approach passes Big Crush with flying colors on all 25,000 seeds. Something as simple as adding an odd 64-bit number to the square changes John von Neumann's approach so much, it becomes a notable PRNG.

Who said you can't teach an old dog new tricks?

Why The "Multiply and Floor" RNG Method Is Biased

I've been auditing a lot of JavaScript source code lately, and a common problem I'm seeing when generating random numbers is using the naive "multiply-and-floor" method. Because the "Math.random()" function call returns a number between 0 and 1, not including 1 itself, developers think that the "best practice" for generating a random number is as follows:


function randNumber(range) {
    return Math.floor(Math.random() * range); // number in the interval [0, range).
}

The problem with this approach is that it's biased. There are numbers returned that are more likely to occur than others. To understand this, you need to understand that Math.random() is a 32-bit RNG in Chrome and Safari, and a 53-bit RNG in Edge and Firefox. First, let's pretend every browser RNG is a 32-bit generator, then we'll extend it.

A 32-bit Math.random() means that there are only 2^32 = 4,294,967,296 possible values in the interval [0, 1). In other words, the interval [0, 1) is divided into steps of 1/2^32 = 0.00000000023283064365. But that fine granularity doesn't matter, because if I want a random number in [0, 100), 100 does not divide 4,294,967,296 evenly. I get 42,949,672 with 96 left over. What does this mean? It means that ...

randNumber(100);

... will favor 96 numbers out of our 100. The 4 least likely results are 24, 49, 74, & 99. That's our bias.

It doesn't matter if it's a 53-bit RNG either. "2^53 = 9,007,199,254,740,992" is not a multiple of 100. Instead, dividing by 100, I get 90,071,992,547,409 with 92 left over. So, with a 53-bit RNG, we have the same problem where 92 results will be more likely to be generated than 8 others. Those unlucky 8 are 11, 22, 33, 45, 58, 66, 79, and 91.

The only time this bias would not exhibit itself in the naive "multiply-and-floor" approach above, is if the random number requested is in the interval [0, 2^N), where "N" is a positive integer. Both 2^32 and 2^53 are multiples of 2^N (2^N divides 2^X evenly when N ≤ X, N > 0).
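If you'd rather not take my word for it, here's a minimal Python sketch that counts exactly how many inputs map to each output of the naive approach, modeling a 32-bit Math.random() as returning k/2^32:

RANGE = 100
TOTAL = 1 << 32

# Output v = floor(k * RANGE / TOTAL) collects every integer k in
# [v*TOTAL/RANGE, (v+1)*TOTAL/RANGE), so count the integers in that interval.
def ceil_div(a, b):
    return -(-a // b)

counts = [ceil_div((v + 1) * TOTAL, RANGE) - ceil_div(v * TOTAL, RANGE)
          for v in range(RANGE)]

least_likely = [v for v in range(RANGE) if counts[v] == min(counts)]
print(min(counts), max(counts))  # 42949672 42949673
print(least_likely)              # [24, 49, 74, 99]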

So, what do we do? How do we improve the naive multiply-and-floor approach? Thankfully, it's not too difficult. All we need to do is essentially the following:

  1. Force the RNG into 32-bits (common denominator for all browsers).
  2. Create a maximum that is a multiple of our desired range (e.g., 100).
  3. Loop, picking 32-bit values, until one is generated below that maximum.
  4. Output the generated value modulo our desired range.

Let's see this in practice. First the unbiased code, then the explanation:


function uniformRandNumber(range) {
    var max = Math.floor(2**32/range) * range; // make "max" a multiple of "range"
    do {
        var x = Math.floor(Math.random() * 2**32); // pick a number in [0, 2^32).
    } while(x >= max); // try again if x is too big
    return(x % range); // uniformly picked in [0, range)
}

I know what you're thinking: WAIT! YOU JUST DID THE "MULTIPLY AND FLOOR" METHOD!! HYPOCRITE!!! Hold on though. There are two subtle differences. See what they are?

The "max" variable is a multiple of "range" (step 2 above). So, if our range is [0, 100), then "max = 4294967200", which is a multiple of 100. This means that so long as "0 < = x < 4294967200", we can return "x % 100", and know that our number was uniformly chosen. However, if "x >= 4294967200", then we need to choose a new "x", and check if it falls within our range again (step 3 above). So long as "x" falls in [0, 4294967200), then we're good.

This extends to cryptographically secure random numbers too. In action, it's just:


function uniformSecureRandNumber(range) {
    const crypto = window.crypto || window.msCrypto; // Microsoft vs everyone else
    var max = Math.floor(2**32/range) * range; // make "max" a multiple of "range"
    do {
        var x = crypto.getRandomValues(new Uint32Array(1))[0];  // pick a number in [0, 2^32).
    } while(x >= max); // try again if x is too big
    return(x % range); // uniformly picked in [0, range)
}

So it's not that "multiply and floor" is wrong so long as you use it correctly.

One small caveat- these examples are not checking if "range" is larger than 32-bits. I deliberately ignored this to draw your attention to how to correctly generate uniform random numbers. You may or may not need to do various checks on the "range" argument. Is it an integer type? Is it a positive integer? Is it 32-bits or less? Etc.

As an exercise for the reader, how could you extend this uniform generator to pick a random number in the range of [100, 200)? Going further, how could you pick only a random even number in the range of [250, 500)?

Do Not Use sha256crypt / sha512crypt - They're Dangerous

Introduction

I'd like to demonstrate why I think using sha256crypt or sha512crypt on current GNU/Linux operating systems is dangerous, and why I think the developers of GLIBC should move to scrypt or Argon2, or at least bcrypt or PBKDF2.

History and md5crypt

In 1994, Poul-Henning Kamp (PHK) added md5crypt to FreeBSD to address the weaknesses of DES-crypt that was common on the Unix and BSD systems of the early 1990s. DES-crypt has a couple of core flaws: not only is DES reversible (which isn't necessarily a problem here) and incredibly fast, but it also limited password length to 8 characters (each limited to 7-bit ASCII to create a 56-bit DES key). When PHK created md5crypt, one of the features he made sure to implement was support for arbitrary-length passwords. In other words, unlike DES-crypt, a user could have a password of 9 or more characters.

This was "good enough" for 1994, but it had an interesting feature that I don't think PHK thought of at the time- md5crypt execution time is dependent on password length. To prove this, I wrote a simple Python script using passlib to hash passwords with md5crypt. I started with a single "a" character as my password, then increased the password length by appending more "a"s up until the password was 4,096 "a"s total.

import time
from passlib.hash import md5_crypt

md5_results = [None] * 4096

for i in xrange(0, 4096):
    print i,
    pw = "a" * (i+1)
    start = time.clock()
    md5_crypt.hash(pw)
    end = time.clock()
    md5_results[i] = end - start

with open("md5crypt.txt", "w") as f:
    for i in xrange(0, 4096):
        f.write("{0} {1}\n".format(i+1, md5_results[i]))

Nothing fancy. Start the timer, hash one "a" with md5crypt, stop the timer, and record the results. Start the timer, hash two "a"s with md5crypt, stop the timer, and record the results. Wash, rinse, repeat, until the password is 4,096 "a"s in length.

What do the timing results look like? Below are scatter plots of timing md5crypt for passwords of 1-128, 1-512, and 1-4,096 characters in length:

Scatter plot showing execution timing of md5crypt up to 128 characters

md5crypt 1-128 characters

Scatter plot showing execution timing of md5crypt up to 512 characters

md5crypt 1-512 characters

Scatter plot showing execution timing of md5crypt up to 4,096 characters

md5crypt 1-4,096 characters

At first, you wouldn't think this is a big deal; in fact, you may even think you LIKE it (we're supposed to make things get slower, right? That's a good thing, right???). But, upon deeper inspection, this actually is a flaw in the algorithm's design for two reasons:

  • Long passwords can create a denial-of-service on the CPU (larger concern).
  • Passive observation of execution times can predict password length (smaller concern).

Now, to be fair, predicting password length based on execution time is ... meh. Let's be honest, the bulk of passwords will be between 7-10 characters. And because these algorithms operate in block sizes of 16, 32, or 64 bytes, an adversary learning "AHA! I know your password is between 1-16 characters" really isn't saying much. But, should this even exist in a cryptographic primitive? Probably not. Still, the larger concern would be users creating a DoS on the CPU, strictly by changing password length.

I know what you're thinking- it's 2018, so there should be no reason why any practical length password cannot be adequately hashed with md5crypt insanely quickly, and you're right. Except, md5crypt was invented in 1994, 24 years ago. According to PHK, he designed it to take about 36 milliseconds on the hardware he was testing, which would mean a speed of about 28 hashes per second. So, it doesn't take much to see that by increasing the password's length, you can increase execution time enough to affect a busy authentication server.

The question though, is why? Why is the execution time dependent on password length? This is because md5crypt processes the hash for every 16 bytes in the password. As a result, this creates the stepping behavior you see in the scatter plots above. A good password hashing design would not do this.

PHK eventually sunset md5crypt in 2012 with CVE-2012-3287. Jeremi Gosney, a professional password cracker, demonstrated with Hashcat and 8 clustered Nvidia GTX 1080Ti GPUs, that a password cracker could rip through 128.4 million md5crypt guesses per second.

You should no longer be implementing md5crypt for your password hashing.

sha2crypt and NIH syndrome

In 2007, Ulrich Drepper decided to improve things for GNU/Linux. He recognized the threat that GPU clusters, and even ASICs, posed to md5crypt through fast password cracking. One aspect of md5crypt was the hard-coded 1,000 iterations spent on the CPU before the password hash was finalized. This cost was not configurable. Also, MD5 was already considered broken, with SHA-1 showing severe weaknesses, so he moved to SHA-2 for the core of his design.

The first thing addressed was to make the cost configurable, so as hardware improved, you could increase the iteration count, thus keeping the cost for calculating the final hash expensive for password crackers. However, he also made a couple core changes to his design that differed from md5crypt, which ended up having some rather drastic effects on its execution.

Using code similar to above with Python's passlib, but rather using the sha256_crypt() and sha512_crypt() functions, we can create scatter plots of sha256crypt and sha512crypt for passwords up to 128-characters, 512-characters, and 4,096-characters total, just like we did with md5crypt. How do they fall out? Take a look:

Scatter plot showing execution timing of sha256crypt up to 128 characters

sha256crypt 1-128 characters

Scatter plot showing execution timing of sha256crypt up to 512 characters

sha256crypt 1-512 characters

Scatter plot showing execution timing of sha256crypt up to 4,096 characters

sha256crypt 1-4,096 characters

Scatter plot showing execution timing of sha512crypt up to 128 characters

sha512crypt 1-128 characters

Scatter plot showing execution timing of sha512crypt up to 512 characters

sha512crypt 1-512 characters

Scatter plot showing execution timing of sha512crypt up to 4,096 characters

sha512crypt 1-4,096 characters

Curious. Not only do we see the same increasing execution time based on password length, but unlike md5crypt, that growth is polynomial. The changes Ulrich Drepper made from md5crypt are subtle, but critical. Essentially, not only do we process the hash for every character in the password per round, like md5crypt, but we process every character in the password three more times. First, for each bit in the binary representation of the password length, we update the hash based on whether we see a "1" or a "0". Second, for every character in the password, we update the hash. Finally, again, for every character in the password, we update the hash.

For those familiar with big-O notation, we end up with an execution run time of O(pw_length^2 + pw_length*iterations). Now, while it is true that we want our password hashing functions to be slow, we also want the iterative cost to be the driving factor in that slowness, but that isn't the case with md5crypt, and it's not the case with sha256crypt nor sha512crypt. In all three cases, the password length is a driving factor in the execution time, not just the iteration count.

Again, why is this a problem? To remind you:

  • Long passwords can create a denial-of-service on the CPU (larger concern).
  • Passive observation of execution times can predict password length (smaller concern).

Now, granted, in practice, people aren't carrying around 4 kilobyte passwords. If you are a web service provider, you probably don't want people uploading 5 gigabyte "passwords" to your service, creating a network denial of service. So you would probably be interested in creating an adequate password maximum, such as what NIST recommends at 128 characters, to prevent that from occurring. However, if you have an adequate iterative cost (such as say, 640,000 rounds), then even moderately large passwords from staff, where such limits may not be imposed, could create a CPU denial of service on busy authentication servers.

As with md5crypt, we don't want this.

Now, here's what I find odd about Ulrich Drepper, and his design. In his post, he says about his specification (emphasis mine):

Well, there is a problem. I can already hear everybody complaining that I suffer from the NIH syndrome but this is not the reason. The same people who object to MD5 make their decisions on what to use also based on NIST guidelines. And Blowfish is not on the lists of the NIST. Therefore bcrypt() does not solve the problem.

What is on the list is AES and the various SHA hash functions. Both are viable options. The AES variant can be based upon bcrypt(), the SHA variant could be based on the MD5 variant currently implemented.

Since I had to solve the problem and I consider both solutions equally secure I went with the one which involves less code. The solution we use is based on SHA. More precisely, on SHA-256 and SHA-512.

PBKDF2 was standardized as an IETF standard in September 2000, a full 7 years before Ulrich Drepper created his password hashing functions. While PBKDF2 as a whole would not be blessed by NIST until 3 years later, in December 2010 in SP 800-132, PBKDF2 can be based on functions that, as he mentioned, were already in the NIST standards. So, just like his special design that is based on SHA-2, PBKDF2 can be based on SHA-2. Where he said "I went with the one which involves less code", he should have gone with PBKDF2, as code had already long since existed in all sorts of cryptographic software, including OpenSSL.

This seems to be a very clear case of NIH syndrome. Sure, I understand not wanting to go with bcrypt, as it's not part of the NIST standards. But don't roll your own crypto either, when algorithms that ARE based on NIST-approved designs already exist for this very purpose.

So, how does PBKDF2-HMAC-SHA512 perform? Using similar Python code with the passlib password hashing library, it was trivial to put together:

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 128 characters

PBKDF2-HMAC-SHA512 1-128 characters

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 512 characters

PBKDF2-HMAC-SHA512 1-512 characters

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 4,096 characters

PBKDF2-HMAC-SHA512 1-4,096 characters

What this clearly demonstrates is that the only factor driving execution time is the number of iterations you apply to the password before delivering the final password hash. This is what you want: no opportunity for a user to create a denial-of-service based on password length, and no opportunity for an adversary to learn the length of the user's password based on execution time.

This is the sort of detail that a cryptographer or cryptography expert would pay attention to, as opposed to an end-developer.

It's worth pointing out that PBKDF2-HMAC-SHA512 is the default password hashing function for Mac OS X, with a variable cost between 30,000 and 50,000 iterations (typical PBKDF2 default is 1,000).
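For reference, PBKDF2-HMAC-SHA512 is a one-liner with Python's standard library. Here's a minimal sketch, with an iteration count that is purely illustrative- tune it to your own hardware and authentication load:

import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# 100,000 iterations of PBKDF2-HMAC-SHA512; the derived key is what you store,
# alongside the salt and the iteration count.
dk = hashlib.pbkdf2_hmac("sha512", password, salt, 100_000)
print(dk.hex())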

OpenBSD, USENIX, and bcrypt

Because Ulrich Drepper brought up bcrypt, it's worth mentioning in this post. First off, let's get something straight- bcrypt IS NOT Blowfish. While it's true that bcrypt is based on Blowfish, they are two completely different cryptographic primitives. bcrypt is a one-way cryptographic password hashing function, whereas Blowfish is a two-way 64-bit block symmetric cipher.

At the 1999 USENIX conference, Niels Provos and David Mazières, of OpenBSD, introduced bcrypt to the world (it was actually in OpenBSD 2.1, June 1, 1997). They were critical of md5crypt, stating the following (emphasis mine):

MD5 crypt hashes the password and salt in a number of different combinations to slow down the evaluation speed. Some steps in the algorithm make it doubtful that the scheme was designed from a cryptographic point of view--for instance, the binary representation of the password length at some point determines which data is hashed, for every zero bit the first byte of the password and for every set bit the first byte of a previous hash computation.

PHK was slightly offended by their off-handed remark that cryptography was not his core consideration when designing md5crypt. However, Niels Provos was a graduate student in the Computer Science PhD program at the University of Michigan at the time, and by August 2003, he had earned his PhD. Since 1997, bcrypt has withstood the test of time. It has been considered "best practice" for hashing passwords, and is still well received today, even though better algorithms now exist for hashing passwords.

bcrypt limits password input to 72 bytes. One way around the password limit is with pre-hashing. A common approach in pseudocode is to hash the password with SHA-256, encode the digest into base64, then feed the resulting ASCII string into bcrypt. However, make sure to salt the prehash, or you fall victim to breach correlation attacks. Using HMAC is a better option than generic cryptographic hashes, as it has a construction for properly handling secret keys. In this case, a site-wide secret known as a "pepper" is appropriate.

In pseudocode:

pwhash = bcrypt(base64(hmac-sha-256(password, pepper, 256)), salt, cost)
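In Python, that prehashing construction might look something like the following sketch, assuming the third-party bcrypt package rather than passlib:

import base64
import hashlib
import hmac
import os

import bcrypt

def bcrypt_prehashed(password: bytes, pepper: bytes, rounds: int = 12) -> bytes:
    # HMAC-SHA-256 with a site-wide pepper, base64 encode the 32-byte digest
    # into a 44-byte ASCII string, then hand that to bcrypt.
    prehash = base64.b64encode(hmac.new(pepper, password, hashlib.sha256).digest())
    return bcrypt.hashpw(prehash, bcrypt.gensalt(rounds))

pepper = os.urandom(32)   # in practice, a long-lived secret stored outside the database
print(bcrypt_prehashed(b"a" * 1000, pepper))   # password length no longer matters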

This results in a 44-byte password (including the "=" padding) that is within the bounds of the 72 byte bcrypt limitation. This prehashing allows users to have any length password, while only ever sending 44 bytes to bcrypt. My implementation in this benchmark uses the passlib.hash.bcrypt_sha256.hash() method. How does bcrypt compare to md5crypt, sha256crypt, and sha512crypt in execution time based on password length?

Scatter plot showing execution timing of bcrypt up to 128 characters

bcrypt 1-128 characters (prehashed)

Scatter plot showing execution timing of bcrypt up to 512 characters

bcrypt 1-512 characters (prehashed)

Scatter plot showing execution timing of bcrypt up to 4,096 characters

bcrypt 1-4,096 characters (prehashed)

Now, to be fair, bcrypt is only ever hashing 44 byte passwords in the above results, because of my prehashing. So of course it's running in constant time. So, how does it look with hashing 1 to 72 character passwords without prehashing?

Scatter plot showing execution timing of bcrypt up to 72 characters

bcrypt 1-72 characters (raw)

Again, we see consistent execution, driven entirely by iteration cost, not by password length.

Colin Percival, Tarsnap, and scrypt

In May 2009, mathematician Dr. Colin Percival presented at BSDCan'09 a new adaptive password hashing function called scrypt, which is not only CPU expensive, but RAM expensive as well. The motivation was that even though bcrypt and PBKDF2 are CPU-intensive, FPGAs or ASICs could be built to work through the password hashes much more quickly, because neither requires much RAM- around 4 KB for bcrypt. By adding a memory cost, in addition to a CPU cost, to the password hashing function, we can now require FPGA and ASIC designers to onboard a specific amount of RAM, thus financially increasing the cost of production. scrypt recommends a default RAM cost of at least 16 MB. I like to think of these expensive functions as "security by obesity".
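For reference, Python's hashlib exposes scrypt when built against OpenSSL 1.1 or newer, and the interactive parameters from the scrypt paper land right at that 16 MiB mark. A minimal sketch:

import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# N=2^14, r=8, p=1 uses 128*r*N bytes = 16 MiB of RAM per hash.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())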

scrypt was initially created as an expensive KDF for his backup service Tarsnap. Tarsnap generates client-side encryption keys, and encrypts your data on the client, before shipping the encrypted payload off to Tarsnap's servers. If your client is ever lost or stolen, regenerating the encryption keys requires knowing the password that created them, and attempting to discover that password, just like with typical password hashing functions, should be slow.

It's now been 9 years, as of this post, since Dr. Percival introduced scrypt to the world, and like bcrypt, it has withstood the test of time. It has received, and continues to receive, extensive cryptanalysis, is not showing any critical flaws or weaknesses, and as such is among the top recommendations from security professionals for password hashing and key derivation.

How does it fare with its execution time per password length?

Scatter plot showing execution timing of scrypt up to 128 characters

scrypt 1-128 characters

Scatter plot showing execution timing of scrypt up to 512 characters

scrypt 1-512 characters

Scatter plot showing execution timing of scrypt up to 4,096 characters

scrypt 1-4,096 characters

I'm seeing a trend here.

The Password Hashing Competition winner Argon2

In 2013, an open public competition, in the spirit of AES and SHA-3, was held to create a password hashing function designed around what we now know about modern cryptography and password attacks. There were many interesting designs submitted, including a favorite of mine by Dr. Thomas Pornin of StackExchange fame and BearSSL, which used delegation to reduce the workload on the honest server, while still keeping the work expensive for the password cracker.

In July 2015, the Argon2 algorithm was chosen as the winner of the competition. It comes with a clean approach of CPU and memory hardness, making the parameters easy to tweak, test, and benchmark. Even though the algorithm is relatively new, it has seen at least 5 years of analysis, as of this writing, and has quickly become the "Gold Standard" for password hashing. I fully recommend it for production use.
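For reference, here's a minimal Python sketch using the third-party argon2-cffi package; the costs shown are illustrative, not a recommendation:

from argon2 import PasswordHasher

# time_cost in iterations, memory_cost in KiB (64 MiB here), parallelism in lanes.
ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=4)

encoded = ph.hash("correct horse battery staple")
print(encoded)                                             # self-describing $argon2id$... string
print(ph.verify(encoded, "correct horse battery staple"))  # True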

Any bets on how its execution time will be affected by password length? Let's look:

Scatter plot showing execution timing of Argon2 up to 128 characters

Argon2 1-128 characters

Scatter plot showing execution timing of Argon2 up to 512 characters

Argon2 1-512 characters

Scatter plot showing execution timing of Argon2 up to 4,096 characters

Argon2 1-4,096 characters

Execution time is not affected by password length. Imagine that. It's as if cryptographers know what they're doing when designing this stuff.

Conclusion

Ulrich Drepper tried creating something more secure than md5crypt, on par with bcrypt, and ended up creating something worse. Don't use sha256crypt or sha512crypt; they're dangerous.

For hashing passwords, in order of preference, use with an appropriate cost:

  1. Argon2 or scrypt (CPU and RAM hard)
  2. bcrypt or PBKDF2 (CPU hard only)

Avoid practically everything else:

  1. md5crypt, sha256crypt, and sha512crypt
  2. Any generic cryptographic hashing function (MD5, SHA-1, SHA-2, SHA-3, BLAKE2, etc.)
  3. Any complex homebrew iterative design (10,000 iterations of salted SHA-256, etc.)
  4. Any encryption design (AES, Blowfish (ugh), ChaCha20, etc.)

UPDATE 2020-12-28: Debian just pushed Linux PAM 1.4.0 into the unstable repository. This enables bcrypt password hashing for Debian and Debian-based systems by default, without any 3rd-party tools or custom source code compilation. It is strongly advised that you drop sha256crypt/sha512crypt in favor of bcrypt.

UPDATE: A note about PBKDF2 that was brought up in a Twitter thread from @solardiz. PBKDF2-HMAC-SHA512 isn't really an upgrade from sha512crypt (nor PBKDF2-HMAC-SHA256 an upgrade from sha256crypt), because PBKDF2 really isn't GPU resistant in the way bcrypt is. However, bcrypt can be implemented cheaply on ASICs with only 4 KB of memory.

If your choice of password hashing is constrained to NIST standards, which include PBKDF2, then unfortunately bcrypt, scrypt, and Argon2 are out of the question; just make sure to use PBKDF2 properly, which includes choosing a high iteration count based on your authentication load capacity. At that point, password storage is probably not the worst of your security concerns.

However, if you're not limited to NIST constraints, then use the others.

Acknowledgement

Thanks to Steve Thomas (@Sc00bzT) for our discussions on Twitter for helping me see this quirky behavior with sha256crypt and sha512crypt.

Use A Good Password Generator

Introduction

Screenshot of a Google Sheets spreadsheet showing my browser password generator audit.

For the past several months now, I have been auditing password generators for the web browser in Google Sheets. It started by looking for creative ideas I could borrow or extend upon for my online password generator. Sure enough, I found some, ranging from using mouse movements as a source of entropy to flashy animations of rolling dice for a Diceware generator. Some were deterministic, using strong password hashing or key derivation functions, and some had very complex interfaces, allowing you to control everything from letter case to pronounceable passwords and ambiguous-character removal.

However, the deeper I got, the more I realized some were doing their generation securely and others weren't. I soon realized that I wanted to grade these online generators and sift out the good from the bad. So, I created a spreadsheet to keep track of what I was auditing, and it quickly grew from "online generators" to "password generators and passphrase generators", to "web password, web passphrase, bookmarklet, Chrome extensions, and Firefox extensions".

When all was said and done, I had audited 300 total generators that can be used with the browser. Some were great while some were just downright horrible. So, what did I audit, why did I choose that criteria, and how did the generators fall out?

I audited:

  • Software license
  • Server vs. client generation
  • RNG security, bias, and entropy
  • Generation type
  • Network security
  • Mobile support
  • Ads or tracker scripts
  • Subresource integrity

No doubt this is a "version 1.0" of the spreadsheet. I'm sure those in the security community will mock me for my choices of audit categories and scoring. However, I wanted to be informed of how each generator was generating the passwords, so when I made recommendations about using a password generator, I had confidence that I was making a good recommendation.

Use A Password Manager

Before I go any further, the most important advice with passwords, is to use a password manager. There are a number of reasons for this:

  • They encourage unique passwords for each account.
  • They encourage passwords with sufficient entropy to withstand offline clustered attacks.
  • They allow storage of other types of data, such as SSNs or credit card numbers.
  • Many provide online synchronization support across devices, either internally or via Dropbox and Google Drive.
  • Many ship additional application support, such as browser extensions.
  • They ship password generators.

So before you go any further, the Best Practice for passwords is "Use A Password Manager". As members of the security community, this is the advice we should be giving to our clients, whether they are family, friends, coworkers, or professional customers. But, if they are already using a password manager, and discussions arise about password generation, then this audit is to inform members of the security community which generators are "great", which generators are "good", which generators are "okay", and finally, which generators are "bad".

So to be clear, I don't expect mom, dad, and Aunt Josephine to read this spreadsheet so they can make informed decisions about which password generator to use. I do hope, however, that security researchers, cryptographers, auditors, security contractors, and other members of the security community will take advantage of it.

So with that said, let me explain the audit categories and scoring.

Software License

In an ethical software development community, there is value granted when software is licensed under a free and open source license. Not necessarily the GPL, but any liberal license, from the 2-clause BSD to the GPL, from Creative Commons to public domain dedication. When the software is licensed liberally, it allows developers to extend, modify, and share the code with the larger community. There are a number of different generators I found in my audit where this happened; some making great changes in the behavior of the generator, others not-so-much.

License: Open Source | Proprietary

So when a liberal license was explicitly specified, I awarded one point for being "Open Source", and no points for being "Proprietary" when a license was either not explicitly specified or was licensed under a EULA or "All Rights Reserved".

This is something to note- just because the code is on a public source code repository, does not mean the code is licensed under a permissive license. United States copyright law states that unless explicitly stated, all works fall under a proprietary "All Rights Reserved" copyright to the creator. So if you didn't specify a license in your GitHub repository, it got labeled as "Proprietary". It's unfortunate, because I think a lot of generators missed getting awarded that point for a simple oversight.

Server vs. Client Generation

Every generator should run in the browser client without any knowledge of the generation process by a different computer, even the web server. No one should have any knowledge whatsoever of what passwords were generated in the browser. Now, I recognize that this is a bit tricky. When you visit a password generator website such as my own, you are showing a level of trust that the JavaScript delivered to your browser is what you expect, and is not logging the generated passwords back to the server. Even with TLS, unless you're checking the source code on every page refresh and disconnecting your network, you just cannot guarantee that the web server did not deliver some sort of logging script.

Generator: Client | Server

With that said, you still should be able to check the source code on each page refresh, and check if the generation is being done in the client or on the server. I awarded one point for being a "Client" generator and no points for being a "Server" generator. Interestingly enough, I thought I would just deal with this for the website generators, and wouldn't have to worry about this with bookmarklets or browser extensions. But I was wrong. I'll expand on this more in the "Network Security" category, but suffice it to say, this is still a problem.

Generation Type

I think deterministic password generators are fundamentally flawed. Fatally so, even. Tony Arcieri wrote a beautiful blog post on this matter, and it should be internalized across the security community. The "four fatal flaws" of deterministic password generators are:

  1. Deterministic password generators cannot accommodate varying password policies without keeping state.
  2. Deterministic password generators cannot handle revocation of exposed passwords without keeping state.
  3. Deterministic password managers can’t store existing secrets.
  4. Exposure of the master password alone exposes all of your site passwords.

Number 4 in that list is the most fatal. We all know the advice that accounts should have unrelated randomized unique passwords. When one account is compromised, the exposed password does not compromise any of the other accounts. This is not the case with deterministic password generators. Every account that uses a password from a deterministic generator shares a common thread via the master secret. When that master secret is exposed, all online accounts remain fatally vulnerable to compromise.

Proponents of deterministic generators will argue that discovery of the master password of an encrypted password manager database will also expose every online account to compromise. They're not wrong, but let's consider the attack scenarios. In the password manager scenario, a first compromise needs to happen in order to get access to the encrypted database file. Then a second compromise needs to happen in discovering the master password. But with deterministic generators, only one compromise needs to take place- using dictionary or brute force attacks to discover the master password that led to the password for the online account.

With password managers, two compromises are necessary. With deterministic generators, only one compromise is necessary. As such, the generator was awarded a point for being "Random" and no points for being "Deterministic".

Generator: Random | Unknown | Deterministic

RNG Security

Getting random number generation right is one of the least understood concepts in software development, but ironically, developers seem to think they have a firm grasp of the basic principles. When generating passwords, never at any point should a developer choose anything but a cryptographic random number generator. I awarded one point for using a CRNG, and no points otherwise. In some cases, the generation is done on the server, so I can't verify its security, and in some cases, the code is so difficult to analyze that I cannot determine its security that way either.

CRNG: Yes | Maybe | Unknown | No

In JavaScript, using a CRNG primarily means using the Web Crypto API via "window.crypto.getRandomValues()", or "window.msCrypto.getRandomValues()" for Microsoft-based browsers. Never should I see "Math.random()". Even though it may be cryptographically secure in Opera, it likely isn't in any of the other browsers. Some developers shipped the Stanford JavaScript Cryptographic Library. Others shipped a JavaScript implementation of ISAAC, and others yet shipped some AES-based CSPRNG. While these are "fine", you really should consider ditching those userspace scripts in favor of just calling "window.crypto.getRandomValues()". It's less software the user has to download, and as a developer, you are less likely to introduce a vulnerability.

Also, RC4 is not a CSPRNG; neither is ChaCha20, SHA-256, or any other hash function, stream cipher, or block cipher. So if you were using some JavaScript library that uses a vanilla hashing function, stream cipher, or block cipher as your RNG, then I did not consider it secure generation. The reason is that even though ChaCha20 or SHA-256 may be prediction resistant, such a construction is not backtracking resistant on its own. To be a CRNG, the generator must be both prediction and backtracking resistant.

However, in deterministic password generators that are based on a master password, the "RNG" (using this term loosely here) should be a dedicated password hashing function or password-based key derivation function with an appropriate cost. This really means using only:

  • sha256crypt or sha512crypt with at least 5,000 rounds.
  • PBKDF2 with at least 1,000 rounds.
  • bcrypt with a cost of at least 5.
  • scrypt with a cost of at least 16 MiB of RAM and at least 0.5s execution.
  • Argon2 with sufficient cost of RAM and execution time.

Anything else, like hashing the master password or mouse entropy with MD5, SHA-1, SHA-2, or even SHA-3, will not suffice. The goal of those dedicated password hashing functions or key derivation functions is to slow down an offline attempt at discovering the password. Likely the master password does not contain sufficient entropy, so it remains the weakest link in the generation process. By using a dedicated password hashing or key derivation function with an appropriate cost, we can guarantee a certain "speed limit" with offline clustered password attacks, making it difficult to reconstruct the password.

RNG Uniformity

Even though the developer may have chosen a CRNG, they may not be using the generator uniformly. This is probably the most difficult aspect of random number generation to grasp. It seems harmless enough to call "Math.floor(Math.random() * length)" or "window.crypto.getRandomValues(new Uint32Array(1))[0] % length". In both cases, unless "length" is a power of 2, the generator is biased. I awarded one point for being an unbiased generator, and zero points otherwise.

Uniform: Yes | Maybe | Unknown | No

To do random number generation in an unbiased manner, you need to find how many times the length divides the number of possible generator outputs, and note the remainder. For example, if using "window.crypto.getRandomValues(new Uint32Array(1))", then the generator has 2^32 possible outputs. If your length is "7,776", as in the case of Diceware, then 7,776 divides 2^32 552,336 times with a remainder of 2,560. This means that 7,776 divides the values 0 through 2^32-2,561 evenly. So if your random number falls in the range 2^32-2,560 through 2^32-1, the value needs to be tossed out, and a new number generated.
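Here's a minimal Python sketch of that same rejection approach for a Diceware-sized word list, using the standard library's secrets module:

import secrets

def unbiased_index(length: int) -> int:
    # Throw away any 32-bit value in the "leftover" zone at the top of the range,
    # then reduce modulo length; what remains is uniform.
    limit = (1 << 32) - ((1 << 32) % length)   # 2^32 - 2,560 when length is 7,776
    while True:
        r = secrets.randbits(32)
        if r < limit:
            return r % length

print(unbiased_index(7776))   # a uniform Diceware word index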

Oddly enough, you could use an insecure generator, such as one based on SHA-256, but truncate the digest to a certain length. While the generator is not secure, in this case it is unbiased. More often than not, actually, deterministic generators seem to fall in this category, where a poor hashing function was chosen, but the digest was truncated.

RNG Entropy

I've blogged about this a number of times, so I won't repeat myself here. Search my blog for password entropy, and get caught up with the details. I awarded one point for generators with at least 70 bits of entropy, 0.5 points for 55 through 69 bits of entropy, and no points for entropy less than 55 bits.

Entropy: 70+ | 55-69 | 54 or less

I will mention, however, that I scored the default length that was presented to me when generating the password. Worse, if I was forced to choose my length, and I could choose a password of one character, then I awarded it as such. When you're presenting password generators to people like Aunt Josephine, they'll do anything they can to get away with as little as possible. History has shown this is 6-8 characters in length. This shouldn't be possible. A few Nvidia GTX960 GPUs can crack every 8 character ASCII password hashed with SHA-1 in under a week. There is no reason why the password generator should not present minimum defaults that are outside the scope of practical hobbyist brute force searching.

So while that may be harsh, if you developed one of the generators in my audit, and you were dinged for this- I'm calling you out. Stop it.

Network Security

When delivering the software for the password generation, it's critical that the software is delivered over TLS. There should be no room for a man-in-the-middle to inject malicious code to discover what passwords you're generating, send you a predetermined list of passwords, or mount any other sort of compromise. This means, however, that I expect a green lock in the browser's address or status bars. The certificate should not be self-signed, it should not be expired, it should not be missing intermediate certificates, it should not be generated for a different CN, or have any other certificate problems. Further, the site should be HTTPS by default.

HTTPS: Yes | Not Default | Expired | No

I awarded one point for "Yes", serving the software over secure and problem-free HTTPS, and zero points otherwise.

Mobile View Support

For the most part, due to their ubiquity, developers are designing websites that support mobile devices with smaller screens and touch interfaces. It's as simple as adding a viewport in the HTML header, and as complex as customized CSS and JavaScript rules for screen widths, user agents, and other tests. Ultimately, when I'm recommending a password generator to Aunt Josephine while she's using her mobile phone, she shouldn't have to pinch-zoom, scroll, and other nuisances when generating and copying the password. As silly as that may sound, if the site doesn't support a mobile interface, then it wasn't awarded a point.

Mobile: Yes | No

Ads and Tracker Scripts

Screenshot showing the Lightbeam extension for Firefox after several days of use.

I get it. I know that as a developer, you want easy access to analytics about who is visiting your site. Having that data can help you learn what is driving traffic to your site, and how you can make adjustments as necessary to improve that traffic. But when it comes to password generators, no one other than me and the server that sent the code to my browser, should know that I'm about to generate passwords. I awarded a point for not shipping trackers, and zero points if the generator did.

Trackers: No | Yes

Google Analytics, social media scripts, ads, and other 3rd party browser scripts track me via fingerprinting, cookies, and other methods to learn more about who I am and what I visited. If you want to see how extensive this is, install the Lightbeam extension for Firefox. This shows the capability of companies to share data and learn more about who you are and where you've been on the web. Recently, Cambridge Analytica, a small unknown company, was able to mine data on millions of Facebook users, and the data mining exposed just how easy it was for Facebook to track your web behavior even when not on the site.

At first, I thought this would be just something with website generators, but when I started auditing browser extensions, I quickly saw that developers were shipping Google Analytics, and other tracking scripts in the bundled extension as well.

Offline

When I started auditing the bookmarklets and extensions, I pulled up my developer console, and watched for any network activity when generating the passwords or using the software. To my dismay, some do "call home" by either wrapping the generator in an <iframe> or requiring an account, as in the case of password managers. I awarded one point for staying completely offline, zero points for any sort of network activity.

Offline: Yes | No

Now, you may be thinking that this isn't exactly fair for generators that are just <iframe>s, where the generation is still happening client-side and without trackers. For the most part, I agree, except that when installing browser extensions, the developer has the opportunity to make the password generation fully offline. That may sound a touch silly for a browser extension, but regardless, it removes the risk of a web server administrator modifying the delivered JavaScript on every page load. The only time the JavaScript would be changed is when the extension is updated. This behaves more like standard desktop software.

Subresource Integrity

Finally, the last category I audited was subresource integrity. SRI is this idea that a developer can add cryptographic integrity to <link> and <script> resources. The reason the W3C gives for the motivation of SRI is:

"Compromise of a third-party service should not automatically mean compromise of every site which includes its scripts. Content authors will have a mechanism by which they can specify expectations for content they load, meaning for example that they could load a specific script, and not any script that happens to have a particular URL."

So SRI guarantees that even though the data is delivered under TLS, if the cryptographic hashes are valid, then the data has not been compromised, even if the server has. More information about SRI can be found at the W3C Github page, and I encourage everyone to check out the links there.

I awarded one point if "Yes" or "N/A" (no resources called), and zero points for "No".

SRI: Yes (1 point) / N/A (1 point) / No (0 points)

Scoring

With all these audit categories taken into account, I gave each generator a total score to see how it ranked among the others. Each generator type has different auditing criteria, so the maximum score varies from generator type to generator type. I treated the scoring like grade school: 100% is an "A", 99% down to roughly 85% is "great", 84% to 51% is "okay", and 50% or less is failing. I translated that grade-school scale into the following:

Score: Perfect / Perfect - 1 / Perfect - 2 / down to 51% / 50% or less

When you look at the spreadsheet, you'll see that it is sorted first by score in descending order then alphabetically by "Name" in ascending order. It's sorted this way, so as a security consultant, you can quickly get a high-level overview of the "good" versus the "bad", and you should be able to find the generator you're looking for quickly. The names are just generic unique identifiers, and sometimes there are notes accompanying each generator when I found odd behavior, interesting generation templates, or other things that stood out.

Conclusion

This audit is for the security community, not for Aunt Josephine and friends or family. I don't expect you to share this with your dad or your spouse or your Facebook friends and expect them to know what they're looking at, or why they should care. However, I recently took a poll on Twitter asking the security community if they recommended password generators to family and friends. Only 90 votes came in, but 69% of you said that you did, with a few comments saying that they would only recommend generators that ship with a password manager (see the beginning of this post). I suspect that some of the 31% "No" votes are in this category.

Screenshot of a Twitter poll I took asking if people recommend password generators. 69% said yes, 31% said no.

So, of those 69% of you that said "Yes", I hope this audit will be of some value.

The Entropy of a Digital Camera CCD/CMOS Sensor

Recently, Vault12 released an app for iOS that uses the mobile device's camera as a source of randomness. Unfortunately, when putting the generated binary files through the Dieharder tests, the results come out pretty bad: I get 20 "PASSED", 13 "WEAK", and 81 "FAILED" results. For a TRNG, it should be doing much better than that. Now, to be clear, I'm not writing this post to shame Vault12. I actually really love the TrueEntropy app, and am eagerly waiting for it to hit Android, so I can carry around a TRNG in my pocket. However, things can get better, and that is what this post is hopefully addressing.

Using a camera as a TRNG is nothing new. SGI patented pointing a webcam at a lava lamp and using the chaotic nature of the lava lamp itself as the source of entropy. Later, it was realized that this was unnecessary: the CCD/CMOS sensor in the camera experiences enough noise from external events to be more than sufficient. This noise shows up in the photo itself, and is appropriately referred to as "image noise".

The primary sources of noise come from the following:

  • Thermal noise- Caused by temperature fluctuations from electrons flowing across resistive media.
  • Photon noise- Caused by photons hitting the CCD/CMOS and releasing energy to neighboring electronics.
  • Shot noise- Caused by current flow across diodes and bipolar transistors.
  • Flicker noise- Caused by traps due to crystal defects and contaminants in the CCD/CMOS chip.
  • Radiation noise- Caused by alpha, beta, gamma, x-ray, and proton radiation from radioactive sources (such as outer space) interacting with the CCD/CMOS.

Some of these noise sources can be manipulated. For example, by cooling the camera, we can limit thermal noise: a camera at 0 degrees Celsius will experience less noise than one at 30 degrees Celsius. A camera in a dark room with fewer photons hitting the sensor will experience less noise than one in a bright room. Radiation noise can be limited by isolating the sensor behind a radiation-protective barrier.

Let's put this to the test, and see if we can actually calculate the noise in a webcam. To do this, we'll look at a single frame with the lens cap on, where the photo is taken in a dark room and the webcam is further enclosed in a box. We'll take the photo at about 20 degrees Celsius (cool room temperature).

In order to get a basis for the noise in the frame, we'll use Shannon Entropy from information theory. Thankfully, understanding Shannon Entropy isn't that big of a deal. My frame will be taken with OpenCV from a PlayStation 3 Eye webcam, which means the frame itself is just a big multidimensional array of numbers between 0 and 255 (each pixel only provides 8 bits of color depth). So, to calculate the Shannon Entropy of a frame, we'll do the following:

  1. Place each number in its own unique bin of 0 through 255.
  2. Create an observed probability distribution (histogram) by counting the numbers in each bin.
  3. Normalize the distribution, creating 256 p-values (the sum of which should equal "1").
  4. For each of the 256 bins, calculate: -p_i*log_2(p_i).
  5. Sum the 256 values.

Thankfully, I don't have to do all of this by hand- numpy provides a function that does a lot of the heavy lifting for me.

So, without further ado, let's look at the code, then run it:

#!/usr/bin/python

import cv2
import math
import numpy

def max_brightness(frame):
    # Convert to HSV and push every pixel's V (value/luminosity) channel to
    # 255, leaving only the hue and saturation noise visible.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v[v > 0] = 255
    v[v <= 0] += 255
    final_hsv = cv2.merge((h, s, v))
    frame = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
    return frame

def get_entropy(frame):
    # Shannon entropy of the frame: bin the pixel values into 256 buckets,
    # normalize into a probability distribution, then sum -p*log2(p).
    histogram = numpy.histogram(frame, bins=256)[0]
    histogram_length = sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])
    return entropy

# Capture two consecutive 640x480 frames from the first webcam.
cap = cv2.VideoCapture(0)
cap.set(3, 640)   # frame width
cap.set(4, 480)   # frame height

ret, frame1 = cap.read()
ret, frame2 = cap.read()
frame_diff = cv2.absdiff(frame1, frame2)  # the difference removes the static signal

cv2.imwrite('/tmp/frame1.bmp', frame1)
cv2.imwrite('/tmp/frame2.bmp', frame2)
cv2.imwrite('/tmp/frame_diff.bmp', frame_diff)

frame1_max = max_brightness(frame1)
frame2_max = max_brightness(frame2)
frame_diff_max = max_brightness(frame_diff)

cv2.imwrite('/tmp/frame1_max.bmp', frame1_max)
cv2.imwrite('/tmp/frame2_max.bmp', frame2_max)
cv2.imwrite('/tmp/frame_diff_max.bmp', frame_diff_max)

print("Entropies:")
print("    frame1: {}".format(get_entropy(frame1)))
print("    frame2: {}".format(get_entropy(frame2)))
print("    frame_diff: {}".format(get_entropy(frame_diff)))
print("    frame1_max: {}".format(get_entropy(frame1_max)))
print("    frame2_max: {}".format(get_entropy(frame2_max)))
print("    frame_diff_max: {}".format(get_entropy(frame_diff_max)))

Let's look over the code before running it. First, I'm actually capturing two frames right next to each other, then taking their composite difference. We know that a photo consists of its signal (the data most people are generally interested in) and its noise (the data they're not). By taking the composite difference between the two, I'm attempting to remove the signal. Because the frames were taken in rapid succession, provided nothing was drastically changing between the frames, most of the data will be nearly identical. So the signal should disappear.

But what about the noise? Well, as discussed above, the noise is a bit unpredictable and slightly unmanageable. Unlike my signal, the noise will be drastically different between the two frames. So, rather than removing noise, I'll actually be adding noise in the difference.

The next thing you'll notice is that I'm maximizing the luminosity in an HSV color profile. This is done just as a visual demonstration of what the noise actually "looks like" in the frame. You can see this below (converted to PNG for space efficiency).

Frame 1

Frame 2

Difference of frames 1 & 2

Frame 1 maxed luminosity

Frame 2 maxed luminosity

Difference of frames 1 & 2 maxed luminosity


Running the Python script in my 20 degrees Celsius dark room with the lens cap on and all light removed as much as possible, I get:

$ python frame-entropy.py
Entropies:
    frame1: 0.0463253223509
    frame2: 0.0525489364555
    frame_diff: 0.0864590940377
    frame1_max: 0.0755975713815
    frame2_max: 0.0835428883103
    frame_diff_max: 0.134900632254

The "ent(1)" userspace utility confirms these findings when saving the frames as uncompressed bitmaps:

$ for I in frame*.bmp; do printf "$I: "; ent "$I" | grep '^Entropy'; done
frame1.bmp: Entropy = 0.046587 bits per byte.
frame1_max.bmp: Entropy = 0.076189 bits per byte.
frame2.bmp: Entropy = 0.052807 bits per byte.
frame2_max.bmp: Entropy = 0.084126 bits per byte.
frame_diff.bmp: Entropy = 0.086713 bits per byte.
frame_diff_max.bmp: Entropy = 0.135439 bits per byte.

It's always good to use an independent source to confirm your findings.

So, in the standard frames, I'm getting about 0.05 bits per byte of entropy. However, when taking the composite difference, that number almost doubles to about 0.09 bits per byte. This was expected: remember, we're essentially taking the noise from both frames and composing it into the final frame, so the noise adds up in the difference.

What was actually surprising to me were the entropy values after setting the extreme luminosity values. This may be due to the fact that there are larger deltas between adjacent pixels when creating our histogram. When taking the difference of the two adjusted frames, the entropy jumps up to about 0.13 bits per byte. So, we could safely say that a composed frame with maxed luminosity that is the difference of two frames has about 0.11 bits of entropy per byte, plus or minus 0.02 bits per byte.

What does this say about the frame as a whole though? In my case, my frame is 640x480 pixels. Knowing that each pixel in my PS3 Eye webcam only occupies 1 byte or 8 bits, we can calculate the entropy per frame:

(640*480) pixels/frame * 1 byte/pixel = 307200 bytes/frame
307200 bytes/frame * 0.11 entropy bits/byte = 33792 entropy bits/frame

Each frame in my 640x480 PS3 Eye webcam provides about 33,792 bits of entropy. For comparison, SHA-256 theoretically provides a maximum of 256 bits of entropic security. Of course, we should run millions of trials, collect the data, calculate the standard deviation, and determine a truer average entropy. But this will suffice for this post.

So, now that we know this, what can we do with this data? Well, we can use it as a true random number generator, but we have to be careful. Unfortunately, the frame by itself is heavily biased. In the frame, there exists spatial correlation between adjacent pixels. In the frame difference, there exist both spatial and temporal correlations. This isn't sufficient for a secure true random number generator. As such, we need to remove the bias. There are a few ways of doing this, called "randomness extraction", "software whitening", "decorrelation", or "debiasing". Basically, we want to take a biased input and remove any trace of bias in the output.

We could use John von Neumann decorrelation, where we look at two non-overlapping consecutive bits. If the two bits are identical, then both bits are discarded. If they are different, then the most significant bit is output, while the least significant bit is discarded. This means that, at best, you are discarding half of your data, and how much is discarded depends on how badly biased the data is. We know that our frame is only providing 0.11 bits of entropy per 8 bits, so we're keeping roughly 11 bits out of every 800. That's a lot of discarded data. One drawback with this approach, however, is if one or more bits are "stuck", such as with a dead pixel. Of course, this will lower the overall entropy of the frame, but it will also drastically impact the extractor.
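To make that concrete, here is a minimal sketch of the extractor in Python (my own illustration, not code from any particular library):

def von_neumann_extract(bits):
    # Examine non-overlapping pairs of consecutive bits: discard identical
    # pairs, and output the first (most significant) bit of differing pairs.
    output = []
    for i in range(0, len(bits) - 1, 2):
        first, second = bits[i], bits[i + 1]
        if first != second:
            output.append(first)
    return output

# A heavily biased input still yields unbiased, but much shorter, output.
print(von_neumann_extract([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))  # [1, 0]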

A different approach would be to use chaos machines. This idea is relatively new and not thoroughly studied; at least, I'm struggling to find good research papers on the topic. The idea is to take advantage of the chaotic behavior of certain dynamical systems, such as a double pendulum. Due to the nature of chaos, small fluctuations in initial conditions lead to large changes in the distant future. The trick here is selecting your variables, such as the time distance and the chaotic process, correctly. Unlike John von Neumann decorrelation, which automatically discovers the bias and discards it for you, care has to be taken to make sure that the resulting output is debiased.

A better approach is using cryptographic primitives like one-way hashing or encryption, sometimes called "Kaminsky debiasing". Because most modern cryptographic primitives are designed to emulate theoretically uniform, unbiased random noise, the security rests on whether or not that can be achieved. In our case, we could encrypt the frame with AES and use the ciphertext as our TRNG output. Unfortunately, that also means managing a key, even though it doesn't necessarily have to be kept secret. A better idea would be to use cryptographic hash functions, like SHA-2, or even better, extendable output functions (XOFs).

Obviously, it should go without saying that encrypting or hashing your biased input isn't increasing the entropy. This means that we need to have a good handle on what our raw entropy looks like beforehand (as we did above). We know in our case that we're getting about 35 kilobits of entropy per frame, so hashing with SHA-256 is perfectly acceptable, even if we're losing a great deal of entropy in the output. However, if we were only getting 200 bits of entropy in each frame, then while SHA-256 would debias the data, we would still only have 200 bits of entropy in the generated output.

Really though, the best approach is an XOF. We want to output as much of our raw entropy as we can. Thankfully, NIST has standardized two XOFs as part of SHA-3: SHAKE128 and SHAKE256. An XOF allows you to output a digest of any length, where SHA-256, for example, only allows 256 bits of output. The security margin of the SHAKE128 XOF is the minimum of half the digest length or 128 bits. If I have 35 kilobits of entropy per frame, I would like to have as much of that entropy as possible available in the output. As such, I can output 4 KB in the digest knowing full well that's within my entropy margin. Even though I'm discarding about as much data as the John von Neumann extractor, I'm not vulnerable to "stuck pixels" manipulating the extractor.

When I put all of this together in Python, the steps are (a short sketch follows the list):

  1. Take the difference of two consecutive overlapping frames.
  2. Maximize the luminosity of the new composed frame.
  3. Hash the frame with SHAKE128.
  4. Output 4 KB of data as our true random noise.
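Here is a minimal sketch of that pipeline (my own reconstruction of the steps above, not the exact script I run; it assumes Python 3.6+ for hashlib.shake_128() and streams the random bytes to stdout):

#!/usr/bin/python

import hashlib
import sys

import cv2

def max_brightness(frame):
    # Same idea as the earlier helper: push every V (luminosity) value to
    # 255, leaving only the hue and saturation noise.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v[:] = 255
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

cap = cv2.VideoCapture(0)
cap.set(3, 640)   # frame width
cap.set(4, 480)   # frame height

ret, prev = cap.read()
while ret:
    ret, frame = cap.read()
    if not ret:
        break
    frame_diff = cv2.absdiff(prev, frame)    # step 1: remove the signal
    frame_max = max_brightness(frame_diff)   # step 2: maximize luminosity
    # Steps 3 and 4: debias with the SHAKE128 XOF, output 4 KB per frame.
    sys.stdout.buffer.write(hashlib.shake_128(frame_max.tobytes()).digest(4096))
    prev = frame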

At 30 frames per second and a resolution of 640x480, outputting 4 KB per frame provides 120 KB of data per second, and this is exactly what I see when executing the Python script. The PS3 Eye camera also supports 60 fps at a lower resolution, so I could get 240 KBps if I can keep the same security margin of 4 KB per frame. I haven't tested this, but intuition tells me I'll have a lower security margin at the lower resolution.

Coming full circle, when we put our TRNG to the Dieharder test, things come out vastly different than Vault12's results:

  • Vault12 TrueEntropy:
    1. PASSED: 20
    2. WEAK: 13
    3. FAILED: 81
  • My webcam TRNG:
    1. PASSED: 72
    2. WEAK: 12
    3. FAILED: 30

1,000 Books Read In One Year? No, Not By A Long Shot

Recently, Goodreads sent out a tweet about how to remove social media and the Internet from your life, so you can focus on reading 1,000 books in one year. The post follows this sort of math:

  1. The average person reads 400 words per minute.
  2. The typical non-fiction book has around 50,000 words.
  3. Reading 200 books will take you 417 hours.
  4. The average person spends 608 hours on social media annually.
  5. The average person spends 1,642 hours watching TV annually.
  6. Giving up 2,250 hours annually will allow you to read 1,000 books in one year.

This blew my mind. I'm a very avid reader. Since signing up for Goodreads in 2013, I've been hitting at least 20,000 pages read every year, and I'm on track to read 25,000 pages this year. But, I'm only putting down 75 books each year. Now granted, a spare 2,250 hours per year ÷ 365 days per year is just over 6 hours per day of reading. I'm not reading 6 hours per day. I don't watch TV, I have a job, kids and a wife to take care of, and other things that keep me off the computer most of my time at home (I'm writing this blog post after midnight).

No doubt, 6 hours per day is a lot of reading. But I average 2 hours per day, and I'm only putting down 75 books annually. 6 hours of reading per day would only put me around 225 books each year, a far cry from the 1,000 I should be hitting. What gives?

Well, it turns out, Charles Chu is being a bit ... liberal with his figures. First off, the average person does not read 400 words per minute. Try about half, at only 200 words per minute, according to Iris Reading, a company that sells a product on improving your reading speed and memory comprehension. This cuts our max books from 1,000 in a year to 500.

Second, Chu claims the average non-fiction book is 50,000 words in length. I can tell you that 50,000 words is a very slim novel. This feels like a Louis L'Amour western length to me. Most books that I have read are probably closer to twice that length. However, according to HuffPost which quotes Amazon Text Stats, the average book is 64,000 words in length. But, according to this blog post by Writers Workshop, the average "other fiction" novel length is 70,000 to 120,000 words. This feels much more in line with what I've read personally, so I'll go with about 100,000 words in a typical non-fiction novel.

So now that brings our annual total down from 500 to 250 books. That's reading 200 words per minute, for 6 hours every day, with non-fiction books that average 100,000 words in length. I claimed that I would probably come in around 225 books, so this seems to be a much closer ballpark figure.

But, does it line up? Let's look at it another way, and see if we can agree that 200-250 books annually, reading 6 hours per day, is more realistic.

I claimed I'm reading about 2 hours per day. I read about 3 hours a day on 4 of the 7 days in a week while commuting to work. Across the other three days of the week, I can read anywhere from 1 to 3 hours, depending on circumstances. So my week sees anywhere from 13 to 15 hours of reading, which averages out to about 2 hours per day.

During 2016, I read 24,048 pages. That's about 65 pages per day, which feels right on target. But, how many words are there per page? According to this Google Answers answer, which offers a couple citations, a novel averages about 250 words per page.

But, readinglength.com shows that many books I've read are over 300 words per page, and some are denser at 350 words per page, with the average sitting around 310. So at 250 words per page and 65 pages per day, I'm reading 16,250 words per day; at 310 words per page and 65 pages per day, it's 20,150 words per day.

Because I'm only reading about 2 hours per day, that means I'm reading at a meager 135 to 168 words per minute, based on the above stats. I guess I'm a slow reader.

If I highball it at 168 words per minute, then in 6 hours, I will have read 60,480 words. After a year of reading, that's 22,075,200 words. An independent blog post confirms this finding of 250-300 words per page, but also uses that to say that most adult books are 90,000 - 100,000 words in length (additional confirmation from earlier), and young adult novels target the 55,000 word length that Chu cited (maybe Chu likes reading young adult non-fiction?). As such, I can expect to read 22,075,200 words per year ÷ 100,000 words per book, or about 220 books in a year of reading 6 hours every day.

Bingo.
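For the skeptical, here is that back-of-the-envelope arithmetic as a quick Python check (my own recomputation of the figures quoted above):

words_per_minute = 168      # my high-ball reading speed from above
minutes_per_day = 6 * 60    # the 6 hours per day Chu assumes
words_per_book = 100000     # the average adult book length used above

words_per_year = words_per_minute * minutes_per_day * 365
print(words_per_year)                    # 22075200 words
print(words_per_year // words_per_book)  # about 220 books per year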

So, what can we realistically expect from reading?

  1. Readers average 200 words per minute.
  2. A page averages 250 words.
  3. A novel averages 100,000 words.
  4. One hour of reading per day can hit 30-40 books per year.
  5. Six hours of reading per day can hit 200-250 books per year.
  6. To read 1,000 books in a year, you need to read 22 hours per day.

This is reading average length adult non-fiction books at an average speed of 200 words per minute. The calculus completely changes if your average reading speed is faster than 200 wpm, you read primarily graphic novels with little text, or read shorter non-fiction novels. Fourth-grade chapter books? Yeah, I could read 1,000 of those in a year. 🙂

Password Best Practices I - The Generator

This is the first in a series of posts about password best practices. The series will cover best practices from a few different angles- the generator targeted at developers creating those generators, the end user (you, mom, dad, etc.) as you select passwords for accounts from those generators, and the service provider storing passwords in the database for accounts that your users are signing up for.

Motivation

When end users are looking for passwords, they may turn to password generators, whether they be browser extensions, websites, or offline installable executables. Regardless, as a developer, you need to ensure that the passwords you're providing for your users are secure. Unfortunately, "secure" is a bit of a buzzword and can be highly subjective. So, we'll define what it means to be "secure" here:

  • The generator is downloaded via HTTPS, whether it's a website, executable ZIP, or browser extension.
  • The generator uses a cryptographically secure random number generator.
  • The generator provides at least 70-bits of entropy behind the password.
  • The generator is open source.
  • The generator generates passwords client-side, not server-side.
  • The generator does not serve any ads or client-side tracking software.

I think most of us can agree on these points- the software should be downloaded over HTTPS to mitigate man-in-the-middle attacks. A cryptographically secure RNG should be used to ensure unpredictability in the generated password. In addition, the CRNG should be uniformly distributed across the character set, so no element of the password is more likely to appear than any other. Creating an open source password generator ensures that the software can be audited for correctness, and it instills trust in the application. Generating passwords client-side means the server has no possible way of knowing what passwords were generated, unless the client is also calling home (the code should be inspected). And of course, we don't want any adware or malware installed in the password-generating application to further compromise the security of the generator.
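As an illustration of the "cryptographically secure and uniformly distributed" requirement, here is a minimal sketch using Python's standard secrets module (a browser-based generator would lean on window.crypto.getRandomValues() for the same job; the 94-character alphabet and 12-character length are just illustrative choices):

import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 characters

def generate_password(length=12):
    # secrets.choice() draws from the operating system's CSPRNG and is
    # uniform over the alphabet, so no character is favored over another.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
print("{:.1f} bits of entropy".format(12 * math.log2(len(ALPHABET))))  # ~78.7 bits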

Okay. That's all well and good, but what about this claim to generate passwords with at least 70 bits of entropy? Let's dig into that.

Brute force password cracking

Password cracking is all about reducing possibilities. Professional password crackers will have access to extensive word lists of previously compromised password databases, they'll have access to a great amount of hardware to rip through the password space, and they'll employ clever tricks in the password cracking software, such as Hashcat or MDXFind, to further reduce the search space, to make finding the passwords more likely. In practice, 90% of leaked hashed password databases are reversed trivially. With the remaining 10%, half of that space takes some time to find, but those passwords are usually recovered. The remaining few, maybe 3%-5%, contain enough entropy that the password cracking team likely won't recover those passwords in a week, or a month, or even a year.

So the question is this- what is that minimum entropy value that thwarts password crackers? To answer this question, let's look at some real-life brute force searching to see if we can get a good handle on the absolute minimum security margin necessary to keep your client's leaked password hash out of reach.

Bitcoin mining

Bitcoin mining is the modern-day version of the 1849 California Gold Rush. As of right now, Bitcoin is trading at $3,665.17 per BTC. As such, people are fighting over each other to get in on the action, purchasing specialized mining hardware, called "Bitcoin ASICs", to find those Bitcoins as quickly as possible. These ASICs hash blocks of data with SHA-256 and check a specific difficulty criterion to see if the result meets the requirements for a valid Bitcoin block. If so, the miner that found that block is rewarded in Bitcoin, and the block is recorded in the never-ending, ever-expanding, non-scalable blockchain.

How many SHA-256 hashes is the world at large calculating? As of this writing, the current rate is 7,751,843.02 TH/s, which is 7,751,843,020,000,000,000 SHA-256 hashes per second. At one point, it peaked at 8,715,000 TH/s, and there is no doubt in my mind that it will pass 10,000,000 TH/s before the end of the year. So let's run with that value of 10,000,000,000,000,000,000 SHA-256 hashes per second, or 10^19 SHA-256 hashes per second.

If we're going to talk about that in terms of bits, we need to convert it to a base-2 number rather than base-10. Thankfully, this is easy enough: all we need to calculate is log2(X) = log(X)/log(2). Doing some math (a short script after the list reproduces these numbers), we see that Bitcoin mining is roughly flipping every combination of bits in a:

  • 63-bit number every second.
  • 69-bit number every minute.
  • 74-bit number every hour.
  • 79-bit number every day.
  • 84-bit number every month.
  • 88-bit number every year.
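Here is the short script mentioned above that reproduces those figures, using the assumed 10^19 hashes per second and taking the floor of the base-2 logarithm:

import math

rate = 10**19  # SHA-256 hashes per second
for label, seconds in [("second", 1), ("minute", 60), ("hour", 3600),
                       ("day", 86400), ("month", 30 * 86400),
                       ("year", 365 * 86400)]:
    bits = int(math.log2(rate * seconds))
    print("{}-bit number every {}".format(bits, label))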

What does this look like? Well, the line is nearly flat. In this image, the x-axis is the number of days spent mining for Bitcoin, from 0 through a full year of 365 days. The y-axis is the search space exhaustion in bits. So, you can see that in roughly 45 days, Bitcoin mining will have calculated enough SHA-256 hashes to completely exhaust an 85-bit search space (click to enlarge):

Plot showing log(x*10^19*86400)/log(2) for Bitcoin mining.

Real-world password cracking

That's all fine and dandy, but I doubt professional password crackers have access to that sort of hardware. Instead, let's look at a more realistic example.

Recently, Australian security researcher Troy Hunt, the guy that runs https://haveibeenpwned.com/, released a ZIP of 320 million SHA-1 hashed passwords that he's collected over the years. Because the passwords were hashed with SHA-1, recovering them should be like shooting fish in a barrel. Sure enough, a team of password crackers got together, and made mincemeat of the dataset.

In the article, it is mentioned that they had a peak password cracking speed of 180 GH/s, or 180,000,000,000 SHA-1 hashes per second (18×10^10 SHA-1 hashes per second). The article mentions that's the equivalent of 25 NVidia GTX1080 GPUs working in concert. To compare this to Bitcoin mining, the team was flipping every combination of bits in a:

  • 41-bit number every second.
  • 47-bit number every minute.
  • 53-bit number every hour.
  • 58-bit number every day.
  • 63-bit number every month.
  • 66-bit number every year.

As we can see, this is a far cry from the strength of Bitcoin mining. But, are those numbers larger than you expected? Let's see how it looks on the graph, compared to Bitcoin (click to enlarge):

Plot showing log(x*18*10^10*86400)/log(2) for this cluster of password cracking hobbyists.

So, it seems clear that our security margin is somewhere above that line. Let's look at one more example, a theoretical one.

Theoretical password cracking by Edward Snowden

Before Edward Snowden became known to the world as Edward Snowden, he was known to Laura Poitras as "Citizenfour". In emails back-and-forth between Laura and himself, he told her (emphasis mine):

"Please confirm that no one has ever had a copy of your private key and that it uses a strong passphrase. Assume your adversary is capable of one trillion guesses per second. If the device you store the private key and enter your passphrase on has been hacked, it is trivial to decrypt our communications."

But one trillion guesses per second is only about 5x the collective power of our previous example of a small team of password cracking hobbyists. That's only about 125 NVidia GTX1080 GPUs. Certainly interested adversaries would have more money on hand to invest in more computing power than that. So, let's increase the rate to 10 trillion guesses per second. 1,250 NVidia GTX1080 GPUs would cost our adversary maybe $500,000. A serious investment, but possibly justifiable, and certainly not outside the $10 billion annual budget of the NSA. So let's roll with it.

At 10^13 password hashes per second, we are flipping every combination of bits in a:

  • 43-bits every second.
  • 49-bits every minute.
  • 54-bits every hour.
  • 59-bits every day.
  • 64-bits every month.
  • 68-bits every year.

Plotting this on our chart with both Bitcoin mining and clustered hobbyist password cracking, we see (click to enlarge):

Plot of log(x*86400*10^13)/log(2)

The takeaway

What does all this math imply? That as a developer of password generator software, you should be targeting a minimum of 70-bits of entropy with your password generator. This will give your users the necessary security margins to steer clear of well-funded adversaries, should some service provider's password database get leaked to the Internet, and they find themselves as a target.

As a general rule of thumb for password generator developers, these are the sorts of security margins you can expect from entropy:

  • 70-bits or more: Very secure.
  • 65-69 bits: Moderately secure.
  • 60-64 bits: Weakly secure.
  • 59 bits or less: Not secure.

Colored recommendation of the previous plot showing all brute force attempts.

What does this mean for your generator then? It means that the size of the password or passphrase that you are giving users should be at least:

  • Base-94: 70/log2(94)=11 characters
  • Base-64: 70/log2(64)=12 characters
  • Base-32: 70/log2(32)=14 characters
  • Base-16: 70/log2(16)=18 characters
  • Base-10: 70/log2(10)=22 characters
  • Diceware: 70/log2(7776)=6 words

Now, there is certainly nothing wrong with generating 80 bits, 90 bits, or even 128 bits of entropy. The only thing to consider is the size of the resulting passwords and passphrases. For example, if you were providing a minimum of 128-bit security for your users with the password generator, then things would look like:

  • Base-94: 128/log2(94)=20 characters
  • Base-64: 128/log2(64)=22 characters
  • Base-32: 128/log2(32)=26 characters
  • Base-16: 128/log2(16)=32 characters
  • Base-10: 128/log2(10)=39 characters
  • Diceware: 128/log2(7776)=10 words

As you can see, as you increase the security for your users, the size of the generated passwords and passphrases will also increase.
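Those character and word counts fall out of a one-line calculation: divide the target entropy by the bits each symbol contributes, then round up. A quick sketch that reproduces both tables above:

import math

def required_length(target_bits, alphabet_size):
    # Each symbol contributes log2(alphabet_size) bits; round up to whole symbols.
    return int(math.ceil(target_bits / math.log2(alphabet_size)))

for bits in (70, 128):
    for base in (94, 64, 32, 16, 10, 7776):
        print("{}-bit, base-{}: {}".format(bits, base, required_length(bits, base)))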

Conclusion

It's critical that we do right by our users when it comes to security. I know Randall Munroe of XKCD fame created the "correct horse battery staple" comic, advising everyone to create 4-word passphrases. This is fine, provided that those 4 words meet that minimum of 70 bits of entropy. In order for that to happen, though, the word list size x needs to satisfy:

   4 = 70/log2(x)
=> 4 = 70/(log(x)/log(2))
=> 4 = 70*log(2)/log(x)
=> 4*log(x) = 70*log(2)
=> log(x) = (70/4)*log(2)
=> x = 10^((70/4)*log(2))
=> x ~= 185,364

You would need a word list of at least 185,364 words to provide at least 17.5 bits of entropy per word, which brings us to the required 70 bits of total entropy for 4 words. All too often, I see generators providing four words where the word list is far too small, around Diceware size, which yields only around 51 bits of entropy. As we just concluded, that's not providing the necessary security for our users.
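A quick sanity check of those two numbers (my own recomputation):

import math

print(int(round(2 ** (70 / 4.0))))    # ~185,364 words needed for 17.5 bits per word
print(round(4 * math.log2(7776), 1))  # 4 Diceware words is only ~51.7 bits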

So, developers, when creating password and passphrase generators, make sure they are at least targeting the necessary 70-bits of entropy, in addition to the other qualifications that we outlined at the beginning of this post.

Colorful Passphrases

Since the development of my passphrase and password generator, I started working toward improving the other online generators out there on the web. I created a Google Spreadsheet to work toward that goal, by doing reasonable audits to "rank" each generator, and see how they stacked up against the rest. Then, I started submitting patches in hopes of making things better.

One passphrase generator that was brought to my attention was Pass Plum. Pass Plum supplies an example word list to use for generating your passphrases, if you choose to install the software on your own server. Unfortunately, the list is only 140 words in size, so if you use it as your word list, then you only get about 7.13 bits of entropy per word. Sticking to the default configuration of 4 words given to the user, that's a scant 28 bits of security on your passphrase, which is trivially reversed. I submitted a pull request to extend it to 8,192 words, providing exactly 13 bits of entropy per word, or about 52 bits of entropy for a 4-word passphrase- a significant improvement.

I noticed, however, that the default list was nothing but color names, and that got me thinking: what if the generator not only provided color names for the passphrase, but also colored each word with its named color? Basically, a sort of false visual synesthesia. What I want to know is this: is it easier to remember passphrases when you can associate each word with a visual color?

So, over the past several nights and weekends, I've been putting this together. Here it is- colorful passphrases.

Collage of 4 separate screenshots of what the color passphrase generator looks like.

Head over to my site to check it out. If a color is too light (its luma value is very high), then the word is outlined with CSS. Every word is bold, to make the word even more visible on the default white background.
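For the curious, a luma check can be as simple as a weighted sum of the RGB channels. Here is a rough sketch; the Rec. 601 weights and the threshold are my assumptions, not necessarily what the generator uses:

def needs_outline(r, g, b, threshold=200):
    # Perceived brightness (luma) using the Rec. 601 weights; very light
    # colors get a CSS outline so they stay readable on a white background.
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return luma > threshold

print(needs_outline(255, 255, 224))  # a light yellow -> True, outline it
print(needs_outline(139, 0, 0))      # a dark red -> False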

As I mentioned, the idea is simple: people struggle remembering random meaningless strings of characters for passwords, so passphrases are a way to make a random series of words easier to recall. After all, it should be easier to remember "gnu hush gut modem scamp giddy" than it is to remember "$5hKXuE[\NK". It's certainly easier to type on mobile devices, and embedded devices without keyboards, like smart TVs and video game consoles.

But, even then, there is nothing that is really tying "gnu hush gut modem scamp giddy" together, so you force yourself into some sort of mnemonic to recall it. Visually stimulated color passphrases have the benefit of using not only a mnemonic to recall the phrase, but an order of colors as well. For example, you might not recall "RedRobin Pumpkin Revolver DeepPuce Lucky Crail TealDeer", but you may remember its color order of roughly "red orange black purple gold brown teal". "A RedRobin is red. A pumpkin is orange. A revolver (gun) is black. DeepPuce is a purple. Lucky coins are gold. Crail, Scotland has brown dirt. TealDeer are teal."

However, it also comes with a set of problems. First, what happens if you actually have visual synesthesia? Will seeing these colors conflict with your mental image of what the color should be for that word? Second, many of the words are very obscure, such as "Crail" or "Tussock" or "Tuatara" (as all seen in the previous screenshot collage). Finally, what happens when you have a color passphrase where two similar colors are adjacent to each other? Something like "Veronica Affair Pipi DeepOak Atoll BarnRed RedOxide"? Both "BarnRed" and "RedOxide" are a deep reddish color. Will it be more difficult to recall which comes first?

A screenshot showing a color collision between "BarnRed" and "RedOxide"

As someone who is interested in password research, I wanted to see what sort of memory potential visually colorful passphrases could have. As far as I know, this has never been investigated before (at least I couldn't find any research done in this area, and I can't find any passphrase generators doing it). This post from Wired investigates alternatives to text entry for password support, such as using color wheels, but doesn't say anything about coloring the text itself. There is also a browser extension that colors password form fields on websites based on the SHA-1 hash of your password as you type it; you know the password is correct by recognizing that the pattern is the same as it always is when logging in.

Long story short, I think I'm wading into unknown territory here. If you find this useful, or even if you don't, I would be very interested in your feedback.