
The Ouroboros Card Shuffle

Introduction

For the most part, I don't play a lot of table games, and I don't play party games. But occasionally, I'll sit down with my family and play a board game or card game. When we play a card game though, I get teased by how I shuffle the deck of cards.

I know that to maximize entropy in the deck, it should be riffle shuffled at least 7 times, but a better security margin would be around 10 shuffles, and after 12, I'm just wasting my time. But I don't always riffle shuffle. I'll also do various deterministic shuffles as well, such as the pile shuffle to separate cards from each other.

I'm familiar with a number of deterministic card shuffles, which I didn't even know had names:

  • Pile shuffle- Deal the cards into piles, one card at a time, until the deck is exhausted, then collect the piles. I usually just do 4 or 5 piles, and pick them up in order.
  • Mongean shuffle- Move the cards from one hand to the other, strictly alternating placing each discard on top of, then beneath, the previously discarded cards.
  • Mexican spiral shuffle- Discard the top card to the table, and the second card to the bottom of the deck in your hand. Continue discarding all odd cards to the table, and all even cards beneath the deck in hand, until exhausted. I never do this shuffle, because it takes too long to execute.
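These deterministic shuffles are easy to model in code. Here's a minimal Python sketch of the pile and Mongean shuffles, where index 0 represents the top of the deck. The pickup order of the piles is a simplifying assumption of mine, not part of any formal definition:

```python
def pile_shuffle(deck, piles=4):
    """Deal one card at a time into `piles` piles, then stack the piles in order."""
    dealt = [deck[i::piles] for i in range(piles)]
    return [card for pile in dealt for card in pile]

def mongean_shuffle(deck):
    """Alternate placing each card on top of, then beneath, the growing pile."""
    out = [deck[0]]
    for i, card in enumerate(deck[1:]):
        if i % 2 == 0:
            out.insert(0, card)   # on top
        else:
            out.append(card)      # beneath
    return out

print(pile_shuffle(list(range(1, 9))))      # [1, 5, 2, 6, 3, 7, 4, 8]
print(mongean_shuffle([1, 2, 3, 4, 5, 6]))  # [6, 4, 2, 1, 3, 5]
```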

In practice, when I'm playing a card game with my family, I'll do something like 3 riffle shuffles, a pile shuffle, 3 more riffle shuffles, a Mongean shuffle, 3 more riffle shuffles, another pile shuffle, then one last riffle shuffle. I'll get teased about it, of course: "Dad, I'm sure the cards are shuffled just fine. Can we just play now?", but when we play the game, I'll never hear complaints about how poorly the deck was shuffled.

This got me thinking though- there aren't that many simple deterministic blind card shuffles (I say "blind", because any of the playing card ciphers would work, but that requires seeing the cards, which is generally frowned upon when playing competitive card games). I wonder what else is out there. Well, doing some web searches didn't turn up much. In fact, all I could find were variations of the above shuffles, such as random pile discarding and pickup, but nothing new.

So the question then turned into- could I create my own simple deterministic card shuffle? It didn't take me long before I came up with what I call the "Ouroboros shuffle".

The Ouroboros Shuffle

Before going any further, let me state that I very much doubt I'm the first to come up with this idea, but I have searched and couldn't find where anyone else had documented it. If it does in fact exist, let me know, and I'll gladly give credit where credit is due. Until then, however, I'll stick with calling it the "Ouroboros Shuffle", named after the serpent or dragon eating its own tail.

The shuffle is simple:

  1. Holding the deck in your hand, discard the first card from the bottom of the deck to the table.
  2. Discard the top card of the deck to the discard pile on the table.
  3. Repeat steps 1 and 2, strictly alternating bottom and top cards until the deck is exhausted.
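The steps above can be sketched in a few lines of Python, where index 0 is the top of the deck, and the shuffled list is shown with the most recent discard first, matching the walkthrough that follows:

```python
def ouroboros(deck):
    """One round: alternately discard the bottom card, then the top card."""
    shuffled = []
    while deck:
        shuffled.insert(0, deck.pop())       # bottom card to the table
        if deck:
            shuffled.insert(0, deck.pop(0))  # top card on top of it
    return shuffled

result = ouroboros(list(range(1, 53)))
print(result[:3], result[-3:])  # [26, 27, 25] [51, 1, 52]
```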

If the playing cards are plastic-based, like those from Kem or Copag, then you could "pinch" the top and bottom cards simultaneously, and pull them out of the deck in your hand to the table. If you do this perfectly, you will pinch 2 cards exactly 26 times. If they're paper-based though, this may not work as efficiently, due to cards having a tendency to stick together after heavy use.

If the deck was unshuffled as "1, 2, 3, ..., 50, 51, 52", then the first shuffle would look like this:

Step: 0
Unshuffled: 1, 2, 3, ..., 50, 51, 52
Shuffled: <none>

Step: 1
Unshuffled: 1, 2, 3, ..., 49, 50, 51
Shuffled: 52

Step: 2
Unshuffled: 2, 3, 4, ..., 49, 50, 51
Shuffled: 1, 52

Step: 3
Unshuffled: 2, 3, 4, ..., 48, 49, 50
Shuffled: 51, 1, 52

Step: 4
Unshuffled: 3, 4, 5, ..., 48, 49, 50
Shuffled: 2, 51, 1, 52

Step: 5
Unshuffled: 3, 4, 5, ..., 47, 48, 49
Shuffled: 50, 2, 51, 1, 52

Step: 6
Unshuffled: 4, 5, 6, ..., 47, 48, 49
Shuffled: 3, 50, 2, 51, 1, 52

....

Step 50:
Unshuffled: 26, 27
Shuffled: 25, 28, 24, ..., 51, 1, 52

Step 51:
Unshuffled: 26
Shuffled: 27, 25, 28, ..., 51, 1, 52

Step 52:
Unshuffled: <none>
Shuffled: 26, 27, 25, ..., 51, 1, 52

As you can see, the top and bottom cards are always paired together in the unshuffled deck, and discarded as a pair to the shuffled deck. The top and bottom cards could also be thought of as the head and tail of a list, and thus why I called it the Ouroboros shuffle.

If you execute this algorithm perfectly from an unshuffled deck, it will take 51 rounds before the deck is restored to its unshuffled state.
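This cycle length is easy to verify with a quick loop. The sketch below mirrors the shuffle steps above (index 0 is the top of the deck):

```python
def ouroboros(deck):
    """One round: alternately discard the bottom card, then the top card."""
    shuffled = []
    while deck:
        shuffled.insert(0, deck.pop())       # bottom card first
        if deck:
            shuffled.insert(0, deck.pop(0))  # then the top card
    return shuffled

orig = list(range(1, 53))
deck = ouroboros(list(orig))
rounds = 1
while deck != orig:
    deck = ouroboros(deck)
    rounds += 1
print(rounds)  # 51
```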

Observations

Almost immediately, I noticed a bias. It doesn't matter how many times I execute this algorithm, the bottom card will always remain on the bottom. In the above example, the King of Spades (if assigned the value of "52") will stay at the bottom of the deck, due to the nature of the shuffle of discarding the bottom card first. So I recognized that I would need to cut at least 1 card from the top of the deck to the bottom of the deck before the next round of the shuffle, to ensure the bottom card gets mixed in with the rest of the deck.

Other questions started popping up, specifically:

  • How many perfect shuffles will it take to restore the deck to an unshuffled state now?
  • Is there a different bias hidden after cutting the top card to the bottom?
  • What if I cut 2 cards? 3 cards? 51 cards?

Whelp, time to code up some Python, and see what pops out. What I'm looking for is what the state of the deck looks like after each round. In other words, I want to know which positions in the deck each card occupies. For example, does the Seven of Clubs see all possible 52 positions in the deck? Without the cut, we know that's not possible, because the bottom card stubbornly stays in the bottom position.

Typing up a quick script and graphing with Gnuplot gave me the following images. The first image on the left is the Ouroboros shuffle with no cuts, while the right image is the Ouroboros shuffle followed by cutting the top card to the bottom of the deck at the end of the round. Click to enlarge.

What you're looking at is the card position in the deck along the X-axis and the card value along the Y-axis. In the left image, where the Ouroboros shuffle is executed without any following cuts, the 52nd card in the deck is always the face value of 52. But in the right image, where the Ouroboros shuffle is followed by cutting one card from the top of the unshuffled deck to the bottom, every card position sees every face value.

So what would happen if, instead of cutting 1 card off the top to the bottom at the end of each round, I cut 2 cards, 3 cards, etc., all the way to cutting 51 cards off the top to the bottom? Well, more Python scripting, and I generated a total of 52 images showing every possible position a card occupies in the deck until the deck returns to its unshuffled state.

Visualizations of the Ouroboros card shuffle with cuts

Interestingly enough, executing the Ouroboros shuffle followed by cutting 19 cards, leads to a cycle length of 6,090 perfect shuffles before restoring the deck back to its unshuffled state. Awesome! Except, as you can see in the Imgur post above, it's extremely biased.

Every shuffle-and-cut is listed here with its cycle length:

Cut: 0, iter: 51
Cut: 1, iter: 52
Cut: 2, iter: 51
Cut: 3, iter: 272
Cut: 4, iter: 168
Cut: 5, iter: 210
Cut: 6, iter: 217
Cut: 7, iter: 52
Cut: 8, iter: 418
Cut: 9, iter: 52
Cut: 10, iter: 24
Cut: 11, iter: 350
Cut: 12, iter: 387
Cut: 13, iter: 252
Cut: 14, iter: 1020
Cut: 15, iter: 144
Cut: 16, iter: 1972
Cut: 17, iter: 34
Cut: 18, iter: 651
Cut: 19, iter: 6090
Cut: 20, iter: 175
Cut: 21, iter: 90
Cut: 22, iter: 235
Cut: 23, iter: 60
Cut: 24, iter: 2002
Cut: 25, iter: 144
Cut: 26, iter: 12
Cut: 27, iter: 50
Cut: 28, iter: 24
Cut: 29, iter: 10
Cut: 30, iter: 44
Cut: 31, iter: 72
Cut: 32, iter: 297
Cut: 33, iter: 90
Cut: 34, iter: 45
Cut: 35, iter: 132
Cut: 36, iter: 12
Cut: 37, iter: 210
Cut: 38, iter: 207
Cut: 39, iter: 104
Cut: 40, iter: 420
Cut: 41, iter: 348
Cut: 42, iter: 30
Cut: 43, iter: 198
Cut: 44, iter: 35
Cut: 45, iter: 140
Cut: 46, iter: 390
Cut: 47, iter: 246
Cut: 48, iter: 28
Cut: 49, iter: 12
Cut: 50, iter: 36
Cut: 51, iter: 30

The only "shuffle then cut" rounds that are uniform appear to be cutting 1 card, 7 cards, and 9 cards. The other 49 shuffles are biased in one way or another, even though their cycle lengths differ.

Here's the Python code I used to create the shuffled lists, each into their own comma-separated file:

#!/usr/bin/python3

def step_1(deck):
    """One Ouroboros round: alternately move the bottom then top card to a new pile."""
    tmp = []
    for card in range(26):
        tmp.insert(0, deck[-1])  # bottom card first
        deck.pop(-1)
        tmp.insert(0, deck[0])   # then the top card
        deck.pop(0)
    return tmp

def step_2(deck, cut):
    """Cut 'cut' cards from the top of the deck to the bottom."""
    return deck[cut:] + deck[:cut]

orig = [_ for _ in range(1, 53)]
deck = [_ for _ in range(1, 53)]

for i in range(52):
    with open("cut_{}.csv".format(i), "w") as f:
        f.write(",".join(map(str, deck)) + "\n")
    deck = step_1(deck)
    deck = step_2(deck, i)
    n = 1
    while deck != orig:
        with open("cut_{}.csv".format(i), "a") as f:
            f.write(",".join(map(str, deck)) + "\n")
        deck = step_1(deck)
        deck = step_2(deck, i)
        n += 1
    print("Cut: {}, iter: {}".format(i, n))

Conclusion

This was a fun shuffle to play with and one that I'll incorporate into my card playing with my family. Now I could do something like: 3 riffle shuffles, Ouroboros shuffle with 1 card cut, 3 riffle shuffles, Mongean shuffle, 3 riffle shuffles, pile shuffle, 1 riffle shuffle, and I will be happy.

Latin Squares, Mathematics, and Cryptography

Introduction

Recently, I've been studying Latin squares and their role in classical cryptography including the one-time pad. Latin squares are NxN squares where no element in a row is duplicated in that same row, and no element in a column is duplicated in that column. The popular Sudoku game is a puzzle that requires building a Latin square.
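The defining property is simple to check mechanically. Here's a small sketch (my own helper, not from the post) that verifies a grid is a Latin square:

```python
def is_latin_square(grid):
    """Check that every row and every column holds each symbol exactly once."""
    n = len(grid)
    symbols = set(grid[0])
    if len(symbols) != n:
        return False
    rows_ok = all(len(row) == n and set(row) == symbols for row in grid)
    cols_ok = all(set(col) == symbols for col in zip(*grid))
    return rows_ok and cols_ok

print(is_latin_square([[1, 2, 3], [2, 3, 1], [3, 1, 2]]))  # True
```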

As I delved deeper and deeper into the subject, I realized that there is a rich history here that I would like to introduce you to. Granted, this post is not an expansive nor exhaustive discussion on Latin squares. Rather, it's meant to introduce you to the topic, so you can look into it on your own if this interests you.

In each of the sections below, the "A" and "N" characters are highlighted in the table image to demonstrate that the table is indeed a Latin square. Further, you can click on any table image to enlarge.

Tabula Recta

The Tabula Recta is the table most people are probably familiar with, recognizing it as the Vigenère table. However, the table was first used by German author and monk Johannes Trithemius in 1508, who used it in his Trithemius polyalphabetic cipher. This was a good 15 years before Blaise de Vigenère was even born, 45 years before Giovan Battista Bellaso wrote about his cipher using the table in his 1553 book "La cifra del. Sig. Giovan Battista Bellaso", and 78 years before Blaise de Vigenère improved upon Bellaso's cipher.

Today, we know it as either the "tabula recta" or the "Vigenère table". Regardless, each row shifts the alphabet one character to the left, creating a series of 26 Caesar cipher shifts. This property of the shifted alphabets turns out to be a weakness with the Vigenère cipher, in that if a key repeats, we can take advantage of the Caesar shifts to discover the key length, then the key, and finally break the ciphertext.
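Because each row is just a Caesar shift, a tabula recta lookup reduces to modular addition. A minimal sketch (the "ATTACKATDAWN"/"LEMON" pair is the classic textbook example, my own illustration here):

```python
import itertools

A = ord('A')

def vigenere(plaintext, key):
    # row = key letter, column = plaintext letter: c = (p + k) mod 26
    return ''.join(chr((ord(p) + ord(k) - 2 * A) % 26 + A)
                   for p, k in zip(plaintext, itertools.cycle(key)))

print(vigenere("ATTACKATDAWN", "LEMON"))  # LXFOPVEFRNHR
```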

Jim Sanborn integrated a keyed tabula recta into his Kryptos sculpture in the 2nd and 4th panels. Even though the first 3 passages in the Kryptos sculpture have been cracked, the 4th passage remains a mystery.

Beaufort Table

More than 250 years later, Rear Admiral Sir Francis Beaufort modified the Vigenère cipher by using a reciprocal alphabet and changing the way messages were encrypted. Messages were still encrypted with a repeating key, similar to the Vigenère cipher, but the plaintext character was located in the first column and the key in the first row. The intersection was the ciphertext. This became the Beaufort cipher.
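In modular arithmetic, the Beaufort lookup amounts to subtracting the plaintext from the key, which makes the cipher self-inverse: applying it twice with the same key restores the message. A quick sketch (sample text and key are my own):

```python
import itertools

A = ord('A')

def beaufort(text, key):
    # plaintext in the first column, key in the first row: out = (k - t) mod 26
    return ''.join(chr((ord(k) - ord(t)) % 26 + A)
                   for t, k in zip(text, itertools.cycle(key)))

ct = beaufort("ATTACKATDAWN", "LEMON")
print(ct)
print(beaufort(ct, "LEMON"))  # ATTACKATDAWN — the same operation decrypts
```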

His reasoning in why he used a different table and changed the enciphering process isn't clear. It may have been as simple as knowing concepts about the Vigenère cipher without knowing the specific details. He may have had other reasons.

One thing to note, however, is that Vigenère-encrypted ciphertexts cannot be decrypted with a Beaufort table and vice versa. Even though the Beaufort cipher suffers from the same cryptanalysis, the Caesar shifts are different, and the calculation, if using numbers instead of letters, is also different.

The Beaufort table was integrated into a hardware encryption machine called the Hagelin M-209. The M-209 was used by the United States military during WWII and through the Korean War. The machine itself was small, and compact, coming in about the size of a lunchbox and only weighing 6 pounds, which was remarkable for the time.

One thing to note, is that the Beaufort table has "Z" in the upper-left corner, with the reciprocal alphabet in the first row and first column, as shown in the image above. Any other table that is not exactly as shown above that claims to be the Beaufort table is not correct.

NSA's DIANA Reciprocal Table

Of course, the narcissistic NSA needs their own polyalphabetic table! We can't let everyone else be the only ones who have tables! I'm joking of course, as there is a strong argument for using this reciprocal table rather than the Beaufort.

Everyone is familiar with the one-time pad, a proven theoretically unbreakable cipher if used correctly. There are a few ways in which to use the one-time pad, such as using XOR or modular addition and subtraction. Another approach is to use a lookup table. The biggest problem with the tabula recta is when using the one-time pad by hand, it's easy to look up the wrong row or column and introduce mistakes into the enciphering process.

However, due to the reciprocal properties of the "DIANA" table (don't you love all the NSA codenames?), encryption and decryption are identical, which means they require only a single column. A key "row" is no longer needed, and the order of plain, key, and cipher letters doesn't matter (unlike Vigenère vs Beaufort) and may even differ for sender and receiver. Just like with Beaufort, this table is incompatible with Vigenère-encrypted ciphertexts. Further, it's also incompatible with Beaufort-encrypted ciphertexts, especially if it's a one-time pad. The Beaufort table shifts the alphabet to the right, while the DIANA table shifts the alphabet to the left. The tabula recta also shifts left.

Let's make one thing clear here- this table was created strictly for ease of use, not for increased security. When using the one-time pad, the key is at least the length of the message, which means it doesn't repeat. So it doesn't matter that the table is 26 Caesar-shifted alphabets. That property won't show itself in one-time pad ciphertexts.
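The DIANA trigraph is commonly described by the relation plain + key + cipher ≡ 25 (mod 26). Assuming that convention, a sketch shows why one table cell works in every direction- feeding any two of the three letters in yields the third, so the same function both encrypts and decrypts:

```python
import itertools

A = ord('A')

def diana(text, key):
    # trigraph relation: plain + key + cipher ≡ 25 (mod 26), so the
    # same lookup works for encryption and decryption alike
    return ''.join(chr((25 - (ord(t) - A) - (ord(k) - A)) % 26 + A)
                   for t, k in zip(text, itertools.cycle(key)))

ct = diana("ATTACKATDAWN", "LEMON")
print(diana(ct, "LEMON"))  # ATTACKATDAWN — identical function decrypts
```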

E.J. Williams' Balanced Tables

Stepping away from cryptography for a moment, and entering the world of mathematics- in this case, mathematical models applied to farming- we come across E.J. Williams' balanced tables. Note how the "A" and "N" characters are populated throughout the table compared to what we've seen previously.

The paper models chemical treatments applied to crops over a span of time, and how to approach the most efficient means of applying those treatments. The effect of the previous treatment, called the "residual effect", is then analyzed. A method based on a "balanced" Latin square is discussed, then applied to multiple farming sites and analyzed.

Now, I know what you're thinking- "Let's use this for a cipher table!". Well, if you did, and your key repeated throughout the message, the ciphertext would not exhibit Caesar-shifted characteristics like Vigenère and Beaufort. However, the table is still deterministic, and as such, knowing how the table is built will give cryptanalysts the edge necessary to still break Williams-encrypted ciphertexts.

Michael Damm's Anti-Symmetric Quasigroups of Order 26

Also in the world of mathematics are quasigroups. These are group-like algebras that must be total and invertible, but not necessarily associative. Michael Damm researched quasigroups as the basis for an integrity checksum, such as in calculating the last digit of a credit card number. But not only did he research quasigroups- he researched anti-symmetric quasigroups. Anti-symmetry here means that if "(c*x)*y = (c*y)*x", then "x = y"; equivalently, whenever "x != y", it follows that "(c*x)*y != (c*y)*x".

Michael Damm, while researching checksums, introduced us to anti-symmetric quasigroups. One property was required, and that was that the main diagonal was "0", or "A" in our case. The Damm algorithm creates a checksum, such that when verifying the check digit, the result places you on the main diagonal, and thus returns "0". Note that any quasigroup can be represented by a Latin square.

Due to the nature of the Damm algorithm as a checksum, this could be used to verify the integrity of a plaintext message before encrypting using a quasigroup of order 26, as shown above. The sender could calculate the checksum of his plaintext message, and append the final character to the plaintext before encrypting. The recipient, after decrypting the message, could then run the same Damm checksum algorithm against the full plaintext message. If the result is "A", the message wasn't modified.
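The check-digit flow can be sketched with the standard order-10 Damm quasigroup (the commonly published decimal table; the order-26 letter table in the image works the same way, with "A" playing the role of 0):

```python
# Standard order-10 anti-symmetric quasigroup, zero main diagonal
DAMM10 = [
    [0, 3, 1, 7, 5, 9, 8, 6, 4, 2],
    [7, 0, 9, 2, 1, 5, 4, 8, 6, 3],
    [4, 2, 0, 6, 8, 7, 1, 3, 5, 9],
    [1, 7, 5, 0, 9, 8, 3, 4, 2, 6],
    [6, 1, 2, 3, 0, 4, 5, 9, 7, 8],
    [3, 6, 7, 4, 2, 0, 9, 5, 8, 1],
    [5, 8, 6, 9, 7, 2, 0, 1, 3, 4],
    [8, 9, 4, 5, 3, 6, 2, 0, 1, 7],
    [9, 4, 3, 8, 6, 1, 7, 2, 0, 5],
    [2, 5, 8, 1, 4, 3, 6, 7, 9, 0],
]

def damm_check_digit(number):
    """Fold the digits through the quasigroup; the interim value is the check digit."""
    interim = 0
    for d in str(number):
        interim = DAMM10[interim][int(d)]
    return interim

print(damm_check_digit(572))   # 4
print(damm_check_digit(5724))  # 0 — appending the check digit lands on the diagonal
```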

Notice in my image above, that while "A" rests along the main diagonal, the rest of the alphabets are randomized, or at least shuffled. It really isn't important how the alphabets are created, so long as they meet the requirements of being an anti-symmetric quasigroup.

Random Tables

Finally, we have randomized Latin squares. These are still Latin squares, such that for any element in a row, it is not duplicated in that row, and for any element in a column, it is not duplicated in that column. Other than that, however, there is no relationship between rows, columns, or elements. Their use is interesting in a few areas.

First, suppose I give you a partially filled Latin square as a "public key", with instructions on how to encrypt with it. I could then use my fully filled Latin square as a "private key", of which the public key is a subset. Using this private key, with some other algorithm, I could then decrypt your message. It turns out, completing a partially filled Latin square is NP-complete, meaning that we know of no polynomial-time algorithm that can complete the task. As such, this builds a good foundation for public key cryptography, as briefly outlined here.

Further, because of the lack of any structure in a randomized Latin square, aside from the requirements of being a Latin square, these make good candidates for symmetric message authentication code (MAC) designs. For example, a question on the cryptography StackExchange asked if there was any humanly-verifiable way to add message authentication to the one-time pad. The best answer suggested using a circular buffer as a signature, which incorporates the key, the plaintext, modular addition, and the Latin square. By having a randomized Latin square as the foundation for a MAC tag, no structure is present in the authenticated signature itself. Note, the table can still be public.

Steve Gibson incorporated Latin squares into a deterministic password manager. Of course, as with all deterministic password managers, there are some fatal flaws in their design. Further, his approach, while "off the grid", is rather cumbersome in execution. But it is creative, and certainly worth mentioning here as a randomized Latin square.

Conclusion

Latin squares have fascinated mathematicians for centuries, and in this post, we have seen their use in cryptography, mathematical modeling, data integrity, message authentication, and even password generation. This only briefly shows their potential.

Getting Up To 8 Possibilities From A Single Coin Toss

Introduction

Lately, I've been interested in pulling up some classical modes of generating randomness. That is, rather than relying on a computer to generate my random numbers for me, which is all too common and easy these days, I wanted to go offline, and generate random numbers the classical way- coin flips, dice rolls, card shuffling, roulette wheels, bingo ball cages, paper shredding, etc.

In fact, if randomness interests you, I recently secured the r/RNG subreddit, where we discuss everything from random number generators, to hashing functions, to quantum mechanics and chaos to randomness extraction. I invite you to hang out with us.

Anyway, I was a bit bothered that all I could get out of a single coin toss was 2 outcomes- heads or tails. It seems like there just is no way around it. It's the most basic randomness mechanic, yet it seems so limiting. Then I came across 18th century mathematician Georges-Louis Leclerc, Comte de Buffon, who played a common game of placing bets on tossing a coin onto a tiled floor: whether the coin landed squarely in a tile without crossing any edges, or whether it crossed a tile edge.

Then it hit me- I can extract more entropy out of a coin toss by printing a grid on a piece of paper. So, I put this up on GitHub as a simple specification. So far, I have papers you can print in letter, ledger, A3, or A4 sizes, and coins for the United States, Canada, and the European Union.

The theory

Assume a tile has a length of "l" and is square, and assume a coin has a diameter "d" such that "d < l". In other words, the tile is larger than the coin, and there is a place on the tile where the coin can sit without crossing any edges of the tile.

This means that if the edge of the coin is tangent to the edge of the tile, then we can draw a smaller square with the center of the coin, inside our tile. This smaller square tells us that if the center of the coin lands anywhere inside of that smaller square, the edges of the coin will not cross the edges of the tile.

Image showing 4 pennies in the corners of a tile, drawing a smaller center square

Now we know the coin diameter, but we would like to know the tile edge length, so we can draw our grid on a paper to toss the coin to. As such, we need to know the ratio of the area of the tile to the area of the smaller inner square drawn by the coin.

Know that the area of the tile with length "l" is:

A(tile) = l^2

The area of the smaller square inside the tile is determined by both the tile length "l" and the coin diameter "d":

A(inner_square) = (l-d)^2

So the ratio of the two is:

P = (l-d)^2/l^2

I use "P" for our variable as this is the probability of where the center of the coin lands. We want our outcomes equally likely, so we want the center of the coin to land with 50% probability inside the inner square and 50% probability outside of the inner square.

1/2 = (l-d)^2/l^2

We know the diameter of the coin, so we just need to solve for the tile edge length. This is a simple quadratic equation:

1/2 = (l-d)^2/l^2
l^2 = 2*(l-d)^2
l^2 = 2*(l^2 - 2*l*d + d^2)
l^2 = 2*l^2 - 4*l*d + 2*d^2
0 = l^2 - 4*l*d + 2*d^2

If you remember from your college or high school days, the quadratic formula solution to the quadratic equation is:

x = (-b +/- Sqrt(b^2 - 4*a*c))/(2*a)

Plugging that in, we get:

l = (-(-4*d) +/- Sqrt((4*d)^2 - 4*1*(2*d^2)))/2
l = (4*d +/- Sqrt(16*d^2-8*d^2))/2
l = (4*d +/- Sqrt(8*d^2))/2
l = (2*d +/- 2*d*Sqrt(2))/2
l = d*(2 +/- Sqrt(2))

No surprise, we have 2 solutions. So, which one is the correct one? Well, we set the restriction that "d < l" earlier, and we can't break that. So, we can clearly see that:

l = d*(2 - Sqrt(2))
l = d*(Something less than 1)
l < d

So, this equation would mean that our coin diameter "d" is larger than our tile edge "l", which doesn't mean anything to us. As such, the solution to our problem for finding the tile edge length when we know the coin diameter is:

l = d*(2 + Sqrt(2))
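A quick numeric sanity check confirms this solution: plugging l = d*(2 + Sqrt(2)) back into the probability expression gives exactly one half (the ratio is scale-free, so any diameter works):

```python
import math

d = 1.0                      # any coin diameter; the ratio doesn't depend on it
l = d * (2 + math.sqrt(2))   # tile edge from the solution above
p = (l - d) ** 2 / l ** 2
print(p)  # 0.5 (up to floating point)
```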

Getting 4 outcomes from one coin toss

Now that we have the theory knocked out, let's apply it. I know that a United States penny diameter is 1.905 centimeters. As such, my grid edge length needs to be:

l = 1.905*(2+Sqrt(2))
l ~= 6.504 centimeters

This means then that when I flip my coin onto the paper grid, there is now a 50% chance that the coin will cross a grid line and a 50% chance that it won't. But this result runs orthogonal to whether or not the coin lands on a heads or tails. As such, I have the following uniformly distributed outcomes:

  1. Tails, does not cross an edge = 00
  2. Tails, crosses any edge(s) = 01
  3. Heads, does not cross an edge = 10
  4. Heads, crosses any edge(s) = 11
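A quick Monte Carlo simulation (my own sketch, assuming the coin center lands uniformly at random within a tile) backs up the 4-way split:

```python
import math
import random

random.seed(1)
d = 1.905                    # US penny diameter in centimeters
l = d * (2 + math.sqrt(2))   # tile edge length from the derivation
n = 200_000
counts = {0: 0, 1: 0, 2: 0, 3: 0}
for _ in range(n):
    heads = random.random() < 0.5
    # coin center lands uniformly at random within one tile
    x, y = random.uniform(0, l), random.uniform(0, l)
    clear = d / 2 <= x <= l - d / 2 and d / 2 <= y <= l - d / 2
    counts[(2 if heads else 0) + (0 if clear else 1)] += 1
for outcome in sorted(counts):
    print(outcome, round(counts[outcome] / n, 3))  # each ≈ 0.25
```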

If we do a visual check, whenever the coin center lands in the white square of the following image, the edges of the coin will not cross the edge of the square. However, if the coin center lands in the red area, then the edge of the coin will cross an edge or edges.

Grid square showing where the coin center must lie to cross an edge.

You can measure the ratios of the area of the red to that of the white to convince yourself they're equal, although be careful- my image editor didn't allow me to calculate sub-pixel measurements when building this image.

Getting 8 outcomes from one coin toss

Remarkably, rather than treat the grid as a one-dimensional object (does the coin cross any grid edge or not), I can use the grid as an X/Y plane, and treat crossing an x-axis edge differently than crossing a y-axis edge. However, that only gives me 6 outcomes, and it is possible that the coin will land in a corner, crossing both the x-axis and y-axis simultaneously. So, I need to treat that separately.

Just like we calculated the odds of crossing an edge to be 50%, now I have 4 outcomes:

  1. Does not cross an edge.
  2. Crosses an x-axis edge only.
  3. Crosses a y-axis edge only.
  4. Crosses both the x-axis and y-axis edges (corner).

As such, each outcome needs to be equally likely, or 25%. Thus, my problem now becomes:

1/4 = (l-d)^2/l^2

Just like we solved the quadratic equation for P=50%, we will do the same here. However, I'll leave the step-by-step as an exercise for the reader. Our solution becomes:

l = 2*d

Simple. The length of the grid edge must be twice the diameter of the coin. No square roots or distributed multiplication- double the diameter of the coin, and we're good to go. However, this has a benefit and a drawback. The benefit is that the grid is more compact (2*d vs ~3.4*d), which reduces your ability to "aim" for a grid edge or not. The drawback is that you will have to make more ambiguous judgment calls on whether or not the coin crosses an edge (75% of the time vs 50% in the previous approach).
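The same Monte Carlo sanity check works here (my own sketch, again assuming the coin center lands uniformly within a tile): with l = 2*d, all four positional outcomes come out equally likely.

```python
import random

random.seed(1)
d = 1.0
l = 2 * d                    # tile edge for the 4-positional-outcome grid
n = 200_000
counts = [0, 0, 0, 0]        # none, vertical only, horizontal only, corner
for _ in range(n):
    x, y = random.uniform(0, l), random.uniform(0, l)
    near_vertical = x < d / 2 or x > l - d / 2    # coin would cross a vertical line
    near_horizontal = y < d / 2 or y > l - d / 2  # coin would cross a horizontal line
    counts[near_vertical + 2 * near_horizontal] += 1
print([round(c / n, 3) for c in counts])  # each ≈ 0.25
```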

So, like the previous approach of getting 4 outcomes, we have a visual check. If the coin center lands anywhere in the white area of the following image, it won't cross an edge. If the coin center lands anywhere in the green area, the coin will cross an x-axis edge. If the coin center lands anywhere in the blue, it will cross a y-axis edge, and if the coin center lands anywhere in the red, it will cross both an x-axis and y-axis edge simultaneously.

Grid showing the areas the coin center must lie to cross an edge or corner.

As with the previous 4-outcome approach, we can convince ourselves this holds. The area of the white square should equal the area of the blue, as well as the area of the green, and the area of the red.

This means now we have 8 uniformly distributed outcomes:

  1. Tails, does not cross an edge = 000
  2. Tails, crosses the x-axis only = 001
  3. Tails, crosses the y-axis only = 010
  4. Tails, crosses both axes (corner) = 011
  5. Heads, does not cross an edge = 100
  6. Heads, crosses the x-axis only = 101
  7. Heads, crosses the y-axis only = 110
  8. Heads, crosses both axes (corner) = 111

How do you like them apples? 8 equally likely outcomes from a single coin toss.

As mentioned, one big advantage, aside from getting 3 bits per coin toss, is the smaller grid printed on paper. You have less opportunity to try to cheat the system and influence the results, but you may also have a more difficult time deciding if the coin crossed a grid edge.

On the left is the paper grid used for 2-bit coin toss extraction, while the paper grid on the right is used for 3-bit coin toss extraction.

Image showing the grid sizes for a 2-bit coin toss extraction. Image showing the grid sizes for a 3-bit coin toss extraction.

Some thoughts

Both fortunately and unfortunately, the coin will bounce when it lands on the paper. If the bounce carries the coin off the paper, all you can record is a single bit, or one of two outcomes- heads or tails- because it becomes ambiguous whether or not the coin would have crossed a grid edge, and which edge it would have crossed.

You could probably minimize the bouncing by putting the paper on something less hard than a table, such as a felt card table, a towel or rag, or even a carpeted floor. But this may also create unwanted bias in the result. For example, if the paper is placed on something too soft, such as a carpeted floor, when the coin hits the paper, it could create damage to the paper, thus possibly influencing future flips, and as a result, introducing bias into the system.

Surrounding the paper with "walls", such that the coin cannot pass the boundaries of the grid may work, but the coin bouncing off the walls will also impact the outcomes. It seems clear that the walls would need to be placed immediately against the outer edge of the grid to prevent from introducing any bias.

Additional approaches

This is hardly conclusive on extracting extra randomness from a coin flip. It is known that coins precess when flying through the air, and as part of that precession, the coin may be spinning like a wheel. This means that the coin could be "facing north" or "facing south" in addition to heads or tails. However, it's not clear to me how much spin exists in a standard flip, and if this could be a reliable source of randomness.

Further, in our 8-outcome approach, we used the center of the coin as the basis for our extra 2 bits, in addition to heads and tails. However, we could have made the grid edge length "4d" instead of "2d", and ignored the toss if the coin crosses both edges. This means a larger grid, which could improve your chances of aiming the coin in an attempt to skew the results, and also means sticking with smaller coins, as larger coins just won't have many grids that can fit on a paper.

Other ideas could be adding color to each grid. So not only do we identify edge crossing, but use color as a third dimension to get up to 4-bits of randomness. So maybe the x-axes have alternating black and white areas, as do the y-axis, and the corners. The center of the grid could be possibly alternating red/blue quadrants. Where the center of the coin lands determines the extracted bits. Of course, this would be a visually busy, and possibly confusing paper.

None of these have been investigated, but I think each could be interesting approaches to how to extract more bits out of a coin toss.

Conclusion

I think this is a refreshing approach to an age-old problem- the coin toss. Extracting 3 bits from a single flip is extracting more entropy than a fair d6 die can produce (~2.5 bits). This means that practically speaking, the coin is more efficient at entropy extraction than dice. However, you can roll multiple dice simultaneously, whereas it's more difficult to toss multiple coins simultaneously.

Acknowledgments

Thanks to Dr. Markku-Juhani O. Saarinen, Marsh Ray, and JV Roig for the discussions we had on Twitter, and for helping me flesh out the ideas.

Middle Square Weyl Sequence PRNG

Introduction

The very first software algorithm for generating random numbers was supposedly written in 1946 by John von Neumann, and is called the Middle Square Method. It's crazy simple; enough so, you could execute it with a pencil, paper, and basic calculator. In this post, I'm going to cover the method, its drawbacks, and a modern improvement called the Middle Square Weyl Sequence.

Middle Square Method

The algorithm is to start with an n-digit seed. The seed is squared, producing a 2n-digit result, zero-padded as necessary. The middle n-digits are then extracted from the result for the next seed. See? Simple. Let's look at an example.

Suppose my seed is 81 (2 digits). 81-squared is 6561 (4 digits). We then take the middle 2 digits out of the result, which is 56. We continue the process:

81² = 6561
56² = 3136
13² = 0169
16² = 0256
25² = 0625
62² = 3844
84² = 7056
5² = 0025
2² = 0004
0² = 0000

And we've reached the core problem with the middle square method: it has a tendency to converge, most likely to zero, but other numbers are possible, and in some cases a short loop. Of course, John von Neumann was aware of this problem, but he also preferred it that way. When the middle square method fails, it's immediately noticeable. But it's also horribly biased and fails most statistical tests for randomness.
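The whole procedure is only a few lines of Python. This sketch reproduces the 2-digit example above, converging from 81 to 0:

```python
def middle_square(seed, n=2):
    """One iteration of the middle square method for an n-digit seed."""
    square = str(seed * seed).zfill(2 * n)   # square, zero-padded to 2n digits
    return int(square[n // 2 : n // 2 + n])  # keep the middle n digits

# Iterate from the example seed 81; the sequence collapses to 0.
sequence = [81]
while sequence[-1] != 0:
    sequence.append(middle_square(sequence[-1]))
# sequence == [81, 56, 13, 16, 25, 62, 84, 5, 2, 0]
```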

Middle Square Weyl Sequence

A modern approach to an old problem is known as the Middle Square Weyl Sequence, which borrows the Weyl sequence named after Hermann Weyl. Basically, a constantly-advancing number is added to the square, then the middle bits are extracted from the result for the next seed. Let's first look at the C code, then I'll explain it in detail.

#include <stdint.h>

uint64_t x = 0, w = 0;

// Must be odd (least significant bit is "1"), and upper 32 bits non-zero
uint64_t s = 0xb5ad4eceda1ce2a9; // qualifying seed

// return 32-bit number
inline static uint32_t msws() {
    x *= x; // square the number
    w += s; // the weyl sequence
    x += w; // apply to x
    return x = (x>>32) | (x<<32); // return the middle 32-bits
}

Explanation

Okay. Let's dive into the code. This is a 32-bit PRNG using John von Neumann's Middle Square Method, starting with a 64-bit seed "s". As the comment says, it must be an odd number, and its upper 32 bits must be non-zero. It must be odd to ensure that "x" can be both odd and even. Recall: an odd plus an odd equals an even, and an odd plus an even equals an odd.

Note that at the start, "x" is zero, so squaring it is also zero. But that's not a problem, because we are adding a non-zero number. During that time, the "w" variable is updated; it changes on every iteration, while "s" remains static.

Finally, our return value is a 32-bit number (because of the "uint32_t" return type), but we're doing some bit-shifting. Supposedly, this is returning the middle 32 bits of our 64-bit "x", but that's not immediately clear. Let's look at it more closely.

Example

Suppose "x = 0xace983fe671dbd09". Then "x" is a 64-bit number with the following bits:

1010110011101001100000111111111001100111000111011011110100001001

When that number is squared, it becomes the 128-bit number 0x74ca9e5f63b6047f6a65456d9da04a51, or in binary:

01110100110010101001111001011111011000111011011000000100011111110110101001100101010001010110110110011101101000000100101001010001

But remember, "x" is a 64-bit number, so in our C code, only the bottom 64 bits of that 128-bit number are kept. So "x" is really 0x6a65456d9da04a51, or in binary:

0110101001100101010001010110110110011101101000000100101001010001

But the bits "01101010011001010100010101101101" are the third 32-bit word of the 128-bit number that was the result of squaring "x" (see above). They are the "middle" 32 bits that we're after. So, we're going to do something rather clever. We're going to swap the upper 32 bits with the lower, then return the lower 32 bits.

Effectively, what we're doing is "ABCD" -> "CDAB", then returning "AB". We do this via bit-shifting. So, starting with:

0110101001100101010001010110110110011101101000000100101001010001

First, we bitshift the 64-bit number right 32-bits:

  0110101001100101010001010110110110011101101000000100101001010001 >> 32
= 0000000000000000000000000000000001101010011001010100010101101101

Then we bitshift "x" left 32-bits:

  0110101001100101010001010110110110011101101000000100101001010001 << 32
= 1001110110100000010010100101000100000000000000000000000000000000

Now we logically "or" them together:

 0000000000000000000000000000000001101010011001010100010101101101 |
 1001110110100000010010100101000100000000000000000000000000000000
|================================================================
 1001110110100000010010100101000101101010011001010100010101101101

See the swap? Now, due to the function return width, we return the lower 32-bits as our random number, which is 01101010011001010100010101101101, or 1785021805 in decimal. We've arrived at our goal.
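As a sanity check, the whole generator can be mirrored in Python, masking to 64 bits to emulate C's uint64_t wraparound. This is just a sketch for verification, not the reference implementation:

```python
MASK64 = (1 << 64) - 1

x = w = 0
s = 0xb5ad4eceda1ce2a9  # the qualifying seed from the C code

def msws():
    """One step of the Middle Square Weyl Sequence, emulating uint64_t math."""
    global x, w
    x = (x * x) & MASK64                  # square the number
    w = (w + s) & MASK64                  # the Weyl sequence
    x = (x + w) & MASK64                  # apply to x
    x = ((x >> 32) | (x << 32)) & MASK64  # swap the 32-bit halves
    return x & 0xffffffff                 # the middle 32 bits of the square

# Verify the swap on the worked example value:
example = 0x6a65456d9da04a51  # lower 64 bits of 0xace983fe671dbd09 squared
swapped = ((example >> 32) | (example << 32)) & MASK64
# swapped & 0xffffffff == 0x6a65456d == 1785021805, the middle 32 bits
```

Note that on the very first call, the square is zero, so the output is just the upper half of the seed after the swap.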

Conclusion

At the main website, the C source code is provided, along with 25,000 seeds, as well as C source code for the Big Crush randomness tests from TestU01. This approach passes Big Crush with flying colors on all 25,000 seeds. Something as simple as adding an odd 64-bit number to the square changes John von Neumann's approach so much, it becomes a notable PRNG.

Who said you can't teach an old dog new tricks?

Why The "Multiply and Floor" RNG Method Is Biased

I've been auditing a lot of JavaScript source code lately, and a common problem I'm seeing when generating random numbers is using the naive "multiply-and-floor" method. Because the "Math.random()" function call returns a number between 0 and 1, not including 1 itself, developers think that the "best practice" for generating a random number is as follows:

function randNumber(range) {
    return Math.floor(Math.random() * range); // number in the interval [0, range).
}

The problem with this approach is that it's biased. There are numbers returned that are more likely to occur than others. To understand this, you need to understand that Math.random() is a 32-bit RNG in Chrome and Safari, and a 53-bit RNG in Edge and Firefox. First, let's pretend every browser RNG is a 32-bit generator, then we'll extend it.

A 32-bit Math.random() means that there are only 2^32 = 4,294,967,296 possible decimal values in the range of [0, 1). This means that the interval [0, 1) is divided up every 1/2^32 = 0.00000000023283064365 decimal values. That doesn't matter though, because if I wanted a random number between 1 and 100, 100 does not divide 4,294,967,296 evenly. I get 42,949,672 with 96 left over. What does this mean? It means that ...

randNumber(100);

... will return 0 through 95 more often than 96 through 99. That's our bias.

It doesn't matter if it's a 53-bit RNG either. "2^53 = 9,007,199,254,740,992" is not a multiple of 100. Instead, dividing by 100, I get 90,071,992,547,409 with 92 left over. So, with a 53-bit RNG, 0 through 91 will be generated more often than 92 through 99 with the same function.

The only time this bias would not exhibit itself in the naive "multiply-and-floor" approach above, is if the random number requested is in the interval [0, 2^N), where "N" is any positive integer. 2^32, 2^53, and 2^X, where "X" is a positive integer, are always multiples of 2^N (2^N divides 2^X evenly, when N ≤ X, N > 0).
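The bias is easier to see with a toy example: a hypothetical 3-bit RNG (8 equally likely outputs) floored into a range of 3.

```python
from collections import Counter

# A hypothetical 3-bit RNG produces 8 equally likely values, 0..7.
# Flooring (value / 8) * 3 buckets them into [0, 3):
counts = Counter((value * 3) // 8 for value in range(8))
# counts == {0: 3, 1: 3, 2: 2} -- 0 and 1 occur more often than 2
```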

So, what do we do? How do we improve the naive multiply-and-floor approach? Thankfully, it's not too difficult. All we need to do is essentially the following:

  1. Force the RNG into 32-bits (common denominator for all browsers).
  2. Create a range of values that is a multiple of our desired range (e.g., 100).
  3. Loop over the range picking values until a value inside the range is generated.
  4. Output the generated value modulo our desired range.

Let's see this in practice. First the unbiased code, then the explanation:

function uniformRandNumber(range) {
    var max = Math.floor(2**32/range) * range; // make "max" a multiple of "range"
    do {
        var x = Math.floor(Math.random() * 2**32); // pick a number of [0, 2^32).
    } while(x >= max); // try again if x is too big
    return(x % range); // uniformly picked in [0, range)
}

I know what you're thinking: WAIT! YOU JUST DID THE "MULTIPLY AND FLOOR" METHOD!! HYPOCRITE!!! Hold on though. There are two subtle differences. See what they are?

The "max" variable is a multiple of "range" (step 2 above). So, if our range is [0, 100), then "max = 4294967200", which is a multiple of 100. This means that so long as "0 < = x < 4294967200", we can return "x % 100", and know that our number was uniformly chosen. However, if "x >= 4294967200", then we need to choose a new "x", and check if it falls within our range again (step 3 above). So long as "x" falls in [0, 4294967200), then we're good.

This extends to cryptographically secure random numbers too. In action, it's just:

function uniformSecureRandNumber(range) {
    const crypto = window.crypto || window.msCrypto; // Microsoft vs everyone else
    var max = Math.floor(2**32/range) * range; // make "max" a multiple of "range"
    do {
        var x = crypto.getRandomValues(new Uint32Array(1))[0];  // pick a number of [0, 2^32).
    } while(x >= max); // try again if x is too big
    return(x % range); // uniformly picked in [0, range)
}

So "multiply and floor" isn't wrong, so long as you use it correctly.

One small caveat: these examples are not checking if "range" is larger than 32 bits. I deliberately ignored this to focus your attention on how to correctly generate uniform random numbers. You may or may not need to do various checks on the "range" argument. Is it an integer type? Is it a positive integer? Is it 32 bits or less? Etc.

As an exercise for the reader, how could you extend this uniform generator to pick a random number in the range of [100, 200)? Going further, how could you pick only a random even number in the range of [250, 500)?

Do Not Use sha256crypt / sha512crypt - They're Dangerous

Introduction

I'd like to demonstrate why I think using sha256crypt or sha512crypt on current GNU/Linux operating systems is dangerous, and why I think the developers of GLIBC should move to scrypt or Argon2, or at least bcrypt or PBKDF2.

History and md5crypt

In 1994, Poul-Henning Kamp (PHK) added md5crypt to FreeBSD to address the weaknesses of DES-Crypt that was common on the Unix and BSD systems of the early 1990s. DES-Crypt has a core flaw in that not only is DES reversible (which isn't necessarily a problem here), and incredibly fast, but it also limited password length to 8 characters (each of those limited to 7-bit ASCII to create a 56-bit DES key). When PHK created md5crypt, one of the things he made sure to implement as a feature was support for arbitrary-length passwords. In other words, unlike DES-Crypt, a user could have a password of 9 or more characters.

This was "good enough" for 1994, but it had an interesting feature that I don't think PHK thought of at the time- md5crypt execution time is dependent on password length. To prove this, I wrote a simple Python script using passlib to hash passwords with md5crypt. I started with a single "a" character as my password, then increased the password length by appending more "a"s up until the password was 4,096 "a"s total.

import time
from passlib.hash import md5_crypt

md5_results = [None] * 4096

for i in xrange(0, 4096):
    print i,
    pw = "a" * (i+1)
    start = time.clock()
    md5_crypt.hash(pw)
    end = time.clock()
    md5_results[i] = end - start

with open("md5crypt.txt", "w") as f:
    for i in xrange(0, 4096):
        f.write("{0} {1}\n".format(i+1, md5_results[i]))

Nothing fancy. Start the timer, hash one "a" with md5crypt, stop the timer, and record the results. Start the timer, hash two "a"s with md5crypt, stop the timer, and record the results. Wash, rinse, repeat, until the password is 4,096 "a"s in length.

What do the timing results look like? Below are scatter plots of timing md5crypt for passwords of 1-128, 1-512, and 1-4,096 characters in length:

Scatter plot showing execution timing of md5crypt up to 128 characters

md5crypt 1-128 characters

Scatter plot showing execution timing of md5crypt up to 512 characters

md5crypt 1-512 characters

Scatter plot showing execution timing of md5crypt up to 4,096 characters

md5crypt 1-4,096 characters

At first, you wouldn't think this is a big deal; in fact, you may even think you LIKE it (we're supposed to make things get slower, right? That's a good thing, right???). But, upon deeper inspection, this actually is a flaw in the algorithm's design for two reasons:

  • Long passwords can create a denial-of-service on the CPU (larger concern).
  • Passive observation of execution times can predict password length (smaller concern).

Now, to be fair, predicting password length based on execution time is ... meh. Let's be honest, the bulk of passwords will be between 7-10 characters. And because these algorithms operate in block sizes of 16, 32, or 64 bytes, an adversary learning "AHA! I know your password is between 1-16 characters" really isn't saying much. But, should this even exist in a cryptographic primitive? Probably not. Still, the larger concern would be users creating a DoS on the CPU, strictly by changing password length.

I know what you're thinking: it's 2018, so there should be no reason why any practical length password cannot be adequately hashed with md5crypt insanely quickly, and you're right. Except, md5crypt was invented in 1994, 24 years ago. According to PHK, he designed it to take about 36 milliseconds on the hardware he was testing, which works out to about 28 hashes per second. So, it doesn't take much to see that by increasing the password's length, you can increase execution time enough to affect a busy authentication server.

The question though, is why? Why is the execution time dependent on password length? This is because md5crypt processes the hash for every 16 bytes in the password. As a result, this creates the stepping behavior you see in the scatter plots above. A good password hashing design would not do this.

PHK eventually sunset md5crypt in 2012 with CVE-2012-3287. Jeremi Gosney, a professional password cracker, demonstrated with Hashcat and 8 clustered Nvidia GTX 1080Ti GPUs that a password cracker could rip through 128.4 million md5crypt guesses per second.

You should no longer be implementing md5crypt for your password hashing.

sha2crypt and NIH syndrome

In 2007, Ulrich Drepper decided to improve things for GNU/Linux. He recognized the threat that GPU clusters, and even ASICs, posed to md5crypt through fast password cracking. One aspect of md5crypt was the hard-coded 1,000 iterations spent on the CPU before the password hash was finalized. This cost was not configurable. Also, MD5 was already considered broken, with SHA-1 showing severe weaknesses, so he moved to SHA-2 for the core of his design.

The first thing addressed was to make the cost configurable, so that as hardware improved, you could increase the iteration count, thus keeping the final hash calculation expensive for password crackers. However, he also made a couple of core changes to his design that differed from md5crypt, which ended up having some rather drastic effects on its execution.

Using code similar to the above with Python's passlib, but instead using the sha256_crypt() and sha512_crypt() functions, we can create scatter plots of sha256crypt and sha512crypt for passwords up to 128 characters, 512 characters, and 4,096 characters total, just like we did with md5crypt. How do they fall out? Take a look:

Scatter plot showing execution timing of sha256crypt up to 128 characters

sha256crypt 1-128 characters

Scatter plot showing execution timing of sha256crypt up to 512 characters

sha256crypt 1-512 characters

Scatter plot showing execution timing of sha256crypt up to 4,096 characters

sha256crypt 1-4,096 characters

Scatter plot showing execution timing of sha512crypt up to 128 characters

sha512crypt 1-128 characters

Scatter plot showing execution timing of sha512crypt up to 512 characters

sha512crypt 1-512 characters

Scatter plot showing execution timing of sha512crypt up to 4,096 characters

sha512crypt 1-4,096 characters

Curious. Not only do we see the same increasing execution time based on password length, but unlike md5crypt, that growth is polynomial. The changes Ulrich Drepper made from md5crypt are subtle, but critical. Essentially, not only do we process the hash for every character in the password per round, like md5crypt, but we process every character in the password three more times. First, we take the binary representation of each bit in the password length, and update the hash based on if we see a "1" or a "0". Second, for every character in the password, we update the hash. Finally, again, for every character in the password, we update the hash.

For those familiar with big-O notation, we end up with an execution run time of O(pw_length² + pw_length×iterations). Now, while it is true that we want our password hashing functions to be slow, we also want the iterative cost to be the driving factor in that decision, but that isn't the case with md5crypt, and it's not the case with sha256crypt nor sha512crypt. In all three cases, the password length is the driving factor in the execution time, not the iteration count.

Again, why is this a problem? To remind you:

  • Long passwords can create a denial-of-service on the CPU (larger concern).
  • Passive observation of execution times can predict password length (smaller concern).

Now, granted, in practice, people aren't carrying around 4 kilobyte passwords. If you are a web service provider, you probably don't want people uploading 5 gigabyte "passwords" to your service, creating a network denial of service. So you would probably be interested in creating an adequate password maximum, such as what NIST recommends at 128 characters, to prevent that from occurring. However, if you have an adequate iterative cost (such as say, 640,000 rounds), then even moderately large passwords from staff, where such limits may not be imposed, could create a CPU denial of service on busy authentication servers.

As with md5crypt, we don't want this.

Now, here's what I find odd about Ulrich Drepper, and his design. In his post, he says about his specification (emphasis mine):

Well, there is a problem. I can already hear everybody complaining that I suffer from the NIH syndrome but this is not the reason. The same people who object to MD5 make their decisions on what to use also based on NIST guidelines. And Blowfish is not on the lists of the NIST. Therefore bcrypt() does not solve the problem.

What is on the list is AES and the various SHA hash functions. Both are viable options. The AES variant can be based upon bcrypt(), the SHA variant could be based on the MD5 variant currently implemented.

Since I had to solve the problem and I consider both solutions equally secure I went with the one which involves less code. The solution we use is based on SHA. More precisely, on SHA-256 and SHA-512.

PBKDF2 was standardized as an IETF standard in September 2000, a full 7 years before Ulrich Drepper created his password hashing functions. While PBKDF2 as a whole would not be blessed by NIST until 3 years later, in December 2010 in SP 800-132, PBKDF2 can be based on functions that, as he mentioned, were already in the NIST standards. So, just like his special design that is based on SHA-2, PBKDF2 can be based on SHA-2. Where he said "I went with the one which involves less code", he should have gone with PBKDF2, as code had already long since existed in all sorts of cryptographic software, including OpenSSL.

This seems to be a very clear case of NIH syndrome. Sure, I understand not wanting to go with bcrypt, as it's not part of the NIST standards. But don't roll your own crypto either, when algorithms already exist for this very purpose that ARE based on designs that are part of NIST standards.

So, how does PBKDF2-HMAC-SHA512 perform? Using similar Python code with the passlib password hashing library, it was trivial to put together:

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 128 characters

PBKDF2-HMAC-SHA512 1-128 characters

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 512 characters

PBKDF2-HMAC-SHA512 1-512 characters

Scatter plot showing execution timing of PBKDF2-HMAC-SHA512 up to 4,096 characters

PBKDF2-HMAC-SHA512 1-4,096 characters

What this clearly demonstrates is that the only factor driving execution time is the number of iterations you apply to the password before delivering the final password hash. This is what you want to achieve: not giving a user the opportunity to create a denial-of-service based on password length, nor letting an adversary learn the length of the user's password based on execution time.

This is the sort of detail that a cryptographer or cryptography expert would pay attention to, as opposed to an end-developer.

It's worth pointing out that PBKDF2-HMAC-SHA512 is the default password hashing function for Mac OS X, with a variable cost between 30,000 and 50,000 iterations (typical PBKDF2 default is 1,000).

OpenBSD, USENIX, and bcrypt

Because Ulrich Drepper brought up bcrypt, it's worth mentioning in this post. First off, let's get something straight: bcrypt IS NOT Blowfish. While it's true that bcrypt is based on Blowfish, they are two completely different cryptographic primitives. bcrypt is a one-way cryptographic password hashing function, whereas Blowfish is a two-way 64-bit block symmetric cipher.

At the 1999 USENIX conference, Niels Provos and David Mazières, of OpenBSD, introduced bcrypt to the world (it was actually in OpenBSD 2.1, June 1, 1997). They were critical of md5crypt, stating the following (emphasis mine):

MD5 crypt hashes the password and salt in a number of different combinations to slow down the evaluation speed. Some steps in the algorithm make it doubtful that the scheme was designed from a cryptographic point of view--for instance, the binary representation of the password length at some point determines which data is hashed, for every zero bit the first byte of the password and for every set bit the first byte of a previous hash computation.

PHK was slightly offended by their off-handed remark that cryptography was not his core consideration when designing md5crypt. However, Niels Provos was a graduate student in the Computer Science PhD program at the University of Michigan at the time. By August 2003, he had earned his PhD. Since 1997, bcrypt has withstood the test of time. It has been considered "Best Practice" for hashing passwords, and is still well received today, even though better algorithms exist for hashing passwords.

bcrypt limits password input to 72 bytes. One way around the password limit is with pre-hashing. A common approach is to hash the password with SHA-256, encode the digest into base64, then feed the resulting ASCII string into bcrypt. In pseudocode:

pwhash = bcrypt(base64(sha-256(password)))
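The prehash step of that pseudocode can be sketched with only the Python standard library (note this is an illustration; passlib's actual bcrypt_sha256 method differs in its details):

```python
import base64
import hashlib

def prehash(password: bytes) -> bytes:
    """SHA-256 the password, then base64-encode the digest.

    The result is always 44 ASCII bytes (including "=" padding),
    safely under bcrypt's 72-byte input limit.
    """
    return base64.b64encode(hashlib.sha256(password).digest())
```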

This results in a 44-byte password (including the "=" padding) that is within the bounds of the 72 byte bcrypt limitation. This prehashing allows users to have any length password, while only ever sending 44 bytes to bcrypt. My implementation in this benchmark uses the passlib.hash.bcrypt_sha256.hash() method. How does bcrypt compare to md5crypt, sha256crypt, and sha512crypt in execution time based on password length?

Scatter plot showing execution timing of bcrypt up to 128 characters

bcrypt 1-128 characters (prehashed)

Scatter plot showing execution timing of bcrypt up to 512 characters

bcrypt 1-512 characters (prehashed)

Scatter plot showing execution timing of bcrypt up to 4,096 characters

bcrypt 1-4,096 characters (prehashed)

Now, to be fair, bcrypt is only ever hashing 44 byte passwords in the above results, because of my prehashing. So of course it's running in constant time. So, how does it look with hashing 1 to 72 character passwords without prehashing?

Scatter plot showing execution timing of bcrypt up to 72 characters

bcrypt 1-72 characters (raw)

Again, we see consistent execution, driven entirely by iteration cost, not by password length.

Colin Percival, Tarsnap, and scrypt

In May 2009, mathematician Dr. Colin Percival presented at BSDCan'09 a new adaptive password hashing function called scrypt, which is not only CPU expensive, but RAM expensive as well. The motivation was that even though bcrypt and PBKDF2 are CPU-intensive, FPGAs or ASICs could be built to work through the password hashes much more quickly, due to not requiring much RAM, around 4 KB. By adding a memory cost, in addition to a CPU cost, to the password hashing function, we can now require FPGA and ASIC designers to onboard a specific amount of RAM, thus increasing the financial cost of production. scrypt recommends a default RAM cost of at least 16 MB. I like to think of these expensive functions as "security by obesity".

scrypt was initially created as an expensive KDF for Dr. Percival's backup service Tarsnap. Tarsnap generates client-side encryption keys, and encrypts your data on the client before shipping the encrypted payload off to Tarsnap's servers. In the event your client is lost or stolen, regenerating the encryption keys requires knowing the password that created them, and attempting to discover that password, just like with typical password hashing functions, should be slow.

As of this post, it's now been 9 years since Dr. Percival introduced scrypt to the world, and like bcrypt, it has withstood the test of time. It has received, and continues to receive, extensive cryptanalysis, is not showing any critical flaws or weaknesses, and as such is among the top recommendations from security professionals for password hashing and key derivation.

How does it fare with its execution time per password length?

Scatter plot showing execution timing of scrypt up to 128 characters

scrypt 1-128 characters

Scatter plot showing execution timing of scrypt up to 512 characters

scrypt 1-512 characters

Scatter plot showing execution timing of scrypt up to 4,096 characters

scrypt 1-4,096 characters

I'm seeing a trend here.

The Password Hashing Competition winner Argon2

In 2013, an open public competition, in the spirit of AES and SHA-3, was held to create a password hashing function that approached the problem with what we now know about modern cryptography and password security. There were many interesting designs submitted, including a favorite of mine by Dr. Thomas Pornin, of StackExchange fame and BearSSL, that used delegation to reduce the workload on the honest user while still making it expensive for the password cracker.

In July 2015, the Argon2 algorithm was chosen as the winner of the competition. It comes with a clean approach of CPU and memory hardness, making the parameters easy to tweak, test, and benchmark. Even though the algorithm is relatively new, it has seen at least 5 years of analysis, as of this writing, and has quickly become the "Gold Standard" for password hashing. I fully recommend it for production use.

Any bets on how its execution times will be affected by password length? Let's look:

Scatter plot showing execution timing of Argon2 up to 128 characters

Argon2 1-128 characters

Scatter plot showing execution timing of Argon2 up to 512 characters

Argon2 1-512 characters

Scatter plot showing execution timing of Argon2 up to 4,096 characters

Argon2 1-4,096 characters

Execution time is not affected by password length. Imagine that. It's as if cryptographers know what they're doing when designing this stuff.

Conclusion

Ulrich Drepper tried creating something more secure than md5crypt, on par with bcrypt, and ended up creating something worse. Don't use sha256crypt or sha512crypt; they're dangerous.

For hashing passwords, in order of preference, use with an appropriate cost:

  1. Argon2 or scrypt (CPU and RAM hard)
  2. bcrypt or PBKDF2 (CPU hard only)

Avoid practically everything else:

  1. md5crypt, sha256crypt, and sha512crypt
  2. Any generic cryptographic hashing function (MD5, SHA-1, SHA-2, SHA-3, BLAKE2, etc.)
  3. Any complex homebrew iterative design (10,000 iterations of salted SHA-256, etc.)
  4. Any encryption design (AES, Blowfish (ugh), ChaCha20, etc.)

UPDATE: A note about PBKDF2 that was brought up in a Twitter thread from @solardiz. PBKDF2-HMAC-SHA512 isn't really an upgrade from sha512crypt (nor PBKDF2-HMAC-SHA256 an upgrade from sha256crypt), because PBKDF2 really isn't GPU resistant in the way bcrypt is. However, bcrypt can be implemented cheaply on ASICs with only 4 KB of memory.

If your choice of password hashing is constrained to NIST standards, which include PBKDF2, then unfortunately, bcrypt, scrypt, and Argon2 are out of the question; just make sure to use PBKDF2 properly, which includes choosing a high iteration count based on your authentication load capacity. At that point, password storage is probably not the worst of your security concerns.

However, if you're not limited to NIST constraints, then use the others.

Acknowledgement

Thanks to Steve Thomas (@Sc00bzT) for our discussions on Twitter for helping me see this quirky behavior with sha256crypt and sha512crypt.

Use A Good Password Generator

Introduction

Screenshot of a Google Sheets spreadsheet showing my browser password generator audit.

For the past several months now, I have been auditing password generators for the web browser in Google Sheets. It started with looking for creative ideas I could borrow or extend upon for my online password generator. Sure enough, I found some, ranging from using mouse movements as a source of entropy to flashy animations of rolling dice for a Diceware generator. Some were deterministic, using strong password hashing or key derivation functions, and some had very complex interfaces, allowing you to control everything from letter case to pronounceable passwords and unambiguity.

However, the deeper I got, the more I realized some were doing their generation securely and others weren't. I soon realized that I wanted to grade these online generators and sift out the good from the bad. So, I created a spreadsheet to keep track of what I was auditing, and it quickly grew from "online generators" to "password generators and passphrase generators", to "web password, web passphrase, bookmarklet, Chrome extensions, and Firefox extensions".

When all was said and done, I had audited 300 total generators that can be used with the browser. Some were great while some were just downright horrible. So, what did I audit, why did I choose that criteria, and how did the generators fall out?

I audited:

  • Software license
  • Server vs. client generation
  • RNG security, bias, and entropy
  • Generation type
  • Network security
  • Mobile support
  • Ads or tracker scripts
  • Subresource integrity

No doubt this is a "version 1.0" of the spreadsheet. I'm sure those in the security community will mock me for my choices of audit categories and scoring. However, I wanted to be informed of how each generator was generating the passwords, so when I made recommendations about using a password generator, I had confidence that I was making a good recommendation.

Use A Password Manager

Before I go any further, the most important advice with passwords, is to use a password manager. There are a number of reasons for this:

  • They encourage unique passwords for each account.
  • They encourage passwords with sufficient entropy to withstand offline clustered attacks.
  • They allow storage of other types of data, such as SSNs or credit card numbers.
  • Many provide online synchronization support across devices, either internally or via Dropbox and Google Drive.
  • Many ship additional application support, such as browser extensions.
  • They ship password generators.

So before you go any further, the Best Practice for passwords is "Use A Password Manager". As members of the security community, this is the advice we should be giving to our clients, whether they are family, friends, coworkers, or professional customers. But, if they are already using a password manager, and discussions arise about password generation, then this audit is to inform members of the security community which generators are "great", which generators are "good", which generators are "okay", and finally, which generators are "bad".

So to be clear, I don't expect mom, dad, and Aunt Josephine to read this spreadsheet so they can make informed decisions about which password generator to use. I do hope, however, that security researchers, cryptographers, auditors, security contractors, and other members of the security community will take advantage of it.

So with that said, let me explain the audit categories and scoring.

Software License

In an ethical software development community, there is value in software released under a free and open source license. Not necessarily the GPL- any such license, from the 2-clause BSD to the GPL, from Creative Commons to unlicensed public domain software. When the software is licensed liberally, it allows developers to extend, modify, and share the code with the larger community. I found a number of generators in my audit where this happened; some made great changes to the behavior of the generator, others not so much.

License: Open Source | Proprietary

So when a liberal license was explicitly specified, I awarded one point for being "Open Source", and no points for being "Proprietary", which covered licenses that were either not explicitly specified or were a EULA or "All Rights Reserved".

This is something to note: just because the code is in a public source code repository does not mean the code is licensed under a permissive license. United States copyright law states that unless explicitly stated otherwise, all works fall under a proprietary "All Rights Reserved" copyright to the creator. So if you didn't specify a license in your GitHub repository, it got labeled as "Proprietary". It's unfortunate, because I think a lot of generators missed getting awarded that point through a simple oversight.

Server vs. Client Generation

Every generator should run in the browser client without any knowledge of the generation process by a different computer, even the web server. No one should have any knowledge whatsoever of what passwords were generated in the browser. Now, I recognize that this is a bit tricky. When you visit a password generator website such as my own, you are showing a level of trust that the JavaScript delivered to your browser is what you expect, and is not logging the generated passwords back to the server. Even with TLS, unless you're checking the source code on every page refresh and disconnecting your network, you just cannot guarantee that the web server did not deliver some sort of logging script.

Generator: Client | Server

With that said, you can still check the source code on each page refresh and verify whether generation happens in the client or on the server. I awarded one point for being a "Client" generator and no points for being a "Server" generator. Interestingly enough, I thought I would only have to deal with this for the website generators, and wouldn't have to worry about it with bookmarklets or browser extensions. But I was wrong. I'll expand on this more in the "Network Security" category, but suffice it to say, this is still a problem.

Generation Type

I think deterministic password generators are fundamentally flawed. Fatally so, even. Tony Arcieri wrote a beautiful blog post on this matter, and it should be internalized across the security community. The "four fatal flaws" of deterministic password generators are:

  1. Deterministic password generators cannot accommodate varying password policies without keeping state.
  2. Deterministic password generators cannot handle revocation of exposed passwords without keeping state.
  3. Deterministic password managers can’t store existing secrets.
  4. Exposure of the master password alone exposes all of your site passwords.

Number 4 in that list is the most fatal. We all know the advice that accounts should have unrelated randomized unique passwords. When one account is compromised, the exposed password does not compromise any of the other accounts. This is not the case with deterministic password generators. Every account that uses a password from a deterministic generator shares a common thread via the master secret. When that master secret is exposed, all online accounts remain fatally vulnerable to compromise.

Proponents of deterministic generators will argue that discovery of the master password of an encrypted password manager database will also expose every online account to compromise. They're not wrong, but let's consider the attack scenarios. In the password manager scenario, a first compromise needs to happen in order to get access to the encrypted database file. Then a second compromise needs to happen in discovering the master password. But with deterministic generators, only one compromise needs to take place: a dictionary or brute force attack discovering the master password that leads to the passwords for the online accounts.

With password managers, two compromises are necessary. With deterministic generators, only one compromise is necessary. As such, the generator was awarded a point for being "Random" and no points for being "Deterministic".

Generator: Random | Unknown | Deterministic

RNG Security

Random number generation is one of the least understood concepts in software development, but ironically, developers seem to think they have a firm grasp of the basic principles. When generating passwords, never at any point should a developer choose anything but a cryptographic random number generator. I awarded one point for using a CRNG, and no points otherwise. In some cases, the generation is done on the server, so I can't verify its security, and in some cases, the code is so difficult to analyze that I cannot determine its security that way either.

CRNG: Yes | Maybe | Unknown | No

In JavaScript, using a CRNG primarily means using the Web Crypto API via "window.crypto.getRandomValues()", or "window.msCrypto.getRandomValues()" for Microsoft-based browsers. Never should I see "Math.random()". Even though it may be cryptographically secure in Opera, it likely isn't in any of the other browsers. Some developers shipped the Stanford JavaScript Cryptographic Library. Others shipped a JavaScript implementation of ISAAC, and others yet shipped some AES-based CSPRNG. While these are "fine", you really should consider ditching those userspace scripts in favor of just calling "window.crypto.getRandomValues()". It's less software the user has to download, and as a developer, you are less likely to introduce a vulnerability.
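The same principle applies outside the browser. As a rough analogy in Python (the language used for code later in this document), the userspace-versus-platform distinction looks like this; the sketch is illustrative, not taken from any audited generator:

```python
import random
import secrets

# Hedged analogy: random.random() is a seeded Mersenne Twister, predictable
# and never suitable for passwords, while the secrets module draws from the
# operating system's CSPRNG, the moral equivalent of getRandomValues().
simulation_value = random.random()  # fine for simulations and games
token = secrets.token_urlsafe(16)   # 16 bytes from the OS CSPRNG
print(token)
```

The point is the same as with the Web Crypto API: lean on the platform's CSPRNG rather than shipping your own.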

Also, RC4 is not a CSPRNG; neither is ChaCha20, SHA-256, or any other hash function, stream cipher, or block cipher used on its own. So if you were using some JavaScript library that uses a vanilla hashing function, stream cipher, or block cipher as your RNG, then I did not consider it secure generation. The reason is that even though ChaCha20 or SHA-256 may be prediction resistant, they are not backtracking resistant. To be a CRNG, the generator must be both prediction and backtracking resistant.

However, in deterministic password generators that are based on a master password, the "RNG" (using this term loosely here) should be a dedicated password hashing function or password-based key derivation function with an appropriate cost. This really means using only:

  • sha256crypt or sha512crypt with at least 5,000 rounds.
  • PBKDF2 with at least 1,000 rounds.
  • bcrypt with a cost of at least 5.
  • scrypt with a cost of at least 16 MiB of RAM and at least 0.5s execution.
  • Argon2 with sufficient cost of RAM and execution time.

Anything else, like hashing the master password or mouse entropy with MD5, SHA-1, SHA-2, or even SHA-3, will not suffice. The goal of those dedicated password hashing functions and key derivation functions is to slow down an offline attempt at discovering the password. Likely the master password does not contain sufficient entropy, so it remains the weakest link in the generation process. By using a dedicated password hashing or key derivation function with an appropriate cost, we can guarantee a certain "speed limit" on offline clustered password attacks, making it difficult to reconstruct the password.
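As a sketch of the deterministic case, here is PBKDF2 from Python's standard library. The salt handling, the 100,000-iteration count, and the example master password are illustrative assumptions, not any particular generator's parameters:

```python
import hashlib
import os

# Hedged sketch: derive a seed from a master password with PBKDF2-HMAC-SHA256.
# The salt, iteration count, and master password are illustrative only; a
# deterministic generator would derive the salt per site rather than randomly.
master_password = b"correct horse battery staple"
salt = os.urandom(16)
seed = hashlib.pbkdf2_hmac("sha256", master_password, salt, 100_000)
print(seed.hex())
```

The iteration count is the "speed limit": raising it linearly raises the cost of every offline guess.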

RNG Uniformity

Even though the developer may have chosen a CRNG, they may not be using the generator uniformly. This is probably the most difficult aspect of random number generation to grasp. It seems harmless enough to call "Math.floor(Math.random() * length)" or "window.crypto.getRandomValues(new Uint32Array(1))[0] % length". In both cases, unless "length" is a power of 2, the generator is biased. I awarded one point for being an unbiased generator, and zero points otherwise.

Uniform: Yes | Maybe | Unknown | No

To do random number generation in an unbiased manner, you need to find how many times the length divides the period of the generator, and note the remainder. For example, if using "window.crypto.getRandomValues(new Uint32Array(1))", then the generator has a period of 32 bits. If your length is 7,776, as in the case of Diceware, then 7,776 divides 2^32 a total of 552,336 times, with a remainder of 2,560. This means that 7,776 divides the values 0 through 2^32-2,561 evenly. So if your random number falls in the range 2^32-2,560 through 2^32-1, the value needs to be tossed out and a new number generated.
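The procedure above is rejection sampling. A minimal sketch in Python, using 32-bit draws and the Diceware length of 7,776 from the example:

```python
import secrets

def unbiased_index(length):
    # Largest multiple of `length` that fits in 32 bits; any draw at or
    # above this limit lands in the biased tail and is tossed out.
    limit = (2**32 // length) * length
    while True:
        r = secrets.randbits(32)  # one 32-bit draw from the OS CSPRNG
        if r < limit:
            return r % length

# For length 7,776, the limit is 2**32 - 2,560, matching the remainder above.
print(unbiased_index(7776))
```

On average almost no draws are rejected here (2,560 out of 2^32), so the loop rarely runs twice.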

Oddly enough, you could use an insecure RNG, such as raw SHA-256, but truncate the digest to a certain length. While the generator is not secure, it is unbiased. More often than not, deterministic generators seem to fall into this category, where a poor hashing function was chosen, but the digest was truncated.

RNG Entropy

I've blogged about this a number of times, so I won't repeat myself here. Search my blog for password entropy to get caught up on the details. I awarded one point for generators with at least 70 bits of entropy, 0.5 points for 55 through 69 bits of entropy, and no points for less than 55 bits.

Entropy: 70 or more | 55-69 | 54 or less

I will mention, however, that I chose the default value presented to me when generating the password. Worse, if I was forced to choose my length, and I could choose a password of one character, then I awarded it as such. When you're presenting password generators to people like Aunt Josephine, they'll do anything they can to get away with as little as possible. History has shown this is 6-8 characters in length. This shouldn't be possible. A few Nvidia GTX960 GPUs can crack every 8-character ASCII password hashed with SHA-1 in under a week. There is no reason why the password generator should not present minimum defaults that are outside the scope of practical hobbyist brute force searching.

So while that may be harsh, if you developed one of the generators in my audit, and you were dinged for this- I'm calling you out. Stop it.
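The entropy thresholds above are easy to sanity-check: a randomly generated password carries length × log2(alphabet size) bits. The 94-character printable-ASCII alphabet here is an illustrative assumption:

```python
import math

# Entropy in bits = length * log2(alphabet size). With all 94 printable
# ASCII characters (an illustrative choice), 11 characters clears the
# 70-bit bar, while the historical 8-character habit falls well short.
alphabet = 94
for length in (8, 11):
    print(length, round(length * math.log2(alphabet), 1))
```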

Network Security

When delivering the software for password generation, it's critical that the software is delivered over TLS. There should be no room for a man-in-the-middle to inject malicious code to discover what passwords you're generating, send you a predetermined list of passwords, or mount any other sort of compromise. This means I expect a green lock in the browser's address or status bar. The certificate should not be self-signed, expired, missing intermediate certificates, generated for a different CN, or suffering from any other certificate problem. Further, the site should be HTTPS by default.

HTTPS: Yes | Not Default | Expired | No

I awarded one point for "Yes", serving the software over secure and problem-free HTTPS, and zero points otherwise.

Mobile View Support

For the most part, due to their ubiquity, developers are designing websites that support mobile devices with smaller screens and touch interfaces. It can be as simple as adding a viewport to the HTML header, or as complex as customized CSS and JavaScript rules for screen widths, user agents, and other tests. Ultimately, when I'm recommending a password generator to Aunt Josephine while she's using her mobile phone, she shouldn't have to pinch-zoom, scroll, and fight other nuisances when generating and copying the password. As silly as that may sound, if the site doesn't support a mobile interface, then it wasn't awarded a point.

Mobile: Yes | No

Ads and Tracker Scripts

Screenshot showing the Lightbeam extension for Firefox after several days of use.

I get it. I know that as a developer, you want easy access to analytics about who is visiting your site. Having that data can help you learn what is driving traffic to your site, and how you can make adjustments as necessary to improve that traffic. But when it comes to password generators, no one other than me and the server that sent the code to my browser should know that I'm about to generate passwords. I awarded a point for not shipping trackers, and zero points if the generator did.

Trackers: No | Yes

Google Analytics, social media scripts, ads, and other 3rd party browser scripts track me via fingerprinting, cookies, and other methods to learn more about who I am and what I visited. If you want to see how extensive this is, install the Lightbeam extension for Firefox. This shows the capability of companies to share data and learn more about who you are and where you've been on the web. Recently, Cambridge Analytica, a small unknown company, was able to mine data on millions of Facebook users, and the data mining exposed just how easy it was for Facebook to track your web behavior even when not on the site.

At first, I thought this would be just something with website generators, but when I started auditing browser extensions, I quickly saw that developers were shipping Google Analytics, and other tracking scripts in the bundled extension as well.

Offline

When I started auditing the bookmarklets and extensions, I pulled up my developer console and watched for any network activity when generating the passwords or using the software. To my dismay, some do "call home", either by wrapping the generator in an <iframe> or by requiring an account, as is the case with password managers. I awarded one point for staying completely offline, and zero points for any sort of network activity.

Offline: Yes | No

Now, you may be thinking that this isn't exactly fair to password generators that are just <iframe>s, since the generation is still happening client-side and without trackers. For the most part, I agree, except that when installing browser extensions, the developer has the opportunity to make the password generation fully offline. That may sound a touch silly for a browser extension, but regardless, it removes the risk of a web server administrator modifying the delivered JavaScript on every page load. The only time the JavaScript would change is when the extension is updated. This behaves more like standard desktop software.

Subresource Integrity

Finally, the last category I audited was subresource integrity. SRI is the idea that a developer can add cryptographic integrity checks to <link> and <script> resources. The motivation the W3C gives for SRI is:

"Compromise of a third-party service should not automatically mean compromise of every site which includes its scripts. Content authors will have a mechanism by which they can specify expectations for content they load, meaning for example that they could load a specific script, and not any script that happens to have a particular URL."

So SRI guarantees that, provided the cryptographic hashes validate, the data has not been tampered with, even if the server delivering it has been compromised. More information about SRI can be found at the W3C GitHub page, and I encourage everyone to check out the links there.
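An SRI integrity value is just a base64-encoded digest of the resource bytes. A sketch of computing one in Python; the script contents below are a hypothetical stand-in for a real fetched resource:

```python
import base64
import hashlib

# Sketch: an SRI integrity attribute value has the form "<alg>-<base64 digest>".
# SRI specifies SHA-256, SHA-384, and SHA-512; the resource bytes here are a
# hypothetical stand-in for a real script file.
resource = b"console.log('hello');"
digest = hashlib.sha384(resource).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()
print(integrity)
```

The resulting string is what goes in the integrity="..." attribute, so the browser can refuse a resource whose digest no longer matches.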

I awarded one point if "Yes" or "N/A" (no resources called), and zero points for "No".

SRI: Yes | N/A | No

Scoring

With all these audit categories taken into account, I gave a total score to see how generators ranked among each other. Each generator type has different auditing criteria, so the maximum score varies from type to type. I treated the scoring like grade school: 100% is an "A", 99% down to roughly 85% is "great", 84% to 51% is "okay", and 50% or less is failing. I translated that grade school score into the following:

Score: Perfect | Perfect - 1 | Perfect - 2 | 51% | 50%

When you look at the spreadsheet, you'll see that it is sorted first by score in descending order then alphabetically by "Name" in ascending order. It's sorted this way, so as a security consultant, you can quickly get a high-level overview of the "good" versus the "bad", and you should be able to find the generator you're looking for quickly. The names are just generic unique identifiers, and sometimes there are notes accompanying each generator when I found odd behavior, interesting generation templates, or other things that stood out.

Conclusion

This audit is for the security community, not for Aunt Josephine and friends or family. I don't expect you to share this with your dad or your spouse or your Facebook friends, and expect them to know what they're looking at, or why they should care. However, I recently took a poll on Twitter, asking the security community if they recommended password generators to family and friends. Only 90 votes came in, but 69% of you said that you did, with a few comments coming in saying that they would only recommend them if they shipped with a password manager (see the beginning of this post). I suspect that some of the 31% of the "No" votes are in this category.

Screenshot of a Twitter poll I took asking if people recommend password generators. 69% said yes, 31% said no.

So, of those 69% of you that said "Yes", I hope this audit will be of some value.

The Entropy of a Digital Camera CCD/CMOS Sensor

Recently, Vault12 released an app for iOS that uses the mobile device's camera as a source of randomness. Unfortunately, when putting the generated binary files through the Dieharder tests, it comes out pretty bad. I get 20 "PASSED", 13 "WEAK", and 81 "FAILED" results. For a TRNG, it should be doing much better than that. Now, to be clear, I'm not writing this post to shame Vault12. I actually really love the TrueEntropy app, and am eagerly waiting for it to hit Android, so I can carry around a TRNG in my pocket. However, things can get better, and that is what this post is hopefully addressing.

Using a camera as a TRNG is nothing new. SGI patented pointing a webcam at a lava lamp, using the chaotic nature of the lava lamp itself as the source of entropy. Later, it was realized that this was unnecessary: the CCD/CMOS sensor in the camera experiences enough noise from external events on its own to be more than sufficient. This noise shows up in the photo itself, and is appropriately referred to as "image noise".

The primary sources of noise come from the following:

  • Thermal noise- Caused by the thermal agitation of electrons flowing across resistive media.
  • Photon noise- Caused by photons hitting the CCD/CMOS and releasing energy to neighboring electronics.
  • Shot noise- Caused by current flow across diodes and bipolar transistors.
  • Flicker noise- Caused by traps due to crystal defects and contaminants in the CCD/CMOS chip.
  • Radiation noise- Caused by alpha, beta, gamma, x-ray, and proton decay from radioactive sources (such as outer-space) interacting with the CCD/CMOS.

Some of these noise sources can be manipulated. For example, by cooling the camera, we can limit thermal noise. A camera at 0 degrees Celsius will experience less noise than one at 30 degrees Celsius. A camera in a dark room, with fewer photons hitting the sensor, will experience less noise than one in a bright room. Radiation noise can be limited by isolating the sensor in a radiation-protective barrier.

Let's put this to the test, and see if we can actually calculate the noise in a webcam. To do this, we'll look at a single frame with the lens cap on, taken in a dark room, with the webcam further enclosed in a box. We'll take the photo at about 20 degrees Celsius (cool room temperature).

In order to get a basis for the noise in the frame, we'll use Shannon Entropy from information theory. Thankfully, understanding Shannon Entropy isn't that big of a deal. My frame will be taken with OpenCV from a PlayStation 3 Eye webcam, which means the frame itself is just a big multidimensional array of numbers between 0 and 255 (each pixel only provides 8 bits of color depth). So, to calculate the Shannon Entropy of a frame, we'll do the following:

  1. Place each number in its own unique bin of 0 through 255.
  2. Create an observed probability distribution (histogram) by counting the numbers in each bin.
  3. Normalize the distribution, creating 256 p-values (the sum of which should equal "1").
  4. For each of the 256 bins, calculate: -p_i*log_2(p_i).
  5. Sum the 256 values.

Thankfully, I don't have to do all of this by hand- numpy provides a function that does a lot of the heavy lifting for me.

So, without further ado, let's look at the code, then run it:

#!/usr/bin/python

import cv2
import math
import numpy

def max_brightness(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v[v > 0] = 255
    v[v <= 0] += 255
    final_hsv = cv2.merge((h, s, v))
    frame = cv2.cvtColor(final_hsv, cv2.COLOR_HSV2BGR)
    return frame

def get_entropy(frame):
    histogram = numpy.histogram(frame, bins=256)[0]
    histogram_length = sum(histogram)
    samples_probability = [float(h) / histogram_length for h in histogram]
    entropy = -sum([p * math.log(p, 2) for p in samples_probability if p != 0])
    return entropy

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)

ret, frame1 = cap.read()
ret, frame2 = cap.read()
frame_diff = cv2.absdiff(frame1, frame2)

cv2.imwrite('/tmp/frame1.bmp', frame1)
cv2.imwrite('/tmp/frame2.bmp', frame2)
cv2.imwrite('/tmp/frame_diff.bmp', frame_diff)

frame1_max = max_brightness(frame1)
frame2_max = max_brightness(frame2)
frame_diff_max = max_brightness(frame_diff)

cv2.imwrite('/tmp/frame1_max.bmp', frame1_max)
cv2.imwrite('/tmp/frame2_max.bmp', frame2_max)
cv2.imwrite('/tmp/frame_diff_max.bmp', frame_diff_max)

print("Entropies:")
print("    frame1: {}".format(get_entropy(frame1)))
print("    frame2: {}".format(get_entropy(frame2)))
print("    frame_diff: {}".format(get_entropy(frame_diff)))
print("    frame1_max: {}".format(get_entropy(frame1_max)))
print("    frame2_max: {}".format(get_entropy(frame2_max)))
print("    frame_diff_max: {}".format(get_entropy(frame_diff_max)))

Let's look over the code before running it. First, I'm actually capturing two frames right next to each other, then taking their composite difference. We know that a photo consists of its signal (the data most people are generally interested in) and its noise (the data they're not). By taking the composite difference between the two, I'm attempting to remove the signal. Because the frames were taken in rapid succession, provided nothing was drastically changing between the frames, most of the data will be nearly identical. So the signal should disappear.

But what about the noise? Well, as discussed above, the noise is a bit unpredictable and slightly unmanageable. Unlike my signal, the noise will be drastically different between the two frames. So, rather than removing noise, I'll actually be adding noise in the difference.

The next thing you'll notice is that I'm either maximizing or completely removing the luminosity in an HSV color profile. This is done just as a visual demonstration of what the noise actually "looks like" in the frame. You can see this below (converted to PNG for space efficiency).

Frame 1

Frame 2

Difference of frames 1 & 2

Frame 1 maxed luminosity

Frame 2 maxed luminosity

Difference of frames 1 & 2 maxed luminosity


Running the Python script in my 20 degrees Celsius dark room with the lens cap on and all light removed as much as possible, I get:

$ python frame-entropy.py
Entropies:
    frame1: 0.0463253223509
    frame2: 0.0525489364555
    frame_diff: 0.0864590940377
    frame1_max: 0.0755975713815
    frame2_max: 0.0835428883103
    frame_diff_max: 0.134900632254

The "ent(1)" userspace utility confirms these findings when saving the frames as uncompressed bitmaps:

$ for I in frame*.bmp; do printf "$I: "; ent "$I" | grep '^Entropy'; done
frame1.bmp: Entropy = 0.046587 bits per byte.
frame1_max.bmp: Entropy = 0.076189 bits per byte.
frame2.bmp: Entropy = 0.052807 bits per byte.
frame2_max.bmp: Entropy = 0.084126 bits per byte.
frame_diff.bmp: Entropy = 0.086713 bits per byte.
frame_diff_max.bmp: Entropy = 0.135439 bits per byte.

It's always good to use an independent source to confirm your findings.

So, in the standard frames, I'm getting about 0.05 bits per byte of entropy. However, when taking the composite difference, that number almost doubles to about 0.09 bits per byte. This was expected, as you remember, we're essentially taking the noise from both frames, and composing them in the final frame. Thus, the noise is added in the final frame.

What was actually surprising to me were the entropy values after setting the extreme luminosity values. This may be due to the fact that there are larger deltas between adjacent pixels when creating our histogram. When taking the difference of the two adjusted frames, the entropy jumps up to about 0.13 bits per byte. So, we could safely say that a composed frame with maxed luminosity that is the difference of two frames has about 0.11 bits of entropy per byte, plus or minus 0.02 bits per byte.

What does this say about the frame as a whole though? In my case, my frame is 640x480 pixels. Knowing that each pixel in my PS3 Eye webcam only occupies 1 byte or 8 bits, we can calculate the entropy per frame:

(640*480) pixels/frame * 1 byte/pixel = 307200 bytes/frame
307200 bytes/frame * 0.11 entropy bits/byte = 33792 entropy bits/frame

Each frame from my 640x480 PS3 Eye webcam provides about 33,792 bits of entropy. For comparison, SHA-256 theoretically provides a maximum of 256 bits of entropic security. Of course, we should run millions of trials, collect the data, calculate the standard deviation, and determine a truer average entropy. But this will suffice for this post.

So, now that we know this, what can we do with this data? Well, we can use it as a true random number generator, but we have to be careful. Unfortunately, the frame by itself is heavily biased. In the frame, there exists spatial correlation between adjacent pixels. In the frame difference, there exist both spatial and time correlations. This isn't sufficient for a secure true random number generator. As such, we need to remove the bias. There are a few names for this process: "randomness extraction", "software whitening", "decorrelation", or "debiasing". Basically, we want to take a biased input and remove any trace of bias in the output.

We could use John von Neumann decorrelation, where we look at non-overlapping pairs of consecutive bits. If the two bits in a pair are identical, both are discarded. If they differ, the first bit is output and the second discarded. This means that, at best, you are discarding half of your data, and how much is discarded depends on how badly biased the data is. We know that our frame is only providing 0.11 bits of entropy per 8 bits, so we're keeping 11 bits out of every 800. That's a lot of discarded data. One drawback with this approach, however, is if one or more bits are "stuck", as with a dead pixel. Of course, this will lower the overall entropy of the frame, but it will also drastically impact the extractor.
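A minimal sketch of the von Neumann extractor, with a synthetic biased bit stream standing in for real sensor bits:

```python
import secrets

def von_neumann(bits):
    # Walk non-overlapping pairs: 00 and 11 are discarded, while a
    # differing pair emits its first bit (01 -> 0, 10 -> 1).
    out = []
    for i in range(0, len(bits) - 1, 2):
        if bits[i] != bits[i + 1]:
            out.append(bits[i])
    return out

# A synthetic stream biased ~75% toward 1 still extracts balanced bits,
# at the cost of discarding most of the input.
biased = [1 if secrets.randbelow(4) else 0 for _ in range(10000)]
extracted = von_neumann(biased)
print(len(extracted), sum(extracted) / len(extracted))
```

Note how the output length shrinks as the bias grows: the extractor adapts automatically, which is exactly the property described above.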

A different approach would be to use chaos machines. This idea is relatively new and not thoroughly studied; at least, I'm struggling to find good research papers on the topic. The idea is to take advantage of the chaotic behavior of certain dynamical systems, such as a double pendulum. Due to the nature of chaos, small fluctuations in initial conditions lead to large changes in the distant future. The trick here is selecting your variables, such as the time distance and the chaotic process, correctly. Unlike John von Neumann decorrelation, which automatically discovers the bias and discards it for you, care has to be taken to make sure that the resulting output is debiased.

A better approach is using cryptographic primitives like one-way hashing or encryption, sometimes called "Kaminsky debiasing". Because most modern cryptographic primitives are designed to emulate theoretically uniform, unbiased random noise, the security rests on whether that emulation holds. In our case, we could encrypt the frame with AES and use the ciphertext as our TRNG output. Unfortunately, this also means managing a key, even though it doesn't necessarily have to be kept secret. A better idea would be to use a cryptographic hashing function, like SHA-2, or even better, an extendable output function (XOF).

Obviously, it should go without saying that encrypting or hashing your biased input isn't increasing the entropy. This means that we need to have a good handle on what our raw entropy looks like (as we did above) beforehand. We know in our case that we're getting about 35 kilobits of entropy per frame, so hashing with SHA-256 is perfectly acceptable, even if we're losing a great deal of entropy in the output. However, if we were only getting 200 bits of entropy in each frame, then while SHA-256 would still debias the data, we would still only have 200 bits of entropy in the generated output.

Really though, the best approach is an XOF. We want to output as much of our raw entropy as we can. Thankfully, NIST has standardized two XOFs as part of SHA-3: SHAKE128 and SHAKE256. An XOF allows you to output a digest of any length, where SHA-256, for example, only allows 256 bits of output. The security margin of the SHAKE128 XOF is the minimum of half the digest length and 128 bits. If I have 35 kilobits of entropy, I would like all of it available in the output. As such, I can output 4 KB in the digest, knowing full well that's within my entropy margin. Even though I'm losing as much data as with the John von Neumann extractor, I'm not vulnerable to "stuck pixels" manipulating the extractor.
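A sketch of just the XOF step using Python's standard library; os.urandom stands in for a raw 640x480 frame buffer here, since the point being shown is only the arbitrary-length output:

```python
import hashlib
import os

# os.urandom is a placeholder for one raw 307,200-byte (640x480, 8-bit)
# frame; a real extractor would feed the biased sensor bytes in instead.
raw_frame = os.urandom(640 * 480)
whitened = hashlib.shake_128(raw_frame).digest(4096)  # 4 KB out per frame
print(len(whitened))
```

Swapping shake_128 for shake_256, or changing the digest length, only changes the security margin, not the shape of the pipeline.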

When I put all this code together in Python:

  1. Take the difference of two consecutive overlapping frames.
  2. Maximize the luminosity of the new composed frame.
  3. Hash the frame with SHAKE128.
  4. Output 4 KB of data as our true random noise.

At 30 frames per second at a resolution of 640x480, outputting 4 KB per frame provides 120 KB of data per second, and this is exactly what I see when executing the Python script. The PS3 Eye camera also supports 60 fps at a lower resolution, so I could get 240 KBps if I can keep the same security margin of 4 KB per frame. I haven't tested this, but intuition tells me I'll have a lower security margin at the lower resolution.

Coming full circle, when we put our TRNG to the Dieharder test, things come out vastly different than Vault12's results:

  • Vault12 TrueEntropy:
    1. PASSED: 20
    2. WEAK: 13
    3. FAILED: 81
  • My webcam TRNG:
    1. PASSED: 72
    2. WEAK: 12
    3. FAILED: 30

1,000 Books Read In One Year? No, Not By A Long Shot

Recently, Goodreads sent out a tweet about how to remove social media and the Internet from your life, so you can focus on reading 1,000 books in one year. The post follows this sort of math:

  1. The average person reads 400 words per minute.
  2. The typical non-fiction books have around 50,000 words.
  3. Reading 200 books will take you 417 hours.
  4. The average person spends 608 hours on social media annually.
  5. The average person spends 1,642 hours watching TV annually.
  6. Giving up 2,250 hours annually will allow you to read 1,000 books in one year.

This blew my mind. I'm a very avid reader. Since signing up for Goodreads in 2013, I've been hitting at least 20,000 pages read every year, and I'm on track to read 25,000 pages this year. But, I'm only putting down 75 books each year. Now granted, a spare 2,250 hours per year ÷ 365 days per year is just over 6 hours per day of reading. I'm not reading 6 hours per day. I don't watch TV, I have a job, kids and a wife to take care of, and other things that keep me off the computer most of my time at home (I'm writing this blog post after midnight).

No doubt, 6 hours per day is a lot of reading. But I average 2 hours per day, and I'm only putting down 75 books annually. 6 hours of reading per day would only put me around 225 books each year, a far cry from the 1,000 I should be hitting. What gives?

Well, it turns out, Charles Chu is being a bit ... liberal with his figures. First off, the average person does not read 400 words per minute. Try about half, at only 200 words per minute, according to Iris Reading, a company that sells a product on improving your reading speed and memory comprehension. This cuts our max books from 1,000 in a year to 500.

Second, Chu claims the average non-fiction book is 50,000 words in length. I can tell you that 50,000 words is a very slim novel. This feels like a Louis L'Amour western length to me. Most books that I have read are probably closer to twice that length. However, according to HuffPost which quotes Amazon Text Stats, the average book is 64,000 words in length. But, according to this blog post by Writers Workshop, the average "other fiction" novel length is 70,000 to 120,000 words. This feels much more in line with what I've read personally, so I'll go with about 100,000 words in a typical non-fiction novel.

So now that brings our annual total down from 500 to 250 books. That's reading 200 words per minute, for 6 hours every day, with non-fiction books that average 100,000 words in length. I claimed that I would probably come in around 225 books, so this seems to be a much closer ballpark figure.

But, does it line up? Let's look at it another way, and see if we can agree that 200-250 books annually, reading 6 hours per day, is more realistic.

I claimed I'm reading about 2 hours per day. I read about 3 hours for 4 of the 7 days in a week while commuting to work. For the other three days in the week, I can read anywhere from 1 hour to 3 hours, depending on circumstances. So my week can see anywhere from 13 hours to 15 hours on average. That's about 2 hours per day.

During 2016, I read 24,048 pages. That's about 65 pages per day, which feels right on target. But, how many words are there per page? According to this Google Answers answer, which offers a couple citations, a novel averages about 250 words per page.

But, readinglength.com shows that many books I've read are over 300 words per page, and some denser at 350 words per page, with the average sitting around 310. So at 250 words per page, 65 pages per day is 16,250 words per day, and at 310 words per page, it's 20,150 words per day.

Because I'm only reading about 2 hours per day, that means I'm reading at a meager 135 to 168 words per minute, based on the above stats. I guess I'm a slow reader.

If I highball it at 168 words per minute, then in 6 hours, I will have read 60,480 words. After a year of reading, that's 22,075,200 words. An independent blog post confirms this finding of 250-300 words per page, but also uses that to say that most adult books are 90,000 - 100,000 words in length (additional confirmation from earlier), and young adult novels target the 55,000 word length that Chu cited (maybe Chu likes reading young adult non-fiction?). As such, I can expect to read 22,075,200 words per year ÷ 100,000 words per book, or about 220 books in a year of reading 6 hours every day.

Bingo.

So, what can we realistically expect from reading?

  1. Readers average 200 words per minute.
  2. A page averages 250 words.
  3. A novel averages 100,000 words.
  4. One hour of reading per day can hit 30-40 books per year.
  5. Six hours of reading per day can hit 200-250 books per year.
  6. To read 1,000 books in a year, you need to read 22 hours per day.

This is reading average length adult non-fiction books at an average speed of 200 words per minute. The calculus completely changes if your average reading speed is faster than 200 wpm, you read primarily graphic novels with little text, or read shorter non-fiction novels. Fourth-grade chapter books? Yeah, I could read 1,000 of those in a year. 🙂
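The summary figures above follow from a couple of one-line calculations. A quick sketch (function names are mine) shows the exact values land right around the quoted ranges:

```python
# Sanity-checking the reading-rate arithmetic.
def books_per_year(wpm: int, words_per_book: int, hours_per_day: float) -> float:
    """Books finished in a year at a given reading speed and daily time."""
    return wpm * 60 * hours_per_day * 365 / words_per_book

def hours_per_day_needed(books: int, words_per_book: int, wpm: int) -> float:
    """Daily reading hours required to finish `books` in one year."""
    return books * words_per_book / wpm / 60 / 365

print(f"{books_per_year(200, 100_000, 6):.0f} books/year at 6 hours/day")
print(f"{hours_per_day_needed(1000, 100_000, 200):.1f} hours/day for 1,000 books")
```

At 200 wpm and 100,000-word books, six hours a day works out to about 263 books, and 1,000 books demands roughly 22.8 hours of reading every single day.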

Password Best Practices I - The Generator

This is the first in a series of posts about password best practices. The series will cover best practices from a few different angles- the generator targeted at developers creating those generators, the end user (you, mom, dad, etc.) as you select passwords for accounts from those generators, and the service provider storing passwords in the database for accounts that your users are signing up for.

Motivation

When end users are looking for passwords, they may turn to password generators, whether they be browser extensions, websites, or offline installable executables. Regardless, as a developer, you will need to ensure that the passwords you're providing for your users are secure. Unfortunately, "secure" is a bit of a buzzword, and can be highly subjective. So, we'll motivate what it means to be "secure" here:

  • The generator is downloaded via HTTPS, whether it's a website, executable ZIP, or browser extension.
  • The generator uses a cryptographically secure random number generator.
  • The generator provides at least 70-bits of entropy behind the password.
  • The generator is open source.
  • The generator generates passwords client-side, not server-side.
  • The generator does not serve any ads or client-side tracking software.

I think most of us can agree on these points- the software should be downloaded over HTTPS to mitigate man-in-the-middle attacks. A cryptographically secure RNG should be used to ensure unpredictability in the generated password. In addition to that, the CRNG should also be uniformly distributed across the set, so no elements of the password are more likely to appear than any other. Creating an open source password generator ensures that the software can be audited for correctness and instills trust in the application. Generating passwords client-side means the server has no possible way of knowing what passwords were generated, unless the client is also calling home (the code should be inspected). And of course, we don't want any adware or malware installed in the password generating application to further compromise the security of the generator.

Okay. That's all well and good, but what about this claim to generate passwords from at least 70-bits in entropy? Let's dig into that.

Brute force password cracking

Password cracking is all about reducing possibilities. Professional password crackers will have access to extensive word lists of previously compromised password databases, they'll have access to a great amount of hardware to rip through the password space, and they'll employ clever tricks in the password cracking software, such as Hashcat or MDXFind, to further reduce the search space, to make finding the passwords more likely. In practice, 90% of leaked hashed password databases are reversed trivially. With the remaining 10%, half of that space takes some time to find, but those passwords are usually recovered. The remaining few, maybe 3%-5%, contain enough entropy that the password cracking team likely won't recover those passwords in a week, or a month, or even a year.

So the question is this- what is that minimum entropy value that thwarts password crackers? To answer this question, let's look at some real-life brute force searching to see if we can get a good handle on the absolute minimum security margin necessary to keep your client's leaked password hash out of reach.

Bitcoin mining

Bitcoin mining is the modern-day version of the 1849 California Gold Rush. As of right now, Bitcoin is trading at $3,665.17 per BTC. As such, people are fighting over each other to get in on the action, purchasing specialized mining hardware, called "Bitcoin ASICs", to find those Bitcoins as quickly as possible. These ASICs are hashing blocks of data with SHA-256, and checking a specific difficulty criteria to see if it meets the requirements as a valid Bitcoin block. If so, the miner that found that block is rewarded with Bitcoin, and the block is recorded in the never-ending, ever-expanding, non-scalable blockchain.

How many SHA-256 hashes is the world at large calculating? As of this writing, the current rate is 7,751,843.02 TH/s, which is 7,751,843,020,000,000,000 SHA-256 hashes per second. At one point, it peaked at 8,715,000 THps, and there is no doubt in my mind that it will pass 10,000,000 THps before the end of the year. So let's run with that value, of 10,000,000,000,000,000,000 SHA-256 hashes per second, or 10^19 SHA-256 hashes per second.

If we're going to talk about that in terms of bits, we need to convert it to a base-2 number, rather than base-10. Thankfully, this is easy enough. All we need to calculate is log2(X) = log(X)/log(2). Doing some math, we see that Bitcoin mining is roughly flipping every combination of bits in a:

  • 63-bit number every second.
  • 69-bit number every minute.
  • 74-bit number every hour.
  • 79-bit number every day.
  • 84-bit number every month.
  • 88-bit number every year.
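The figures above are just log2 of the hash rate multiplied by the number of seconds in each window; a quick sketch:

```python
# Bits of search space exhausted at 10^19 SHA-256 hashes per second.
from math import log2

RATE = 10**19  # SHA-256 hashes per second, per the estimate above

for label, seconds in [("second", 1), ("minute", 60), ("hour", 3600),
                       ("day", 86400), ("month", 86400 * 30),
                       ("year", 86400 * 365)]:
    print(f"{int(log2(RATE * seconds))}-bit number every {label}.")
```

This prints the same 63/69/74/79/84/88 progression as the list above; substituting 10**13 for RATE reproduces the figures in the Snowden scenario later in this post.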

What does this look like? Well, the line is nearly flat. Here in this image, the x-axis is the number of days spent mining for Bitcoin, starting from 0 through a full year of 365 days. The y-axis is the search space exhaustion in bits. So, you can see that in roughly 45 days, Bitcoin mining will have calculated enough SHA-256 hashes to completely exhaust an 85-bit search space (click to enlarge):

Plot showing log(x*10^19*86400)/log(2) for Bitcoin mining.

Real-world password cracking

That's all fine and dandy, but I doubt professional password crackers have access to that sort of hardware. Instead, let's look at a more realistic example.

Recently, Australian security researcher Troy Hunt, the guy that runs https://haveibeenpwned.com/, released a ZIP of 320 million SHA-1 hashed passwords that he's collected over the years. Because the passwords were hashed with SHA-1, recovering them should be like shooting fish in a barrel. Sure enough, a team of password crackers got together, and made mincemeat of the dataset.

In the article, it is mentioned that they had a peak password cracking speed of 180 GHps, or 180,000,000,000 SHA-1 hashes per second, or 18*10^10 SHA-1 hashes per second. The article mentions that's the equivalent of 25 NVidia GTX1080 GPUs working in concert. To compare this to Bitcoin mining, the team was flipping every combination of bits in a:

  • 41-bit number every second.
  • 47-bit number every minute.
  • 53-bit number every hour.
  • 58-bit number every day.
  • 63-bit number every month.
  • 66-bit number every year.

As we can see, this is a far cry from the strength of Bitcoin mining. But, are those numbers larger than you expected? Let's see how it looks on the graph, compared to Bitcoin (click to enlarge):

Plot showing log(x*18*10^10*86400)/log(2) for this cluster of password cracking hobbyists.

So, it seems clear that our security margin is somewhere above that line. Let's look at one more example, a theoretical one.

Theoretical password cracking by Edward Snowden

Before Edward Snowden became known to the world as Edward Snowden, he was known to Laura Poitras as "Citizenfour". In emails back-and-forth between Laura and himself, he told her (emphasis mine):

"Please confirm that no one has ever had a copy of your private key and that it uses a strong passphrase. Assume your adversary is capable of one trillion guesses per second. If the device you store the private key and enter your passphrase on has been hacked, it is trivial to decrypt our communications."

But one trillion guesses per second is only about 5x the collective power of our previous example of a small team of password cracking hobbyists. That's only about 125 NVidia GTX1080 GPUs. Certainly interested adversaries would have more money on hand to invest in more computing power than that. So, let's increase the rate to 10 trillion guesses per second. 1,250 NVidia GTX1080 GPUs would cost our adversary maybe $500,000. A serious investment, but possibly justifiable, and certainly not outside the $10 billion annual budget of the NSA. So let's roll with it.

At 10^13 password hashes per second, we are flipping every combination of bits in a:

  • 43-bits every second.
  • 49-bits every minute.
  • 54-bits every hour.
  • 59-bits every day.
  • 64-bits every month.
  • 68-bits every year.

Plotting this on our chart with both Bitcoin mining and clustered hobbyist password cracking, we see (click to enlarge):

Plot of log(x*86400*10^13)/log(2)

The takeaway

What does all this math imply? That as a developer of password generator software, you should be targeting a minimum of 70-bits of entropy with your password generator. This will give your users the necessary security margins to steer clear of well-funded adversaries, should some service provider's password database get leaked to the Internet, and they find themselves as a target.

As a general rule of thumb, for password generator developers, these are the sorts of security margins you can expect with entropy:

  • 70-bits or more: Very secure.
  • 65-69 bits: Moderately secure.
  • 60-64 bits: Weakly secure.
  • 59 bits or less: Not secure.

Colored recommendation of the previous plot showing all brute force attempts.

What does this mean for your generator then? It means that the size of the password or passphrase that you are giving users should be at least:

  • Base-94: 70/log2(94)=11 characters
  • Base-64: 70/log2(64)=12 characters
  • Base-32: 70/log2(32)=14 characters
  • Base-16: 70/log2(16)=18 characters
  • Base-10: 70/log2(10)=22 characters
  • Diceware: 70/log2(7776)=6 words
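The lengths above are simply the entropy target divided by the bits per symbol, rounded up. A minimal sketch (the function name is mine):

```python
# Minimum symbols needed to reach a target entropy for a given alphabet.
from math import ceil, log2

def min_length(target_bits: int, alphabet_size: int) -> int:
    """ceil(target / bits-per-symbol), e.g. 70 bits from 94 characters."""
    return ceil(target_bits / log2(alphabet_size))

for size in (94, 64, 32, 16, 10, 7776):  # 7776 = Diceware words
    print(f"Base-{size}: {min_length(70, size)}")
```

Calling `min_length(128, size)` instead reproduces the 128-bit table that follows.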

Now, there is certainly nothing wrong with generating 80-bit, 90-bit, or even 128-bit entropy. The only thing you should consider with this is the size of the resulting passwords and passphrases. For example, if you were providing a minimum of 128-bit security for your users with the password generator, then things would look like:

  • Base-94: 128/log2(94)=20 characters
  • Base-64: 128/log2(64)=22 characters
  • Base-32: 128/log2(32)=26 characters
  • Base-16: 128/log2(16)=32 characters
  • Base-10: 128/log2(10)=39 characters
  • Diceware: 128/log2(7776)=10 words

As you can see, as you increase the security for your users, the size of the generated passwords and passphrases will also increase.

Conclusion

It's critical that we are doing right by our users when it comes to security. I know Randall Munroe of XKCD fame created the "correct horse battery staple" comic, advising everyone to create 4-word passphrases. This is fine, provided that those 4 words meet that minimum 70-bits of entropy. In order for that to happen though, the word list needs to be:

   4 = 70/log2(x)
=> 4 = 70/(log(x)/log(2))
=> 4 = 70*log(2)/log(x)
=> 4*log(x) = 70*log(2)
=> log(x) = 70/4*log(2)
=> x = 10^(70/4*log(2))
=> x ~= 185,364

You would need a word list of at least 185,364 words to provide at least 17.5-bits of entropy per word, which brings us to the required 70-bits of total entropy for 4 words. All too often, I see generators providing four words, but the word list is far too small, around Diceware size, which gives only around 51-bits of entropy. As we just concluded, that's not providing the necessary security for our users.
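The algebra above is easy to check numerically, since 70/4 bits per word means a list of 2^17.5 words:

```python
# Smallest word list where four random words reach 70 bits of entropy.
from math import ceil, log2

min_list_size = ceil(2 ** (70 / 4))  # x = 2^(70/4)
print(min_list_size)                 # 185364
print(f"{4 * log2(7776):.1f}")       # four Diceware words: ~51.7 bits
```

The second line confirms why a Diceware-sized list falls short: four words from 7,776 choices is only about 51.7-bits.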

So, developers, when creating password and passphrase generators, make sure they are at least targeting the necessary 70-bits of entropy, in addition to the other qualifications that we outlined at the beginning of this post.

Colorful Passphrases

Since the development of my passphrase and password generator, I started working toward improving the other online generators out there on the web. I created a Google Spreadsheet to work toward that goal, by doing reasonable audits to "rank" each generator, and see how they stacked up against the rest. Then, I started submitting patches in hopes of making things better.

One passphrase generator that was brought to my attention was Pass Plum. Pass Plum supplies an example word list to use for generating your passphrases, if you choose to install the software on your own server. Unfortunately, the list is only 140 words in size, so if you choose to use that for your word list, then you only get about 7.13-bits of entropy per word. Sticking to the default configuration of 4 words given to the user, that's a scant 28-bits of security on your passphrase, which is trivially reversed. I submitted a pull request to extend it to 8,192 words, providing exactly 13-bits of entropy per word, or 52-bits of entropy for a 4-word passphrase- a significant improvement.

I noticed, however, that the default list was nothing but color names, and that got me thinking- what if the generator not only provided color names for passphrases, but also colored each word in its named color? Basically, a sort of false visual synesthesia. What I want to know is this: is it easier to remember passphrases when you can associate each word with a visual color?

So, over the past several nights and weekends, I've been putting this together. Here it is- colorful passphrases.

Collage of 4 separate screenshots of what the color passphrase generator looks like.

Head over to my site to check it out. If a color is too light (its luma value is very high), then the word is outlined with CSS. Every word is bold, to make the word even more visible on the default white background.

As I mentioned, the idea is simple: people struggle remembering random meaningless strings of characters for passwords, so passphrases are a way to make a random series of words easier to recall. After all, it should be easier to remember "gnu hush gut modem scamp giddy" than it is to remember "$5hKXuE[\NK". It's certainly easier to type on mobile devices, and embedded devices without keyboards, like smart TVs and video game consoles.

But, even then, there is nothing that is really tying "gnu hush gut modem scamp giddy" together, so you force yourself into some sort of mnemonic to recall it. Visually stimulated color passphrases have the benefit of not only using a mnemonic to recall the phrase, but an order of colors as well. For example, you might not recall "RedRobin Pumpkin Revolver DeepPuce Lucky Crail TealDeer", but you may remember its color order of roughly "red orange black purple gold brown teal". "A RedRobin is red. A pumpkin is orange. A revolver (gun) is black. DeepPuce is a purple. Lucky coins are gold. Crail, Scotland has brown dirt. TealDeer are teal."

However, it also comes with a set of problems. First, what happens if you actually have visual synesthesia? Will seeing these colors conflict with your mental image of what the color should be for that word? Second, many of the words are very obscure, such as "Crail" or "Tussock" or "Tuatara" (as all seen in the previous screenshot collage). Finally, what happens when you have a color passphrase where two similar colors are adjacent to each other? Something like "Veronica Affair Pipi DeepOak Atoll BarnRed RedOxide"? Both "BarnRed" and "RedOxide" are a deep reddish color. Will it be more difficult to recall which comes first?

A screenshot showing a color collision between "BarnRed" and "RedOxide"

As someone who is interested in password research, I wanted to see what sort of memory potential visually colorful passphrases could have. As far as I know, this has never been investigated before (at least, I couldn't find any research done in this area, and I can't find any passphrase generators doing it). This post from Wired investigates alternatives to text entry for password support, such as using color wheels, but doesn't say anything about visual text. Here is a browser extension that colors password form fields on websites, with the SHA-1 hash of your password as you type it. You know if it's correct, by recognizing if the pattern is the same it always is when logging in.

Long story short, I think I'm wading into unknown territory here. If you find this useful, or even if you don't, I would be very interested in your feedback.

A Practical and Secure Password and Passphrase Generator

The TL;DR

Go to https://ae7.st/g/ and check out my new comprehensive password and passphrase generator. Screenshots and longer explanation below.

Introduction

Sometime during the middle of last summer, I started thinking about password generators. The reason for this was that I noticed a few things when I used different password generators, online or offline:

  1. The generator created random meaningless strings.
  2. The generator created XKCD-style passphrases.
  3. The generator gave the user knobs and buttons galore to control:
    • Uppercase characters
    • Lowercase characters
    • Digits
    • Nonalphanumeric characters
    • Pronounceable passwords
    • Removing ambiguous characters
    • Password Length

The Problem

Here is just one example of what I'm talking about:

Screenshot showing a "secure" password generator from a website.

This password generator has a lot of options for tweaking your final password.

Ever since Randall Munroe published https://xkcd.com/936/, people started creating "XKCD-style" passphrase generators. Here's a very simple one that creates a four-word passphrase. No knobs, bells, or whistles. Just a button to generate a new XKCD passphrase. Ironically, the author provides an XKCD passphrase generator for you to use, then tells you not to use it. 🙂

On the other hand, why not make XKCD password generation as complex as possible? Here at https://xkpasswd.net/s/, not only do you have an XKCD password generator, but you have all the bells, whistles, knobs, buttons, and controls to make it as ultimately complex as possible. Kudos to the generator for even making entropy estimates about the generated passwords!

Screenshot showing a very complex control board of an XKCD style password generator.

Why not add all the complexity of password generation to XKCD passwords?

What bothers me about the "XKCD password" crowd, however, is that no one seems to know that Diceware was released back in 1995, making passphrases commonplace. Arnold Reinhold created a list of 7,776 words, enough for every combination of a 6-sided die rolled 5 times. Arnold explains that the passphrase needs to be chosen from a true random number generator (thus the dice), and as a result, each word in the list will have approximately 12.9-bits of entropy. Arnold recommends throwing the dice enough times to create a five-word Diceware passphrase. That would provide about 64-bits of entropy, a modestly secure result.

A five-word Diceware passphrase could be:

  • soot laid tiger rilly feud pd
  • 31 al alibi chick retch bella
  • woven error rove pliny dewey quo
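An electronic equivalent of Arnold's dice throws is a uniform, unbiased draw from the word list. A minimal sketch using Python's `secrets` module, with a hypothetical stand-in list in place of the real Diceware words:

```python
# Uniform Diceware-style passphrase selection with a CSPRNG.
import secrets
from math import log2

# Hypothetical stand-in for the real 7,776-word Diceware list.
wordlist = [f"word{i:04d}" for i in range(7776)]

# secrets.choice draws uniformly from the list, so entropy is maximized.
passphrase = " ".join(secrets.choice(wordlist) for _ in range(5))
print(passphrase)
print(f"{5 * log2(len(wordlist)):.1f} bits of entropy")  # ~64.6
```

Because `secrets` is backed by the operating system's CSPRNG and `choice` is unbiased, each of the five words contributes the full log2(7776) ≈ 12.9-bits.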

My Solution

While these password generators are all unique, and interesting, and maybe even secure, it boils down to the fact that my wife, never mind my mom or grandma, isn't going to use them. They're just too complex. But worse, they give the person using them a false sense of security, and in most cases, they're not secure at all. I've talked with my wife, family, and friends about what it requires to have a strong password, and I've asked them to give me examples. You can probably guess what I got.

  • Spouse's first name with number followed by special character. EG: "Alan3!"
  • Favorite sports team in CamelCase. EG: "UtahUtes"
  • Keyboard patterns. EG: "qwertyasdf"

The pain goes on and on. Usually, the length of each password is somewhere around 6-7 characters. However, when you start talking about some of these generators, and they see passwords like "(5C10#+b" or "V#4I5'4c", their response is usually "I'm never going to remember that!". Of course, this is a point of discussion about password managers, but I'll save that for another post.

So I wanted to create a password and passphrase generator that met everyone's needs:

  • Simplicity of use
  • Length and complexity
  • Provably secure
  • Desktop and mobile friendly

If you've been a subscriber to my blog, you'll know that I post a lot about Shannon entropy. Entropy is maximized when a uniform unbiased random function controls the output. Shannon entropy is just a fancy way of estimating the total number of possibilities something could be, and it's measured in bits. So, when I say a Diceware passphrase has approximately 64-bits of entropy, I'm saying that the passphrase that was generated is 1 in 2^64, or 18,446,744,073,709,551,616, possibilities. Again, this is only true if the random function is uniform and unbiased.

So, I built a password generator around entropy, and entropy only. The question became, what should the range be, and what's my threat model? I decided to build my threat model after offline brute force password cracking. A single computer with a few modest GPUs can work through every 8-character password built from all 94 graphical characters on the ASCII keyboard hashed with SHA-1 in about a week. That's 94^8 or 6,095,689,385,410,816 total possibilities. If chosen randomly, Shannon entropy places any password built from that set at about 52-bits. If the password chosen randomly from the same set of 94 graphical characters was 9 characters long, then the password would have about 59-bits of Shannon entropy. This would also take that same GPU password cracking machine 94 weeks to fully exhaust every possibility.
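The threat-model numbers above are straightforward to verify:

```python
# Size and entropy of the random-password search spaces discussed above.
from math import log2

charset = 94  # graphical characters on the ASCII keyboard
print(charset ** 8)                 # 6,095,689,385,410,816 possibilities
print(f"{log2(charset ** 8):.1f}")  # ~52.4 bits for 8 random characters
print(f"{log2(charset ** 9):.1f}")  # ~59.0 bits for 9 random characters
```

Adding one character multiplies the space by 94, which is exactly why the 9-character search takes 94 weeks where the 8-character search takes one.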

This seemed like a good place to start the range. So, for simplicity's sake, I started the entropy range at 55-bits, then incremented by 5-bits until the maximum of 80-bits. As you can see from the screenshot of the entropy toolbar, 55-bits is red, as we are in dangerous territory of an offline password cracker with maybe a cluster of GPUs finding the password. But things get exponentially expensive very quickly. Thus, 60-bits is orange, 65-bits is yellow, and 70-bits and above are green. Notice that the default selection is 70-bits.

Screenshot showing the entropy toolbar of my password generator.

The entropy toolbar of my password generator, with 70-bits as the default.

When creating the generator, I realized that some sites will have length restrictions on your password, such as not allowing more than 12 characters, or not allowing certain special characters, or forcing at least one uppercase character and one digit, and so forth. Some service providers, like Google, will allow you any length with any complexity. But further, people remember things differently. Some people don't need to recall the passwords, as they are using password managers on all their devices, with a synced database, and can just copy/paste. Others want to remember the password, and others yet want it easy to type.

So, it seemed to me that not only could I build a password generator, but also a passphrase generator. However, I wanted this to be portable, so rather than create a server-side application, I made a client-side one. This does mean that you download the wordlists as you need them to generate the passphrases, and the wordlists are anything but light. However, you only download them as you need them, rather than downloading all on page load.

To maximize Shannon entropy, I am using the cryptographically secure pseudorandom number generator from the Stanford Javascript Crypto Library. I'm using this, rather than the Web Crypto API, because I use some fairly obscure browsers that don't support it. It's only another 11KB download, which I think is acceptable. SJCL does use the Web Crypto API to seed its generator, if the browser supports it. If not, an entropy collector listener event is launched, gathering entropy from mouse movements. The end result is that Shannon entropy is maximized.

Passphrases

There are 5 types of passphrases in my generator:

  • Alternate
  • Bitcoin
  • Diceware
  • EFF
  • Pseudowords

Diceware

For the Diceware generator, I support all the languages that you'll find on the main Diceware page, in addition to the Beale word list. As of this writing, that's Basque, Bulgarian, Catalan, Chinese, Czech, Danish, Dutch, English, Esperanto, Finnish, French, German, Italian, Japanese (Romaji), Maori, Norwegian, Polish, Portuguese, Russian, Slovenian, Spanish, Swedish, and Turkish. There are 7,776 words in each word list, providing about 12.9248-bits of entropy per word.

EFF

For the EFF generator, I support the three word lists that the EFF has created- the short word list, the long word list, and the "distant" word list, where every word has an edit distance of at least three from the others in the list. The long list is similar to the Diceware list, in that it is 7,776 words providing about 12.9248-bits of entropy per word. However, the words are longer on average, at around 7 characters per word, than the Diceware words, at around 4.3 characters per word. So, for the same entropy estimate, you'll have a longer EFF passphrase than a Diceware passphrase. The short word list contains only 1,296 words, to be used with 4 dice instead of 5, and the maximum character length of any word is 5 characters. The short word list provides about 10.3399-bits of entropy per word. Finally, the "distant" word list is also short in number of words at 1,296, but longer in character count, averaging 7 characters per word.

Bitcoin

For the Bitcoin generator, I am using the BIP-0039 word lists to create the passphrase. These lists are designed to be a mnemonic code or sentence for generating deterministic Bitcoin wallets. However, because they are a list of words, they can be used for building passphrases too. Each list is 2,048 words, providing exactly 11-bits of entropy per word. Like Diceware, I support all the languages of the BIP-0039 proposal, which as of this writing includes Simplified Chinese, Traditional Chinese, English, French, Italian, Japanese (Hiragana), Korean (Hangul), and Spanish.

Alternate

Elvish

In the Alternate generator, I have a few options that provide various strengths and weaknesses. The Elvish word list is for entertainment value only. The word list consists of 7,776 words, making it suitable for Diceware, and provides about 12.9248-bits of entropy per word. However, because the generator is strictly electronic, and I haven't assigned dice roll values to each word, I may bump this up to 8,192 words providing exactly 13-bits of entropy per word. The word list was built from the Eldamo lexicon.

Klingon

Another passphrase generator for entertainment value is the Klingon generator. This word list comes from the Klingon Pocket Dictionary, and my word list provides exactly 2,604 unique words from the 3,028 words in the Klingon language. Thus, each word provides about 11.3465-bits of entropy.

PGP

The PGP word list was created to make hexadecimal strings easier to speak and phonetically unambiguous. It comprises exactly 256 words, providing exactly 8-bits of entropy per word. This generator works well in noisy environments, such as server rooms, where passwords need to be spoken from one person to another to be entered into a physical terminal.

Simpsons

The Simpsons passphrase generator consists of 5,000 words, providing about 12.2877-bits of entropy per word. The goal of this generator is not only educational, showing that any source of words can be used for a passphrase generator, even the episodes of a television series, but also memorability. Because this list contains the 5,000 most commonly spoken words from Simpsons episodes, a good balance of verbs, nouns, adjectives, etc. is supplied. As such, the generated passphrases seem to be easier to read, and less noun-heavy than the Diceware or EFF word lists. These passphrases may just be the easiest to recall, aside from the Trump word list.

Trump

And now my personal favorite. The Trump generator was initially built for entertainment purposes, but ended up having the advantage of providing a well-balanced passphrase of nouns, verbs, adjectives, etc., much like the Simpsons generator. As such, these passphrases may be easier to recall, because they are more likely to read as valid sentences than the Diceware or EFF generators. This list is pulled from Donald J. Trump's Twitter account. The list is always growing, currently at 5,343 words providing about 12.3834-bits of entropy per word.

Pseudowords

The pseudowords generator is a cross between unreadable/unpronounceable random strings and memorable passphrases. They are pronounceable, even if the words themselves are gibberish. They are generally shorter in practice than passphrases, and longer than pure random strings. The generators are here to show what you can do with random pronounceable strings.

Bubble Babble

Bubble Babble is a hexadecimal encoder with builtin checksumming, initially created by Antti Huima, and implemented in the original proprietary SSH tool (not the one by the OpenSSH developers). Part of the specification is that every encoded string begins and ends with "x". However, rather than encoding data from the RNG, this generator randomly creates 5-character words in a consonant-vowel-consonant-vowel-consonant syntax. As such, each 5-character word, except for the end points, provides 21*5*21*5*21 = 231,525 unique combinations, or about 17.8208-bits of entropy. The end points, where one character is the fixed "x", provide 21*5*21*5 = 11,025 unique combinations, or about 13.4285-bits of entropy.
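A minimal sketch of such a generator. The exact alphabets are my assumption, inferred from the 21-consonant and 5-vowel counts above (i.e. "aeiou" plus the other 21 Latin letters):

```python
import math
import secrets
import string

# Assumed alphabets: 5 vowels, and the remaining 21 Latin letters as consonants.
VOWELS = "aeiou"
CONSONANTS = "".join(c for c in string.ascii_lowercase if c not in VOWELS)

def interior_word():
    """One 5-character consonant-vowel-consonant-vowel-consonant word."""
    return "".join(secrets.choice(CONSONANTS if i % 2 == 0 else VOWELS)
                   for i in range(5))

combos = 21 * 5 * 21 * 5 * 21
print(interior_word())                       # a random cvcvc word, e.g. "kanub"
print(combos, round(math.log2(combos), 4))   # 231525 17.8208
```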

Secret Ninja

This generator comes from a static character-to-string assignment that produces pronounceable Asian-styled words. As such, there are only 26 assignments, providing about 4.7004-bits of entropy per string. There are three strings concatenated together per hyphenated word.

Cosby Bebop

I was watching this YouTube video with Bill Cosby and Stewie from Family Guy, and about half-way through the skit, Bill Cosby starts using made-up words as part of his routine. I've seen other skits by comedians where they use made-up words to characterize Bill Cosby, so I figured I would create a list of these words, and see how they fell out. There are 32 unique words, providing exactly 5-bits of entropy per word. Unlike the Bubble Babble and Secret Ninja generators, this generator uses both uppercase and lowercase Latin characters.

Korean K-pop

Following the Bill Cosby Bebop generator, I created a Korean "K-pop" generator that uses the 64 most common male and female Korean names, providing exactly 6-bits of entropy per name. I got the list of names from various sites listing common male and female Korean names.

Random

These are random strings provided as a last resort for sites or accounting software with very restrictive password requirements. These passwords will be some of the shortest generated while meeting the same minimum entropy requirement. Because these passwords are not memorable, they should absolutely be stored in a password manager (you should be using one anyway).

  • Base-94: Uses all graphical U.S. ASCII characters (does not include horizontal space). Each character provides about 6.5546-bits of entropy. This password will contain ambiguous characters.
  • Base-64: Uses all digits, lowercase and uppercase Latin characters, and "+" and "/". Each character provides exactly 6-bits of entropy. This password will contain ambiguous characters.
  • Base-32: Uses the characters defined in RFC 4648, which strives to use an unambiguous character set. Each character provides exactly 5-bits of entropy.
  • Base-16: Uses all digits and lowercase characters "a" through "f". Each character provides exactly 4-bits of entropy. This password will contain fully unambiguous characters.
  • Base-10: Uses strictly the digits "0" through "9". This is mostly useful for PINs or other applications where only digits are required. Each digit provides about 3.3219-bits of entropy. This password will contain fully unambiguous characters.
  • Emoji: There are 881 emoji glyphs provided by the Noto Emoji font (discussed below), yielding about 9.7830-bits per glyph. One side-effect is that even though there is a character count in the generator box, each glyph may be more than 1 byte, so some input forms may count a glyph as more than 1 character. Regardless, the minimum entropy is met, so the emoji password is still secure.
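Given the bits per character above, the length needed to hit an entropy target follows directly. For example, with the 70-bit target used elsewhere in this post:

```python
import math

TARGET_BITS = 70  # example target

# Bits per character is log2 of the alphabet size; the password length
# is the target divided by that, rounded up.
for name, size in [("Base-94", 94), ("Base-64", 64), ("Base-32", 32),
                   ("Base-16", 16), ("Base-10", 10), ("Emoji", 881)]:
    bits = math.log2(size)
    length = math.ceil(TARGET_BITS / bits)
    print(f"{name}: {bits:.4f} bits/char -> {length} characters")
```

So a Base-94 password needs only 11 characters to reach 70 bits, while a PIN-style Base-10 password needs 22 digits.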

I want to say something a bit extra about the Emoji generator. With the rise of Unicode and the UTF-8 standard, and the near-ubiquitous popularity of smartphones and mobile devices, having access to non-Latin character sets is becoming easier and easier. As such, password forms are increasingly likely to support UTF-8 input, to allow Cyrillic, Coptic, Arabic, and East Asian ideographs. So, if Unicode is fast becoming the norm, why not take advantage of it while having a little fun?

I opted for the black-and-white font, as opposed to the color font, to stay consistent with the look and feel of the other generators. This generator uses the emoji character sets provided by Google's Noto Emoji fonts, as that makes it easy for me to support the font in CSS 3, allowing every browser that supports CSS 3 to take advantage of the font and render the glyphs in a standard fashion. The license is also open so that I can redistribute the font without paying royalties, and others can do the same.

Screenshots

The post wouldn't be complete without some screenshots. The generator is both desktop friendly, fitting comfortably in a 1280x800 screen resolution, as well as mobile friendly, working well on even some of the oldest mobile devices.

Desktop screenshot of my password generator.


First mobile screenshot of my password generator.


Second mobile screenshot of my password generator.


Random Passphrases Work, Even If They're Built From Known Passwords

Just this morning, security researcher Troy Hunt released a ZIP containing 306 million passwords that he's collected over the years from his ';--have i been pwned? service. As an extension, he created a service where you provide either a password or a SHA-1 hash to see if your password has been pwned.

Screenshot showing the entry text field to test your password from the new password checking service Troy Hunt has provided.

In 2009, the social network RockYou was breached, and 32 million accounts of usernames and passwords were released to the public Internet. No doubt those 32 million passwords are included in Troy Hunt's password dump. What's interesting, and the point of this post, is that individually, each password from the RockYou breach will fail the check.

Collage of six screenshots of individual RockYou passwords failing Troy Hunt's password check. They were: caitlin, peachy, keeley, doreen, ursulet, & juggalo.

However, what would happen if you took 6 random RockYou passwords, and created a passphrase? Below is a screenshot demonstrating just that, using the above 6 randomly chosen RockYou passwords. Individually, they all fail. Combined, they pass.

Screenshot showing how creating the passphrase 'caitlinpeachykeeleydoreenursuletjuggalo' passes Troy Hunt's check.

Now, to be fair, I'm choosing these passwords from my personalized password generator. The list is the top 7,776 passwords from the 32 million RockYou dump. As such, you could use this list as a Diceware replacement with 5 dice. Regardless, each password is chosen at random from that list, and enough passwords are chosen to reach a 70-bits of entropy target, which happens to be 6 passwords. Mandatory screenshot below:

Screenshot showing my password generator using the RockYou passwords as a source for a strong passphrase. One screenshot is hyphenating the passwords, the other is not.

The point of this post is that you can actually build a secure passphrase for sites and services using previously breached passwords as your word list source, in this case RockYou. The only conditions are that the word list is large enough to create a reasonable passphrase with few selections, and that the process picking the passwords for you is cryptographically random.
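The arithmetic behind the 6-password figure is straightforward: each selection from the 7,776-entry list contributes log2(7776) bits, so the count of selections is the 70-bit target divided by that, rounded up:

```python
import math

# Each password is drawn uniformly from the top 7,776 RockYou passwords.
bits_per_word = math.log2(7776)                # ~12.9248 bits per selection
words_needed = math.ceil(70 / bits_per_word)
print(words_needed)                            # 6
print(round(words_needed * bits_per_word, 4))  # 77.5489 bits total
```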

Electronic Slot Machines and Pseudorandom Number Generators

TL;DR

An Austrian casino company used a predictable pseudorandom number generator, rather than a cryptographically secure one, and people are taking advantage of it, and cashing out big.

The Story

Wired reported on an amazing operation for beating electronic slot machines: holding your phone close to the slot machine screen for a time while playing, leaving the slot machine, then coming back later, and cashing in big.

Unlike most slots cheats, he didn’t appear to tinker with any of the machines he targeted, all of which were older models manufactured by Aristocrat Leisure of Australia. Instead he’d simply play, pushing the buttons on a game like Star Drifter or Pelican Pete while furtively holding his iPhone close to the screen.

He’d walk away after a few minutes, then return a bit later to give the game a second chance. That’s when he’d get lucky. The man would parlay a $20 to $60 investment into as much as $1,300 before cashing out and moving on to another machine, where he’d start the cycle anew.

These machines were made by the Austrian company Novomatic, and when Novomatic engineers learned of the problem, after a deep investigation, the best explanation they could come up with was that the random number generator in the machine was predictable:

Novomatic’s engineers could find no evidence that the machines in question had been tampered with, leading them to theorize that the cheaters had figured out how to predict the slots’ behavior. “Through targeted and prolonged observation of the individual game sequences as well as possibly recording individual games, it might be possible to allegedly identify a kind of ‘pattern’ in the game results,” the company admitted in a February 2011 notice to its customers.

The article, focused on a single incident in Missouri, mentions that the state vets the machines before they go into production:

Recognizing those patterns would require remarkable effort. Slot machine outcomes are controlled by programs called pseudorandom number generators that produce baffling results by design. Government regulators, such as the Missouri Gaming Commission, vet the integrity of each algorithm before casinos can deploy it.

On random number generators

I'll leave you to read the rest of the article. Suffice it to say, the Novomatic machines were using a pseudorandom number generator that becomes predictable after observing its output for a period of time. This poses some questions that should immediately start popping up in your head:

  1. What is the vetting process used by states to verify the quality of the pseudorandom number generators in slot machines?
  2. Who is on that vetting commission? Is it made up of mathematicians and cryptographers? Or just a board of executives and politicians?
  3. Why aren't casino manufacturers using cryptographically secure pseudorandom number generators?

For me, that third item is the most important. No doubt, as the Wired article states, older machines just cannot be fixed. They need to be taken out of production. So long as they occupy casinos, convenience stores, and gas stations, they'll be attacked, and the owner will lose money. So let's talk about random number generators for a second, and see what the gambling industry can do to address this problem.

You can categorize random number generators into four categories:

  1. Nonsecure pseudorandom
  2. Cryptographically secure pseudorandom
  3. Chaotic true random
  4. Quantum true random

What I would be willing to bet is that most electronic machines out there use the "nonsecure pseudorandom" type of random number generator, and Novomatic just happened to pick a very poor one. Again, there likely isn't anything they can do about existing machines in production now, but what can they do moving forward? They should start using cryptographically secure pseudorandom number generators (CSPRNGs).

In reality, this is trivial. There are plenty of CSPRNGs to choose from. CSPRNGs can be broken down further into three subcategories:

  1. Designs based on cryptographic primitives.
  2. Number theoretic designs.
  3. Special-purpose designs.

Let's look at each of these in turn.

Designs based on cryptographic primitives.

These are generators that use things like block ciphers, stream ciphers, or hashing functions for the generator. There are some NIST- and ANSI-standardized designs:

  • NIST SP 800-90A rev. 1 (PDF): CTR_DRBG (a block cipher, such as AES in CTR mode), HMAC_DRBG (hash-based message authentication code), and Hash_DRBG (based on cryptographically secure hashing functions such as SHA-256).
  • ANSI X9.31 Appendix A.2.4: This is based on AES, and obsoletes ANSI X9.17 Appendix C, which is based on 3DES. It requires a high-precision clock to initially seed the generator. It was eventually obsoleted by ANSI X9.62-1998 Annex A.4.
  • ANSI X9.62-2005 Annex D: This standard defines an HMAC_DRBG, similar to NIST SP 800-90A, using an HMAC as the cryptographic primitive. It obsoletes ANSI X9.62-1998 Annex A.4, and also requires a high-precision clock to initially seed the generator.
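To make the Hash_DRBG idea concrete, here is a deliberately tiny sketch of the concept. This is my simplification, not the NIST SP 800-90A construction, which additionally specifies reseed counters, personalization strings, and precise state-derivation functions:

```python
import hashlib

class TinyHashDRBG:
    """Toy hash-based DRBG illustrating the idea only; not NIST SP 800-90A."""

    def __init__(self, seed: bytes):
        # Derive the internal state from the seed material.
        self.state = hashlib.sha256(b"init" + seed).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self.state + counter.to_bytes(4, "big")).digest()
            counter += 1
        # Ratchet the state through a one-way function so past states
        # cannot be reconstructed from the current one (backtracking resistance).
        self.state = hashlib.sha256(b"ratchet" + self.state).digest()
        return out[:n]

drbg = TinyHashDRBG(b"seed material from a true RNG goes here")
print(drbg.generate(16).hex())
```

The ratchet step is what gives backtracking resistance: recovering a previous state from the current one would require inverting SHA-256.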

It's important that these designs are backtracking resistant, meaning that if you know the current state of the RNG, you cannot construct all previous states of the generator. The above standards are backtracking resistant.

Number theoretic designs

There are really only two current designs, based on either the factoring problem or the discrete logarithm problem:

  • Blum-Blum-Shub: This is a generator based on the fact that it is difficult to compute the prime factors of very large composites (on the order of 200 or more digits in length). Due to the size of the prime factors, this is a very slow algorithm, and not generally practical.
  • Blum-Micali: This is a generator based on the discrete logarithm problem: given two known integers "b" and "g", it is difficult to find "k" where "b^k = g". Like Blum-Blum-Shub, this generator is also very slow, and not generally practical.
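To see why these number theoretic designs are slow, here is a toy Blum-Blum-Shub round with deliberately tiny primes (real parameters are primes hundreds of digits long, each congruent to 3 mod 4):

```python
# Toy parameters only: in real use, p and q are huge primes, each = 3 (mod 4).
p, q = 11, 23
M = p * q  # 253

def bbs_bits(seed, n):
    """Square modulo M each step, emitting the low bit of each state."""
    x = seed % M  # seed must be coprime to M
    out = []
    for _ in range(n):
        x = (x * x) % M
        out.append(x & 1)
    return out

print(bbs_bits(3, 8))  # [1, 1, 0, 0, 1, 0, 1, 0]
```

Each output bit costs a modular squaring of a several-hundred-digit number, which is why the generator is impractical for bulk output.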

Special-purpose designs

Thankfully, there are a lot of special-purpose designs created by cryptographers that are either stream ciphers trivially portable to a CSPRNG, or deliberately designed CSPRNGs:

  • Yarrow: Created by cryptographer Bruce Schneier (deprecated by Fortuna)
  • Fortuna: Also created by Bruce Schneier, and obsoletes Yarrow.
  • ISAAC: Designed to address the problems in RC4.
  • ChaCha20: Designed by cryptographer Daniel Bernstein, our crypto Lord and Savior.
  • HC-256: The 256-bit alternative to HC-128, which is part of the eSTREAM portfolio.
  • eSTREAM portfolio: (7 algorithms- 3 hardware, 4 software)
  • Random123 suite: Contains four highly parallelizable counter-based algorithms, only two of which are cryptographically secure.

The solution for slot machines

So now what? Slot machine manufacturers should be using cryptographically secure algorithms in their machines, full stop. To be cryptographically secure, the generator:

  • Must pass the next-bit test (you cannot predict the next bit with any better than 50% probability).
  • Must withstand a state compromise (you cannot reconstruct past states of the generator based on the current state).

If those two properties are met in the generator, then the output will be indistinguishable from true random noise, and the generator will be unbiased, not allowing an adversary, such as someone with a cellphone monitoring the slot machine, to get the upper hand on the slot machine and prematurely cash out.

However, the question should then be raised- "How do you properly seed the CSPRNG, so it starts in an unpredictable state, before release?" Easy, you have two options here:

  • Seed the CSPRNG with a hardware true RNG (HWRNG), such as a USB HWRNG, or....
  • Build the machine such that it collects environmental noise as entropy

The first point is much easier to achieve than the second. Slot machines likely don't have a lot of interrupts built into the system-on-a-chip (SoC). So aside from a microphone, video camera, or antenna recording external events, you're going to be hard-pressed to get any sort of high-quality entropy into the generator. USB TRNGs are available all over the web, and cheap. When the firmware is ready to be deployed, read 512-bits out of the USB generator, hash it with SHA-256, and save the resulting hash on disk as an "entropy file".

Then all that is left is when the slot machine boots up and shuts down:

  • On startup, read the "entropy file" saved from the previous shutdown, to seed the CSPRNG.
  • On shutdown, save 256-bits of data out of the generator to disk as an "entropy file".

This is how most operating systems have solved the problem with their built-in CSPRNGs. Provided that the very first "entropy file" was seeded with a USB true HWRNG, the state of every slot machine will always be different, and will always be unpredictable. Also, 256-bits is more than sufficient to make sure the initial state of the generator is unpredictable; physics proves it.
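The boot/shutdown cycle described above can be sketched in a few lines. The file path and helper names are illustrative, and error handling is omitted:

```python
import hashlib
import secrets

ENTROPY_FILE = "entropy.bin"  # illustrative path

def first_provisioning(hwrng_bytes: bytes):
    """At the factory: hash 512 bits from a USB HWRNG down to a 256-bit file."""
    with open(ENTROPY_FILE, "wb") as f:
        f.write(hashlib.sha256(hwrng_bytes).digest())

def load_seed() -> bytes:
    """On startup: read the entropy file saved at the previous shutdown."""
    with open(ENTROPY_FILE, "rb") as f:
        return f.read()

def save_seed(csprng_output: bytes):
    """On shutdown: persist 256 bits of CSPRNG output for the next boot."""
    assert len(csprng_output) == 32
    with open(ENTROPY_FILE, "wb") as f:
        f.write(csprng_output)

# Simulated lifecycle: secrets stands in for the HWRNG and the machine's CSPRNG.
first_provisioning(secrets.token_bytes(64))  # 512 bits from the (simulated) HWRNG
seed = load_seed()                           # seeds the CSPRNG at boot
save_seed(secrets.token_bytes(32))           # 256 bits saved at shutdown
```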

Of course, the SoC could have a HWRNG onboard, but then you run the risk of hardware failure, and the generator becoming predictable. This risk doesn't exist with software-based CSPRNGs, so provided you can always save the state of the generator on disk at shutdown, and read it on startup, you'll always have an unpredictable slot machine.

Adblockers Aren't Part Of The Problem- People Are

Troy Hunt, a well-respected security researcher, and public speaker, wrote a blog post recently about how adblockers are part of the bad experience of the web. His article is about a sponsorship banner he posts at the top of his site, just below the header. It's not flashy, intrusive, loud, obnoxious, or a security or privacy concern. He gets paid better for the sponsorship strip than he does for ads, and the strip is themed with the rest of his site. It's out of the way of the site content, and scrolls with the page. In my opinion, it's in perfectly good taste. See for yourself:

Screenshot of Troy Hunt's homepage, showing the sponsorship strip just below the header.

Troy was surprised to find out, however, that his sponsorship strip is not shown when AdBlock Plus or uBlock Origin is installed and enabled in the browser. He is understandably upset, as he is avoiding everything that pisses off the standard web user when it comes to ads. He reached out to ABP about whitelisting his strip, and they agreed it hardly violates the web user experience. However, someone added it to the EasyList filters, which means any ad blocker outside of ABP will filter the sponsorship strip.

So, here's my question- are users wrong in filtering it?

Let's look at the state of web ads over the past couple of decades. First, there was the ad popup, where the web page you were visiting would pop up an ad right in front of the page. Sometimes they were difficult to close, and sometimes closing one would open a different one. Some pages would open dozens of popups, some fullscreen. It wasn't long before browsers across the board blocked popups by default, baked right into the browser.

Screenshot showing a Windows XP desktop littered with ad popups.

After popups were unanimously blocked across every browser, advertisers turned to ad banners. These were just as obnoxious as the popups, even if you didn't have to close a window. They flashed, blinked, falsely promised free trips and gadgets, and sometimes even auto-played videos. They were rarely relevant to the site content, but web page owners were promised revenue per click, regardless. So, the more ads you could fit on the page, the more likely someone would click one, and the more you would get paid. Web page owners placed these obnoxious ads above the header, below the header, in the sidebars, in the middle of pages breaking up paragraphs in posts, and in the footers. In some cases, the screen real estate dedicated to ads was more than the actual content on the site.

An image showing a collection of annoying banner ads.

Some HTML5 and CSS3 solutions now include overlays that have to be manually closed or escaped in order to continue reading the site content. Unfortunately, ad blockers don't do a great job at blocking these. While they're great at finding and filtering out ad elements, blocking CSS overlay popups seems to be too difficult, as the overlays remain prevalent on the web, much to the chagrin of many ad block users.

Screenshot showing a CSS overlay on a web page showing an ad.

Ad blockers then became a mainstay. Web users were pissed off by Flash crashing the browser (most ads were Flash-based), by ads slowing down their connection to download additional content (at the time, most were on dial-up or slow DSL), and by ads in general just getting in the way. It got so bad that DoubleClick's "privacy chief" wrote a rant about ad blockers, claiming they were unethical and that ad blocker users were stealing revenue.

As web page analytics started becoming a thing, more and more website owners wanted to know how traffic was arriving at their site, so they could further increase that traffic, and in addition, increase ad revenue. Page counters like StatCounter already existed to help site owners partially understand how traffic was hitting them: where visitors came from, at what time, what search engine they used, how long they stayed, etc. Well, advertisers started putting these analytics in their ads. So, not only did the website owner know who you were, the advertising company did too. And worse, while the website owner might not be selling that tracking data, the advertiser very likely is.

The advertiser also became a data broker.

But here's the tricky part- ad blocking was no longer enough. Now website owners were adding JavaScript trackers to their HTML. They're not visible on the page, so the ad blocker isn't hiding an element. It's not enough to block ads any longer. Privacy advocates began warning about "browser fingerprinting", where the specific details of your browser can uniquely identify you. Those unique bits are then tracked with these tracking scripts, and sent to advertisers and data brokers, changing many hands along the way. The EFF created the Panopticlick Project to help users understand how unique they appear on the web.

Screenshot of my browser results after testing at https://panopticlick.eff.org.

As a result, other browser extensions dedicated to blocking trackers started showing up. Things like Ghostery, Disconnect, Privacy Badger, and more. Even extensions that completely disable JavaScript and Flash became popular. Popular enough that browsers implemented a "click-to-play" setting, where Flash and other plugin content was blocked by default, and you would need to click the element to display it. It's not uncommon now to visit a web page where your tracking blocker will block a dozen or more trackers.

Screenshot of Ghostery blocking 20 trackers on a web page.

I wish I could stop here, but alas, many advertisers have made a turn for the ugly. Web ads are now one of the most common ways to get malware installed on your computer. Known as "malvertising", it infects more computers than shady porn sites do. Even more worrisome is that this trend is shifting away from standard desktops to mobile. Your phone is now more valuable than your desktop, and advertisers know it. Never mind shady apps that compromise your device; ads in legitimate "safe" apps are compromising devices as well.

Infographic showing the threat of malvertising on mobile in 2014.

So, to summarize, the history of ads has been:

  1. Annoying popups.
  2. Annoying banners.
  3. Annoying CSS overlays.
  4. Transparent trackers.
  5. Malvertising.

So, to Troy Hunt, here's my question: Given the awful history of advertisements on the web, are you honestly surprised that users don't trust a sponsorship strip?

Consider the following analogy: Suppose I brought a bunch of monkeys to your home, and they trashed the place. Smashed dishes, tore up furniture, destroyed computers and televisions, ruined floors, broke windows, and generally just destroyed anything and everything in sight. Then, after cleaning the place up, not only do I bring the monkeys back, but this time, they have digital devices (cameras, microphones, etc.) that report back to me about what your house looks like, where you live, what you're doing in response to the destruction. Again, you kick them out, clean up the place, and I return with everything as before, with some of them carrying a contagious disease that can get you and your family sick. I mean, honestly, one visit of these monkeys is enough, but they've made three visits, each worse than before.

Now, you show up at my doorstep, with a well-trained, leashed, groomed, clean, tame monkey, and I'm supposed to trust that it isn't anything like the past monkeys I've experienced before? As tame as it may be, call me rude, but I'm not trusting of monkeys right now. I've installed all sorts of alarm and monitoring systems, to warn me when monkeys are nearby, and nuke them with lasers. I've had too many bad experiences with monkeys in the past, to trust anyone bringing a new monkey to the premises.

So, you can see, it's not ad blockers that are the problem. It's the people behind the advertising firms, and the people who, as a result, no longer trust the Internet. The advertising c-level executives are trying to find ways to get their ads in front of your eyes, using any sort of shady means necessary. The average web user is trying to find ways to have a pleasant experience on the web, without getting tracked, infected with malware, or shouted at by a video, while still being able to consume the content.

People arguably don't trust ads. The people in the advertising firms have ruined that trust. You may have a clean, privacy-aware, non-intrusive sponsorship strip, but you can't blame people for not trusting it. We've just had too long a history of bad ad experiences. So, while reaching out to the ad blocker developers to whitelist the sponsorship strip is a good first step, ultimately, if people don't trust it and want to block it, you can't blame them. Instead, continue focusing on what makes you successful and what earns you revenue despite the ad blockers- blogging, speaking, developing, engaging. Your content, who you are, and how you handle yourself is your most valuable ad.