
Cryptographically Secure Pseudorandom Locally Administered Unicast MAC Addresses

Recently, Apple released the ability for iPhone 5c and newer hardware to use a spoofed, software-generated MAC address when probing 2.4 GHz and 5 GHz wireless access points. The MAC address is locally administered and unicast. This has sparked a small discussion in various forums about how to generate valid locally administered unicast MAC addresses. The address must be unicast, as many Cisco switches and routers will block non-unicast source addresses. It's not necessary that the address be locally administered, however; that just takes the address out of the globally administered range, avoiding conflicts with vendor-assigned addresses on other devices connected to the same switch.

According to the IEEE, in order to create a unicast address, the least significant bit of the most significant byte must be 0. This means the most significant byte must be an even number, eliminating half of the 256 possibilities for the first byte and leaving 128. In order to be locally administered, the second least significant bit of the most significant byte must be 1, which eliminates another 64 values. As such, there are only 64 possible values this address can start with. The remaining five bytes can be as random as you wish, giving you a total space of 64 × 2^40 = 70,368,744,177,664 addresses to choose from.

In the Unix shell, you can execute the following code, which should be fairly platform agnostic:

$ random_mac() { printf '%02x' $((0x$(od /dev/urandom -N1 -t x1 -An | cut -c 2-) & 0xFE | 0x02)); od /dev/urandom -N5 -t x1 -An | sed 's/ /:/g'; }
$ random_mac
3a:cc:9a:1a:fb:00
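
To sanity-check a generated address, you only need the first octet: unicast plus locally administered means (octet & 0x03) equals 2. Here is a small helper in the same shell style (the function name is just for illustration):

$ check_mac() { [ $(( 0x${1%%:*} & 0x03 )) -eq 2 ] && echo valid || echo invalid; }
$ check_mac 3a:cc:9a:1a:fb:00
valid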

Here are 16 valid locally administered unicast MAC addresses generated from the shell:

$ for i in {1..16}; do random_mac; done
ca:4a:db:c0:b5:d3
c2:67:92:8b:3c:f2
1e:ca:bb:2d:0c:2b
fa:86:2e:61:90:8c
c6:f4:17:50:5f:c2
12:1b:db:55:c9:36
fe:5f:7f:34:36:49
3a:2e:be:b0:11:17
4a:85:af:03:ca:3c
22:64:7c:49:fd:1f
4a:cb:16:5a:18:1c
d6:10:25:5b:86:42
a2:2d:0f:1c:49:c5
a2:b7:a0:46:72:1c
d2:ac:73:2c:55:5b
5a:56:45:b7:94:61

If you wanted to have a random mac address assigned to your wireless NIC every time you brought up your network interfaces on Debian or Ubuntu, you could write the following shell script, and place it in the "/etc/network/if-pre-up.d/" directory:

#!/bin/sh
# Build a locally administered unicast MAC address from /dev/urandom
LLADDR=$(printf '%02x' $((0x$(od /dev/urandom -N1 -t x1 -An | cut -c 2-) & 0xFE | 0x02)); od /dev/urandom -N5 -t x1 -An | sed 's/ /:/g')
ip link set wlan0 address "$LLADDR"

Make sure it's executable:

$ sudo chmod a+x /etc/network/if-pre-up.d/random_mac.sh

Playing Card Ciphers

For the past couple of weeks, I've been focused heavily on hand ciphers for field agents. Although I'm certainly no expert on cryptography, aside from the One Time Pad (OTP), I've had a hard time finding any hand cipher that would be considered secure in the computer age. There's no doubt that field agents are very likely using computers with SSL and GPG, among other crypto tools, to communicate with each other. The romantic days of "Spy versus Spy", encrypting and decrypting notes by hand, doing dead drops in a tree stump in a park, and speaking 15 foreign languages fluently, are probably over. Regardless, I could not bring myself to believe that there were absolutely no secure hand ciphers to study.

I had already been familiar with Bruce Schneier's "Solitaire" card cipher, and always considered it a fairly creative hand cipher. Despite its bias, it's still very secure, although slow to execute by hand, and very error-prone. But this got me thinking: has anyone else attempted to create hand ciphers with playing cards, and if so, how far have they taken it?

After searching the Internet, I found five additional playing card ciphers, and adapted one mechanical cipher into a playing card cipher. Each comes with its own level of security. While I was at it, I created my own playing card cipher, and I'm still evaluating its security. That brings the total list to eight playing card ciphers.

As you can see, I've spent a great deal of time learning each of the algorithms, and have typed them up by hand on my own personal wiki. Quadibloc is the only card cipher, at the time of this writing, that I am still learning and working on. I'm hoping this can be a centralized repository for all things playing card ciphers. My goals for this personal project are:

  • Publish software implementations of each card cipher.
  • Publish online videos giving a tutorial of each card cipher.
  • Learn the strengths and weaknesses of each card cipher.

The big advantage of using playing cards, in my opinion, is the ability for the deck to maintain state while working through the algorithm. A standard deck of cards can have a maximum key space of 52!, which is about 225 bits of entropy. This is likely larger than many SSL keys on the Internet protecting your bank login. So, provided the algorithm is non-linear, mixes the deck thoroughly on each round, and is not biased, it is possible that the algorithm could resist attacks from even the most well funded adversaries.
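
You can verify that entropy figure in the shell, summing log2(k) for k = 2 through 52:

$ seq 2 52 | awk '{ bits += log($1)/log(2) } END { printf "%.2f\n", bits }'
225.58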

There is still work to be done, and I doubt this will be of any value to the general cryptographic community. After all, the OTP can be mathematically proven to be unbreakable, and computer algorithms are fast and error-free. So, who in their right mind would want to learn hand ciphers with playing cards, when they won't have a mathematical proof of unbreakability, they're slow, and error-prone?

Me.

Talon

I've been obsessing over the past couple of weeks trying to improve Bruce Schneier's Solitaire cipher, aka "Pontifex". The more I think about it, the more I realize that there just isn't a lot that can be done about the bias of Pontifex without severely slowing down the already slow algorithm. So, instead of trying to improve on his algorithm, I came up with my own: "Talon".

This cipher discards the two jokers, so you only need the standard 52 cards from a poker or bridge set (4 suits, Ace through King). As with Pontifex, this is an output-feedback-mode stream cipher. Also, when determining the value of the output card, the same suit order is used:

  • Clubs - face value + 0
  • Diamonds - face value + 13
  • Hearts - face value + 26
  • Spades - face value + 39

An unkeyed deck would have Ace through King of Clubs, followed by Ace through King of Diamonds, followed by Ace through King of Hearts, followed by Ace through King of Spades.

This algorithm is executed in 4 steps. You will be making four discard piles, or "talons", labeled 1, 2, 3, & 4 from left to right:

  1. Create four discard piles. With the deck face-up in your hand, place the top card in discard pile #1, the 2nd card in discard pile #2, the 3rd card in discard pile #3, and the 4th card in discard pile #4. For example, if the deck was unkeyed, then the Ace of Clubs would be in discard pile #1, the Two of Clubs in #2, the Three of Clubs in #3, and the Four of Clubs in #4.
  2. Note the face value of discard pile #1, ignoring suit, count that many cards minus 1 from the top of the deck, and place them on top of discard pile #1. If the card was a Jack, then count 10 cards from the face-up deck in your hand, and place them on top of the Jack. Do the same for the other three piles, in order (#2, then #3, then #4). In other words, the first card placed down in the discard pile, or "talon", determines the total number of cards in that stack.
  3. Collect the piles by placing discard pile #1 on top of pile #2 on top of pile #3 on top of pile #4, and place the stack behind the face up deck in your hand. If all 52 cards were in discard piles (13 cards in each pile), then place the newly collected stack in your hand, face up.
  4. Find the output card by summing the deck value of the top card with the deck value of the bottom card, modulo 52. Count down that many cards into the deck, and record the value of the next card. If the top card is the Queen of Hearts, then its value would be 12 + 26 = 38. And if the bottom card is the Six of Diamonds, then its value would be 6 + 13 = 19. (38 + 19) mod 52 = 5. Count 5 cards from the top of the deck, and record the face value of the 6th card, including suit. This ensures that each card in the deck is equally likely to be the output card. To make sure you are counting correctly: if the sum mod 52 is 0, you would record the value of the top card; if the sum mod 52 is 51, you would record the value of the bottom card.

The key lies in the initial order of the deck, as with other designs. It can be keyed with random shuffling, bridge puzzles, or passphrases. If a passphrase is used, step 4 is replaced:

  1. Pass cut. Get the numerical value of the first character in your passphrase, where A = 1, B = 2, ..., Y = 25, Z = 26. As with the original step #4, count that many cards minus 1 from the top of the deck, and cut them below the rest of the cards. Repeat for each remaining character in the passphrase. This step breaks the reversibility of Talon while keying the deck.

Example:

Suppose we start with an unkeyed deck. Our top card will be the Ace of Clubs, with the deck face up in our hands, and the bottom card will be the King of Spades.

After step 1, we would have the following 4 discard piles:

#1  #2  #3  #4
--------------
AC  2C  3C  4C

After step 2, our discard piles would look like:

#1  #2  #3  #4
--------------
AC  2C  3C  4C  <-- Bottom of discard piles
    5C  6C  8C
        7C  9C
            10C

Remaining in my hand would be:

JC,QC,KC,AD,2D,3D,4D,5D,6D,7D,8D,9D,10D,JD,QD,KD,AH,2H,3H,4H,5H,6H,7H,8H,9H,10H,JH,QH,KH,AS,2S,3S,4S,5S,6S,7S,8S,9S,10S,JS,QS,KS

After step 3, the order of my hand would now be:

JC,QC,KC,AD,2D,3D,4D,5D,6D,7D,8D,9D,10D,JD,QD,KD,AH,2H,3H,4H,5H,6H,7H,8H,9H,10H,JH,QH,KH,AS,2S,3S,4S,5S,6S,7S,8S,9S,10S,JS,QS,KS,AC,5C,2C,7C,6C,3C,10C,9C,8C,4C

For step 4, the Jack of Clubs is the top card, with a value of 11 + 0 = 11, and the Four of Clubs is the bottom card, with a value of 4 + 0 = 4. (11 + 4) mod 52 = 15. Counting 15 cards gives me the King of Diamonds as the 16th card, my output card. I would write down 26 as my output number (13 + 13 = 26).

After another round, my deck would be ordered as:

9S,10S,JS,QS,KS,AC,5C,2C,7C,6C,3C,10C,9C,8C,4C,JD,10D,9D,8D,7D,6D,5D,4D,3D,2D,JC,9H,8H,7H,6H,5H,4H,3H,2H,AH,KD,QD,QC,8S,7S,6S,5S,4S,3S,2S,AS,KH,QH,JH,10H,KC,AD

Because my top card is the Nine of Spades (9 + 39 = 48) and my bottom card is the Ace of Diamonds (1 + 13 = 14), the sum is (48 + 14) mod 52 = 10. Counting 10 cards gives me the Three of Clubs as the 11th card, my output card. I would write down 3 as my output number (3 + 0 = 3).

Talon is reversible, does not need an IV like Mirdek, and is less error-prone than Pontifex. It greatly reduces the chance of bias by quickly mixing the internal state through the discard piles, with 4 cuts in total per round. Unfortunately, according to Bruce Schneier, the chances of this cipher being secure and efficient are negligible:

I see about two new cipher designs from amateur cryptographers every week. The odds of any of these ciphers being secure are slim. The odds of any of them being both secure and efficient are negligible. The odds of any of them being worth actual money are virtually non-existent.

Currently, I do not have an implementation in Python or C. I also do not know the period of the internal PRNG. My testing shows it is sufficient for small messages, such as the length of a tweet, which is practical for field operation. My testing also shows no internal bias in the system. One thing I did catch during my analysis is that the sums of the discard piles follow a Poisson distribution. I'm not sure how this will affect the security of Talon, if at all.

The Outrageous Fares Of The Utah Transit Authority

I hope my readers don't mind, but I'm going to break from the standard geekery for a second, and type up a personal post about something that has been bugging me for the past few years.

For those that don't know, I live in Utah, and work in Salt Lake City for the best local ISP on the planet. I also teach a Linux certification course at the University of Utah 1 night per week as a faculty adjunct instructor. As a benefit of teaching at the University, I get a monthly Utah Transit Authority (UTA) premium transit pass that allows me to ride FrontRunner, TRAX, and standard bus services, without paying out of my personal pocket.

I'll spare you my personal philosophy that all public transit should be paid for in gas taxes at the pump. However, according to a report by KSL, the UTA is one of the most expensive transit systems in the United States. The Salt Lake Tribune also ran a similar story. Just in case the link goes dead, here is a paragraph that I found very telling:

Of 169 U.S. transit agencies that participated in the APTA survey, only five will now have higher base fares for buses than UTA. They are in New York City, $2.75; San Francisco, $4; Nashville, $4; Sugar Land, Texas, $3.50; and Eden Prairie, Minn., $3.

Another four systems match the $2.50 to be charged for buses by UTA. They are in Atlanta, Pittsburgh, Portland and Sacramento. The 2012 APTA survey said the median base fare for buses among U.S. transit agencies was $1.50 per trip.

UTA's TRAX also will have among the highest fares nationally for light rail.

No agency included in the 2012 APTA survey now has fares higher than the $2.50 to be charged by UTA — but four match it. They are in Sacramento, San Diego, Pittsburgh and Dallas. Of 21 light-rail agencies surveyed by APTA in 2012, the median light rail base fare was $2 a trip.

That means among transit systems nationally that have lower bus or train fares than UTA are such places as Chicago, Los Angeles, Phoenix, Denver, Philadelphia, Cleveland and Dallas.

Now, I take advantage of my university pass on the UTA, when it's valid. However, it's only valid during trimesters. Between trimesters, because I am adjunct faculty, my card is deactivated on the last pay period of the trimester, and activated on the first pay period of the next. So, there is about a full month where I cannot use my university pass, and must pay full fare to ride transit.

However, the UTA is offering a 20% discount if you use their tracking FAREPAY card. It's a pre-paid card that you can use as often as you need, provided a balance remains on the card to pay the fare. Because I am currently using the FAREPAY system for about 3 months of the year, I was curious how I could optimize my costs between trimesters. Currently, I drive to a FrontRunner station (a diesel commuter train) which takes me to a TRAX stop (a light rail train). The stop is in the "free fare" zone, so there is no charge to take TRAX to the stop near my office.

So, is it cheaper in both gas and UTA fares to drive to the closest FrontRunner station? Or should I drive a station or two away? Where will I get the cheapest per month cost for getting into work from home? I only live 30.7 miles away from the office by the most direct route.

Turns out, it doesn't involve the UTA at all. Not only is driving directly into the office considerably faster (45 minutes one way commute versus 65 minutes on transit), but it's cheaper. Considerably cheaper.

Here is a table showing my work. There are a total of 6 FrontRunner stations between my home and my office. In order, they are Clearfield, Layton, Farmington, Woods Cross, North Temple, and Salt Lake Central. I work a standard 9-5 M-F day job, and once per week, I must take TRAX up to the university.

First, I have a fuel efficient car. I've averaged about 32-35 miles per gallon with mixed city and freeway driving from home to work and back. I've actually logged 45 mpg many times, but that's usually almost straight freeway driving with hypermiling strategies, and it isn't that difficult to get 40 mpg in my car. Regardless, I'll use 32 miles per gallon as the basis for my commuter costs. With an average Utah gas price of $3.61 per gallon, that's (61.4 miles / 32 mpg) × $3.61 ≈ $6.93 per round trip, or about $150 per month in gasoline to commute round trip to work from home. Looking over my payment history with my bank, this value is almost exactly spot-on.

Here are the distances from my house to each of the FrontRunner stations:

To Station    One Way  Gallons   PPG    Gas    Round Trip
Clearfield    4.2 mi   0.13125   $3.61  $0.47  $0.95
Layton        6.6 mi   0.20625   $3.61  $0.74  $1.49
Farmington    13.4 mi  0.41875   $3.61  $1.51  $3.02
Woods Cross   20.6 mi  0.64375   $3.61  $2.32  $4.65
North Temple  29 mi    0.90625   $3.61  $3.27  $6.54
Salt Lake     30.3 mi  0.946875  $3.61  $3.42  $6.84

Looking strictly at UTA fares, here is what it would cost me to get into the office from each FrontRunner station:

From Station  Fare   80% Fare  Round Trip  80% Round Trip
Clearfield    $4.30  $3.50     $8.60       $7.00
Layton        $3.70  $3.00     $7.40       $6.00
Farmington    $3.10  $2.50     $6.20       $5.00
Woods Cross   $2.50  $2.00     $5.00       $4.00
North Temple  $2.50  $2.00     $5.00       $4.00
Salt Lake     $0.00  $0.00     $0.00       $0.00

So, combining gas prices and UTA fares, here would be the total cost per stop to get to my office:

From Station  One-way  80% One-way  Round Trip  80% Round Trip
Clearfield    $4.77    $3.97        $9.55       $7.95
Layton        $4.44    $3.74        $8.89       $7.49
Farmington    $4.61    $4.01        $9.22       $8.02
Woods Cross   $4.82    $4.32        $9.65       $8.65
North Temple  $5.77    $5.27        $11.54      $10.54
Salt Lake     $3.42    $3.42        $6.84       $6.84

It's clearly cheaper to drive all the way to the Salt Lake Central station, and take TRAX to the office, as that rail fare is in the "Free Zone", at no extra cost to me. However, if I don't want to drive all the way into Salt Lake, then driving to the Layton station, 6.6 miles away, is optimal.

I can't forget that once per week, I need to take TRAX up to the university to teach. This is a $5 round-trip ticket, good for all destinations on TRAX, all day long, or $4 with FAREPAY. Including that in my final cost, what would my gas and UTA fares look like in a 5-day work week? Per month? Per year? (Note: the "80%" headers show the cost of gas plus the 20% FAREPAY discount applied to fares; it is not 80% of each total.)

From Station  Weekly  80% Weekly  Yearly     80% Yearly  Monthly  80% Monthly
Clearfield    $52.74  $43.74      $2,742.38  $2,274.38   $228.53  $189.53
Layton        $49.45  $41.45      $2,571.17  $2,155.17   $214.26  $179.60
Farmington    $51.12  $44.12      $2,658.08  $2,294.08   $221.51  $191.17
Woods Cross   $53.24  $47.24      $2,768.45  $2,456.45   $230.70  $204.70
North Temple  $62.72  $56.72      $3,261.21  $2,949.21   $271.77  $245.77
Salt Lake     $39.18  $38.18      $2,037.47  $1,985.47   $169.79  $165.46

If I drove to the Salt Lake Central station 5 times per week, and paid for a round trip TRAX ticket to the university once per week, I'm looking at about $170 per month, or $165 with FAREPAY. If I take FrontRunner from my most cost-efficient station, it's $214 per month, or $180 per month with FAREPAY. From the closest station, I'm looking at $230 per month or $190 per month with FAREPAY. UTA charges $198 per month for a premium pass (which is required to use FrontRunner and express buses), which doesn't include my gas costs driving to each station.

$214 is over a 40% increase over the $150 it costs to commute to work in my car (and the FAREPAY discount ends Dec 31, 2014), and $198 is over a 30% increase. I will give an extra $700 per year to the UTA for using their services. Over 10 years, that's almost half the cost of a base model Toyota Corolla, brand new. So, when purchasing a new car, if there are plans on using the UTA, realize that the cost of your new car went from $15,000 to $22,000.

As a frequent rider of the UTA system, watching people use their passes, and gauging who is riding by dress, demeanor, and conversation, I would dare say that 80-85% of riders are on FrontRunner and TRAX only because they have an education pass, or because their pass is paid for or reimbursed by their employer. In other words, most people ride the UTA because they don't have to drive and they're not paying for it personally, which is probably due to the astronomical fare prices. Of course, I don't have access to ridership data, so this is speculation based on observation while riding, but I doubt I'm far off.

I would like to say that the fare goes toward quality services, buses, and trains, but it doesn't. The FrontRunner and TRAX stops have very little overhead protection from the sun and weather, and even fewer seats. The FrontRunner has three state-of-the-art Bombardier double-decker commuter cars, and one refurbished 1970s Comet car. Some TRAX trains are new; many are old and tattered. Service reliability is not great across the rail systems (I don't have any numbers, just personal experience), and some stations are in need of repair (the Clearfield station was getting some tile replaced, but the work is only half-finished, and has remained so for a few months, with no evidence of progress).

It is true that I don't have to deal with traffic jams on the freeway, but it has rarely taken me more than an hour to reach the office, even in the most deadlocked situations. Not having to drive in bad weather is a bonus, and I can get work done while on the train, provided I can find a table in the commuter car. Still, I hardly think these luxuries are worth the extra $700 per year given to the UTA, considering the setbacks of their infrastructure.

As a former resident of Toronto, Canada, I find the UTA fares outrageous. I came from a city where fares are currently less than $135 per month for access to all TTC buses, streetcars, light rail, and subways. The TTC rail stations are vastly superior to UTA stations, providing adequate overhead protection and generous seating, and they are well-maintained. The system is much more reliable (I didn't experience a single delay in over two years), and the train cars are newer across the board. I wish I were familiar with other transit systems to compare, but the TTC is the only other transit system I'm deeply familiar with. Notice also that while an individual ride on the TTC might be $3, the monthly pass is $133. Compare that to the UTA, where a single ride is $2.50 and the monthly premium pass is $198. (It's true that the TTC does not include GO Transit, the long-distance commuter rail, and GO Transit does not offer discounts for TTC card holders, so this may not be exactly apples-to-apples.)

I hope the UTA comes down considerably in price (about half), because when the University of Utah decides it doesn't want to pay the UTA for my transit pass, I'll be back on the road commuting in my car.

UPDATE:
As a partial solution to reducing fares, aside from increasing the state gas tax, maybe the executive officers, directors, and managers of the Utah Transit Authority could take a pay cut. I'd wager that $511,000 puts the COO of the UTA in the top 1 or 2 percent of Utah salaries.

What's The Matter With PGP?

http://blog.cryptographyengineering.com/2014/08/whats-matter-with-pgp.html has been making its way around crypto circles recently. Enough so, that I figured I would comment by posting a reply on my own blog.

This post is written by cryptographer Matthew D. Green. As a result, even though I'd never heard of Dr. Green, when I read the post, I was expecting something of high caliber. After all, this isn't some random dude on the Internet blowing his horn. THIS IS A GENUINE CRYPTOGRAPHER. Turns out, the post is complete junk. It's about as thoughtful and informative as the Nigerian 419 scam. Let's analyze the post, top-to-bottom.

First off, as a TL;DR, Dr. Green thinks PGP is Teh Fail. After reading the post, it almost seems to me that he just barely installed PGP on his desktop, couldn't figure most of it out, got confused by a great deal of the terminology (isn't he a cryptographer?), and threw his hands up in defeat. When you read it a 2nd time, you notice almost immediately that he is confusing a few things:

  • The OpenPGP specification.
  • The GnuPG implementation of the OpenPGP specification.
  • Mail client frontends to GnuPG.

Are they all Teh Fail, or just the OpenPGP specification? After reading it a few times, I'm convinced he is rolling up anything and everything to do with PGP, past, present, and future, as total failure. It's just unfortunate he can't set things straight between the OpenPGP specification, the GnuPG implementation, or a mail client.

Meh. Color me unimpressed, Dr. Green.

1. PGP Keys Suck
My favorite line out of his post is this:

"For historical reasons [PGP keys] tend to be large and contain lots of extraneous information, which it difficult to print them a business card or manually compare."

Yes, they are definitely large. But printing the entire key on a business card? Really?? I've been using GnuPG since 2004, and I've never heard of anyone wanting to compare public asymmetric keys character-by-character, let alone print a full asymmetric key on a business card. This is what the SHA-1 fingerprint is for. It's been generally accepted that comparing the 40-character hexadecimal fingerprint is sufficient to know that two people hold the same key. Key signing parties are based on this principle. Fingerprints, as a result, have been printed on business cards. Example 1, example 2, example 3.
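
For reference, this is all it takes to display the fingerprint with GnuPG (the key ID and output below are illustrative, not real):

$ gpg --fingerprint 0xDEADBEEF
pub   2048R/DEADBEEF 2014-01-01
      Key fingerprint = 0123 4567 89AB CDEF 0123  4567 89AB CDEF 0123 4567

Those 40 hexadecimal characters fit on a business card just fine.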

Regardless, asymmetric keys in general are nasty to deal with by hand. This isn't just a problem with GnuPG, but OpenSSH, OpenSSL, or any other front end implementation that uses public key cryptography. This shouldn't come as a surprise. This is why we have tools to work with them. Things like Gnu Privacy Assistant, KGPG, Enigmail, GnuPG, and others. There is nothing in the OpenPGP specification that defines how you should implement such a front end. As a result, this is not the fault of OpenPGP. If you don't like the way GnuPG handles transferring public keys from one destination to another, propose a patch?

Without taking a breath, he then complains that your request for a GnuPG key is over plaintext HTTP. Well, okay. Sure. HTTPS/HKPS would be optimal to transfer the key, and he rightfully identifies a bug where GnuPG doesn't verify that the correct key was downloaded. But, again, this is not a problem with OpenPGP, which defines nothing about infrastructure. Maybe the key servers are poorly designed, and maybe OpenPGP front ends like GnuPG have bugs, but that's hardly a problem with the OpenPGP specification. If you don't like the key server implementations, maybe submit a patch?

Then his rant turns to OpenPGP infrastructure, by bitching about public key servers (it actually seems like he's comparing everything to miniLock, and setting miniLock as the Gold Standard). His first gripe is getting keys via Twitter:

"If we must exchange things via Twitter, why not simply exchange keys?"

Really? Twitter? Do we want to rely on Twitter for getting public keys? We can't get them from an email, or in person, or from a website, or from a business card, or some other form of informational transfer? Just, Twitter?

Finally, he opens up ECC keys, with the following:

"Modern EC public keys are tiny."

Seems Dr. Green has some sort of obsession with ECC keys. Earlier he mentions that you can fit ECC keys on business cards. He continues comparing a lot of OpenPGP to ECC keys. I'll address that later.

2. PGP key management sucks
There really isn't much to say here. Every point I argued previously, in that OpenPGP provides no infrastructure, applies here. However, he does hit one topic which strikes me as odd: the Web of Trust. He says the following:

"Most OpenPGP implementations do a lousy job of presenting any of this data to their users anyway."

OpenPGP implementations should never define trust for the user. That should always be at the discretion of the user. OpenPGP implementations should only define accuracy, such as correctly decrypting an encrypted piece of text, or successfully verifying a digital signature, because that's all the OpenPGP specification defines. I decide whether or not to trust the data given to me by someone else. And most of the tools I have dealt with properly work this way. A trust database exists on the computer, allowing me to say "I trust this key", "I don't trust this key", and so forth. I define how those trust relationships are built, and I can put them into the database, so OpenPGP front ends can notify me whether or not I am dealing with trustworthy data. But notifications should be the extent of the Web of Trust in OpenPGP implementations. And just because I signed a key doesn't necessarily mean I trust data from that key's owner, so the tool should never make that assumption.
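
This is exactly how GnuPG behaves: assigning owner trust is an explicit, interactive decision left to me (the key ID below is hypothetical):

$ gpg --edit-key 0xDEADBEEF
gpg> trust
Your decision? 4    # "I trust fully" is my call to make, not GnuPG's
gpg> save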

3. No forward secrecy
Okay. I get it. People like perfect forward secrecy (PFS), even to the point of obsession. I'm not one of those people. I use OTR, and I implement PFS on SSL where appropriate, but I'm not going to lose any sleep over the fact that PFS isn't defined in OpenPGP. OpenPGP has its problems. PFS isn't one of them, for a couple of reasons.

First, email is generally looked upon as long-term communication. Most people want to read their email, not destroy it. Many use long-term archival to go back to email replies in a thread, to look up purchase orders, or to recall sensitive employee information. Email has value in multiple views. PFS just doesn't fit well here. If you want PFS in your communication, look at something where there is not high value in retrieving old communication, such as instant chat. Off-the-record (OTR) messaging works well here, but even then, it has its problems. As someone who logs every chat message that comes in, I want the ability to decrypt old communication sent to me, for a number of reasons. OTR makes this impossible. OpenPGP makes this doable.

Second, GnuPG cryptographically signs encrypted data by default. Thus, if you were to use an OpenPGP implementation for end-to-end communication, all messages have been cryptographically signed. PFS runs counter to signed data: prior messages should not only be un-decryptable, they should also not identify who sent them. Because OpenPGP implementations build everything around a Web of Trust, I find it difficult to see where PFS would fit at all.

However, this sentence downright scares me:

"If the NSA is your adversary just forget about PGP."

What does that mean? The NSA has beaten OpenPGP? I doubt it. Strongly doubt it, actually. So much so, I'm really beginning to wonder about Dr. Green's competence as a cryptographer. Dude, look at the specification, and tell me the NSA has beaten every cryptographic cipher and hash in that RFC. I don't buy into the conspiracy theories that the NSA has some super-duper-uber-mega-cracking-datacenter, and AES-256 is child's play to the NSA. I trust the mathematics, and I trust academia. Maybe the NSA has their own microprocessor manufacturing plant? That has outpaced Intel? Yeah, not buying it.

However, maybe Dr. Green means to say that if you're targeted by the NSA, then before you even use your OpenPGP implementation to encrypt some communication, the NSA has a copy. But then, how is this a problem of OpenPGP and its implementations?

As a cryptographer, to wave your hand like a Jedi, claiming that PGP is worthless in front of the NSA, is frightening. Not because there might be some truth to it (there isn't), but because that's just amateur FUD slinging. For someone with a Ph.D. in the field, I would expect better.

4. The OpenPGP format and defaults suck
Hahahahahahahahahahahah. Uh, no. Not even close. I literally laughed out loud when I read that section. So much so, I showed a couple of mathematicians and amateur cryptographers, and we had a good time.

Chapter 9 of RFC 4880 (the OpenPGP RFC) defines the following constants:

9.1. Public-Key Algorithms

ID Algorithm
-- ---------
1 - RSA (Encrypt or Sign) [HAC]
2 - RSA Encrypt-Only [HAC]
3 - RSA Sign-Only [HAC]
16 - Elgamal (Encrypt-Only) [ELGAMAL] [HAC]
17 - DSA (Digital Signature Algorithm) [FIPS186] [HAC]
18 - Reserved for Elliptic Curve
19 - Reserved for ECDSA
20 - Reserved (formerly Elgamal Encrypt or Sign)
21 - Reserved for Diffie-Hellman (X9.42,
as defined for IETF-S/MIME)
100 to 110 - Private/Experimental algorithm

Implementations MUST implement DSA for signatures, and Elgamal for
encryption. Implementations SHOULD implement RSA keys (1). RSA
Encrypt-Only (2) and RSA Sign-Only are deprecated and SHOULD NOT be
generated, but may be interpreted. See Section 13.5. See Section
13.8 for notes on Elliptic Curve (18), ECDSA (19), Elgamal Encrypt or
Sign (20), and X9.42 (21). Implementations MAY implement any other
algorithm.

9.2. Symmetric-Key Algorithms

ID Algorithm
-- ---------
0 - Plaintext or unencrypted data
1 - IDEA [IDEA]
2 - TripleDES (DES-EDE, [SCHNEIER] [HAC] -
168 bit key derived from 192)
3 - CAST5 (128 bit key, as per [RFC2144])
4 - Blowfish (128 bit key, 16 rounds) [BLOWFISH]
5 - Reserved
6 - Reserved
7 - AES with 128-bit key [AES]
8 - AES with 192-bit key
9 - AES with 256-bit key
10 - Twofish with 256-bit key [TWOFISH]
100 to 110 - Private/Experimental algorithm

Implementations MUST implement TripleDES. Implementations SHOULD
implement AES-128 and CAST5. Implementations that interoperate with
PGP 2.6 or earlier need to support IDEA, as that is the only
symmetric cipher those versions use. Implementations MAY implement
any other algorithm.

9.3. Compression Algorithms

ID Algorithm
-- ---------
0 - Uncompressed
1 - ZIP [RFC1951]
2 - ZLIB [RFC1950]
3 - BZip2 [BZ2]
100 to 110 - Private/Experimental algorithm

Implementations MUST implement uncompressed data. Implementations
SHOULD implement ZIP. Implementations MAY implement any other
algorithm.

9.4. Hash Algorithms

ID Algorithm Text Name
-- --------- ---------
1 - MD5 [HAC] "MD5"
2 - SHA-1 [FIPS180] "SHA1"
3 - RIPE-MD/160 [HAC] "RIPEMD160"
4 - Reserved
5 - Reserved
6 - Reserved
7 - Reserved
8 - SHA256 [FIPS180] "SHA256"
9 - SHA384 [FIPS180] "SHA384"
10 - SHA512 [FIPS180] "SHA512"
11 - SHA224 [FIPS180] "SHA224"
100 to 110 - Private/Experimental algorithm

Implementations MUST implement SHA-1. Implementations MAY implement
other algorithms. MD5 is deprecated.

Please identify what is wrong with this specification. GnuPG implements IDEA, CAST5, 3DES, AES, AES-192, and AES-256 as ciphers, and RIPEMD-160, SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512 as cryptographic hashes. What is wrong with these defaults? Please, enlighten us, Dr. Green.

However, Dr. Green is mixing what the OpenPGP specification requires and what implementations use. The RFC exists as a core minimum set of standards, defining what is required to use as a base, with things optional and deprecated. However, GnuPG (as well as PGP) have both gone well beyond the defaults that the RFC requires.

But then there's this little gem:

"Elliptic curve crypto is (still!) barely supported."

He's back at ECC. Of course, this shouldn't come as much of a surprise, but there is a good reason most academic and open source cryptographers have steered clear of ECC: software patents.

According to Bruce Schneier as of May 31, 2007, "Certicom certainly can claim ownership of ECC. The algorithm was developed and patented by the company's founders, and the patents are well written and strong. I don't like it, but they can claim ownership."

Dr. Thomas Pornin (one cryptographer on Stack Exchange I have great respect for) answers the ECC patents this way:

"To my knowledge (but I am not entitled, in any way, to give law-related advice), parts of ECC cryptography which may still be patented are:

  • implementation of curves over binary fields using normal bases;
  • point compression;
  • acceleration of Koblitz curves using the Frobenius endomorphism;
  • various optimization tricks on dedicated hardware architectures (FPGA, ASIC)."

However, there is finally an RFC for ECC in OpenPGP. It defines three curves, which are already standardized by NIST (PDF): Curve P-256, Curve P-384, and Curve P-521. This document was standardized only two years ago, in June 2012. Unfortunately, these three curves do not meet the "safe curves" definition as defined by Dr. Daniel J. Bernstein ("djb").

GnuPG will likely use EdDSA and Ed25519 as default for signatures (the same algorithms supported by OpenSSH actually). Werner did implement Brainpool P256r1, P384r1, and P512r1, even though it is not defined in the RFC.

5. Terrible mail client implementations
Again, mixing the OpenPGP standard, which does nothing to define how an MUA should behave, with a front end to a specific OpenPGP implementation. Regardless, there are some hidden gems in this section.

First, his claim that we're storing the passphrase to the private key in memory:

"[Mail clients] demand you encrypt your key with a passphrase, but routinely bug you to enter that passphrase in order to sign outgoing mail -- exposing your decryption keys in memory even when you're not reading secure email."

Hahahahahaha. Oh, boy. I guess we haven't heard of "GnuPG Agent"? Yeah, there's this thing that exists, setting up a "token"-based system. If you can successfully decrypt your private key, GnuPG sets up a token, which is cached in RAM. Then, the MUA just needs to check if such a token exists. If so, it can use the full features of decryption and signing, without ever asking the user for the passphrase.
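
As a minimal sketch of the workflow (the cache lifetimes are illustrative, and message.asc is a hypothetical file):

$ eval $(gpg-agent --daemon --default-cache-ttl 600 --max-cache-ttl 7200)
$ gpg --use-agent --decrypt message.asc   # first run prompts for the passphrase
$ gpg --use-agent --decrypt message.asc   # subsequent runs use the agent's cache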

Regardless, if an attacker has access to your RAM to read its contents, then you have bigger fish to fry, and there is nothing in the OpenPGP specification to help you.

Now this little gem:

"Most of these problems stem from the fact that PGP was designed to retain compatibility with standard (non-encrypted) email. If there's one lesson from the past ten years, it's that people are comfortable moving past email."

Really, I'm fairly confident that I don't need to say much here, but I'll just end this section with my own personal thought:

"If there's one lesson from the past 50 years, it's that people are not comfortable moving past email."

What's cute is his pointing to the vendor-controlled messaging systems provided by Apple, Facebook, and WhatsApp as the way to do secure communication. All proprietary, all vendor lock-in, all reliant on 3rd-party infrastructure, and all encrypted and decrypted client-server rather than end-to-end. OpenPGP, by contrast, is end-to-end encryption, decentralized, and ad hoc. More importantly, the largest implementation (GnuPG) is Free and Open Source Software.

6. So what should we be doing?
Well, for one, not listening to Dr. Green. OpenPGP has legitimate concerns, no doubt. But they aren't what are described in the article. He followed a white rabbit down a hole, when he should have just been studying the specification, and identifying its real problems.

For example, if using OpenPGP in email, all of the email headers remain in plaintext. Only the body is encrypted. This metadata can reveal a great deal of information, such as the sender and recipient, the SMTP servers used in the exchange, the subject content, additional recipients on the carbon copy, etc.

Further, because OpenPGP encrypts data in packets, "header packets" exist in every OpenPGP encrypted file. These header packets can also reveal a great deal of information, such as who encrypted the data, who the recipient of the encrypted data is for, when the encrypted data was created, and other things.
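
You can see these header packets yourself with GnuPG (the key ID and output here are illustrative):

$ echo "hello" | gpg --encrypt --recipient 0xDEADBEEF > msg.gpg
$ gpg --list-packets msg.gpg | head -1
:pubkey enc packet: version 3, algo 1, keyid AAAAAAAADEADBEEF

The recipient's key ID sits in the clear, before any decryption happens.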

In other words, one problem with OpenPGP is plausible deniability. If you want your encrypted data to be indistinguishable from true random, you're going to fail miserably using OpenPGP. OpenPGP concerns itself with message confidentiality and integrity; deniability just isn't in its scope.

For me, throwing away PGP because of a number of misguided personal opinions seems like a Big Mistake. GnuPG has been in development since 1999. OpenPGP itself has undergone revisions, improving the specification. PGP, which OpenPGP built itself upon, is even older, entering the arena in 1991. Throwing 23 years of development, bug fixing, and revisions out the window, just because one troubled cryptographer doesn't like it, seems akin to the very tardy, and incomplete, Netscape 6.0 release.

Entropy As A Service

Back in October 2012, I announced hundun.ae7.st. It's an "entropy server" that delivers encrypted random bits that are indistinguishable from true random, directly to your GNU/Linux input entropy pool. Per the Linux CSPRNG source code, the bits in the input entropy pool are then cryptographically hashed with SHA1 before sending the bits to the blocking and non-blocking pools. The bits generated by my entropy server come only from 5 Entropy Keys, which appear to no longer be in production. I'm working on a "version 2.0" of the server, where more true random sources will be mixed into the pool, such as the onboard Broadcom HWRNG, atmospheric noise, haveged(8), and 2 TrueRNG HWRNGs. Keep an eye out both on this blog, and the entropy server web page for updates.

However, it appears that I am not the only one who has thought of this idea.

In September 2013, the U.S. National Institute of Standards and Technology introduced the NIST Randomness Beacon. Every 60 seconds, a random number is "broadcast", which is cryptographically signed, timestamped, and contains the value of the random number from the previous minute's broadcast. It comes complete with a REST API, which you can take advantage of for your internal applications. They warn, very strictly, that these bits should not be used for cryptographic keys. Instead, it would be more fitting to use these random bits for things such as lottery drawings, dice rolls, and other non-sensitive randomness implementations. However, you could write a REST application that took this data and reseeded the Linux CSPRNG every 60 seconds. Because SHA1 is only theoretically broken, not practically broken, and further because the Linux kernel is receiving other sources of input to reseed the CSPRNG, it seems difficult for an attacker to recover the internal state of your CSPRNG, and thus its output sequence, without compromising the computer. A simple shell script could be something like this (assuming you have rhash(1) installed):

#!/bin/sh
# Reseed the Linux kernel CSPRNG with data from the NIST Randomness Beacon
# Hex-encode the local salt, so the shell variable doesn't mangle NUL bytes
SALT=$(dd if=/dev/urandom bs=64 count=1 2> /dev/null | od -A n -t x1 | tr -d ' \n')
# Epoch timestamp for the top of the current minute, the beacon's record index
EPOCH=$(date --date="$(date +%H:%M:00)" +%s)
# GET(1) is from libwww-perl; curl -s works just as well
RESULTS=$(GET https://beacon.nist.gov/rest/record/"$EPOCH")
# Hash the salt and beacon record together, so NIST never knows what we seeded
DATA=$(echo "${SALT}${RESULTS}" | rhash --sha3-512 - | cut -d ' ' -f 1)
# Writing to /dev/random mixes into the input pool (no entropy credit is given)
echo "$DATA" > /dev/random

Further, in February 2014, Canonical announced entropy.ubuntu.com. Canonical is providing their own application, which currently seems to be packaged only for Ubuntu, called pollinate. The idea is for newly created Ubuntu VMs or containers to immediately have a source of entropy to rely on, quickly feeding the Linux CSPRNG before it is needed for generating cryptographic keys, such as SSH or SSL keys. While some criticize its security, that seems more like FUD from those who don't fully understand the internal workings of the Linux kernel CSPRNG than an actual security concern. Reseeding the CSPRNG is less of an issue for me than Canonical tracking which IP requested bits from their server, and logging it (as could be the case for any service from any provider). It may be true that on first boot, not enough entropy has been gathered to seed the CSPRNG, and as such, initiating a TLS session to a server may open the network connection up to attack. While the theory might be sound, I would like to see a proof of concept, or even an implementation of the attack, before I start FUD slinging myself.

So, as it sits, there are to my knowledge, three Entropy-as-a-Service (EaaS) servers online, each with different implementations, that require different software, that you could use for a variety of applications. Of course, there is also random.org which has been on the Internet for a good long while, long before "as a service" was a buzz term. However, I don't see them as an entropy server as much as an implementation server. They are great for getting dice rolls, lottery picks, card shuffles, etc, but not so much "raw data", even though it does have a "random number sequence" generator, and it does provide an API.

With the revelations about the NSA and the increasing weight put on cryptography as a result, I'm guessing more of these types of servers will pop up. Also, only my service, so far, actually increases your entropy estimate in your kernel. The other services mentioned here only reseed the Linux kernel CSPRNG, without increasing the entropy estimate.

The Linux Random Number Generator

Introduction
There is a lot of misinformation out there about /dev/random and /dev/urandom with regards to the Linux kernel. Unfortunately, not even the random(4) manpage seems to get it right. So, rather than argue with everyone on the Internet, I'll post the inner workings of the Linux kernel with respect to randomness here.

TL;DR
If you're not interested in reading the full post, then call this a good summary:

  • There are 3 separate and distinct entropy pools in the Linux kernel.
  • /dev/random and /dev/urandom both use the same CSPRNG.
  • Use /dev/urandom.

The Entropy Pools
There are three arrays of unsigned integers in the Linux kernel, called entropy pools, each of which maintains its own state. They are the input pool, the blocking pool, and the non-blocking pool. The input pool maintains a maximum internal state of 4096 bits. When looking at the value of "/proc/sys/kernel/random/entropy_avail", you are looking at the current entropy estimate of the input pool. The blocking and non-blocking pools each keep an internal state of just 1024 bits.

Below is an image showing the architecture of the Linux kernel random number generators. Each of the three pools are green. You'll notice that the input pool is fed with data from external events, such as timing interrupt requests, disk and network I/O, as well as human input on the keyboard and from the mouse. The data is hashed with SHA1 as part of the mixing functions in the cryptographically secure pseudorandom number generator (CSPRNG).

[Image: block diagram of the Linux kernel random number generator]

The blocking and non-blocking pools both feed from the same CSPRNG to produce their random output. There are three interfaces through which this data can be read from the CSPRNG. The /dev/random character device will block when the blocking entropy pool is exhausted, whereas both the /dev/urandom character device and the in-kernel get_random_bytes() function will not block, even when entropy has been exhausted.

What's important to understand here, is that both the blocking pool, from which /dev/random draws its data, and the non-blocking pool, from which /dev/urandom draws its data, get their data from the same CSPRNG.
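
You can watch the input pool estimate, and read from the CSPRNG, entirely from user space (the numbers below will vary on your machine):

$ cat /proc/sys/kernel/random/poolsize
4096
$ cat /proc/sys/kernel/random/entropy_avail
3071
$ head -c 8 /dev/urandom | od -A n -t x1
 8f 0c 41 d3 9e 27 b5 6a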

Theoretic versus Computational Security
There are really just two types of security that you should be aware of: theoretic security, and computational (or practical) security.

Theoretic Security: This is security which can be proven through a logical mathematical proof. Two such proofs come to mind (there are probably others): the One Time Pad, and Shamir's Secret Sharing. Both of these schemes can be implemented in software, even though there may be some difficulty in the implementation.

Computational Security: This is security where algorithms are built on problems believed to be hard for classical computers, such as the discrete logarithm problem and the quadratic residuosity problem. It is known that it takes considerable energy, time, and effort to work through these problems when certain criteria are met, such as factoring a number hundreds or thousands of digits long into its component primes. Because it's computationally difficult to solve these problems, we consider algorithms built from them computationally secure.

Probably every cryptographic algorithm you are aware of, such as AES, RSA, and SHA, is computationally secure, not theoretically secure. This means that an adversary with unlimited time, resources, and processing power could break computationally secure algorithms. This is why key and bit sizes are ever increasing: as processing power and computing "tricks" improve, the size of the problem needs to increase.

Random Numbers
There are basically two ways which you can generate random numbers- in hardware or in software (if you are getting random numbers from a Geiger Counter, even though radioactive decay is found in nature, you are using a hardware device to feed it into the computer). In both cases, however, the sequences of numbers could be "true random" or "pseudorandom".

First, I'm not going to jump into the long argument that is "true randomness". This quickly turns into a philosophical debate. Some will say that "true random" only exists in quantum nature, while others will explain that even quantum mechanics obeys mathematical equations with predictable output that we just haven't discovered yet. For me, I'm content with calling the output of some quantum events "true random". While mathematical algorithms in software can only produce pseudorandom numbers, some have such a large cycle, and are so computationally secure, that their output is indistinguishable from true random.

Further, even if you have a "true random" number from nature, how do you say a number by itself is random? It's usually more practical to call a sequence of numbers unpredictable, rather than "random". For cryptography, that's really the only thing that matters. Consider the following sequences of text: one was encrypted with AES, one is "true random" data fed from a hardware random number generator, and the other was pseudorandomly generated:

fa819c3e86d9397184628a5d1c06f65b
76a153e028f1c1084bb56679829e5e95
399916454dbb20cece79224b9be0ca5a

Looking at those three sequences, one has a very rigid structure. It is legitimate encrypted data with AES, while the other two strings are 16 bytes of data read from either /dev/urandom or /dev/hwrng. While examining the sequences of bits, is it computationally feasible to determine the underlying structure that was used to build the encrypted ciphertext, and get back to the original plaintext, without the key? From a theoretic perspective, the sequence of encrypted bits should appear indistinguishable from a truly unpredictable sequence of bits, such as those grabbed from the hardware random number generator.

I'll give you the solution:

$ echo "Random numberS" | aespipe | xxd -ps
Password: 12345678901234567890
fa819c3e86d9397184628a5d1c06f65b

However, suppose you have "true random" numbers; maybe you got them from radioactive decay, diode breakdown, or atmospheric noise. Regardless, you have a sequence of bits that is as "true random" as you can get. So, now what are you going to do with them? If you're going to study some information theoretic security algorithm, you might be doing the world some good. However, if you're going to subject these numbers to a computationally secure, man-made software algorithm running on a deterministic computer, what was gained by having those "true random" numbers? The weakness will then lie in the software algorithm, the unpredictable internal state of the computer, and human factors; not in the numbers from which it was drawn. Subjecting "true random" numbers to a theoretic algorithm is one thing; subjecting them to a man-made computer software algorithm that is at best computationally secure, is another. In all honesty, you are not much better off with a CSPRNG giving you the numbers than you are with the Universe giving them to you, if all you need to do is generate a long-term GPG or SSL key.

Don't get me wrong. I'm not suggesting that having "true random" numbers is a bad thing. It isn't. Injecting "true random" numbers into the input entropy pool of the kernel reseeds the CSPRNG, putting it in a "true random" state from which it can build your pseudorandom numbers. Having "true random" numbers is a good thing, and increasing the input pool entropy is a great thing. In fact, due to the nature of reseeding the CSPRNG, the Linux random number generator "self heals". In other words, if the internal state of the system was known at a fixed point in time, reseeding the CSPRNG with entropy fed by "true random" numbers changes the internal state of the random number generator. This means that even though all future output could have been determined from the compromised state, that output has now changed, because the internal state of the system has changed.

A self healing CSPRNG is a great thing to have. And guess what? If you're constantly feeding the input pool with "true random" noise, then the output of /dev/urandom will be completely indistinguishable from "true random" data sequences, due to this reseeding.

/dev/random versus /dev/urandom
And now we get to the meat of the issue: which output interface to use for "the most demanding cryptographic computational long-term security"? First things first: both /dev/random and /dev/urandom are fed data from the same CSPRNG. Let me repeat that, with a bit more emphasis: Both /dev/random and /dev/urandom are fed data from the same CSPRNG. In fact, most who understand this will explain that you should use /dev/urandom, NOT /dev/random. The problem is that /dev/random has the horrible bug of blocking when the input entropy pool estimate is exhausted. So, if your application relies on /dev/random for random numbers, there are pauses and delays until there is enough entropy in the input pool to reseed the CSPRNG, so you can get more data.
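
You can demonstrate the difference yourself. On an idle system, the first command will eventually stall waiting for the input pool to refill, while the second always returns immediately:

$ head -c 512 /dev/random > /dev/null     # blocks once the entropy estimate drains
$ head -c 512 /dev/urandom > /dev/null    # never blocks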

The point? Stop using /dev/random and start using /dev/urandom. To back up my claim, here are some people who back me up:

Here, cryptographer Dr. Daniel Bernstein puts me, personally, in my place on a mailing list (I've learned the error of that line of logic since then):

Cryptographers are certainly not responsible for this superstitious
nonsense. Think about this for a moment: whoever wrote the /dev/random
manual page seems to simultaneously believe that

(1) we can't figure out how to deterministically expand one 256-bit
/dev/random output into an endless stream of unpredictable keys
(this is what we need from urandom), but

(2) we _can_ figure out how to use a single key to safely encrypt
many messages (this is what we need from SSL, PGP, etc.).

For a cryptographer this doesn't even pass the laugh test.

I'm not saying that /dev/urandom has a perfect API. It's disappointingly
common for vendors to deploy devices where the randomness pool has never
been initialized; BSD /dev/urandom catches this configuration bug by
blocking, but Linux /dev/urandom (unlike Linux /dev/random) spews
predictable data, causing (e.g.) the widespread RSA security failures
documented on http://factorable.net. But fixing this configuration bug
has nothing to do with the /dev/random superstitions.

Dr. Bernstein continues in a paper about the problems of having too much entropy, and why that could be a bad thing. However, he repeats the same advice as above: use /dev/urandom. Dr. Bernstein also created a cryptographic library called "NaCl", which relies on /dev/urandom for its randomness.

Cryptographer Dr. Thomas Pornin also explains the nature of /dev/urandom:

/dev/urandom yields data which is indistinguishable from true randomness, given existing technology. Getting "better" randomness than what /dev/urandom provides is meaningless, unless you are using one of the few "information theoretic" cryptographic algorithm, which is not your case (you would know it).

In an additional post, he mentions:

There are many applications which read /dev/random as a kind of ritual, as if it was "better" than /dev/urandom, probably on a karmic level. This is plain wrong, especially when the alea is to be used with classical cryptographic algorithms (e.g. to generate a SSH server public key). Do not do that. Instead, use /dev/urandom and you will live longer and happier. Even for one-time pad.

Even Bram Cohen agrees.

In terms of software packages, the Go language uses /dev/urandom as its source for a CSPRNG, where available. OpenSSL uses /dev/urandom when available. OpenSSH will use OpenSSL's random number source, if compiled with OpenSSL support, which uses /dev/urandom. OpenVPN will also use OpenSSL's random number source, unless compiled with PolarSSL. Guess what? /dev/urandom.

The only software package I can think of that does not use /dev/urandom, but instead deliberately uses /dev/random, is GnuPG, when generating keys. However, this is only to ensure that there has been sufficient entropy in the system to guarantee "randomness". When the input entropy pool estimate is exhausted, /dev/urandom will happily give away a non-blocking stream of random data, where /dev/random will block. Adding non-malicious entropy to your system is not a bad thing. As mentioned earlier, reseeding the CSPRNG with entropy from random sources gives you a "self-healing" RNG. Thus, GnuPG ensures that there is enough entropy to reseed the CSPRNG, such that the state of your system has "healed", and is in a completely different state than before you started. However, if you just linked /dev/random to /dev/urandom and generated your GPG key, it would be no less secure, provided that the system you generated the key on is not compromised in any fashion.
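
For the curious, that substitution is a two-liner as root, because /dev/random and /dev/urandom are just character devices with minor numbers 8 and 9 (this does not survive a reboot):

# rm /dev/random
# mknod /dev/random c 1 9    # same major/minor numbers as /dev/urandom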

The point? Use /dev/urandom.

Cryptographically Secure Pseudorandom Number Generators
I know what you're thinking- a pseudorandom number generator could not possibly be as "good" as a true random number generator. In many cases, this is true. There are pseudorandom number generators which are not cryptographically (which means "computationally") secure. Mersenne Twister comes to mind. It's "fast enough" and will provide pseudorandom numbers as long as you wish. It's the pseudorandom number generator for a great many programming languages, such as Python, and rightfully so. It's a good choice. But it's not computationally strong. After observing the sequence for a period of time, enough information can be gathered to allow an attacker to predict the remaining sequence.
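
To make that concrete, here is a minimal Python sketch (Python's random module is exactly this Mersenne Twister). It only collects the outputs; the well-known state-recovery step is left out, but 624 consecutive 32-bit outputs are enough to determine the generator's entire internal state:

import random

mt = random.Random()  # Python's PRNG is the Mersenne Twister (MT19937)
outputs = [mt.getrandbits(32) for _ in range(624)]
# The tempering applied to each output word is invertible, so these 624
# values fully determine the 624-word internal state. An observer who
# inverts them can predict every future output- fine for simulations,
# fatal for keys.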

So, the question remains- can you have a computationally strong pseudorandom number generator? The answer is a big resounding YES! Wikipedia to the rescue:

  • The Yarrow algorithm (used in FreeBSD and Mac OS X for both /dev/random and /dev/urandom).
  • The Fortuna algorithm.
  • CryptGenRandom as used in Microsoft Windows.
  • ISAAC which is based on the RC4 cipher.
  • arc4random used in OpenBSD until 5.5, which is also based on the RC4 cipher.
  • AES-CTR DRBG (Deterministic Random Bit Generator) is often used as a random number generator. AES in CTR mode involves encrypting an incrementing counter under a random key, and using the encrypted output as the sequence of random numbers. Provided the initial counter and key are kept secret, this is cryptographically secure. In reality, any block cipher in CTR mode will work here (see the sketch after this list).
  • The ANSI X9.17 standard, also adopted by FIPS, explains the use of Triple Data Encryption Algorithm (TDEA) keying option #2 as a CSPRNG.
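
As a concrete illustration of the AES-CTR DRBG idea above, here is a minimal Python sketch, assuming the pycryptodome package; the key handling and counter width are simplified for clarity, so treat it as a toy rather than a vetted implementation:

import os
from Crypto.Cipher import AES

class CtrDrbg:
    def __init__(self, key, counter=0):
        # ECB here is just "raw AES on one block"; we manage the counter ourselves
        self._aes = AES.new(key, AES.MODE_ECB)
        self._counter = counter

    def random_bytes(self, n):
        out = b""
        while len(out) < n:
            block = self._counter.to_bytes(16, "big")  # counter as a 128-bit block
            out += self._aes.encrypt(block)            # E_k(counter) is the output
            self._counter += 1
        return out[:n]

drbg = CtrDrbg(os.urandom(32))     # AES-256 key pulled from the kernel CSPRNG
print(drbg.random_bytes(16).hex())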

There are a couple of theoretical designs which are computationally strong, but due to the nature of the algorithms, slow and impractical:

  • Blum Blum Shub relies on the hard quadratic residuosity problem to get its security.
  • Blum Micali relies on the hard discrete logarithm problem to get its security.
  • Dual_EC_DRBG, which was later revealed to contain an NSA backdoor, is based on the decisional Diffie–Hellman assumption.

The CSPRNG in the Linux kernel uses SHA1 on the input pool for handing out random numbers. The result is also put back into the input entropy pool for further evaluation. Even though theoretical attacks have been made on SHA1, the design is flexible enough that a different cryptographically secure hashing algorithm could be put in its place, such as SHA2, SHA3, WHIRLPOOL, etc.

Non-blocking /dev/random
If you absolutely must have a /dev/random device that doesn't block, either because some software applications rely on it, such as GnuPG, or because you are working on some information theoretic algorithm, then there are some options available to you.

The first is the HAVEGE software daemon. This queries many more timing events on your motherboard than the standard kernel does, and can keep your input entropy pool full. In my testing, this can produce around 1 MBps of pseudorandom data out of /dev/random. The only drawback is that it hits the CPU fairly frequently, and can cause the load on the system to jump. However, I have found in practice that if you set the watermark to 512, which is typically considered enough bits of entropy for even the most stringent cryptographic keys, you'll never notice it running.

Other options include purchasing USB devices that use avalanche noise in electronic circuits. Using such a USB device, along with rngd(8), can keep /dev/random from blocking, provided that the device remains operational. One drawback with these devices is the possibility that the circuit breaks in some fashion, producing biased and highly predictable data. Some implementations, such as the Entropy Key (which is no longer in production), solve this by doing internal bias testing and whitening, at the expense of throughput. These USB devices can be purchased for around $50, and can give you up to 40 KBps of random data per device.

Lastly, some chip makers, such as Intel and Broadcom, are introducing a HWRNG (hardware random number generator) on the chip itself. All it takes to use these generators is loading a kernel module. Typically, a /dev/hwrng device will then show up, and you can again use rngd(8) to feed its output into the input entropy pool for /dev/random. Unfortunately, unless the specifications are available and an audit has been performed, it is unknown whether the NSA has compromised the integrity of these hardware generators. Merely passing randomness tests, such as DIEHARD and FIPS, is not enough to discover whether or not the random data is predictable by an attacker. However, throughput can be upwards of 500 MBps for these generators. If you need that sort of throughput, these are the generators for you.

Conclusion
Use /dev/urandom.

Tor Versus Road Warrior

Lately, I have been doing some research regarding Tor, and the technology behind it. Further, I wanted to compare it to other products such as Freenet and I2P. In the process, I stumbled upon this post comparing Tor to a proprietary product called "Road Warrior" from a company called "Cryptohippie". Initially, I tried commenting on the post, but it seems that the WordPress installation at the Daily Anarchist is broken, as every attempt to supply a comment led to a 404 error page. So, instead, I'll post the comment here:

Interesting read. I know the post is more than 3 years old, however I stumbled upon it looking up some alternatives to Tor. Not because I think Tor is bad, but because I'm interested in what the world is doing with the Edward Snowden revelations, and what's out there.

You make some interesting points, most of which are standing on shaky ground.

1. "Tor is slow". Well, yes and no. Sure, going through 3 random hops to get to your destination is going to add latencies to your round trip connection. That's somewhat expected though. However, with that said, I usually don't have any issue watching YouTube videos, downloading updates from my operating system vendor, or downloading PDFs and other multimedia from the Internet. There have also been updates since this post was published, where the algorithms Tor use have been greatly optimized, lowering latencies, and increasing bandwidth. Personally, I don't generally see any undue latency burdens compared to standard clear net connections. For me, the bandwidth and performance is acceptible.

2. "When someone owns something and generates income from it, they almost always take care of it, and usually work hard to improve it". Sure, that is, when it's financially convenient, which isn't always the case. Many times, companies shift focus, lose investments, or just plain go under, and the product goes with it. We could list example, after example, where commercial products failed, where Free and Open Source products not only survived, but thrived. Especially when it comes to crypto, proprietary and commercial products usually don't fare well. OpenVPN rules the VPN world. OpenSSH rules the SSH world. OpenSSL rules the secure web and email worlds. The standard crypto libraries are all Free and Open Source Software, not proprietary. Even the standard crypto protocols, which Road Warrior relies on, are open not encumbered by patents, such as AES, SHA1/SHA2, etc. Because this is a criticism of "free" software, however, you are aware that there are financial contributions by the EFF to develop the Tor product, just like there are financial contributions from IBM, Oracle, Red Hat, and even Microsoft, to develop the Linux kernel- another "free" software.

3. "Tor may include malicious nodes". Sure. This will be a problem when you use ANY service, and is not unique to Tor. When using any Internet service, there is a level of trust that must be maintained between the provider, and the user. As an example, HTTPS connections could be decrypted on the server, logged, and shared. However, with the 3-random relay node selection, it is actually pretty difficult for a node operator to compromise an entire Tor client connection. In fact, that's kind of the point with the "onion" layer encryption. Really, the only thing that is practical, is controlling both the entry and exit nodes, at which connection profiles could be built based on traffic patterns, IP addresses, times of connections, and sites visited. This is possible by controlling a large number of the relay nodes in existence. As such, entry guards were introduces to minimize this attack. Even DNS, if not properly tunneled through Tor, could give away details about what your doing with your Tor connection. Regardless, no service is 100% secure, and Tor is no exception.

4. "Tor is only for web browsing". Garbage. This couldn't be further from the truth. I use Tor for my email IMAPS/SMTPS connections, for IRC over SSL, XMPP, DNS, SSH, FTP, and so much more. If your application supports using a SOCKS proxy, or if you can setup a transparent proxy for all connections to go through, it just works. Tor is completely agnostic with regards to the protocol. I use email, IRC, XMPP, and web over Tor, all the time, with very little problems.

5. "Tor requires all the software on your computer that accesses the internet to be cooperative. Many programs, however, (whether created by shady marketers, governments, crooks, or just poorly written) are not cooperative, but bypass Tor and give away your network identity". Sure. Many software programs will not allow the ability to change network connections and take advantage of a SOCKS proxy, like your browser or chat clients. However, it's not difficult to build a transparent proxy, to force all software programs over Tor. This can be done at the router, a separate server, or on the local client.

6. "For most people, Tor is to hard to use regularly. This makes security errors and leaks much more likely." This hits the nail squarely on the head, but it's not Tor's fault, and the same thing could be said for Road Warrior. Crypto is just hard. There is a reason for the mantra "don't roll your own crypto". There is a reason many cryptographers have doctorate degrees in mathematics and/or computer science. Crytpo is just exceptionally difficult to grasp. Very little about it is actually intuitive. This is why PGP is not more widely implemented. This is why people don't run encrypted filesystems on their computer. This is why people share their entire lives on Facebook and Instagram. People want things to Just Work, with very little to no work on their part. Unless Tor is 100% transparent, the only people deploying Tor will be nerds and system administrators. Same will be said for PGP, I2P, Freenet, Bitmessage, etc., and even your own product, Road Warrior. Unless cryptography and perfect forward secrecy, among other technologies, are fully 100% baked into the product, people won't deploy it. It has to be just part of the normal way things are done, transparently, such as HTTPS, DNSsec, IPsec, etc. The only thing expected out of people, is the ability to use a browser or an email client. IE- a dumbed down client application. If you expect them to hook in an additional utility, it probably won't happen.

Now, some of my own personal criticisms of Road Warrior.

1. First and foremost, Road Warrior is closed source, proprietary software. This isn't looked upon favorably in the cryptographic community, especially for a cryptographic tool. I might believe that you're not intentionally being malicious with your Road Warrior product, or implementing back doors for the NSA, but how do I know you've followed encryption standards and best practices? How can the community evaluate that there are no serious bugs or security vulnerabilities, without access to the source code? How do I know you're addressing CVEs? Just trusting some company, because they say so, is a big leap. Has Road Warrior been independently audited for security? If so, I'm not finding the results anywhere on the site. If the product is to remain proprietary, closed source, then at least provide your customers with the full results of a security audit.

2. All traffic goes through Cryptohippie's servers. With Tor, traffic is split out among a fully distributed, decentralized, voluntary network. Every relay is donated. My traffic might go through Denver on one connection, and Amsterdam on the other. Not so with Road Warrior. While the servers might be geographically separated, customers can't run their own server. As such, the IP list is very static, and makes Cryptohippie a very large single point of failure. Even further, how do I know you're not examining my packets? It's one thing to trust that some random Tor server on the network is not malicious. It's another to trust a company, who is in control of all of the anonymizing servers. Cryptohippie may not be malicious, but what about a disgruntled Cryptohippie employee? Cryptohippie even has an email service. This is a lot of eggs in one basket for a crypto product.

3. Because Cryptohippie is in control of the full stack, Cryptohippie would be aware of a single client and the destination servers they visit. Child porn? Illegal drug trade? Whistle blowing to journalists? Posting to a forum about overthrowing a government? Cryptohippie would know about all of this. I'm sure you're aware, as I am from administering a Tor exit node, that the FBI will make requests regarding illegal content moving across a public IP address. With Tor, the administrator of the exit node is not in control of the content entering and leaving the server. While exit nodes have been seized by governments in the past, no system administrator has been charged over the data communicated through the IP address in question. That is why the hardware is seized: for the possibility of logs or stored data. Requesting client data, such as the originating IP address requesting the data, is not possible, as it's not known. So, with Tor, it's ineffective for the FBI to request that an exit node operator hand over Tor client data. With Cryptohippie, because that company is fully in charge of the entire network, including accounting details, warrants to release customer information are not only effective, they're easy. If Cryptohippie wishes to remain in business, it must comply with local laws, which includes turning over data to law enforcement.

4. Finally, this may seem petty, but where do your clients go for support? No bug tracker or community forum is present, unless it's behind the login. Looking around your site, I'm seeing copyrights that haven't been updated since 2007-2008, which doesn't instill a lot of confidence in me as a potential client of yours. If your basic website doesn't have an updated copyright, how do I know your Road Warrior software does? And if the copyright hasn't been updated in your Road Warrior product, then how do I know you've been addressing bugs and vulnerabilities, introducing new features, or performing even general maintenance of the software?

I'm sure Cryptohippie is a fine company, and Road Warrior is a fine piece of software. But this blog post glosses over the issues, and is a bit sensationalist, if not a little bit incorrect.

Cryptographically Secure Passphrases In d-note

A couple of nights ago, while coming home from work, I started thinking about the button you press in the d-note web application (an instance is running at https://secrets.xmission.com) for generating passphrases used to encrypt your note. Each passphrase is a 22-character base-64 string. Initially, I was using the following code in JavaScript:

function make_key() {
    var text = "";
    var possible =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    for(i=22; i--;) {
        text += possible.charAt(Math.floor(Math.random() * possible.length));
    }
    return text;
}

Simple, functional, works. However, using Math.random() for each character generation isn't cryptographically strong. The d-note web application is known for going over the top on security engineering, and I knew I could do better. So, I did a little digging, and learned about the "Web Crypto API" that many modern browsers support, despite the specification being a "working draft". I figured I could use that in my code. As such, the code morphed into the following:

function make_key() {
    var text = "";
    var possible =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    var random_array = new Uint32Array(22);

    // Make some attempt at preferring a strong CSPRNG first
    if (window.crypto && window.crypto.getRandomValues) {
        // Desktop Chrome 11.0, Firefox 21.0, Opera 15.0, Safari 3.1
        // Mobile Chrome 23, Firefox 21.0, iOS 6
        window.crypto.getRandomValues(random_array);
    }
    else if (window.msCrypto && window.msCrypto.getRandomValues) {
        // IE 11
        window.msCrypto.getRandomValues(random_array);
    }
    else {
        // Android browser, IE Mobile, Opera Mobile, older desktop browsers
        for(i=22; i--;) {
            random_array[i] = Math.floor(Math.random() * Math.pow(2, 32));
        }
    }

    for(i=22; i--;) {
        text += possible.charAt(Math.floor(random_array[i] % possible.length));
    }

    return text;
}

The Web Crypto API ensures that the browser is using a cryptographically secure non-blocking PRNG from the operating system, such as /dev/urandom on Linux. While this works, it means that browsers that don't support the Web Crypto API are stuck with non-cryptographic passphrases. This certainly wouldn't do, so I set out to fix it.

Enter Blum Blum Shub (this sounds like something out of a Dr. Seuss book). Blum Blum Shub is a cryptographically secure PRNG, provided a few criteria are met:

  • The primes 'p' and 'q' can be static or chosen pseudorandomly.
  • The primes 'p' and 'q' should be congruent to 3 modulo 4.
  • The seed 'x0' should be chosen pseudorandomly to start the process.
  • The seed 'x0' should not be '0' or '1'.
  • The seed 'x0' should be coprime to the product of the primes 'p' and 'q'.
  • The GCD of phi(p-1) and phi(q-1) should be small (subjective?), where phi is Euler's totient function.

Unfortunately, as the product of the primes 'p' and 'q' grows, the Blum Blum Shub algorithm slows to a crawl. Fortunately and unfortunately, the maximum integer JavaScript can safely store is 53 bits, or 2^53. This means the product of 'p' and 'q' cannot exceed that value, which leaves us with integer spaces of roughly 2^26 and 2^27 for the two primes. However, this is okay, as even 'p = 11' and 'q = 19' generate a cycle that is long enough for our 22-character passphrase.
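
To see the algorithm at its simplest, here is a toy Python sketch using those example primes (both congruent to 3 modulo 4); real use requires much larger primes and a properly chosen seed:

p, q = 11, 19         # toy Blum primes: 11 % 4 == 3 and 19 % 4 == 3
m = p * q             # the modulus, 209
x = 3                 # seed: not 0 or 1, and coprime to 209
for _ in range(8):
    x = (x * x) % m   # repeated squaring modulo m is the whole generator
    print(x & 1)      # the least significant bit is the pseudorandom output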

Thus, the code had changed to:

    else {
        // Android browser, IE Mobile, Opera Mobile, other browsers
        random_array = bbs(22);
    }

This new "bbs(n)" function is the bread and butter for generating cryptographically secure pseudorandom numbers. The function takes an integer as an argument, to know how many numbers need to be generated, and returns an unsigned 32-bit integer array with that many random numbers. The code is as follows:

function gcd(x, y) {
    if(!y) return x;
    return gcd(y, x%y);
}

function seed() {
    var s = 2*Math.floor(Math.random() * Math.pow(2,31))-1; //odd
    if(s < 2) {
        return seed();
    } else {
        return s;
    }
}

function bbs(n) {
    // Blum Blum Shub cryptographically secure PRNG
    // See https://en.wikipedia.org/wiki/Blum_Blum_Shub
    var a = new Uint32Array(n);
    // Max int = 2^53 == (2^26)*(2^27) -> (2^p1)*(2^p2)
    var p1 = Math.floor(Math.random()*2)+25; // first power, 25 or 26
    var p2 = 51-p1; // second power
    var p = random_prime(2*Math.floor(Math.random() * Math.pow(2,p1))-1);
    var q = random_prime(2*Math.floor(Math.random() * Math.pow(2,p2))-1);
    var s = seed();

    // Ensure each quadratic residue has one square root which is also quadratic
    // residue. Also, gcd(totient(p-1),totient(q-1)) should be small to ensure a
    // large cycle length.
    while(p%4 != 3 || q%4 != 3 || gcd(totient(p-1),totient(q-1)) >= 5) {
        p = random_prime(2*Math.floor(Math.random() * Math.pow(2,p1))-1);
        q = random_prime(2*Math.floor(Math.random() * Math.pow(2,p2))-1);
    }

    // s should be coprime to p*q
    while(gcd(p*q, s) != 1) {
        s = seed();
    }

    for(i=n; i--;) {
        // use modProd() rather than Math.pow(s,2)%(p*q): s*s can exceed
        // JavaScript's 53-bit integer space and silently lose precision
        s = modProd(s,s,p*q);
        a[i] = s;
    }

    return a;
}

The key to this function is generating primes 'p' and 'q' that meet the requirements outlined earlier in the post. The primes can be static, with the seed pseudorandomly generated, or the primes can also be pseudorandomly generated. I went with the latter. So, the only trick to getting these values is testing for primality.

The standard way to test for primality is through trial division. Unfortunately, it's slow. Sure, there are a few optimization techniques you can use to speed things up, but by and large, it's not an efficient way of attacking the problem. Instead, I applied the Miller-Rabin test for composites. Miller-Rabin uses "witnesses" that "testify" about the compositeness of a positive integer. A single witness is proof that the integer is composite; if none of the tested bases is a witness, the integer is probably prime. Test enough well-chosen bases, and the result becomes deterministic for all integers below a known bound- the eight prime bases used below cover everything under our 2^53 ceiling.

I'll stop going over the algorithm there, and let you read the Wikipedia article about it. Suffice it to say, the Miller-Rabin primality test runs in O(k*log(n)^3), where 'k' is the number of bases tested for the sought accuracy. This is good news for Blum Blum Shub, as it's slow enough already. Code below:

function modProd(a,b,n){
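    // multiply a*b mod n one decimal digit of b at a time, keeping the
    // intermediate products small enough for JavaScript's 53-bit integers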
    if(b==0) return 0;
    if(b==1) return a%n;
    return (modProd(a,(b-b%10)/10,n)*10+(b%10)*a)%n;
}

function modPow(a,b,n){
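    // modular exponentiation by repeated squaring, built on modProd()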
    if(b==0) return 1;
    if(b==1) return a%n;
    if(b%2==0){
        var c=modPow(a,b/2,n);
        return modProd(c,c,n);
    }
    return modProd(a,modPow(a,b-1,n),n);
}

function isPrime(n){
    // Miller-Rabin primality test taken from
    // http://rosettacode.org/wiki/Miller-Rabin_primality_test#JavaScript
    // O(k*log(n)^3) worst case, given k-accuracy
    if(n==2||n==3||n==5) return true;
    if(n%2==0||n%3==0||n%5==0) return false;
    if(n<25) return true;
    for(var a=[2,3,5,7,11,13,17,19],b=n-1,d,t,i,x;b%2==0;b/=2);
    for(i=0;i<a.length;i++) {
        x=modPow(a[i],b,n);
        if(x==1||x==n-1) continue;
        for(t=true,d=b;t&&d<n-1;d*=2){
              x=modProd(x,x,n); if(x==n-1) t=false;
        }
        if(t) return false;
    }
    return true;
}

function random_prime(n) {
    while(!isPrime(n)) n -= 2;
    return n;
}

The primes 'p' and 'q' must then go through some rigid tests to ensure that the algorithm has a long cycle, and that the seed doesn't fall into a trap where it collapses to 0 and the algorithm cannot recover. Part of these tests uses Euler's totient function. The totient function does nothing more than count what are called "totatives". A "totative" is a number that is coprime to a given number 'n'; in other words, the totative and 'n' do not share any common divisors other than '1'. Naively, every positive integer from 1 through 'n' must be tested, which is slow. However, there is a proof using a product formula which shows that we can arrive at the same result in O(sqrt(n)/3) time. Again, because Blum Blum Shub is slow, we need to keep our processor time down.

function totient(n) {
    // compute Euler's totient function
    // O(sqrt(n)/3) worst case
    // Taken from:
    // https://en.wikipedia.org/wiki/Talk:Euler%27s_totient_function#C.2B.2B_Example
    if(n < 2) return n;
    var phi = n;
    if (n % 2 == 0) {
        phi /= 2;
        n /= 2;
        while(n % 2 == 0) n /= 2;
    }
    if (n % 3 == 0) {
        phi -= phi/3;
        n /= 3;
        while(n % 3 == 0) n /= 3;
    }
    for(p = 5; p * p <= n;) {
        if(n % p == 0) {
            phi -= phi/p;
            n /= p;
            while(n % p == 0) n /= p;
        }
        p += 2;
        if(p * p > n) break;
        if(n % p == 0) {
            phi -= phi/p;
            n /= p;
            while(n % p == 0) n /= p;
        }
    }
    if(n > 1) phi -= phi/n;
    return phi;
}

Putting everything together, the Blum Blum Shub algorithm on my workstation can produce approximately 1 million random numbers per second for a 53-bit space. We only need 22 numbers, so that should be sufficient, even for weaker devices. I was able to test the code successfully on a number of browsers that do not support the Web Crypto API, and the key generation is near instantaneous, even on my phone. Interacting with the console, here is the output of one call to the algorithm:

> bbs(22);
[3143943020, 475278844, 386457630, 124718623, 280175014, 2600881459,
127152064, 749398749, 2269393658, 692609408, 1218408987, 1523732228,
1265360812, 1641372390, 2500929554, 2223592103, 2462017186, 310616491,
426752821, 2973180471, 2248877527, 574751875]

Those numbers are cryptographically strong, because they are the result of repeated squaring modulo the product of two primes: using the sequence to get back to the primes is at least as difficult as the factoring problem. It also turns out that predicting the sequence may be at least as difficult as computing modular square roots, which is itself at least as difficult as the factoring problem.

So, even though most users visiting a site hosting d-note will likely be using the Web Crypto API, some browsers will not be. For them, should they choose to have the browser generate the passphrase used to encrypt their note server-side, they can rest assured that the result is cryptographically strong.

Officially Announcing d-note Version 1.0

I've been looking forward to this post. Finally, on my birthday, it's here. My Python Flask web application for encrypted, self-destructing notes is stable, and ready for production use.

History
Around 2011 or so, I started thinking about a way I could send data privately and securely to friends, family and coworkers, without requiring them to spend a great deal of time setting up PGP keys, or knowing a great deal about encryption and security. Basically, I wanted a way to send them an encrypted note that could be decrypted transparently, with very little work- at most, providing a passphrase to decrypt the note. I knew this would need to be a web application, but I wasn't sure of the implementation details. Eventually, that got worked out, and d-note was born early January of this year.

Shortly after that release, I started focusing on a script that would generate random ASCII art with OpenPGP keys. There will be some additional news about that project I'm excited about, but will announce at a later date. As a result of that OpenPGP project, d-note development took the back seat for a bit. However, a recent pull request from Alan Dawson brought focus back to the code.

Changes And Increased Security
The changes that Alan Dawson introduced were switching away from Blowfish in ECB mode to AES-128 in CBC mode with HMAC-SHA1. This is something that I initially wanted to support, but didn't for two reasons:

  • Without thinking about PBKDF2, AES keys must be exactly 16, 24 or 32 bytes in size. This prevented users from entering personal passphrases of arbitrary length to encrypt the note, rather than using the server-side key.
  • Also, without thinking about PBKDF2, Blowfish was intentionally designed to make the key setup slow, so brute forcing the password out of the encrypted file would be more difficult.

However, Blowfish uses a 64-bit block size for its internal operations, while AES uses 128-bit. This might be of little consequence for the standard plaintext encrypted note, but for encrypting files, which I would like to support in the long term, it could mean the difference between repeated ciphertext blocks with Blowfish and none with AES.

Since the changes introduced by Alan, I've increased the security of the application to AES-256 in CTR mode with HMAC-SHA512. The application relies on the Python Cryptography Toolkit version 2.6. In version 2.7, new authenticated block cipher modes are introduced, such as GCM. With the ability to use GCM, a separate HMAC-SHA512 tag will no longer be needed. However, switching modes will break previously encrypted notes that have not yet been decrypted, so this will need to be handled with care. Also, when SHA3 is standardized, I would like to switch to it, if it is introduced into PyCrypto, rather than use SHA512, even though there have been no serious security concerns over SHA2.

The Encryption Process Visualized Without A Custom Passphrase
I wanted to at least show what the encryption process looks like from top-to-bottom visually, so you wouldn't need to piece together the code and figure it out.

First, we start out with the application creating 3 static salts at random. Each salt is 16 bytes in size, and should be different from the other two, although this isn't a strict requirement. The salts remain static for as long as you wish. They are the only random data that is not deleted, but instead saved on the server. Changing the salts means any previously encrypted notes still on the server can no longer be decrypted. As such, once generated, they should be left alone. However, if no encrypted notes remain on the server to be decrypted, the web server could be stopped, the salts changed, and the web server started back up. The only reason you would do this is if you feel the salts have been compromised in some way.

[Figure: the three static salts]

When a browser renders the main index.html page, we need to create a unique and random URL to post to. As such, a one-way function is used to generate three random keys, from which we'll build everything. First, we generate a random 16-byte nonce. This nonce is the input to the one-way function that builds everything needed for encrypting and decrypting the note.

[Figure: generating the random nonce]

All of the security of this application rests on the ability to generate a cryptographically secure random nonce on every page view. We've taken the steps necessary to ensure that not only is that choice cryptographically strong, but all the building blocks derived from it are industry best practices, and over-engineered at that. Our one-way function starts by using the salts along with our nonce in a PBKDF2 function. PBKDF2 is a password-based key derivation function that can derive a pseudorandom key of any size, given a secret and a salt. As such, we use our three salts and the nonce to generate a 16-byte file key, a 32-byte AES key, and a 64-byte HMAC key. Notice that each salt is used only once.

[Figure: PBKDF2 key derivation]

Now that we have a file key, we can base-64 encode it, and that becomes the file name we save our encrypted data to. This is different than initially released, where the URL was also the file name. The URL can produce the file name, but the reverse is not true.

[Figure: file name creation]

Finally, our nonce is base-64 encoded, and becomes the URL that we post to, and the URL that we give to the recipient.

[Figure: URL creation]
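
Putting those steps together, here is a minimal Python sketch of the one-way construction, assuming the pycryptodome package; the variable names and the use of os.urandom for the salts are illustrative, not d-note's actual code:

import os
from base64 import urlsafe_b64encode
from Crypto.Protocol.KDF import PBKDF2

salt_file, salt_aes, salt_mac = (os.urandom(16) for _ in range(3))  # static salts
nonce = os.urandom(16)                         # fresh on every index.html view

file_key = PBKDF2(nonce, salt_file, dkLen=16)  # 16-byte file key
aes_key  = PBKDF2(nonce, salt_aes,  dkLen=32)  # 32-byte AES-256 key
hmac_key = PBKDF2(nonce, salt_mac,  dkLen=64)  # 64-byte HMAC-SHA512 key

file_name = urlsafe_b64encode(file_key).rstrip(b"=")  # where the note is stored
post_url  = urlsafe_b64encode(nonce).rstrip(b"=")     # what the recipient gets

The one-way property comes from PBKDF2: the nonce (the URL) regenerates the keys and the file name, but nothing stored on disk regenerates the nonce.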

At this point, the user is copying their data into the form. The user has the option of either using a personal passphrase to encrypt the note, or letting the server use its key. It's important to note that, due to our nonce above, the same AES key is never used to encrypt multiple notes, unless the same passphrase is chosen for encrypting them. Because this is symmetric encryption server-side, we don't want to run the risk of one derived AES key decrypting multiple encrypted files. So the random nonce generation at each index.html view is critical.

When the note is posted to the server, it is first compressed using the ZLIB compression library. This removes structure from the plaintext, and increases the entropy density of the note before it is encrypted. It should be mentioned that the note is never stored to disk until it is fully encrypted.

[Figure: plaintext compression]

We are now ready to ship the compressed plaintext off to AES for encryption. First, we need to generate a random 12-byte initial value for our counter. We need to do this because we will be encrypting the note with AES-256 in CTR mode, and I would like to protect end users from backups. CTR mode uses a counter, typically starting at "1", to ensure that identical plaintext blocks are not encrypted to identical ciphertext blocks. However, if the same AES key and the same counter sequence are shared between two plaintexts, then an XOR of the two ciphertexts reveals the XOR of the two plaintexts. As such, for every encryption, we use that random 12-byte initial value to practically guarantee that the starting point of the AES counter is always different, even if the same AES key is used for multiple plaintexts. The initial value is 12 bytes in size, rather than 16, to ensure that we still have plenty of counting space during the encryption process.

We also need the pseudorandom 32-byte AES key that we generated with our earlier PBKDF2 function. Using this 32-byte key, our 12-byte initial value, and the compressed plaintext, we encrypt the data. This gives us a preliminary ciphertext- preliminary, in that we have additional work to do before we are ready to store it on disk.

[Figure: AES encryption]

Because we used an initial value to start AES-256 in CTR mode, we will also require that initial value for decryption. As such, we need to store it with the preliminary ciphertext, so it is prepended at the beginning. Because the initial value is random data, and the ciphertext should appear as random data, prepending the one to the other should not reveal any data boundaries or leak any information about the stored contents. Prepending the initial value gives us an intermediate ciphertext 12 bytes larger.

[Figure: prepending the IV]
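
Here is a minimal Python sketch of these two steps, again assuming pycryptodome; the 12-byte prefix leaves a 32-bit block counter inside the 16-byte AES counter block:

import os, zlib
from Crypto.Cipher import AES
from Crypto.Util import Counter

aes_key = os.urandom(32)          # stands in for the PBKDF2-derived key above
plaintext = zlib.compress(b"Hello Central, give me Doctor Jazz!")

iv = os.urandom(12)               # random initial value, stored with the note
ctr = Counter.new(32, prefix=iv)  # 12-byte prefix + 4-byte counter = 16 bytes
cipher = AES.new(aes_key, AES.MODE_CTR, counter=ctr)
intermediate = iv + cipher.encrypt(plaintext)  # IV prepended to the ciphertext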

The final step in the encryption process is to ensure data integrity and provide authentication, using HMAC-SHA512. The reason for this is that non-authenticated block cipher modes can suffer from practical malleability attacks on the ciphertext, which can be leveraged to reveal the plaintext. Thus, the intermediate ciphertext is tagged with HMAC-SHA512. This is known as "encrypt-then-MAC" (EtM), and is the preferred way of applying MAC tags.

HMAC allows you to choose among a number of different cryptographic hashing algorithms. In our case, SHA512 is used, because we can. HMAC requires both a key and a message. In our case, because the user is not providing a passphrase to the application, the key is the 64-byte HMAC pseudorandom key we generated with PBKDF2 earlier. The intermediate ciphertext and this 64-byte key are fed to the HMAC-SHA512 function to generate a 64-byte SHA512 tag.

[Figure: HMAC-SHA512 tagging]

This 64-byte SHA512 tag is prepended to our intermediate ciphertext to produce our final ciphertext document, which is finally written to disk in binary form. Initially, I had base-64 encoded the ciphertext, as is standard practice, but this just increases the disk space used by roughly a third, and adds additional code, with no real benefit other than being able to view the ciphertext in a text editor, or on your terminal. So, the raw binary is stored instead.

[Figure: prepending the HMAC tag]
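
Continuing the sketch from above, the EtM step needs nothing beyond Python's standard library; 'intermediate' is the IV-prefixed ciphertext from the previous snippet:

import hashlib, hmac, os

hmac_key = os.urandom(64)   # stands in for the 64-byte PBKDF2-derived MAC key
tag = hmac.new(hmac_key, intermediate, hashlib.sha512).digest()  # 64-byte tag
final = tag + intermediate  # tag prepended; 'final' is written to disk as-is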

The Encryption Process Visualized With A Custom Passphrase
When encrypting the note with a user-supplied passphrase, rather than using our cryptographic nonce to generate the AES key and the HMAC key, the passphrase is used in its place. In other words, we still use PBKDF2 to generate our AES key and our HMAC key, combined with the appropriate salt. Everything else about the encryption process is the same.

[Figure: encryption with a user passphrase]

To record whether a user supplied a passphrase to encrypt the note, an empty file with a ".key" extension is created. The user-supplied passphrase is not stored on disk; the empty file exists only to tell the application not to use PBKDF2 to generate the AES key, but to ask the client for it. However, if a duress key for Big Brother is supplied, then that key is indeed stored on disk. That key will not decrypt the note, but will instead return random sentences from The Zen of Python, and destroy the original note.

To show the effectiveness of the 12-byte initial value discussed earlier, I'll encrypt the text "Hello Central, give me Doctor Jazz!" twice, both times with the same passphrase "Oliver". If the AES counter started with the same value in both instances, the two ciphertexts would be identical. By starting our counter with a random initial value, the ciphertexts should be different. Let's take a look.

Encrypting it twice, I ended up with two files "qZ-cloUfLnFYVYZImgYWmQ" and "hxlUK9GxbNSMgJZDUgEsMw". Let's look at their content:

$ ls data
hashcash.db  qZ-cloUfLnFYVYZImgYWmQ  qZ-cloUfLnFYVYZImgYWmQ.key hxlUK9GxbNSMgJZDUgEsMw  hxlUK9GxbNSMgJZDUgEsMw.key
$ cat data/qZ-cloUfLnFYVYZImgYWmQ | base64 -
CgHOJ4NTTMSNWOVbC1SduXg7jqij92o3ZHhOcCumu8DKmDKI3cyZ5Ne9dB6w7TrnxMaR8EyDf5w9
eMpMM+VLkztyfAieUWFYVFvFqJ0cY7+fgX1tzyaLs71rHywjydE9bobAfof0Bqo6j87sTJEgJTu+
BFXko1w=
$ cat data/hxlUK9GxbNSMgJZDUgEsMw | base64 -
D/fLA+UgghHQMGadJxmATifaL+JTXybRNROUDvSBzgTV6EWX7Dau9Z9zI1KpEuMbDeFNp4oZk4SN
hlVGm2k8vWwKJWhys78U6FFd1jGmKWDC61VKH7+zXEGhNf/fo/igEEEG+Jge1awQ9A0cJbsLSmoh
zgj6XH8=

This should not be surprising, knowing how CTR mode works with block ciphers:

[Figure: CTR mode encryption]

As long as that IV is different, the ciphertexts will be different, and one should not be able to extract the key that encrypted the plaintext, even if the key is reused across multiple plaintexts. Let's see what the IV was for each, using Python:

>>> msg = """CgHOJ4NTTMSNWOVbC1SduXg7jqij92o3ZHhOcCumu8DKmDKI3cyZ5Ne9dB6w7TrnxMaR8EyDf5w9
... eMpMM+VLkztyfAieUWFYVFvFqJ0cY7+fgX1tzyaLs71rHywjydE9bobAfof0Bqo6j87sTJEgJTu+
... BFXko1w="""
>>> msg = msg.decode('base64')
>>> data = msg[64:] # remember, the first 64-bytes are the SHA512 tag
>>> iv = data[:12]
>>> long(iv.encode('hex'),16)
18398018855321261569467991464L
>>> msg = """D/fLA+UgghHQMGadJxmATifaL+JTXybRNROUDvSBzgTV6EWX7Dau9Z9zI1KpEuMbDeFNp4oZk4SN
... hlVGm2k8vWwKJWhys78U6FFd1jGmKWDC61VKH7+zXEGhNf/fo/igEEEG+Jge1awQ9A0cJbsLSmoh
... zgj6XH8="""
>>> msg = msg.decode('base64')
>>> data = msg[64:]
>>> iv = data[:12]
>>> long(iv.encode('hex'),16)
21167414188217590509613634382L

Decryption
Not much really needs to be said about decrypting the encrypted notes, other than that, because we are using authenticated encryption with HMAC-SHA512, when the client supplies the URL to decrypt a note, a SHA512 tag is generated dynamically from the encrypted file and compared to the SHA512 tag actually stored in it. If the tags match, the plaintext is returned. If the tags do not match, something went wrong, and the client is redirected to a standard 404 error. Other than that, the code for decrypting the notes should be self-explanatory.
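
As a sketch of that check, assuming the on-disk layout described above (a 64-byte tag followed by the IV-prefixed ciphertext):

import hashlib, hmac

def verify(blob, hmac_key):
    stored_tag, intermediate = blob[:64], blob[64:]
    computed = hmac.new(hmac_key, intermediate, hashlib.sha512).digest()
    return hmac.compare_digest(stored_tag, computed)  # constant-time compare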

Conclusion
I hope you enjoy the self-destructing encrypted notes web application! If there are any issues or concerns, please be sure to let me know.

Going Google Free

On June 25, 2004 I received the following email:

To: Aaron Toponce
From: Gmail Team
Subject: Gmail is different. Here's what you need to know.
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

First off, welcome. And thanks for agreeing to help us test Gmail. By now you probably know the key ways in which Gmail differs from traditional webmail services. Searching instead of filing. A free gigabyte of storage. Messages displayed in context as conversations.

So what else is new?

Gmail has many other special features that will become apparent as you use your account. You’ll find answers to most of your questions in our searchable help section, which includes a Getting Started guide. You'll find information there on such topics as:

  • How to use address auto-complete
  • Setting up filters for incoming mail
  • Using advanced search options

You may also have noticed some text ads or related links to the right of this message. They're placed there in the same way that ads are placed alongside Google search results and, through our AdSense program, on content pages across the web. The matching of ads to content in your Gmail messages is performed entirely by computers; never by people. Because the ads and links are matched to information that is of interest to you, we hope you'll find them relevant and useful.

You're one of the very first people to use Gmail. Your input will help determine how it evolves, so we encourage you to send your feedback, suggestions and questions to us. But mostly, we hope you'll enjoy experimenting with Google's approach to email.

Speedy Delivery,

The Gmail Team

p.s. You can sign in to your account any time by visiting http://gmail.google.com

On June 25, 2014, I'll be closing my Google account for good. It seems fitting to end it on that date, exactly 10 years after first receiving it. I've already migrated all my documents, my contacts and my calendars to my personal Owncloud instance. When Google shut down Reader, I moved to TTRSS. Last week, I replaced the ROM on my phone with Cyanogenmod, and have not installed or enabled any Google accounts or applications, other than using K-9 for my email. I've already migrated my chat to my own personal XMPP server. The only thing that remains is email.

I will miss the community at Google+. I've made a great many friends there, and have had some amazing discussions. Sadly, it's time to say goodbye. You can find me on Twitter @AaronToponce, and I've usually cross-posted my articles from Plus on Twitter also. We can still have the amazing discussions there. Or, you can find me on IRC under the nick 'eightyeight' on the Freenode, OFTC and XMission IRC networks.

There are many reasons why I'm leaving the Google-verse. They include:

  • Cooperation with NSA spying.
  • Amassing vast amounts of data collection from hundreds of products.
  • Support for proprietary software.
  • A large monopolistic monoculture.

I appreciate that they will allow me to export my data, and delete it off of their servers. Their Takeout service isn't entirely evil. And their Google Summer of Code is to be commended. I appreciate that they built Android and submit back to the Linux kernel, and I love that they financially support Free Software conferences. However, despite all the good they have done, their toes are touching the "Do No Evil" line, and it's creepy.

I am going to self-host my data. I want as much control over my data as humanly possible. Using Google's services, means losing some of that control, and I just can't tolerate it any longer. This means work and some growing pains, but I'm ready for the trials.

Goodbye Google.

OpenPGP Key Random Art, Now With ANSI Color Support

I just recently committed ANSI color support for my OpenPGP key random art Python script. The idea is to create a "heat map" of which squares the drunken bishop has traversed during his dizzying travels. So not only can you see what your key "looks" like, but now you can sense what your key "feels" like.

The idea is simple- squares that have not been walked will not have any color applied to them. The squares on the floor with the fewest visits will be "cold", colored blue, while squares that have been visited many times will be "hot", colored red, approaching white-hot at even higher frequencies. There is a direct one-to-one relationship between the coin count and the color. The point isn't necessarily to introduce additional entropy or to reduce collisions, but merely to add entertainment value. For PGP key signing parties, standard plain text output should be the default, preventing anyone from being pigeonholed into specific software (browsers, PDF readers, etc.).

Right now, the Python script is under very active development, so there are many things missing, such as switches to turn on foreground and background colors, or to enable and disable ANSI entirely. For the time being, ANSI is the default, and the script will fall back to plain text if the TTY or STDIN does not support ANSI escape sequences. The next step is to start working on command line switches. I haven't decided if ANSI color should remain the default or not.

The necessary screenshots are below, showing the heat map for a key chosen at random from my keyring, in both foreground and background colors. Note: the switches to differentiate between foreground and background display are missing from the command; I changed the code manually between the screenshots. Also, I might adjust the heat map, adding one or two extra pink-to-white colors, bringing red closer to blue, so a larger variety of color can be displayed in the ANSI.

Screenshot showing ANSI foreground color for an OpenPGP key.

Screenshot showing ANSI background color for an OpenPGP key.

Gunnar Glasses Review

I've owned a pair of GUNNAR Optiks for a couple of months now, and figured it was probably a good time for a review. For those unaware, GUNNAR Optiks are specially designed eyewear for extended computer use. The claims on the main site are:

  • Eyes are more moist.
  • Headaches are reduced.
  • Blurry vision is reduced.
  • Eyes are no longer fatigued.

They claim these points are met by the curvature of the eyewear and the unique yellow color of the lens. I'll review each individually.

The curvature of the glasses brings the lens in more closely to the face. GUNNAR's claim is that your eye produces natural moisture that humidifies the immediate air surrounding the eye. By bringing the lens in more closely to the face, that humidity is trapped in the area surrounding the eye, preventing the eye from drying out. As someone who does not wear glasses regularly, I was skeptical of this claim. However, after wearing them for 8 hours per day, every day, I can most certainly feel that my eyes are much more moist than without. People who regularly wear glasses are probably already aware of this. In fact, I've been fighting allergies this season, and my eyes have been abnormally moist. Wearing the GUNNARS has actually complicated the problem, to the point where I am constantly wiping my eyes to remove the excessive tearing.

The yellow tint of the lens claims to neutralize the high-intensity blues that artificial light provides, such as the fluorescent and LED bulbs common in computer monitors. This actually didn't surprise me all that much, as the lens of your eye yellows as you get older, due to UV radiation from the sun. On the color wheel, yellow is complementary to blue. Complementary colors of light, when combined correctly, produce white light. So, using a yellow tint to neutralize blue light, providing a more balanced color spectrum, is something photographers have known about for a long time.

Due to the yellow tinting of the glasses, the claim is that headaches, fatigued eyes and blurry vision are reduced or eliminated. I was highly skeptical of these claims, but after wearing them for a full day, I noticed how much more "comfortable" my eyes felt at the end of the day. Indeed, I do not have the typical eye fatigue I normally have when driving home. I've never gotten headaches working in front of a computer all day, but I do get blurry vision, and the GUNNARS have most certainly reduced it. Even my wife has noticed how much less grumpy and fatigued I am when coming home after a full day of work.

After wearing the GUNNARS for the past couple of months, I've concluded that they do indeed hold up to the claims they make. There is something to the science behind the eyewear. In fact, I've come to prefer wearing them when watching movies and television as well. Even sitting in front of this 17" monitor writing this post without the glasses, I am beginning to notice eye strain. I do feel like I look a bit kooky walking around the office with them on my face, but they work, and they work well.

All isn't positive, however. I purchased my GUNNARS from a friend who was getting migraines as a result of wearing them. The pair is non-prescription, so other than with typical sunglasses or ski goggles, I don't understand how changing the tint of what his eye sees would cause a migraine, but it was certainly the case for him. For myself, my life has been improved, not degraded, by wearing the GUNNARS.

If someone were to ask me if I recommend wearing GUNNARS for gaming or general computing, it would certainly be in the affirmative.

Analysis of RIPEMD-160

Recently on Hacker News, I noticed a table showing the "Life cycles of popular cryptographic hashes" by Valerie Aurora (in this post, I've greatly compressed her HTML for faster page delivery).

[Table: "Life cycles of popular cryptographic hashes (the 'Breakout' chart)", charting the per-year status- Unbroken, Weakened, Broken, or Deprecated- of Snefru, MD4, MD5, MD2, RIPEMD, HAVAL-128, SHA-0, SHA-1, RIPEMD-128 [1], RIPEMD-160, SHA-2 family [2], and SHA-3 (Keccak), from 1990 through 2012.]
[1] Note that 128-bit hashes are at best 2^64 complexity to break; using a 128-bit hash is irresponsible based on sheer digest length.
[2] In 2007, the NIST launched the SHA-3 competition because "Although there is no specific reason to believe that a practical attack on any of the SHA-2 family of hash functions is imminent, a successful collision attack on an algorithm in the SHA-2 family could have catastrophic effects for digital signatures." One year later the first strength reduction was published.
The Hash Function Lounge has an excellent list of references for most of the dates. Wikipedia now has references to the rest.

I find this table a great resource, and I'm glad she put it online. However, I have one small issue with the table (other than it being out of date, and short on functions), and that's her calling RIPEMD-160 "deprecated". My first question would be: deprecated by whom, exactly? RIPEMD-160 isn't a FIPS-standardized cryptographic hash function, so it couldn't be deprecated by NIST. RIPEMD-160 was actually developed in Belgium, and as far as I can tell, the Belgian NBN - Bureau for Standardisation hasn't deprecated it either. It is standardized by CRYPTREC in Japan, and has not been officially deprecated there either, as far as I can tell, although it is on their "monitored list".

It could be considered deprecated if there were known attacks that greatly weaken the algorithm, or if known collisions existed, as with MD5. However, RIPEMD-160 has no known weaknesses nor collisions. The simplified versions of RIPEMD do have problems, however, and should be avoided. But as it stands, RIPEMD-160 is still considered "strong" and "cryptographically secure". Being that it was first published in 1996, almost twenty years ago, in my opinion, that's impressive. Compare SHA1, another 160-bit digest, first published in 1995: the first attack against SHA1 was published just 8 years later, in 2003, and attacks have been pouring out since.

In fact, maybe Valerie is calling RIPEMD-160 "deprecated" because it's old, and there are plenty of other functions with larger digests, such as the newly proposed FIPS standard SHA3/Keccak. The only problem with that is that, while we may be able to call those secure now, they too could fall victim to various attacks and be severely weakened, while RIPEMD-160 could still be considered strong and cryptographically secure.

Granted, RIPEMD-160 has not received the attention that SHA1 has. As such, it is likely that it has not gotten the mathematical attention of the cryptographic community. I understand that. However, RIPEMD-160 is part of the OpenPGP standard, and available in many cryptographic libraries for many different programming languages. While it's not the running favorite of cryptographers and developers, it also hasn't been overlooked.

The only concern I would have with RIPEMD-160 is the 160-bit output digest size. Is that large enough to withstand a sophisticated brute force search? To answer that question, we can look at the Bitcoin distributed network, likely the largest distributed computing network in the world. The Bitcoin network, at the time of this post, is calculating approximately 60 million billion SHA256 hashes per second, largely using specialized hardware called "ASICs". 60 million billion is 60 quadrillion, or 6x10^16. The 160-bit digest space holds approximately 1.5x10^48 values. This means it would take the Bitcoin network approximately 2.4x10^31 seconds, or about 7.7x10^23 years, to completely exhaust the RIPEMD-160 digest space. So even at the amazing brute force pace of 60 million billion hashes per second, it's unreasonable to find a preimage of a 160-bit digest. A generic birthday attack is much cheaper- on the order of 2^80 hash evaluations for a 50% chance of finding some collision- but it also requires storing and sorting digests on that same order, which is far beyond any feasible storage system. RIPEMD-160 still stands strong.
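
For those who want to check the arithmetic, here is a quick Python back-of-the-envelope, using the Bitcoin network rate cited above:

import math

rate = 6e16                       # Bitcoin network: ~60 quadrillion hashes/sec
space = 2.0**160                  # RIPEMD-160 digest space, ~1.5e48 values
print(space / rate)               # ~2.4e31 seconds to exhaust the space
print(space / rate / 31_536_000)  # ~7.7e23 years

birthday = math.sqrt(2 * space * math.log(2))  # ~50% collision bound, ~1.4e24
print(birthday / rate)            # seconds of hashing; storage is the real wall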

Now, I'm an amateur cryptographer, and a lousy one at that. But my Internet searching has left me empty-handed in trying to find any resource that successfully attacks, weakens, or criticizes RIPEMD-160, let alone calls it "deprecated". From everything I can tell, it's withstood the test of time, and it's still going very, very strong.

SHA3 (Keccak) in Linux

For a long time, I've been waiting to use the newly accepted SHA3 in Linux for file integrity and other uses. Like md5sum(1), sha1sum(1), sha224sum(1), sha256sum(1), sha384sum(1), and sha512sum(1), I was hoping that a similar "sha3-224sum(1)", etc. would be developed, and make its way into GNU/Linux distributions. Unfortunately, I kept waiting and waiting, until eventually, I just stopped worrying about it. Well, to my surprise, it appears that there is a package that ships SHA3 as accepted by NIST: the rhash package (it also does a number of other hashes as well).

Keccak was chosen by NIST as the SHA3 winner due to its performance, security and construction. Keccak uses a sponge construction for creating the cryptographic hash, which truly sets it apart from SHA1 and SHA2. This means any successful attack against SHA1 or SHA2 will likely be ineffective against SHA3. SHA3 claims 12.5 cycles per byte on an Intel Core 2 CPU in a software implementation. Unfortunately, it appears that SHA3 as it ships in the rhash package still needs some optimization, as SHA2-256, which requires more cycles per byte due to its construction, can be calculated faster than SHA3-256. SHA3 support was added to rhash in September 2013.

The implementation of SHA3 in rhash follows the original Keccak function as officially submitted to NIST. This means that it does not append the 2 padding bits "01" to the message, as the draft standard does. It should be noted that SHA3 is only a FIPS draft as of the time of this blog post; as such, outputs could change until the standard is formalized.

Below are examples of hashing:

$ echo -n "" | rhash --sha3-224 -
f71837502ba8e10837bdd8d365adb85591895602fc552b48b7390abd  (stdin)
$ echo -n "" | rhash --sha3-256 -
c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470  (stdin)
$ echo -n "" | rhash --sha3-384 -
2c23146a63a29acf99e73b88f8c24eaa7dc60aa771780ccc006afbfa8fe2479b2dd2b21362337441ac12b515911957ff  (stdin)
$ echo -n "" | rhash --sha3-512 -
0eab42de4c3ceb9235fc91acffe746b29c29a8c366b7c60e4e67c466f36a4304c00fa9caf9d87976ba469bcbe06713b435f091ef2769fb160cdab33d3670680e  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-224 -
c59d4eaeac728671c635ff645014e2afa935bebffdb5fbd207ffdeab  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-256 -
578951e24efd62a3d63a86f7cd19aaa53c898fe287d2552133220370240b572d  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-384 -
9ad8e17325408eddb6edee6147f13856ad819bb7532668b605a24a2d958f88bd5c169e56dc4b2f89ffd325f6006d820b  (stdin)
$ echo -n "The quick brown fox jumps over the lazy dog." | rhash --sha3-512 -
ab7192d2b11f51c7dd744e7b3441febf397ca07bf812cceae122ca4ded6387889064f8db9230f173f6d1ab6e24b6e50f065b039f799f5592360a6558eb52d760  (stdin)

In my limited testing, it appears that the SHA3 implementation in rhash(1) is not quite up to par, and could use some additional performance improvements. I'm sure these will be committed over time. However, it's hardly a poor performer; I've been very happy with the results so far.