
The Lagged Fibonacci Generator

Lately, I have been studying pseudorandom number generators (PRNGs, also called "deterministic random bit generators", or DRBGs). I've been developing cryptographically secure PRNGs (CSPRNGs), and you can see my progress on Github at https://github.com/atoponce/csprng. That project is nothing more than a way for me to get a feel for new languages, while also learning a thing or two about applied cryptography. However, for the subject of this post, I want to address one PRNG that is not cryptographically secure- the Lagged Fibonacci Generator.

What drew me to this generator was thinking about a PRNG I could work by hand. I started thinking about different ways to construct a PRNG mathematically. But, before creating an algorithm, I needed to identify all the properties that make a good PRNG. A good PRNG should have:

  • A simple implementation.
  • High efficiency in calculating the pseudorandom values.
  • Long (practically unobservable) periods for most, if not all, initial seeds.
  • A uniform distribution over the finite space.
  • No correlation between successive values.

I put a great deal of thought into it, but couldn't come up with anything I was very proud of. I thought of using trigonometric functions, various logarithm functions, geometric and algebraic expressions, and even fancy equations using derivatives. The more I thought about it, the further away I drifted from something simple that could be done by hand with pencil and paper.

The best I came up with, which required using a scientific calculator, was forcing the sequence to grow (a monotonically increasing function), then forcing it into a finite space with a modulus. However, no matter what I threw at it, I always struggled with handling "0" or "1". For example, raising either "0" or "1" to the n-th power will always return "0" or "1". I realized quickly that multiplication might be a problem. For example, one thought I had was the following:

S_i = floor(S_(i-1)^(3/2)) mod M

This works out fine until the output hits "0" or "1", at which point the generator sits on that number indefinitely. I realized that my function should probably just stick with addition, or I'm bound to get myself into trouble. I thought, and thought about it, then it hit me. It was my "Beautiful Mind" moment.
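
To illustrate the problem, here is a minimal Python sketch of that naive generator. The seed and modulus below are arbitrary picks of mine for demonstration- once the output lands on "0" or "1", it never leaves:

s, M = 3, 10                  # arbitrary seed and modulus for illustration
outputs = []
for _ in range(10):
    s = int(s ** 1.5) % M     # S_i = floor(S_(i-1)^(3/2)) mod M
    outputs.append(s)
print(outputs)                # [5, 1, 1, 1, 1, 1, 1, 1, 1, 1]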

I thought of the Fibonacci sequence.

The Fibonacci sequence is monotonically increasing for two seeds S1 and S2, where 0 < S1 < S2. If you put an upper bound on the sequence via a modulus, you can limit it to a finite space, and I can have my PRNG. However, I also know that the ratio between any two sequential terms in the Fibonacci sequence approaches the Golden Ratio, Phi. I'm not sure how this would affect my simple PRNG, and whether a correlation between successive values could be identified, but I started scribbling down numbers on a text pad anyway.

Immediately, however, I found something interesting: If both seeds are even, then the whole sequence of numbers would be even. For example, take the following Fibonacci PRNG:

S1 = 6, S2 = 8, mod 10
6 8 4 2 6 8 4 2 6 8 4 2 ...

There are two problems happening here- first, the period of the PRNG is 4 digits- 6, 8, 4, & 2. Second, because even numbers were chosen for the seeds, even numbers are the only possibility for the PRNG. So, either one of the seeds or the modulus must be odd, or the PRNG algorithm needs to be modified.
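
Here is a quick sketch with the same parameters as the example above. It makes both problems visible- every generated value is even, and the sequence repeats with a period of 4:

a, b, M = 6, 8, 10             # both seeds even, as above
outputs = []
for _ in range(12):
    a, b = b, (a + b) % M      # naive Fibonacci PRNG: S_n = S_(n-1) + S_(n-2) mod M
    outputs.append(b)
print(outputs)                 # [4, 2, 6, 8, 4, 2, 6, 8, 4, 2, 6, 8]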

At this point, I threw my hands up in the air, and said "screw it". I decided to see what history had discovered with simple PRNGs. Turns out, I wasn't far off. A Fibonacci sequence PRNG exists called the Lagged Fibonacci Generator. Here is how it works:

S_n = S_(n-j) ⊙ S_(n-k) mod M, where 0 < j < k

Where "⊙" is any binary function, such as addition, subtraction, multiplication, or even the bitwise exclusive-or.

First off, it doesn't address the "all evens" problem with my naive generator. If addition is used to calculate the values, then at least one number in the seed must be odd. If multiplication is used, then all k elements in the seed must be odd. What is interesting about this generator, however, is that rather than picking the first and second elements of the list to calculate the random value (S_(i-1) and S_(i-2)), any j-th and k-th items in the list can be used (S_(i-j) and S_(i-k)). However, you must have at least k elements in the list as your seed before beginning the algorithm.

To simplify things, let's pick "j=3" and "k=7", with addition mod 10. I need at least seven elements in the list, and at least one of them must be odd. I've always liked the phone number "867-5309", so let's use that as our seed. Thus, the first 10 steps of our generator would look like this:

j=3, k=7, mod 10 addition

        [j]       [k]
 1. 8 6 [7] 5 3 0 [9] => 7+9 = 6 mod 10
 2. 6 7 [5] 3 0 9 [6] => 5+6 = 1 mod 10
 3. 7 5 [3] 0 9 6 [1] => 3+1 = 4 mod 10
 4. 5 3 [0] 9 6 1 [4] => 0+4 = 4 mod 10
 5. 3 0 [9] 6 1 4 [4] => 9+4 = 3 mod 10
 6. 0 9 [6] 1 4 4 [3] => 6+3 = 9 mod 10
 7. 9 6 [1] 4 4 3 [9] => 1+9 = 0 mod 10
 8. 6 1 [4] 4 3 9 [0] => 4+0 = 4 mod 10
 9. 1 4 [4] 3 9 0 [4] => 4+4 = 8 mod 10
10. 4 4 [3] 9 0 4 [8] => 3+8 = 1 mod 10

Generated: 6 1 4 4 3 9 0 4 8 1

The following Python code should verify our results:

j = 3
k = 7
s = [8, 6, 7, 5, 3, 0, 9]        # the seed: at least k elements, at least one odd

for n in xrange(10):
    out = (s[j-1] + s[k-1]) % 10 # the pseudorandom output
    s = s[1:] + [out]            # shift the window and append the new value
    print out,                   # print the result

Running it verifies our results:

$ python lagged.py
6 1 4 4 3 9 0 4 8 1

It's a "lagged" generator, because "j" and "k" lag behind the generated pseudorandom value. Also, this is called a "two-tap" generator, in that you are using 2 values in the sequence to generate the pseudorandom number. However, a two-tap generator has some problems with randomness tests, such as the Birthday Spacings. Apparently, creating a "three-tap" generator addresses this problem. Such a generator would look like:

S_n = S_(n-j) ⊙ S_(n-k) ⊙ S_(n-l) mod M, where 0 < j < k < l

Even though this generator isn't cryptographically secure (hint: it's linear), it meets the above requirements for a good PRNG, provided the "taps" are chosen carefully (the lags are exponents of a primitive polynomial), and the modulus is our traditional power of 2 (2^M, such as 2^32 or 2^64). Supposing we are using a two-tap LFG, it would have a maximum period of:

(2^k - 1) * k          if exclusive-or is used
(2^k - 1) * 2^(M-1)    if addition or subtraction is used
(2^k - 1) * 2^(M-3)    if multiplication is used (1/4 the period of the additive case)

For a good LFG, it has been found that a three-tap generator should be used, as a 3-element spacing correlation can be found in two-tap generators, and that the taps should be large for a large modulus. Further, the full mathematical theory hasn't been worked out for Fibonacci generators, so the quality of a given generator rests mostly on the statistics of its output, and on randomness tests.
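
As a quick illustration of the three-tap idea, here is a small Python sketch in the same style as the two-tap code above. The taps (3, 5, 7) and the "8675309" seed are arbitrary picks of mine for demonstration, not a carefully chosen parameter set:

j, k, l = 3, 5, 7                  # three taps, counted as positions in the list, as above
m = 10
s = [8, 6, 7, 5, 3, 0, 9]          # seed: at least l elements, at least one odd

outputs = []
for n in range(10):
    out = (s[j-1] + s[k-1] + s[l-1]) % m   # three-tap additive LFG
    s = s[1:] + [out]                      # shift the window, append the new value
    outputs.append(out)
print(outputs)                             # [9, 4, 6, 5, 8, 3, 2, 6, 4, 4]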

However, this is simple enough to do by hand, if for nothing else than to impress your friends.

Financially Supporting Open Crypto

In April 2014, Heartbleed shook the Internet. OpenSSL had introduced a feature called "TLS Heartbeats". Heartbeats allow an encrypted session to remain open between the client and the server, without the need to renegotiate a new connection. In theory, the feature is sound. Heartbeats should minimize load on busy servers, and improve responsiveness for the client. However, due to a simple oversight in the code, buffers could be over-read, allowing the client to request far more data from the server's memory than it should. As a result, usernames and passwords cached in the server's memory could be leaked to the client.

This was a nasty bug, and it underscored how under-staffed and under-funded the OpenSSL development team is. OpenSSL is the de facto standard in securing data in motion for the Internet. It protects your web connections when visiting your bank's website, and it protects your email communication between your email client and the upstream mail server.

Ars Technica ran an article about tech giants finally agreeing to fund OpenSSL development. Quote:

The open source cryptographic software library secures hundreds of thousands of Web servers and many products sold by multi-billion-dollar companies, but it operates on a shoestring budget. OpenSSL Software Foundation President Steve Marquess wrote in a blog post last week that OpenSSL typically receives about $2,000 in donations a year and has just one employee who works full time on the open source code.

If that isn't bad enough, Werner Koch, the sole developer and maintainer of the encryption software GnuPG, is in much the same position as Steve Marquess. ProPublica put up a post about the very sobering financial situation of GnuPG. Quote:

The man who built the free email encryption software used by whistleblower Edward Snowden, as well as hundreds of thousands of journalists, dissidents and security-minded people around the world, is running out of money to keep his project alive.

Werner Koch wrote the software, known as Gnu Privacy Guard, in 1997, and since then has been almost single-handedly keeping it alive with patches and updates from his home in Erkrath, Germany. Now 53, he is running out of money and patience with being underfunded.

To understand just how critical this piece of software is to the Internet and the community at large, consider that OpenPGP (the specification upon which GnuPG is built) is used by software developers around the world to prove the integrity of their software when you download it from their websites. It's used by operating system vendors, such as Microsoft, Apple, Google, and GNU/Linux distributions, to provide package integrity when installing "apps" on your computer or mobile device. People and corporations have used it internally for data at rest as well, such as encrypting backups before sending them offsite.

Thankfully, after ProPublica published their article, Werner Koch, father and husband, got the donation funding he needed to continue focusing on it full time. Thanks to Facebook and Stripe, he has $100,000 of annual sponsored donations to help keep the development of GnuPG pressing forward.

Why is it that the two most fundamental cryptographic tools in our community are so underdeveloped, underfunded, and understaffed? I can understand that cryptography is hard. There is a reason people get doctorate degrees in mathematics and computer science to understand this stuff. But with such critical pieces of infrastructure protection, you would think they would be getting much more attention than they are.

A good rule of thumb for cryptography is this: if you want to protect your data in transit, use OpenSSL; if you want to protect your data at rest, use GnuPG. Let's hope that these two projects get the attention and funding they need to continue well into the future.

If you want to help donate to these two projects, you can donate to GnuPG here and to OpenSSL here. Alternatively, there is a Flattr donation page for GnuPG where you can setup recurring donations here.

Reasonable SSH Security For OpenSSH 6.0 Or Later

As many of you have probably seen, Stribik András wrote a post titled Secure Secure Shell. It's made the wide rounds across the Internet, and has seen a good, positive discussion about OpenSSH security. It's got people thinking about their personal SSH keys, as well as the differences between ECC and RSA, why the /etc/ssh/moduli file matters, and other things. Because of that post, many people who use SSH are increasing their security when they get online.

However, the post does one disservice- it requires OpenSSH 6.5 or later. While this is good, and people should be running the latest stable release, there are many, many older versions of OpenSSH out there that are still supported by their distros, such as Debian GNU/Linux 7.8, which ships OpenSSH 6.0. Most people will be using the release that ships with their distro.

As a side note, CentOS 5 ships OpenSSH 4.3, and CentOS 6 ships OpenSSH 5.3. Because these are very old releases, and CentOS is still providing support for them, you will need to check the man pages for OpenSSH, and see how your client and server configurations need to be adjusted. It won't be covered here.

So, with that in mind, let's look at OpenSSH 6.0, and see what it supports.

OpenSSH 6.0 Ciphers

The following is the default order for symmetric encryption ciphers:

  1. aes128-ctr
  2. aes192-ctr
  3. aes256-ctr
  4. arcfour256
  5. arcfour128
  6. aes128-cbc
  7. 3des-cbc
  8. blowfish-cbc
  9. cast128-cbc
  10. aes192-cbc
  11. aes256-cbc
  12. arcfour

CTR mode should be preferred over CBC mode whenever possible. It can be executed in parallel, and it seems to be the "safer" choice over CBC, although its security margin over CBC is probably minimal. The internal mechanisms are simpler, which is why modes like EAX and GCM use CTR internally. With that said, CBC mode is not "unsafe", so there is no strong security argument to avoid it. However, both modern and older OpenSSH implementations support CTR mode, so there really is no need for CBC.

The "arcfour" protocols are "alleged RC4", but adhere to the RC4 RFC. RC4 has been showing weaknesses lately. Cryptographers have been advising to move off of it, PCI vendors will fail scans with SSL implementations that support RC4, and OpenBSD 5.5 switched to a modified ChaCha20 for its internal CSPRNG. So, it's probably a good idea to move away from the arcfour ciphers, even if it may not be practically broken yet.

However, arcfour is really the only high performance cipher in the OpenSSH 6.0 suite, and it is very handy when trying to transfer many gigabytes of data over the network, as AES will pin the CPU before flooding the pipe (unless of course you have hardware AES on board). So, I would recommend the arcfour ciphers as a last resort, and only enable them on private networks where you need the throughput.

The cast128 cipher comes from the CAST family (its successor, CAST-256, was an AES candidate), and CAST-128 is approved for Canadian government use. To my knowledge, it does not have any near-practical security attacks. However, because only CBC mode is supported with CAST, and not CTR mode, and we're disabling CBC mode, it is not included in our final list.

3DES was designed to address the short 56-bit key size of DES, which was later replaced by AES. 3DES cascades DES three times, with three distinct 56-bit keys. 3DES also does not have any near-practical security attacks, and it is believed to be secure. However, DES was designed with hardware in mind, and is slow, slow, slow in software. 3DES is three times slower still. It's horribly inefficient. As such, I would recommend disabling 3DES.

Blowfish was designed by Bruce Schneier as a replacement for DES. While Blowfish might still have a considerable security margin, it suffers from weak-key attacks, so Blowfish implementations must be careful when selecting keys. Blowfish can be efficient in both hardware and software, but it's usually less efficient than AES. Further, Bruce himself recommends that people stop using Blowfish and move to its successor Twofish, or even Threefish. As such, because both stronger and more efficient algorithms exist, I would recommend disabling Blowfish. It really isn't offering anything to OpenSSH clients.

So, in my opinion, I would sort my OpenSSH 6.0 ciphers like so:

  1. aes256-ctr
  2. aes192-ctr
  3. aes128-ctr
  4. arcfour256
  5. arcfour128
  6. arcfour

OpenSSH 6.0 Key Exchange

The following is the default order for key exchange algorithms:

  1. ecdh-sha2-nistp256
  2. ecdh-sha2-nistp384
  3. ecdh-sha2-nistp521
  4. diffie-hellman-group-exchange-sha256
  5. diffie-hellman-group-exchange-sha1
  6. diffie-hellman-group14-sha1
  7. diffie-hellman-group1-sha1

The NIST curves are considered to be insecure- not because NIST is a government agency tied to the NSA, but because the curves are not rigid, they lack constant-time single-coordinate single-scalar multiplication, they aren't complete, and their points are distinguishable from uniform random strings. If you want to blame the NSA for rubber-stamping and backdooring the NIST ECC curves, fine. I'll stick with the crypto.

And, although the gap is closing on SHA-1's security margin, some commercial SSH providers, such as Github, may still require it for your SSH client. So, in your client config, I would put the preference on SHA-256 first, followed by SHA-1. On your own personal servers, you can disable SHA-1 support completely.

Thus, I would recommend the following key exchange order:

  1. diffie-hellman-group-exchange-sha256
  2. diffie-hellman-group-exchange-sha1
  3. diffie-hellman-group14-sha1
  4. diffie-hellman-group1-sha1

OpenSSH 6.0 Message Authentication Codes

The following is the default order for message authentication codes:

  1. hmac-md5
  2. hmac-sha1
  3. umac-64@openssh.com
  4. hmac-ripemd160
  5. hmac-sha1-96
  6. hmac-md5-96
  7. hmac-sha2-256
  8. hmac-sha256-96
  9. hmac-sha2-512
  10. hmac-sha2-512-96

Things get interesting here, because with HMAC algorithms, successful attacks require breaking the preimage resistance of the underlying cryptographic hash. This requires a complexity of 2^n, where "n" is the output digest size in bits- 128 bits for MD5, and 160 bits for SHA-1. All currently known attacks on MD5 and SHA-1 are collision attacks, not preimage attacks. Collision attacks require a complexity of only 2^(n/2). Thus, for MD5, collision attacks require a complexity of only 64 bits at worst, and for SHA-1, 80 bits. However, as we now know, MD5 collision resistance is fully broken in practical time with practical hardware. SHA-1 still remains secure, although its collision resistance has been weakened to roughly 61-65 bits. This is almost practical.

Regardless, HMAC-MD5 and HMAC-SHA1 remain secure, with wide security margins, due to their preimage resistance. The only concern, however, is that a hash function's collision resistance falls before its preimage resistance does. Because MD5 is broken in this regard, and SHA-1 is almost broken, it is advised to move away from any protocol that relies on MD5 or SHA-1. As such, even though HMAC-MD5 and HMAC-SHA1 remain very secure today, it would be best to disable their support. Interestingly enough, even though RIPEMD-160 has the same digest output space as SHA-1, it has no known collision weaknesses, and remains secure today, almost 20 years since its introduction.

Due to the almost practical collision attacks on SHA-1, with a complexity of 61-65 bits, UMAC-64 probably does not have a wide enough security margin. As such, it should probably be disabled.

I would recommend the following order for your MACs:

  1. hmac-sha2-512
  2. hmac-sha2-256
  3. hmac-ripemd160

OpenSSH 6.0 Configuration

Okay. Now that we have everything ironed out for hardening our OpenSSH 6.0 connections, let's see how this would look in the client and on the server. Both the client config and the server config should support algorithms for both OpenSSH 6.0 and 6.7.

For an OpenSSH 6.0 client, I would recommend this config:

# OpenSSH 6.0 client config
Host *
    Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour
    KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160

For an OpenSSH 6.0 server, I would recommend this config:

# OpenSSH 6.0 server config
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160

Going back now to Stribik András' post, here is what your configurations would look like for OpenSSH 6.7:

For an OpenSSH 6.7 client, I would recommend the config below. ChaCha20-Poly1305 is a high performance cipher, similar in speed to RC4, so we prefer it as our first cipher, with AES following, and RC4 disabled entirely:

# OpenSSH 6.7 client config
Host *
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

For an OpenSSH 6.7 server, I would recommend this config (also disabling SHA-1 from the key exchanges):

# OpenSSH 6.7 server config
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Conclusion

It's important that you pay attention to the versions of the clients and servers that you are using, so you can accurately set your configuration. In this case, we looked at what would be necessary to support OpenSSH versions 6.0 and 6.7. There may be slight differences in versions between those two, and you'll need to make the necessary adjustments.

Verifying Keybase Identities

When using Keybase, occasionally, people will track your identity. This has cryptographic value. Your identity on Keybase is based on what you do online and how long you have done it. As people track you, they cryptographically sign your Keybase identity. This creates a snapshot in time that states you've taken the precautions to verify the identity, by checking the digital signature of each of their online proofs. This snapshot is frozen in time, and as more and more people track your identity, the stronger the statement of the validity of that identity. In other words, Keybase complements the PGP Web of Trust, without actually replacing key signing parties, or actually signing PGP keys.

In this post, I want to discuss what it takes to verify the signatures of Keybase identity proofs, so you can verify that Keybase isn't doing anything sneaky with the data. I am going to verify the identity proofs of a friend of mine, Joshua Galvez, as an example of how to verify each identity proof out-of-band (not using the Keybase client software).

First, all identity proofs are stored in JSON, which is a standardized format. The JSON object is cleanly formatted for easy readability, so you can examine what has been signed, and exactly what you are verifying. Nothing should be hidden up Keybase's sleeves. To start, I am going to navigate to Josh's Keybase identity page. I see that he has proved he owns a Twitter account, a Github account, a reddit account, and a personal website, all with his personal OpenPGP key.

To verify the proofs, I need to get a physical copy of the statement. Again, I am going to do this all out-of-band, away from the Keybase client software. As such, I'll copy and paste each statement proof into a text editor, and save it to disk, as well as each PGP signature. I'll do this with his Twitter account as an example.

Because of the brevity of Twitter, a full JSON object with a PGP signature can't be sent. So, Keybase keeps this proof on their server, with a link in the tweet pointing to the proof, and we'll need to get it from there. The link in his tweet points to https://keybase.io/zevlag/sigs/0Pl859RFLHZuEi7ozQyrbT1cphZCxYQMuoyM. There is a "Show the proof" link on the page, which gives me all the necessary data for verifying his identity. All I need is his JSON object and his PGP signature. I need to combine them in a single file, and save it to disk. As such, my file will look like this:

{
   "body": {
      "client": {
         "name": "keybase.io node.js client",
         "version": "0.7.3"
      },
      "key": {
         "fingerprint": "12c5e8619f36b0bb86b5be9aea1f03e20cf2fdbd",
         "host": "keybase.io",
         "key_id": "EA1F03E20CF2FDBD",
         "uid": "2b26e905f5b23528d91662374e840d00",
         "username": "zevlag"
      },
      "service": {
         "name": "twitter",
         "username": "zevlag"
      },
      "type": "web_service_binding",
      "version": 1
   },
   "ctime": 1416507777,
   "expire_in": 157680000,
   "prev": null,
   "seqno": 1,
   "tag": "signature"
}
-----BEGIN PGP MESSAGE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: GPGTools - https://gpgtools.org

owGbwMvMwMVYnXN9yvZMuQbG0wdeJjGE5BnOrlZKyk+pVLKqVkrOyUzNKwGx8hJz
U5WslLJTK5MSi1P1MvMV8vJTUvWyihWganSUylKLijPz84CqDPTM9YyVanVAykGa
0zLz0lOLCooyQWYpGRolm6ZamBlaphmbJRkkJVmYJZkmpVompiYaphkYpxoZJKcZ
paUkpQCNzMgvLkGxVQlsZnxmClDU1dHQzcDY1cjA2c3IzcXJBShXCpYwSjIyS7U0
ME0zTTIyNjWySLE0NDMzMjY3SbUwMUgxMAApLE4tgnqpKrUsJzEd5FqgWFlmciqS
d0vKM0tKUotwaSipLAAJlKcmxUP1xidl5qUAfYscHIZAlcklmSC9hiaGZqYG5kCg
o5RaUZBZlBqfCVJham5mYQAEOkoFRallSlZ5pTk5IPcU5uUDZYEWAe2zUirOTM9L
LCktSlWq7WSSYWFg5GJgY2UCxRgDF6cALB6Z5wowNEXaavBftOToU3lx8YCcUIzU
Y5FN3LxXjrbNOum2oiQyrMMszfWM5Kz+D3O5ZZJabOUO2v99UPBMRmqZy6ZLdh0n
1t92OPT+7ILwL2+Y+rfoLDJxufXh7ykfZos1L66fVhc+e1HOw9rEuChW+eBJkbCE
3y42k3yXNJ5sSpDc9Ujhcewsw3nuM86G/8tbzGUo9OSERcif0wou4Qtnnj2cs6Nz
nk3qLUvHWRfX753v+mPlNMm2hXZbbjzIF3XaK93JZj/5im6QyA3fjy6CbKGtqi1X
Am+d+ljtGD0h5MPRDLPG3Xtmiqps3Hfo7fGvUUwF6gnzHm7xdD42banr8wMHfZ6q
a26o3nPras+GHwERx898vvTq14lfxgKc7wXOXEl30bN7KTa/TERMXK2nafZT9qOb
7sSvSnjLsj9jw1nVGaKcvtd8855Xr6/eySwtsjPOapVT85JPisu3l/ten/XqxPa8
mRxWAvrLdl9NXNS46zYPW61MwJQPYQfYN3DxTX3ucMO5Qd/EftWeF0depDlsme7S
pTmr0Eey8sqkPo9f63/yfFgg9GPP3PY/22KP7+2Ifm7/N6FZ95xh2bXXM9dPvJ79
gCVVbH5m4OKOTI3aD9bWnVekglJLpwboJb6dfPC4V0vmpgWxb0w49JfE3j8fHHNa
jTnvgo3T+vzZ4mcOG06+/2CpebSKtmlTyIyYPzYbDXWczSPVPN28Yhr7rwIA
=V5X5
-----END PGP MESSAGE-----

I'll save this to disk as /tmp/zevlag-twitter.txt

Now, I just need Josh's public PGP key imported from a key server. I can, and should, use Keybase here. Instead of using the MIT PGP key server, and running the risk of getting the wrong key, I can be reasonably confident I will get the correct key from Keybase. The raw public key can be accessed by appending "key.asc" to the end of the identity URL- in this case, https://keybase.io/zevlag/key.asc. So, I'll grab it via the shell:

$ wget -O - https://keybase.io/zevlag/key.asc 2> /dev/random | gpg --import -

Now that I have Josh's public key imported into my GPG public key ring, I am ready to verify Josh's Twitter proof of identity:

$ gpg --verify /tmp/zevlag-twitter.txt
gpg: Signature made Thu 20 Nov 2014 11:23:23 AM MST using RSA key ID B7691E80
gpg: Good signature from "Joshua Galvez <josh@zevlag.com>"
gpg:                 aka "Joshua Galvez (Work - Emery Telcom) <jgalvez@emerytelcom.com>"
gpg:                 aka "keybase.io/zevlag <zevlag@keybase.io>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 12C5 E861 9F36 B0BB 86B5  BE9A EA1F 03E2 0CF2 FDBD
     Subkey fingerprint: DC35 E3CF 1179 41A9 7D72  BC9A 7B6C D794 B769 1E80

At this point, I can confirm that the owner of the private key for 0xEA1F03E20CF2FDBD cryptographically signed a JSON object for Twitter. Further, that individual has access to the Twitter account, so the signature can be posted. After verifying the other accounts, I can be reasonably confident that the individual is who they claim- Josh Galvez. Otherwise, an attacker has successfully compromised all of Josh Galvez's online accounts, as well as his OpenPGP key (or forged a new one), and either compromised his Keybase account, or created one masquerading as him. The former seems more likely than the latter. Further, because I have previously met with and engaged online with Josh, I have no doubt that this is indeed Josh Galvez, and 0xEA1F03E20CF2FDBD is indeed his public key.

So, I can now track Josh through Keybase, which means cryptographically signing his Keybase identity, and creating a snapshot in time that says "I am reasonably sure this is Josh Galvez, these accounts are part of his online presence, and 0xEA1F03E20CF2FDBD is his OpenPGP key." Staying out of band from the Keybase client software, I can do this entirely with curl(1) and gpg(1).

Navigating to his Keybase identity, I'll click the "Track zevlag" button. A pop-up displays with the following options:

  • in the browser
  • command line with keybase
  • command line with [bash + GPG + cURL]

I have not integrated an encrypted copy of my private key with Keybase, so tracking Josh in the browser is unavailable to me. Further, I wish to do this out-of-band from Keybase anyway, so I'll select "command line with [bash + GPG + cURL]" and click "Continue". This displays that I need to copy and paste the following content into my shell:

echo '{"body": (... large JSON object snipped ....) }' | \
gpg -u 'e0413539273a6534a3e1925922eee0488086060f' -a --sign | \
perl -e '$_ = join("", <>); s/([^\w\.@ -])/sprintf("%%%2.2x",ord($1))/eg; s/ /+/g; print("sig=", $_)' | \
curl -d @- \
  -d type=track \
  -d session=lgHZIDg3ZWNjY2NiNTRiMTBiNThjOTQ2NDJhODA3MzM2NjAwzlSh4WnOAeEzgNkgZjZmNWVmZDg4YzcwZDI2NDNlZGY2ZWYyYTc3M2IyMDLEIM0QqHGrtfga4a%2Bnz7soXFHqFbbiio7PaVGjh7DfyyPG \
  -d csrf_token=lgHZIDg3ZWNjY2NiNTRiMTBiNThjOTQ2NDJhODA3MzM2NjAwzlSkLg7OAAFRgMDEIA8egS4XVUzH%2BkPY8pMJbmMFiN3%2BAdZEdTm7Buvm551L \
  -d plain_out=1 \
  -d uid=2b26e905f5b23528d91662374e840d00 https://keybase.io/_/api/1.0/follow.json

After entering that into my shell and hitting enter, I am prompted for the passphrase to my private key, which in turn signs the object and uses the Keybase API to post the result. I can then reload my profile, and see that I am now tracking Josh with Keybase. This means that at this point in time, I have made a cryptographic statement regarding the key ownership and identity of Joshua Galvez. Of course, I can revoke that statement at any time, if for any reason I believe his account has become compromised, he himself has become untrustworthy, or for other reasons.

Keybase and The PGP Web of Trust

Recently, I have been playing with my Keybase account, and I thought I would weigh in on my thoughts about it compared to the PGP Web of Trust (WoT).

The PGP WoT tries to solve the following two problems directly:

  1. You have the correct key of the person to whom you wish to communicate.
  2. You have verified that the owner of that key is who they claim to be.

These two problems are solved through key signing parties. Two or more people will meet up, exchange key fingerprints, then verify personal identity, usually through government issued identification. Unfortunately, the PGP WoT is complex, and in practice, rarely, if ever used. The idea behind using the PGP WoT is this:

  • I have verified Adam's identity and confirmed I have his correct key.
  • I cryptographically signed his key as a statement of this verification.
  • Adam cryptographically signed Bruce's key, issuing a similar statement.
  • I haven't met Bruce, but I have met Adam, and trust him.
  • Through Adam, I can make a statement about Bruce's claim to identity.

In practice, if I wished to communicate securely with Bruce, I would see if Bruce's key has signatures of individuals that I have cryptographically signed. If so, I can make a weak statement about his identity, and the ownership of his key through that signature. From that standpoint, I can then determine if I wish to communicate securely with Bruce, or not.

Since using GnuPG these past 10 years, I have probably really used the PGP WoT only 2-3 times. Other than that, it makes for a sweet-looking directed graph.

Keybase is not a PGP WoT replacement. That is, it's not here to replace key signing parties, and it's not a tool for signing others' keys. However, Keybase does make strong statements regarding key ownership and identity. In fact, Keybase has given up on the PGP WoT entirely. Rather than validating government issued identification cards in person, Keybase solves identity through online social proofs- what you have accomplished online, and how long you have been using those accounts.

Looking first at online accomplishments: when a user signs up for an account at Keybase, they need to prove the identities that they own on the web. This is done by inserting some text into the online account, cryptographically signing it with your private PGP key, and storing the signature at Keybase. This establishes a relationship between the owner of the PGP key and the online account. The more online accounts that the user can establish, the stronger the proof of identity for that individual.

Currently, accounts can be:

  • Twitter
  • Reddit
  • Hacker News
  • Coinbase
  • Github
  • Websites

For each of these accounts, I can pull down the notice, and verify the signature. Thus, each online account becomes coupled with the owner's PGP key. But, it's important to understand that this is making a statement of online activity. IE- "This is my Twitter account @AaronToponce, and I am Aaron Toponce."

Once the accounts have been proved, you can then make statements about other identities through "tracking". Tracking on Keybase is similar to "following" on other social sites, but it's actually cryptographically useful. Each account has a database object of their online identities (all cryptographically signed remember), among other data, including who they are tracking, and who is tracking them.

When you track someone, you cryptographically sign their identity with your personal PGP key. The previous signature is part of that identity, as well as the current signature. Each time someone is tracked, their identity gets cryptographically updated, and anyone can see when those signatures took place. Think of tracking like cryptographic snapshots, or digital photographs.

Tracking is useful for people with whom you wish to communicate, or whom you are interested in "following" online. By looking at the previous snapshots, you can get a sense of the age of that account. The older the account, and the more people tracking it, the stronger the statement of identity, and the stronger the evidence that the account has not been compromised. Should the account get compromised at any time, people can revoke their tracking snapshot, thus removing their statement of identity.

Will Keybase improve the overall PGP WoT? I hope so. Currently, the accounts that you can make verifiable proofs with are limited, and you'll notice the Big Players like Google, Facebook, and Pinterest are missing. Keybase is currently in limited, invite-only alpha testing, so it makes sense why those accounts have not been brought into the system yet. However, Keybase will remain only a "geek it up" thing until those services are included in identity proofs. So, if Keybase wants to improve things with PGP in general, it must get those accounts on board, or it won't make a ripple in the world at large.

Oh, and the Keybase client is Free Software.

SHA512crypt Versus Bcrypt

On the Internet, mostly in crypto circles, you'll see something like the following in a comment, forum post, on a mailing list, or otherwise:

Do not use fast hashes to store passwords on disk. Use bcrypt.

In most cases, however, the understanding of why to use bcrypt isn't entirely clear. You'll hear the standard answer "It's slow", without a real understanding as to "how slow?" or "why is it slow?". In this post, I want to explain why bcrypt is slow, cover some misconceptions about using fast hashes, and show where the real strength of bcrypt lies (hint- it's not speed). Finally, I'll close with an alternative that many are starting to talk about as a possible replacement for bcrypt.

First, when people are talking about using bcrypt for password hashing, they are referring to the bcrypt cryptographic key derivation function, designed by Niels Provos and David Mazières. Bcrypt is designed to be intentionally slow and expensive. It was designed specifically with password storage in mind. The motivation is clear- if a password database of any kind is leaked to the Internet, it should be cost prohibitive for password crackers to make any sort of progress recovering the unknown passwords from the known hashes.

bcrypt algorithm
How does bcrypt work though? What is the algorithm? According to the paper, the core bcrypt function in pseudocode is as follows:

bcrypt(cost, salt, input)
    state = EksBlowfishSetup(cost, salt, input)
    ctext = "OrpheanBeholderScryDoubt" //three 64-bit blocks
    repeat (64)
        ctext = EncryptECB(state, ctext) //encrypt using standard Blowfish in ECB mode
    return Concatenate(cost, salt, ctext)

The first function, "EksBlowfishSetup(cost, salt, input)" in the algorithm is defined as follows:

EksBlowfishSetup(cost, salt, key)
    state = InitState()
    state = ExpandKey(state, salt, key)
    repeat (2^cost) // exponential cost by powers of 2
        state = ExpandKey(state, 0, key)
        state = ExpandKey(state, 0, salt)
    return state

In the "EksBlowfishSetup", you'll notice the "repeat" step uses a binary exponential parameter. As the cost is increased, the time it will take to finish the algorithm will take exponentially longer. Bcrypt was designed with this cost parameter to adjust for Moore's law. As computing strength continues to improve, bcrypt should be flexible in its design to adjust for those advancements. This is why the cost parameter is baked into bcrypt, and why people call it "slow".

Finally, you'll notice the "ExpandKey(state, salt, key)" function in the algorithm. It is defined as follows:

ExpandKey(state, salt, key)
    for(n = 1..18)
        P_n = key[32(n-1)..32n-1] XOR P_n //treat the key as cyclic
    ctext = Encrypt(salt[0..63])
    P_1 = ctext[0..31]
    P_2 = ctext[32..63]
    for(n = 2..9)
        ctext = Encrypt(ctext XOR salt[64(n-1)..64n-1]) //encrypt using the current key schedule and treat the salt as cyclic
        P_(2n-1) = ctext[0..31]
        P_2n = ctext[32..63]
    for(i = 1..4)
        for(n = 0..127)
            ctext = Encrypt(ctext XOR salt[64(n-1)..64n-1]) //as above
            S_i[2n] = ctext[0..31]
            S_i[2n+1] = ctext[32..63]
    return state

Because bcrypt was based on Blowfish, the "ExpandKey(state, 0, key)" function used in the "EksBlowfishSetup" function is the same as regular Blowfish key schedule since all XORs with the all-zero salt value are ineffectual. The bcrypt "ExpandKey(state, 0, salt)" function is similar, but uses the salt as a 128-bit key.

Also, to clarify, a 128-bit salt is also baked into the algorithm, as you can see. This is to prevent the building of lookup tables for bcrypt, such as rainbow tables. Salts do not slow down crackers, and it's assumed that salts will be leaked with the database. All a salt provides is protection against using a precomputed hash lookup table to find the originating plaintext. Because salts are baked into bcrypt, bcrypt lookup tables will never exist. This forces password crackers to brute force each hash.
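
To see what the salt buys you, here is a small sketch (using plain SHA-512 purely for illustration- this is not how bcrypt mixes in its salt): the same password with two different salts produces two unrelated digests, so a single precomputed table can't cover both:

import hashlib
import os

password = b'password'
salt1, salt2 = os.urandom(16), os.urandom(16)          # two random 128-bit salts

print(hashlib.sha512(salt1 + password).hexdigest())
print(hashlib.sha512(salt2 + password).hexdigest())    # a completely different digest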

Understanding password security
There are a few key security elements related to passwords that you must understand. They are the following:

  1. The unpredictability measurement, aka "entropy", of the password provided by the user.
  2. The speed at which brute forcing passwords can commence.
  3. The cryptographic strength of the function itself.

I ordered these for a specific reason- the most likely "weak link" in the chain of password security is the password the user provides. The history of leaked password databases has shown us that. If users understood the real strength behind passwords, they would understand the basic concepts of entropy, even if they weren't familiar with the term itself. If entropy levels were high in all users' passwords, no matter what, then recovering passwords from hashes via brute force would be ineffective- no amount of practical computing power would be able to work fast enough, before the death of the Universe, to recover the user's password, and the speed at which crackers brute force their way through the hashes would no longer matter. But 70-80%, and more, of leaked password databases are recovered, because this simple concept isn't applied.

Sadly, this just isn't the case. People suck at picking passwords.
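
To put rough numbers on that, the entropy of a randomly generated password is usually estimated as its length times log2 of the size of the character set it's drawn from. A quick sketch (the character-set sizes below are my own illustrative assumptions, and this only applies to passwords chosen uniformly at random):

from math import log

def entropy_bits(length, charset_size):
    # bits of entropy for a password chosen uniformly at random
    return length * log(charset_size, 2)

print(round(entropy_bits(8, 26), 1))    # 8 lowercase letters: ~37.6 bits
print(round(entropy_bits(12, 94), 1))   # 12 printable ASCII characters: ~78.7 bits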

Key stretching
So, we need to compensate for users picking bad passwords, and bcrypt makes a great leap in this regard. Because of the cost parameter which is part of the algorithm, we can adjust the cost to make password hashing intentionally slow. And, as computing power increases, the cost parameter can continue to be adjusted to compensate. This is what most people understand, when they claim that "bcrypt is slow".

The argument is that cryptographic hashes are designed to be fast, fast, fast. And they're right. Cryptographic hash functions are designed to provide data integrity regardless of the size of the input. If I have a 4.7 GB CD image, I should be able to calculate its digest in reasonable time, so when I transfer the image to another computer, I can recalculate the digest, and compare that the two digests match, in reasonable time.

This would seem like a Bad Thing for password storage, because passwords are short (much shorter than 4.7 GB at least), so password crackers would be able to guess millions or billions of passwords per second using a fast cryptographic hash. You're right, for the most part. Ars Technica ran a story on password cracking with a 25-GPU cluster. It achieves a speed of 350 billion NTLM passwords per second, which means every conceivable 8-character Windows NTLM password can be recovered in less than 6 hours using this behemoth. It can work through MD5 at 180 billion per second, or SHA-1 at 63 billion per second.

At these speeds, the argument against using fast cryptographic hashes to store passwords sounds pretty strong. Except that the Ars Technica article, and most bcrypt fanboys, seem to overlook one thing- key stretching. Just because cryptographic hashes are fast doesn't mean we can't intentionally slow them down. This is key stretching.

Key stretching is the idea that you reuse internal state to calculate a new key. For cryptographic hash functions, this means "iterations" or "rotations": taking the digest of the input, and using that digest as the input for another hashing round. In pseudocode, you could think of it like this:

salt = random() // programmatically determine a salt randomly
password = input() // get the password from the user
key = ''
cost = 5000

for ROUND in 1 to cost: do
    digest = SHA512(salt, password,  key)
    key = digest
done

If our 25-GPU cluster could work through 50 billion SHA-512 hashes per second, then by forcing 5,000 SHA-512 calculations before arriving at the final hash, our 25-GPU cluster can now only work through 10 million passwords per second. As the iteration count is increased, the time it takes to calculate the resulting digest increases. As such, we have created a "sha512crypt" that has a similar cost parameter to bcrypt. Now the question remains- does it hold up?

Practical examples
To see if this "key stretching" idea holds up, I wrote two Python scripts- one using SHA-512, and the other using bcrypt. In both cases, I increase the cost parameter from a reasonable starting point, and increased it well beyond a reasonable expectation.

Here is my Python code for "test-sha512.py":

#!/usr/bin/python
import hashlib
password = b'password'
cost = 5000
key = ''
m = hashlib.sha512()
for i in xrange(cost):
    m.update(key+password)
    key = m.digest()

And here is my Python code for "test-bcrypt.py":

#!/usr/bin/python
import bcrypt
cost = 6
password = b'password'
salt = bcrypt.hashpw(password,bcrypt.gensalt(cost))
hash = bcrypt.hashpw(password, salt)

In both cases, I incremented the cost, then timed re-running the script. Of course, Python is an interpreted language, so the absolute times would be much lower if this were implemented in C, or assembly. Further, this was done on my aging T61 laptop; running it on my 8-core i7 workstation with triple-channel DDR3 would show improved times. But it's not the absolute times that are critical. What is critical is seeing the exponential back-off as the cost is increased.

Here is a table detailing my findings. Notice that the bcrypt cost increments by one each row; because it's binary exponential back-off, the times roughly double with each increment. I also adjusted the sha512crypt iteration count to more closely match the bcrypt timings, even though it's not a strict doubling at each cost value.

bcrypt cost   bcrypt time   sha512crypt iterations   sha512crypt time
6             0.032         5,000                    0.036
7             0.045         10,000                   0.047
8             0.064         20,000                   0.064
9             0.114         40,000                   0.105
10            0.209         80,000                   0.191
11            0.384         160,000                  0.368
12            0.745         320,000                  0.676
13            1.451         640,000                  1.346
14            2.899         1,280,000                2.696
15            5.807         2,560,000                5.347
16            11.497        5,500,000                11.322
17            22.948        11,000,000               22.546
18            45.839        22,000,000               45.252
19            1:31.95       44,000,000               1:30.14
20            3:07.27       88,000,000               3:07.52

In the Python bcrypt implementation, the default cost is "10". For most modern GNU/Linux operating systems, when storing the user password in /etc/shadow with sha512crypt (no, I didn't come up with the name), the default cost is 5,000 iterations. In both cases, the cost can be adjusted. In the case of the Python bcrypt module, you just pass a numerical argument to the function. In the case of GNU/Linux, you edit PAM by adding "rounds=" to a config file.
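
For example, glibc's crypt(3) accepts the "rounds=" parameter directly in the salt string, which is exactly what that PAM option controls. A quick sketch using Python's crypt module (Unix/glibc only; the salt and the 100,000 round count here are arbitrary placeholders of mine):

import crypt

# "$6$" selects sha512crypt; "rounds=" raises the cost above the 5,000 default
print(crypt.crypt('password', '$6$rounds=100000$0123456789abcdef'))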

As such, sha512crypt can be just as slow as bcrypt. Remember, we are trying to adjust for increased computing power that password crackers will have access to. In both cases, bcrypt and sha512crypt address that requirement.

Bcrypt's additional strength
So, if sha512crypt can operate with a cost parameter similar to bcrypt's, and can provide that exponential back-off we are looking for to slow down brute force searching, then what's the point of bcrypt? Are there any advantages to running it? It turns out there is one, and I suspect it's a consequence of the design, not something that was intentionally added.

What we would like is to prevent password crackers from using non-PC hardware to attack the password database. SHA-2 functions, such as SHA-512, work very well on GPUs, and they also work well on specialized hardware such as ASICs and FPGAs. As such, while we can make things slow for CPU or GPU crackers, password crackers with specialized hardware would still have the upper hand against the password database. Further, by addressing GPU cracking, and making it intentionally slow there, we make life more difficult for CPUs, which means hurting the honest user trying to log in to your web application. In other words, if I tuned sha512crypt against GPU crackers, such that only 1,000 passwords per second could be achieved on a GPU, that might mean a full second, or more, for the user logging into your CPU-bound server. This may or may not be desirable.

So, how does bcrypt attack this problem? Part of the algorithm requires a lookup table stored in RAM that is constantly modified during execution. This works out very well on a standard PC, where the CPU has exclusive access to RAM. It works out fairly poorly on a GPU, where the cores share the on-board memory, and each core must compete on the bus for access to it. As a result, any additional cracking speed is greatly diminished on a GPU compared to the CPU. In other words, GPU password cracking with bcrypt isn't terribly effective either.

For SHA-2 functions like SHA-512, however, this is not the case. SHA-2 functions use only simple 32-bit (or, for SHA-512, 64-bit) logic and arithmetic operations, which GPUs excel at. By using a GPU rather than a CPU for our sha512crypt function, a password cracker can gain a couple to many orders of magnitude of additional cracking power.

So, the reason to use bcrypt isn't because "it's slow". The reason to use bcrypt is because "it's ineffective on GPUs".

A better alternative- scrypt
Unfortunately for bcrypt, due to its low memory requirement, it can be implemented in a field programmable gate array (FPGA) or a custom ASIC. Understand that bcrypt was designed in 1999, when such specialized hardware had low gate counts, and was few and far between. 15 years later, times have drastically changed. Bitcoin ASICs with SHA-256 on board are commonplace. Hardware AES is common in CPUs and embedded systems. The fact of the matter is, these FPGAs, with their fast onboard RAM, are well suited to bring bcrypt password cracking well into "fast" territory.

An alternative would be a solution that not only requires memory to be constantly modified during algorithm execution, but also exponentially grows the memory requirement as the cost increases. scrypt addresses this shortcoming in bcrypt. Scrypt is another password-based key derivation function, initially designed in 2009 by Colin Percival as part of the Tarsnap online backup service.

Scrypt has all of the advantages that bcrypt provides- a baked-in salt, an exponential cost parameter, and ineffectiveness on GPUs- while also adding an exponential RAM requirement tied to the cost. Because of this RAM requirement, it is no longer cost efficient to build FPGAs or ASICs with the necessary RAM.
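
For reference, here is what calling scrypt looks like from Python's standard library (an assumption on my part that you have Python 3.6+ built against OpenSSL 1.1 or newer, where hashlib.scrypt is available). The "n" parameter is the CPU/memory cost and must be a power of 2; the values below are only illustrative:

import hashlib
import os

salt = os.urandom(16)
key = hashlib.scrypt(b'password', salt=salt, n=2**14, r=8, p=1, dklen=64)
print(key.hex())          # store the salt and parameters alongside the derived key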

Security, standards, and implementations
Scrypt is only 5 years young. This gives bcrypt a good 10 year head start. In terms of security, this is preferred. We want cryptographic functions that have withstood the test of time with cryptographers constantly attacking and analyzing their functions, primitives, and implementations. The longer it remains "unbroken", the more "secure" we deem the functions to be.

Bcrypt continues to be attacked and analyzed, and is showing no serious sign of weakness, 15 years later. This is good for the security of bcrypt. Scrypt however has seen less scrutiny, mostly due to its young age. However, it has been 5 years, and like bcrypt, no serious signs of weakness have been shown. By comparison, the SHA-2 family of functions was created in 2001, and has been scrutinized much more than bcrypt and scrypt combined, and also is not showing any serious signs of weakness. So, from a security standpoint, the SHA-2, bcrypt, and scrypt functions all seem to be fairly secure.

When looking at standards bodies, NIST has no publication on bcrypt or scrypt. They do recommend using PBKDF2, another key derivation function (which I haven't explained here, but love), for password storage, and NIST has standardized on SHA-2 for data integrity. Personally, while I like the ideas behind bcrypt and scrypt, I would recommend sticking with the NIST recommendations with high iteration counts, as shown above. Until we see more standards-body interest in bcrypt and scrypt, IMO, you are taking a risk using them for password storage (less so for bcrypt than scrypt, at least).
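
Since the recommendation here is PBKDF2 with a high iteration count, here is a minimal sketch using Python's standard library (hashlib.pbkdf2_hmac is available in Python 2.7.8+ and 3.4+; the 500,000 iteration count is an arbitrary illustrative choice- tune it to your own hardware):

import binascii
import hashlib
import os

salt = os.urandom(16)
iterations = 500000                      # arbitrary; tune so one derivation takes a noticeable fraction of a second
dk = hashlib.pbkdf2_hmac('sha512', b'password', salt, iterations)
print(binascii.hexlify(dk))              # store the salt, iteration count, and derived key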

Finally, because of the newness of scrypt, there are fewer tools for using it in programming languages than there are for bcrypt, and fewer still compared to SHA-2. Further, most programming languages don't include either bcrypt or scrypt in their "standard library" of modules or functions, while SHA-2 is generally found. And among those implementations, some 3rd party libraries are more trustworthy than others. Because you're dealing with password storage, it's critical you get this right.

Conclusion
While I love the algorithms behind bcrypt and scrypt, I've always advocated using high iteration counts with the SHA-2 or PBKDF2 functions. Even further is advocating that we teach people to understand the concepts behind their password entropy, and to improve their own online security. That is the weakest link, and it needs the most work, IMO.

So, if you ask me, I'll tell you to use either PBKDF2 or SHA-2 with high iteration counts. Or, if you absolutely must use bcrypt, then I'll recommend that you use scrypt instead.

Super Size The Strength Of Your OpenSSH Private Keys

In a previous post, about 18 months ago, I blogged about how you can increase the strength of your OpenSSH private keys by using openssl(1) to convert them to PKCS#8 format. However, as of OpenSSH version 6.5, there is a new on-disk format for private keys, as well as a new key type. The new key type is ed25519. Without going into the details of the strengths of ed25519 over RSA, I do want to identify a new encryption method for your private keys.

In previous versions of OpenSSH, if you provided a passphrase to encrypt your private key, it was converted into a cipher key by hashing it with MD5, and that cipher key was used to encrypt the private key. Unfortunately, as is usually the problem with hashed password storage, MD5 is fast, fast, fast. While MD5 is a cryptographic one-way hashing function, it can fall victim to rainbow table attacks, as well as plain old brute force. With a few GPUs, and the right software algorithm, it's not unheard of to try billions of passwords per second to find the correct hash, or in this case, the cipher key.

Key derivation functions (KDFs), however, can be resource intensive and slow. One in particular is the bcrypt pbkdf, which is built on the bcrypt one-way function. With OpenSSH 6.5, when generating ed25519 keys, the bcrypt pbkdf is the default function for creating the cipher key from your passphrase. To further protect you from brute force searches on your passphrase, ssh-keygen(1) applies 16 rounds of the bcrypt pbkdf before creating the cipher key which is used to encrypt the private key on disk. On my ThinkPad T61, this takes approximately 1/3 of a second to complete all 16 rounds, or about 50 rounds per second. This is a far cry from the millions of MD5 hashes per second I know my T61 can do.

However, this isn't even the bread and butter of the post: You can convert your existing keys to the new format with OpenSSH 6.5. This means your old DSA and RSA keys, and even the newer ECDSA keys, can all be converted to take advantage of the new format.

Further, you don't have to take the default 16 rounds when encrypting your key. Instead, you can increase that if you want to be a bit more paranoid. Suppose I wish to apply 100 rounds instead of the default 16- a factor of over 6x. To do this, for each of your private keys, run the following:

$ ssh-keygen -o -p -a 100 -f id_rsa
Enter old passphrase: 
Key has comment 'rsa w/o comment'
Enter new passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved with the new passphrase.

At this point, it will take approximately 2 seconds on my T61 to complete the rounds and encrypt the key. This can be verified when creating an SSH agent, and adding my key to the agent:

$ eval $(ssh-agent)
Agent pid 17202
$ ssh-add
Enter passphrase for /home/aaron/.ssh/id_rsa: 
Identity added: /home/aaron/.ssh/id_rsa (/home/aaron/.ssh/id_rsa)

When entering my passphrase, it takes a full 2 seconds to run through the rounds before I'm returned to a shell prompt. Of course, feel free to increase the rounds count. 1000 rounds would take me a full 20 seconds. Probably not practical for day-to-day use at work, but it could be applicable in other cases.

When you look at your private keys, with the old version, the header would look something like this:

-----BEGIN DSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,DF7C541751D59241F15DA424506137CE

If you converted your key to PKCS#8 with openssl(1), then your headers would look something like this:

-----BEGIN ENCRYPTED PRIVATE KEY-----
(base64 output)

However, with the new OpenSSH key format, encrypted keys now look like:

-----BEGIN OPENSSH PRIVATE KEY-----
(base64 output)

With the bcrypt pbkdf protecting the new on-disk storage format, the ability to adjust the number of rounds, and the ability to convert older keys, this is a big win for security. Well done, OpenSSH team!

UPDATE: It should be noted that when using this new on-disk encrypted format, your OpenSSH private key will no longer be compatible with openssl(1), as previously, the private key was stored in PEM format. Further, using the "ed25519" key type means using the new format automatically, as openssl(1) does not support the ed25519 algorithm.

Use /dev/random Instead Of /dev/null

While writing a shell script the other day, I was redirecting some output to /dev/null, as normal, when something dawned on me. Why don't I redirect my output to /dev/random instead? After all, both Linux random devices are writable by everyone on the system:

$ ls -l /dev/*random
crw-rw-rw- 1 root root 1, 8 Nov 13 15:14 /dev/random
crw-rw-rw- 1 root root 1, 9 Nov 13 15:14 /dev/urandom

Knowing what I know about the Linux cryptographic pseudorandom number generator (CSPRNG), I know that any bits put into the CSPRNG input pool are hashed with the SHA1 cryptographic hash function 512-bits at a time. This includes any data you redirect to it from the shell, as well as output from itself. When data is fed into the CSPRNG input pool, the RNG is reseeded.

To understand this concept of seeding an RNG, let's assume for a moment that the only source of input for the RNG is its own output. If this were the case, we would only need a starting value to "seed" the RNG, then let it run by hashing its own digests. In this scenario, each digest is chosen deterministically, and if we know the input that seeded the RNG, we can predict all of its future outputs.

Think of this scenario like a progress bar. For the SHA1 cryptographic hash, there are 2^160 possible unique digests. Theoretically, our RNG should be able to work through all 2^160 digests only once before starting over, provided there is enough time to do so (in practice the cycle will be shorter, and SHA1 has since been weakened with respect to collision attacks). However, when you change the input by providing something other than the next digest in the queue, you change the next starting point of the RNG. It's as though you've "skipped" to a non-sequential location in your progress bar.

Now, consider constantly reseeding your RNG. This is what your Linux system is actually doing. It's constantly processing timing events from disk IO, network packets, keyboard presses, mouse movements, etc. All these inputs get collected into an "entropy pool", which is then hashed with SHA1 512-bits at a time, as we mentioned earlier. This input changes the sequential ordering of the digest outputs, making the result unpredictable, and non-deterministic.

So, when working on the shell, by redirecting your output to /dev/random, you reseed the CSPRNG, meaning you have changed the digest output ordering to something different than what it would have been had you not redirected those bits. In fact, the more data you send to the CSPRNG, the more you reseed it, forever altering the path it takes on its "progress bar".

Now, you may ask why not have some userspace daemon running in the background that is always reseeding the CSPRNG? Sure, you can. In this case, I would recommend running Haveged on your system. Haveged will harvest many more hardware timing events than a default install will, and keep the entropy pool topped off at full. The CSPRNG will be constantly reseeded. However, for shell scripts, redirecting to /dev/random instead of /dev/null works.
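
The same thing can be done from a program rather than a shell redirect. A minimal Python sketch (as I understand the random driver, the written bytes are mixed into the input pool, though the entropy estimate is only credited when root uses the RNDADDENTROPY ioctl):

# Stir the kernel's input pool by writing to /dev/random.
# Any user may do this, since the device is world-writable.
with open('/dev/random', 'wb') as pool:
    pool.write(b'bytes I would otherwise have sent to /dev/null')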

My only concern with redirecting to /dev/random would be throughput. Doing a simple and crude benchmark comparing /dev/null to /dev/random, I get the following on my workstation:

$ for I in {1..5}; do dd if=CentOS-6.5-x86_64-bin-DVD1.iso of=/dev/random; done
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 81.3842 s, 54.9 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 76.4597 s, 58.4 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 74.6036 s, 59.9 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 75.4946 s, 59.2 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 74.375 s, 60.1 MB/s
$ for I in {1..5}; do dd if=CentOS-6.5-x86_64-bin-DVD1.iso of=/dev/null; done  
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 59.325 s, 75.3 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 56.5847 s, 79.0 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 54.4541 s, 82.1 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 56.0187 s, 79.8 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 57.0039 s, 78.4 MB/s

Seems I get slightly better throughput with /dev/null, which isn't surprising. So, unless you know you need the throughput, I would recommend sending your data to /dev/random over /dev/null.

The Bitmessage Proof Of Work

I've been on the Bitmessage network roughly since it was released- maybe only a month or two after. One thing that has intrigued me, although I've never really paid attention to it until now, is Bitmessage's proof-of-work puzzle.

A proof-of-work puzzle is a puzzle your computer solves, generally to gain access to some resource. Usually, the intention is either to prevent a denial of service attack, or to prevent spamming. Hashcash, which I have blogged about many times here, uses CPU stress to find a SHA1 digest with a certain number of leading zero bits. Its intent is to fight spam. The guided tour protocol is a proof-of-work puzzle to prevent denial of service attacks on a network or a server. It forces every client, regardless of resources, to add network latencies by making roundtrip connections to different servers before reporting back with the solution to the requested resource. There are other proof-of-work puzzles that require memory consumption before access is granted. The point is, a proof-of-work puzzle shows the requested resource, server, or destination that the client has spent valuable time solving a puzzle, proving they've done the necessary work. Mining Bitcoin works on this principle.

In the case of Bitmessage, as of protocol 3, the proof-of-work puzzle is a CPU stress test that is based on the size of the message being sent. The idea is to prevent spam from hitting inboxes. Unfortunately, the main client PyBitmessage is written in Python, so its ability to solve the proof-of-work is far slower than a compiled C/C++ implementation.

The proof-of-work is defined as follows:

target = \frac{2^{64}}{nonceTrialsPerByte \times \left(payloadLength + payloadLengthExtraBytes + \frac{TTL \times (payloadLength + payloadLengthExtraBytes)}{2^{16}}\right)}

Where:

nonceTrialsPerByte = 1000 (the default difficulty, set by the owner of the Bitmessage address during key generation and advertised as difficulty '1')
payloadLengthExtraBytes = 1000 (to add some extra weight to small messages)
payload = embeddedTime + encodedObjectVersion + encodedStreamNumber + encrypted
payloadLength = the length of payload, in bytes, + 8 (to account for the nonce which we will append later)
TTL = the number of seconds in between now and the object expiresTime (default is 28 days, plus or minus five minutes)

So, by default for most Bitmessage addresses, this equation can be roughly simplified as:

target = \frac{2^{64}}{1000 \times \left(payloadLength + 1000 + \frac{2419200 \times (payloadLength + 1000)}{2^{16}}\right)}

which can be further simplified to:

target = \frac{295147905179352825856}{606625 \times (payloadLength + 1000)}

With a small payload, say 100 bytes in size, our target becomes very large:

\frac{295147905179352825856}{606625 \times (100 + 1000)} \approx 442,309,956,621

The largest the payload can be is 256 kilobytes. Thus, our target becomes much smaller:

\frac{295147905179352825856}{606625 \times (262144 + 1000)} \approx 1,848,953,243

So, the larger the message, the smaller the target. Further, if you increase your difficulty on a new key from 1 to 2, then the "nonceTrialsPerByte" becomes 2000, instead of 1000. This drops the target to an even smaller number. This "target" value becomes the benchmark by which the difficulty of the proof of work is defined.

Now that we have our target value, we set a trial value, and deterministically search for one that is smaller than our target. We do this with the SHA512() cryptographic hashing function.

First, we set "trialValue = 99,999,999,999,999,999,999", about 226 million times larger than even the 100-byte target above. Then, we take initialHash = SHA512(payload), and set a counter (called a "nonce" in the protocol) to 0. Now we enter the following loop:

while trialValue > target:
    nonce = nonce + 1
    resultHash = SHA512(SHA512(nonce||initialHash)), where "||" is concatenation
    trialValue = the first 8 bytes of resultHash, converted to an integer

As you can see, the larger our target is, the easier it will be to find a trial value that is smaller than the target. However, the smaller the target value becomes, the more difficult it will become to find a smaller trial value. That target decreases in value as the difficulty or the message length is increased.

Suppose my target was "385,531,657,911", and the first 8 bytes of my SHA512 digest value in hexadecimal were "0a4b6ff992e295fa". The decimal value of this digest is "741,809,681,334,441,466". In our example, this number is larger than my target, so I'll need to increment my counter by 1, and try again. In fact, the largest the first 8 bytes of my digest can be, in hexadecimal, is "00000059c37a3eb7" (the target itself). In other words, this is a dynamic Hashcash system with a double SHA512 instead of a single SHA1.
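
To make the loop concrete, here is a rough Python sketch of the whole procedure. This is my own illustration rather than PyBitmessage's code, and it assumes the 8-byte nonce is packed big-endian, using the protocol defaults described above:

import hashlib
import struct

def bitmessage_pow(payload, nonce_trials_per_byte=1000,
                   payload_length_extra_bytes=1000, ttl=2419200):
    # Calculate the target from the formula above.
    payload_length = len(payload) + 8  # plus 8 for the nonce appended later
    length_term = payload_length + payload_length_extra_bytes
    target = 2**64 // (nonce_trials_per_byte *
                       (length_term + (ttl * length_term) // 2**16))

    # Search for a nonce whose double-SHA512 trial value falls at or below the target.
    initial_hash = hashlib.sha512(payload).digest()
    nonce = 0
    trial_value = 2**64 - 1
    while trial_value > target:
        nonce += 1
        data = struct.pack('>Q', nonce) + initial_hash
        result_hash = hashlib.sha512(hashlib.sha512(data).digest()).digest()
        trial_value = struct.unpack('>Q', result_hash[:8])[0]
    return nonce, trial_value

The expected number of trips through the loop is roughly 2^64 divided by the target, which is why larger messages and higher difficulties take proportionally longer to send.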

Initially, I thought this proof-of-work system was a bit strange. I couldn't understand why the core developer(s) didn't choose a simple implementation of Hashcash. If the idea is to prevent spam from hitting the network, then Hashcash in and of itself will suffice. However, by tying the core difficulty to the length of the message, you discourage large binary abuse, such as trading images, music, or movies across the network. That's what Bittorrent is for- and with the max message size capped at 256 KB as of protocol version 3, even more so. But, spammers could still easily send links to nefarious sites, which would be nothing more than 100 bytes or so, and calculate the proof-of-work easily. So, while large messages might be discouraged, spam could still be a problem. This is where increasing the difficulty upon key creation is useful.

So, I like that the proof-of-work system not only has small-message spam fighting built in, but also discourages large-message abuse. It seems, at least from what I've studied at this point, that the proof-of-work system for Bitmessage is well thought through, and fairly mature. However, until the default client is written in C/C++, I fear that the proof-of-work difficulty is effectively calibrated against Python implementations, which are 30-50x slower than their C/C++ counterparts. So, we may still see message abuse on the network until this is addressed.

Using The Bitmessage Storage Service

While hanging out on the "privacy" channel on Bitmessage, someone sent the following:

"You have no files saved. For instructions please send a message to BM-2cUqBbiJhTCQsTeocfocNP5WCRcH28saPU with the subject 'help'."

This is actually pretty cool. You should no doubt be skeptical of a faceless storage provider, but I thought I would play around with it. When you send 'help' to the address, you get the following information:

"You send commands to the service via messages, the message subject is used to tell the server what you want to do. You are authenticated via the address you send from, so be sure to keep it safe. Below are the possible commands:

LIST - Lists all the files you currently have saved to your account.
NEW [file_title] - Save a new file to your account. e.g "NEW my top secret file", put the contents of your file into the message body.
UPDATE [file_id] [new_file_name (optional) ] - Update an existing file saved to your account. The new_file_name is optional and is only used to rename your file. e.g "UPDATE a567f My even more top secret file"
REMOVE [file_id] - Removes a file saved to your account. e.g "REMOVE a567f".

TOP TIP!!!
When sending the LIST command, type some random blurb into the message. If you send multiple LIST commands to the server in a short period of time the Bitmessage client see's your requests as duplicate messages and ignores them. The random blurb ensures the server always hears you."

Intrigued, I started thinking about this. Due to Bitmessage's 256 KB message limit, your file can be no bigger than 256 KB unless you chop it up across multiple messages- say, by striping it over a RAID array. However, the messages will likely be decrypted at the other end, and very likely stored unencrypted. So, they need to be encrypted, then converted to base64, before sending to the address.

Initially, I figured GnuPG would work for this, but then I realized it's not a good fit, because I lose plausible deniability. So, instead, I'll use the 'dm-crypt' module with cryptsetup(8).

First, I need my block devices. I'll setup some files with dd(1), then add them to loopback devices. Notice that I'm building the files from /dev/urandom. This is critical, so no metadata, including encryption boundaries, is leaked:

$ dd if=/dev/urandom of=~/file1 bs=256k count=1
$ dd if=/dev/urandom of=~/file2 bs=256k count=1
$ ls -l file?
-rw-rw-r-- 1 user user 262144 Nov 29 07:57 file1
-rw-rw-r-- 1 user user 262144 Nov 29 07:57 file2
$ sudo losetup /dev/loop1 ~/file1
$ sudo losetup /dev/loop2 ~/file2
$ losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop1         0      0         0  0 /home/user/file1
/dev/loop2         0      0         0  0 /home/user/file2

Because I want plausible deniability, I'll use cryptsetup(8) first, _then_ create the RAID array. If I create the RAID array first, the two files will reveal information that they belong to an array. I don't want any metadata leaked at all.

$ sudo cryptsetup create aes-crypt-1 /dev/loop1
$ sudo cryptsetup create aes-crypt-2 /dev/loop2
$ ls -l /dev/mapper/aes-crypt-?
lrwxrwxrwx 1 root root 7 Nov 29 07:47 /dev/mapper/aes-crypt-1 -> ../dm-0
lrwxrwxrwx 1 root root 7 Nov 29 07:47 /dev/mapper/aes-crypt-2 -> ../dm-1

I can now create the RAID array, format it with ext2, and mount it. A couple notes: I'll first want to set the RAID chunk size to something low, otherwise I won't be able to put down a filesystem on the block device. So, I chose the minimum according to mdadm(8), which is 4KB. Then, when formatting with ext2, I'd only get a total of 64 inodes by default, which means a maximum of 64 files. I'm going to increase this to 256, so I can intentionally fragment the filesystem before putting down the data. I'll explain the reason for the intentional fragmentation in a second.

$ sudo mdadm --create /dev/md0 --level 0 --raid-devices 2 --chunk 4 /dev/mapper/aes-crypt-{1,2}
$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Nov 29 07:47:19 2014
     Raid Level : raid0
     Array Size : 464
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Nov 29 07:47:19 2014
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 4K

           Name : example:0  (local to host example)
           UUID : 2bca1bf9:3af4a5d1:1989bb34:9b46bb9c
         Events : 0

    Number   Major   Minor   RaidDevice State
       0     253        0        0      active sync   /dev/dm-0
       1     253        1        1      active sync   /dev/dm-1

Now the formatting. Notice I'm changing the number of inodes. Also, we don't need to set aside any reserved space for the root user: it would only occupy precious disk space.

$ sudo mkfs.ext2 -N 256 -m 0 /dev/md0
$ sudo mount /dev/md0 /mnt
$ ls /mnt
lost+found
$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        427K   15K  412K   4% /mnt
$ df -i /mnt
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/md0          256    11   245    5% /mnt

Now before putting data into the filesystem, I'm going to fill it with small files of random data, then remove every n-th file to create enough space to put down my data. This will force my data to be intentionally fragmented. The reason for this is to avoid a snapshot attack on the files I'll be storing remotely. If I did not fragment the data, then as I update the filesystem, only small incremental changes would take place. This would allow the attacker to "subtract" a previous filesystem iteration from the current one, allowing them to know where my stored data resides, as well as possibly figuring out what it contains. Because we're talking about a 512 KB filesystem here, even encrypted, disk I/O isn't a concern.

First, how big can each file be at a maximum? It appears to be 1721 bytes.

$ echo '(412*1024)/245' | bc -l # 245 available inodes, 412K available space
1721.99183673469387755102

Next, create the files, each with 1721 bytes:

$ for i in {1..245}; do sudo dd if=/dev/urandom of=/mnt/file$i bs=1721 count=1 2> /dev/null; done

Unfortunately, the filesystem filled up before completion. As such, we have empty files. So, we'll find those and remove them:

$ find /mnt -empty -type f | wc -l
25
$ sudo find /mnt -empty -type f -delete
$ ls /mnt/file* | wc -l
204

Now, I'm ready to fragment the filesystem. I know that one file I want to copy is 8820 bytes in size, so I need to free up 6 non-contiguous files. According to my math, I need to free up every 34th file:

$ echo '8820/1721' | bc -l
5.12492736780941313190
$ echo '204/6' | bc -l
34.00000000000000000000
$ sudo rm /mnt/file{$((34*1)),$((34*2)),$((34*3)),$((34*4)),$((34*5)),$((34*6))}
$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        427K  415K   12K  98% /mnt

I'm now ready to copy in my 8820-byte file:

$ sudo cp ~/secret-file.txt /mnt
$ df -h /mnt                      
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        427K  424K  3.0K 100% /mnt

Now I can tear everything down:

$ sudo umount /mnt
$ sudo mdadm --stop /dev/md0
$ sudo cryptsetup close aes-crypt-1
$ sudo cryptsetup close aes-crypt-2
$ sudo losetup -d /dev/loop1
$ sudo losetup -d /dev/loop2

Now I need to convert my two files to base64, copy, paste, and send the resulting output to BM-2cUqBbiJhTCQsTeocfocNP5WCRcH28saPU. The subject of the first message would be "NEW file1", while the subject of the second message would be "NEW file2". The body of each message would be the base64 output for file1 and file2.

$ base64 file1 | tr -d '\n'; echo
yAFtXPy7NgHl5Q0ueJiZgjOlYyrocWaxcvA6CjKF0rNd10sTfNvMCmQrL0cA79oO0 ...(snip) ...


$ base64 file2 | tr -d '\n'; echo
T3GTNvwiewXnnOHTjITNpukyLz4d8iv8wl/JP0YjY0v5s5euF1qv4WwMv9Ejl9AsNMm5NXoK/hFQK ...(snip)...

Of course, sending 2 messages of 256 KB each will take some time to calculate the PoW, so be prepared for that (I don't know why the PoW is what it is; it should have been a double SHA256() like Bitcoin. Meh).

When you want to update your files, you will need to build everything back up, except you won't "create" the RAID array, you'll "assemble" it, and you won't format the filesystem, obviously:

$ sudo losetup /dev/loop1 file1
$ sudo losetup /dev/loop2 file2
$ sudo cryptsetup create aes-crypt-1 /dev/loop1
$ sudo cryptsetup create aes-crypt-2 /dev/loop2
$ sudo mdadm --assemble /dev/md0 /dev/mapper/aes-crypt-1 /dev/mapper/aes-crypt-2
$ sudo mount /dev/md0 /mnt

You'll also need to move your file off the filesystem and rewrite all the random files, before copying the new data on. This is important, because we're trying to prevent a snapshot attack against our encrypted filesystem. So, on every commit to the BM storage service, as best as we can control, every bit on the filesystem needs to be changing.

As the help mentions, sending frequent "LIST" commands can cause your message to be ignored, unless the body of the message includes unique random data for each message sent. So, the "TOP TIP" of sending random data in the body of the message is a good idea. As such, I'll run the following for each "LIST", practically guaranteeing that I'll see a unique string, and the query will succeed:

$ dd if=/dev/urandom bs=2k count=1 2> /dev/null | rhash --sha3-512 -
4b5b19c9a8a61cf724cd710c6fd0c54edb46de2bfa55f2ec9a179a590b808993593d333f95dd6c607e9d366c385037cc0a600d262898e3b4c5be26479d3c962c  (stdin)

I'll copy and paste the result into the message body before sending.

I don't know that I'm ready to store sensitive data with it, but I do think it's pretty cool. The 256 KB limit on file size is pretty limiting, so this isn't something to use for heavy media storage, but instead mostly for documents, such as password storage, browsing bookmarks, student grades, etc. It works, although it's definitely a niche product for a niche group. I wouldn't expect my wife to use this.

Where Cryptographic Hashing Algorithms Fail

What Is A Cryptographic Hashing Algorithm?
Cryptographic hashing algorithms are one-way functions that produce a message digest representing a given input. The input is typically referred to as the "message", while the output is typically referred to as the "digest". Because the output space is so astronomically large, it should be practically infeasible to find a different input that produces the same digest.

Cryptographic hashes usually hold to four main principles:

  • Computing a digest should be easy for any given message (some would also say it should be fast).
  • When changing the message, the digest should exhibit the "avalanche effect".
  • It should be practically infeasible to find two messages that produce the same digest.
  • It should be practically infeasible to produce the message for a given digest.

Of all of the many cryptographic hashing algorithms in the world today, the following are still considered to be secure:

  • RIPEMD-160, RIPEMD-256, RIPEMD-320
  • SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256
  • SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256
  • Skein-256, Skein-512
  • Whirlpool

SHA3 is a new standard adopted by the United States National Institute of Standards and Technology (NIST) which uses Keccak as the underlying function. Keccak uses the sponge construction for creating its digests, which has some interesting properties that we'll look at in a second.

The Workhorse Of Cryptography
Many call cryptographic hashing algorithms the workhorse of cryptography, and rightfully so. These hashing functions are used for a great many things:

  • Password storage- Hash the password, and store to disk.
  • Pseudo-anonymous user- Hash the username, and pseudonymize.
  • Data deduplication- Hash the data blocks, and compare.
  • Message authentication- Hash the message, and send.

Cryptographic hashes are used in a lot of ways. PGP (V4) key fingerprints are nothing more than SHA-1 hashes of the public key material (which includes a creation timestamp). Similarly, OpenSSH key fingerprints are MD5 hashes of the public key. ZFS uses SHA-256 for data and metadata integrity in its filesystem. Bitcoin uses double SHA-256 functions for the block chain, transactions, and mining. Cryptographic hashing algorithms are used EVERYWHERE. Unfortunately, developers keep reinventing the wheel, and don't pay attention to the problems that come with cryptographic hashing algorithms.

Failure #1- Raw Speed
While certainly an advantage, the speed at which cryptographic hashing algorithms create digests can also be a big drawback. Consider storing passwords to disk. Hopefully by now, developers know not to store passwords in plaintext in the database. If not, get a new job. So, if we hash the password with, say, SHA-512, then according to our principles discussed at the beginning of this post, it should be infeasible to find the password (message) that produced the hash stored on disk.

If this hashed password database is leaked, the password cracker will hash as many candidate messages as possible, looking for one that produces a given digest. Typically, this is done using custom word lists, hashing each word, and comparing it to what is stored in the password database. The faster the password cracker can go, the sooner they can recover passwords. On my wimpy laptop, using the CPU alone, I can hash approximately 1 million passwords per second with SHA-512. Using cheap GPU clusters, even miscreants living in their mother's basement can feasibly test upwards of 50 billion passwords per second.

Possible Speed Fix #1- Key Stretching
One way around brute forcing passwords is a concept called key stretching. This is the concept where I take the hash of the hash of the message. In other words, the message is recursively hashed. So, if my password is hashed 5,000 times, then it would take the password cracker 5,000 times more work to get to the original message. So, my wimpy laptop is reduced to going through only 200 passwords per second, instead of 1 million. Anything to slow the attackers down is a good thing.

A password key stretched twice would look like this, where "H" is the hashing algorithm:

H(H(password))

For example, taking the SHA-256 of "password" twice gives the following hexadecimal output:

SHA-256(SHA-256("password")) = 73641c99f7719f57d8f4beb11a303afcd190243a51ced8782ca6d3dbe014d146
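
A minimal sketch of the idea in Python. Note that whether each round feeds back the raw digest bytes or their hex encoding is an implementation choice that changes the final value, so a real system has to pick one convention and stick with it; this sketch feeds back the raw bytes:

import hashlib

def stretch(password, rounds):
    # Recursively hash the password; each round feeds back the raw digest bytes.
    digest = password.encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

print(stretch("password", 5000))  # 5,000 rounds means 5,000x the work for an attacker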

Possible Speed Fix #2- Key Derivation Functions
Another alternative, and there is a good following for this on the Internet, is using key derivation functions. Without getting into too much detail here (I can save that for another post), key derivation functions like PBKDF2, bcrypt(), and scrypt() are slow, slow, slow, and very resource intensive. As such, this greatly limits the speed at which a password cracker can recover my password, much like the key stretching previously mentioned.
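
For example, Python ships PBKDF2 in its standard library. The salt size and iteration count below are purely illustrative, not recommendations:

import hashlib
import os

password = b'password'
salt = os.urandom(16)        # random per-password salt, stored alongside the digest
iterations = 100000          # illustrative work factor; tune it to your hardware

derived = hashlib.pbkdf2_hmac('sha256', password, salt, iterations)
print(derived.hex())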

Failure #2- Hashed Messages
Unfortunately, there's this thing called rainbow tables. Rainbow tables, simply put, are large databases of message-to-digest mappings. They are built by hashing messages of all types, and storing both the message and the digest in the database. Then, all it takes is looking up the digest in the database. If it exists, you have the message that produced it- in this case, the password.

Thankfully, rainbow tables are usually bloated in size, typically several terabytes. The standard password cracker isn't going to dedicate 10 TB of disk just to store rainbow tables for MD5, SHA-1, and NTLM hashes. However, due to the one-to-one relationship (in practice) of messages to digests, rainbow tables are a very effective way to recover messages from digests only.

Possible Message Fix- Salting Hashed Messages
The number one way to thwart rainbow table attacks is to prepend a cryptographic nonce, also called a "salt", to the password. The larger the space the salt is drawn from, the more possible digests there are for any given password when stored to disk. For example, if the salt is chosen from a base64 set of characters, and the salt is 8 characters long, then the salt's keyspace is 64^8, or 281,474,976,710,656 possible salts. This means that the rainbow table must store 281,474,976,710,656 digests for every message. As mentioned, rainbow tables are already several TB in size. Now they must be 281 trillion times larger.

A salted message looks like this, where "H" is the hashing algorithm, and "||" indicates concatenation:

H(salt || message)

For example, if my salt was "FhuVTwA710", and the password is "password", then taking the SHA-256 would give the following output:

SHA-256("FhuVTwA710" || "password") = 4c700717f3e5eb92a872362108f2f716d2ff179ea94d1f10853a50e181a43663

Failure #3- Salted Message Authentication Codes
Message authentication codes, or MACs for short, are a way to send a message between two parties that have previously authenticated. The idea is for Alice to send a message to Bob such that, when it arrives, Bob knows whether or not the message has been tampered with, based on whether the hash of the message matches the digest Alice sent. Alice and Bob must have met previously, and agreed on a key that they would use as a "salt" for each message they send.

In other words, their MAC setup looks like this, again where "H" is the hashing algorithm, "key" is the key both Alice and Bob agreed on, and "message" is what is being sent:

H(key || message)

For example, suppose the message was "The quick brown dog jumps over the lazy dog.", and the agreed-upon key is "J6n^,1=/#;RCA5bC10". Then our SHA-256 MAC would look like this:

SHA-256("J6n^,1=/#;RCA5bC10" || "The quick brown dog jumps over the lazy dog.") = 10c98713fbaa574ad9b2ffc053662c89f6fd387a8a350f6324b966d36936d1d3
MAC = ("The quick brown dog jumps over the lazy dog.", "10c98713fbaa574ad9b2ffc053662c89f6fd387a8a350f6324b966d36936d1d3")

Alice would send the full MAC (both the message and its digest). The idea is that if the message changes, the digest will change, and as such, Bob will be able to detect tampering. Eve, a malicious 3rd party, should not be able to send fraudulent messages to Bob, because Eve does not know the key Alice and Bob agreed on.

However, SHA-256 uses the Merkle–Damgård construction. Merkle–Damgård is iterative in nature, i.e., block-by-block. So, while Eve may not know the key, she can grab both the message and its digest in transit. All Eve needs to do is concatenate additional data after the message, guess the length of the key, calculate the new digest from the old one, and ship the forged message to Bob. This is known as the length-extension attack.

In other words, if you know what the digest of H(key || message) is, then you can easily determine what H(key || message || ANYTHING) is. So to compromise the MAC system, you just need to see one message and digest MAC, then you can impersonate the sender from there on out. This should bother you.

Possible MAC Fix #1- The Secret Suffix
At this point, you may be thinking of switching the order of the key and the message: H(message || key). This is known as the "secret suffix", and it is certainly a valid fix for the length-extension attack. However, it comes with a large assumption about the underlying cryptographic hashing algorithm.

That assumption is that the algorithm does not have any known digest collisions. If two messages m1 and m2 can be found that produce the same digest, then an attacker can substitute one for the other: both leave the hash function in the same internal state, so both produce the same MAC, even though the key is still unknown.

Currently, both SHA-1 and SHA-2 do not have any known collisions, but it's only a matter of time.

Possible MAC Fix #2- The Enveloped MAC
Another possible solution is to envelope the message around the key, or H(key || message || key). This requires that the attacker know the key length, in order to identify the starting point of the message, and thus be able to forge a valid MAC. While more secure than the secret suffix, there has been some research in this area that suggests weaknesses to this approach, even when two different keys are used.

Possible MAC Fix #3- SHA-224, SHA-384, or SHA-3 (Keccak)
An actual solid fix is to use SHA-3, which is based on Keccak, for your hashing algorithm. SHA-3 is not vulnerable to the length-extension attack, and as such, can be used for a MAC. This is due to the fact that SHA-3 is based on the sponge construction, which is not an iterative block-by-block compression function like Merkle–Damgård is. With the existence of SHA-3, we may not need to worry about the next section for sending MACs.

Also, SHA-224 and SHA-384 are not vulnerable to the length-extension attack, because they truncate their internal state before output, whereas MD5, SHA-1, SHA-256, and SHA-512, among others, output their entire internal state.

Solution- The Hash-based Message Authentication Code (HMAC)
HMAC securely addresses the MAC length-extension attack. It does so by applying the hash function twice in a deterministic manner, first with an inner key, and again with an outer key. The algorithm is shown here:

  1. Prepare the key:
    1. If the key is less than the hashing algorithm block size, then append zeros to the key until it is the same size as the hashing algorithm block size.
    2. If the key is greater than the hashing algorithm block size, then take the hash of the key, using the digest as the new key.
  2. Prepare the ipad and the opad:
    1. To create the ipad, repeat the byte 0x36 the hashing algorithm block size times.
    2. To create the opad, repeat the byte 0x5C the hashing algorithm block size times.
  3. Take the XOR of the key and the ipad. Call this the ipad_key.
  4. Take the XOR of the key and the opad. Call this the opad_key.
  5. Take H(ipad_key || message). Call this digest1.
  6. Take H(opad_key || digest1). This is your MAC.

HMAC has the flexibility that ANY hashing function not vulnerable to preimage attacks can be used. This includes MD5 and SHA1, as well as the other functions listed at the beginning of this post. The security of HMAC rests on the fact that Eve can collect as many MAC messages and digests as she wants, but she'll never be able to determine the digest of a message she hasn't seen yet.
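
Here is a short Python sketch of those steps using SHA-256 (block size of 64 bytes), sanity-checked against the standard library's hmac module:

import hashlib
import hmac

def hmac_sha256(key, message, block_size=64):
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()     # step 1.2: hash keys longer than the block size
    key = key.ljust(block_size, b'\x00')       # step 1.1: zero-pad the key to the block size
    ipad_key = bytes(b ^ 0x36 for b in key)    # steps 2.1 and 3
    opad_key = bytes(b ^ 0x5C for b in key)    # steps 2.2 and 4
    digest1 = hashlib.sha256(ipad_key + message).digest()   # step 5
    return hashlib.sha256(opad_key + digest1).hexdigest()   # step 6

assert hmac_sha256(b'key', b'message') == hmac.new(b'key', b'message', hashlib.sha256).hexdigest()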

Conclusion
In reality, HMAC can be used to store passwords. It can be used for data integrity. It can be used to pseudonymize usernames. In fact, a good rule of thumb on when to use HMAC and when not to is to ask yourself the following question: does what I am storing or sending need to be protected by a secret key? If so, use HMAC.

Cryptographically Secure Pseudorandom Locally Administered Unicast MAC Addresses

Recently, Apple released the ability for the iPhone 5c and newer hardware to present a spoofed software MAC address when probing 2.4 GHz and 5 GHz wireless access points. The MAC address is locally administered, and a unicast address. This has sparked a small discussion in various forums about how to generate valid locally administered unicast MAC addresses. It is necessary that the MAC address is unicast, as many Cisco switches and routers will block non-unicast source addresses. It's not necessary that the address be locally administered, however. This just takes the address out of the globally administered range, reducing the possibility of a conflict with other devices connected to the same switch.

According to the IEEE, in order to create a unicast address, the least significant bit of the most significant byte must be 0. This means the most significant byte must be an even number, eliminating half of the 256 valid possibilities for the first byte. This leaves us with only 128 numbers. In order to be locally administered, the second least significant bit in the most significant byte of the address must be 1. This eliminates another 64 addresses from the total space for this byte. As such, there are only 64 possible values this address can start with. The rest of the bytes can be as random as you wish, giving you a total space of 70,368,744,177,664 addresses to choose from.
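
If you prefer a scripted version of that bit twiddling, here is a small Python sketch of the same idea (the function name is mine); the shell one-liner below does the same thing:

import os

def random_mac():
    first = (os.urandom(1)[0] & 0xFE) | 0x02   # clear the multicast bit, set the locally administered bit
    rest = os.urandom(5)                       # the remaining five bytes can be anything
    return ':'.join('%02x' % byte for byte in (first,) + tuple(rest))

print(random_mac())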

In the Unix shell, you can execute the following code, which should be fairly platform agnostic:

$ random_mac() { printf '%02x' $((0x$(od /dev/urandom -N1 -t x1 -An | cut -c 2-) & 0xFE | 0x02)); od /dev/urandom -N5 -t x1 -An | sed 's/ /:/g'; }
$ random_mac
3a:cc:9a:1a:fb:00

Here are 16 valid locally administered unicast MAC addresses generated from the shell:

$ for i in {1..16}; do random_mac; done
ca:4a:db:c0:b5:d3
c2:67:92:8b:3c:f2
1e:ca:bb:2d:0c:2b
fa:86:2e:61:90:8c
c6:f4:17:50:5f:c2
12:1b:db:55:c9:36
fe:5f:7f:34:36:49
3a:2e:be:b0:11:17
4a:85:af:03:ca:3c
22:64:7c:49:fd:1f
4a:cb:16:5a:18:1c
d6:10:25:5b:86:42
a2:2d:0f:1c:49:c5
a2:b7:a0:46:72:1c
d2:ac:73:2c:55:5b
5a:56:45:b7:94:61

If you wanted a random MAC address assigned to your wireless NIC every time you brought up your network interfaces on Debian or Ubuntu, you could write the following shell script, and place it in the "/etc/network/if-pre-up.d/" directory:

#!/bin/sh
LLADDR=$(printf '%02x' $((0x$(od /dev/urandom -N1 -t x1 -An | cut -c 2-) & 0xFE | 0x02)); od /dev/urandom -N5 -t x1 -An | sed 's/ /:/g')
ip link set address $LLADDR wlan0

Make sure it's executable:

$ sudo chmod a+x /etc/network/if-pre-up.d/random_mac.sh

Playing Card Ciphers

For the past couple of weeks, I've been focused heavily on hand ciphers for field agents. Although I'm certainly no expert on cryptography, aside from the One Time Pad (OTP), I've had a hard time finding any hand cipher that would be considered secure in the computer age. There's certainly no doubt that field agents are very likely using computers with SSL and GPG, among other crypto tools, to communicate with each other. The romantic days of "Spy versus Spy"- encrypting and decrypting notes by hand, doing dead drops in a tree stump in a park, and speaking 15 foreign languages fluently- are probably over. Regardless, I could not bring myself to believe that there were absolutely no secure hand ciphers to study.

I had already been familiar with Bruce Schneier's "Solitaire" card cipher, and always considered that a fairly creative use for a hand cipher. Despite its bias, it's still very secure, although slow to execute by hand, and very error prone. But this got me thinking- has anyone else attempted to create hand ciphers with playing cards, and if so, how far have they taken it?

After searching the Internet, I found five additional playing card ciphers, and adapted one mechanical cipher into a playing card cipher. Each comes with its own level of security. While I was at it, I created my own playing card cipher, and I'm still evaluating its security. That brings the total list to eight playing card ciphers.

As you can see, I've spent a great deal of time learning each of the algorithms, and have typed them up by hand on my own personal wiki. Quadibloc is the only card cipher, at the time of this writing, that I am still learning and working on. I'm hoping this can be a centralized repository for all things playing card ciphers. My goals for this personal project are:

  • Publish software implementations of each card cipher.
  • Publish online videos giving a tutorial of each card cipher.
  • Learn the strengths and weaknesses of each card cipher.

The big advantage of using playing cards, in my opinion, is the ability for the deck to maintain state while working through the algorithm. A standard deck of cards has a maximum keyspace of 52!, which is about 225 bits of entropy. This is likely larger than many SSL keys on the Internet protecting your bank login. So, provided the algorithm is non-linear, mixes the deck thoroughly on each round, and is not biased, it is possible that the algorithm could resist attacks from even the most well funded adversaries.
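
That figure is easy to sanity check:

import math

# log2(52!) is the entropy, in bits, of a uniformly shuffled 52-card deck
print(math.log2(math.factorial(52)))  # ~225.58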

There is still work to be done, and I doubt this will be of any value to the general cryptographic community. After all, the OTP can be mathematically proven to be unbreakable, and computer algorithms are fast and error-free. So, who in their right mind would want to learn hand ciphers with playing cards, when they won't have a mathematical proof of unbreakability, they're slow, and error-prone?

Me.

Talon

I've been obsessing over the past couple weeks trying to improve Bruce Schneier's solitaire cipher, aka "Pontifex". The more I think about it, the more I realize that there just isn't a lot that can be done about the bias of Pontifex without severely slowing down the already slow algorithm. So, instead of trying to improve on his algorithm, I came up with my own - "Talon".

This cipher discards the two jokers. You only need the standard 52 cards from a poker or bridge set (4 suits, Ace through King). As with Pontifex, this is an output-feedback-mode stream cipher. Also, when determining the value of the output card, the same suit order is used:

  • Clubs - face value + 0
  • Diamonds - face value + 13
  • Hearts - face value + 26
  • Spades - face value + 39

An unkeyed deck would have Ace through King of Clubs, followed by Ace through King of Diamonds, followed by Ace through King of Hearts, followed by Ace through King of Spades.

This algorithm is executed in 4 steps. You will be making four discard piles, or "talons", labeled 1, 2, 3, & 4 from left to right:

  1. Create four discard piles. With the deck face-up in your hand, place the top card in discard pile #1, the 2nd card in discard pile #2, the 3rd card is discard pile #3, and the 4th card in discard pile #4. For example, if the deck was unkeyed, then the Ace of Clubs would be in discard pile #1, the Two of Clubs in #2, the Three of Clubs in #3, and the Four of Clubs in #4.
  2. Note the face value of discard pile #1, ignoring suit, and count that many cards minus 1 from the top of the deck, and place them on top of discard pile #1. If the card was a Jack, then count 10 cards from the face up deck in your hand, and place them on top of the Jack. Do the same for the other three piles, in order (#2, then #3, then #4). In other words, the first card placed down in the discard pile, or "talon", will determine the total number of cards in that stack.
  3. Collect the piles by placing discard pile #1 on top of pile #2 on top of pile #3 on top of pile #4, and place the stack behind the face up deck in your hand. If all 52 cards were in discard piles (13 cards in each pile), then place the newly collected stack in your hand, face up.
  4. Find the output card by looking at the face value of the top card, including suit. Count down that many cards into the deck, minus 1. If the top card is a Queen of Hearts, then the value would be 12 + 26 = 38. Count 37 cards from the top of the deck, and record the face value of the 38th card, including suit. If the top card is the King of Spades (13 + 39 = 52), count 51 cards from the top of the deck, which would mean recording the value of the bottom card in the deck. If the top card is the Ace of Clubs, do not count any cards, and record "1" as your output card. This ensures that each card is equally likely to be the output card as the rest in the deck.

The key lies in the initial order of the deck, as with other designs. It can be keyed with random shuffling, bridge puzzles, or passphrases. If a passphrase is used, step 4 is replaced:

  1. Pass cut. Get the numerical value of the first character in your passphrase. As with the original step #4, count that many cards minus 1 from the top of the deck, and cut them below the rest of the cards. A = 1, B = 2, ..., Y = 25, Z = 26. This step will break the reversibility of Talon, while keying the deck.

Example:

Suppose we start with an unkeyed deck. Our top card will be the Ace of Clubs, with the deck face up in our hands, and the bottom card will be the King of Spades.

After step 1, we would have the following 4 discard piles:

#1  #2  #3  #4
--------------
AC  2C  3C  4C

After step 2, our discard piles would look like:

#1  #2  #3  #4
--------------
AC  2C  3C  4C  <-- Bottom of discard piles
    5C  6C  8C
        7C  9C
            10C

Remaining in my hand would be:

JC,QC,KC,AD,2D,3D,4D,5D,6D,7D,8D,9D,10D,JD,QD,KD,AH,2H,3H,4H,5H,6H,7H,8H,9H,10H,JH,QH,KH,AS,2S,3S,4S,5S,6S,7S,8S,9S,10S,JS,QS,KS

After step 3, the order of my hand would now be:

JC,QC,KC,AD,2D,3D,4D,5D,6D,7D,8D,9D,10D,JD,QD,KD,AH,2H,3H,4H,5H,6H,7H,8H,9H,10H,JH,QH,KH,AS,2S,3S,4S,5S,6S,7S,8S,9S,10S,JS,QS,KS,AC,5C,2C,7C,6C,3C,10C,9C,8C,4C

For step 4, the Jack of Clubs is the top card. Thus, its numerical value is 11 + 0 = 11. Counting down to the 11th card gives me the Eight of Diamonds as my output card. I would write down 21 as my output number (8 + 13 = 21).

After another round, my deck would be ordered as:

9S,10S,JS,QS,KS,AC,5C,2C,7C,6C,3C,10C,9C,8C,4C,2D,3D,4D,5D,6D,7D,8D,9D,10D,JD,JC,QD,KD,AH,2H,3H,4H,5H,6H,7H,8H,9H,QC,10H,JH,QH,KH,AS,2S,3S,4S,5S,6S,7S,8S,KC,AD

Because my top card is the Nine of Spades, its numerical value is 9 + 39 = 48. Counting down to the 48th card gives me the Six of Spades as my output card. I would write down 45 as my output number (6 + 39 = 45).

Talon is reversible, does not need an IV like Mirdek, and is less error-prone than Pontifex. It greatly reduces the chance of a bias by quickly mixing the internal state through the discard piles, with 4 cuts in total per round. Unfortunately, according to Bruce Schneier, the chances of this cipher being secure and efficient are negligible:

I see about two new cipher designs from amateur cryptographers every week. The odds of any of these ciphers being secure are slim. The odds of any of them being both secure and efficient are negligible. The odds of any of them being worth actual money are virtually non-existent.

Currently, I do not have an implementation in Python or C. I also do not know the period of the internal PRNG. My testing shows it is sufficient for small messages, such as the length of a tweet, which is practical for field operation. My testing also shows that there is no internal bias in the system. One thing I did catch during my analysis is that the sum of the discard piles follows the Poisson distribution. I'm not sure how this will affect the security of Talon, if any.

The Outrageous Fares Of The Utah Transit Authority

I hope my readers don't mind, but I'm going to break from the standard geekery for a second, and type up a personal post about something that has been bugging me for the past few years.

For those that don't know, I live in Utah, and work in Salt Lake City for the best local ISP on the planet. I also teach a Linux certification course at the University of Utah 1 night per week as a faculty adjunct instructor. As a benefit of teaching at the University, I get a monthly Utah Transit Authority (UTA) premium transit pass that allows me to ride FrontRunner, TRAX, and standard bus services, without paying out of my personal pocket.

I'll spare you my personal philosophy that all public transit should be paid for in gas taxes at the pump. However, according to a report by KSL, the UTA is one of the most expensive transit systems in the United States. The Salt Lake Tribune also ran a similar story. Just in case the link goes dead, here is a paragraph that I found very telling:

Of 169 U.S. transit agencies that participated in the APTA survey, only five will now have higher base fares for buses than UTA. They are in New York City, $2.75; San Francisco, $4; Nashville, $4; Sugar Land, Texas, $3.50; and Eden Prairie, Minn., $3.

Another four systems match the $2.50 to be charged for buses by UTA. They are in Atlanta, Pittsburgh, Portland and Sacramento. The 2012 APTA survey said the median base fare for buses among U.S. transit agencies was $1.50 per trip.

UTA's TRAX also will have among the highest fares nationally for light rail.

No agency included in the 2012 APTA survey now has fares higher than the $2.50 to be charged by UTA — but four match it. They are in Sacramento, San Diego, Pittsburgh and Dallas. Of 21 light-rail agencies surveyed by APTA in 2012, the median light rail base fare was $2 a trip.

That means among transit systems nationally that have lower bus or train fares than UTA are such places as Chicago, Los Angeles, Phoenix, Denver, Philadelphia, Cleveland and Dallas.

Now, I take advantage of my university pass on the UTA, when it's valid. However, it's only valid during trimesters. Between trimesters, because I am adjunct faculty, my card is deactivated on the last pay period of the trimester, and activated on the first pay period of the next trimester. So, there is about a full month where I cannot use my university pass, and must pay full fare to ride transit.

However, the UTA is offering a 20% discount if you use their tracking FAREPAY card. It's a pre-paid card that you can use as often as you need, provided a balance remains on the card to pay the fare. Because I am currently using the FAREPAY system for about 3 months of the year, I was curious how I could optimize my costs between trimesters. Currently, I drive to a FrontRunner station (a diesel commuter train) which takes me to a TRAX stop (a light rail train). The stop is in the "free fare" zone, so there is no charge to take TRAX to the stop near my office.

So, is it cheaper in both gas and UTA fares to drive to the closest FrontRunner station? Or should I drive a station or two away? Where will I get the cheapest per month cost for getting into work from home? I only live 30.7 miles away from the office by the most direct route.

Turns out, it doesn't involve the UTA at all. Not only is driving directly into the office considerably faster (45 minutes one way commute versus 65 minutes on transit), but it's cheaper. Considerably cheaper.

Here is a table showing my work. There are a total of 6 FrontRunner stations between my home and my office. In order, they are Clearfield, Layton, Farmington, Woods Cross, North Temple, and Salt Lake Central. I work a standard 9-5 M-F day job, and once per week, I must take TRAX up to the university.

First, I have a fuel efficient car. I've averaged about 32-35 miles to the gallon with mixed city and freeway driving from home to work and back. I've actually logged 45 mpg many times, but I'm usually doing almost straight freeway driving with hypermiling strategies, and it isn't that difficult to get 40 mpg in my car. Regardless, I'll use 32 miles per gallon as the basis for my commuter costs. With an average Utah state gas rate of $3.61 per gallon, this puts my monthly costs for gasoline at $150 per month to commute round trip to work from home. Looking over my payment history with my bank, this value is almost exactly spot-on.

Here are the distances from my house to each of the FrontRunner stations:

To Station     One Way   Gallons    Price/Gal   Gas Cost   Round Trip
Clearfield     4.2 mi    0.13125    $3.61       $0.47      $0.95
Layton         6.6 mi    0.20625    $3.61       $0.74      $1.49
Farmington     13.4 mi   0.41875    $3.61       $1.51      $3.02
Woods Cross    20.6 mi   0.64375    $3.61       $2.32      $4.65
North Temple   29 mi     0.90625    $3.61       $3.27      $6.54
Salt Lake      30.3 mi   0.946875   $3.61       $3.42      $6.84

Looking strictly at UTA fares, here is what it would cost me to get into the office from each FrontRunner station:

From Station   Fare    80% Fare   Round Trip   80% Round Trip
Clearfield     $4.30   $3.50      $8.60        $7.00
Layton         $3.70   $3.00      $7.40        $6.00
Farmington     $3.10   $2.50      $6.20        $5.00
Woods Cross    $2.50   $2.00      $5.00        $4.00
North Temple   $2.50   $2.00      $5.00        $4.00
Salt Lake      $0.00   $0.00      $0.00        $0.00

So, combining gas prices and UTA fares, here would be the total cost per stop to get to my office:

From Station   One-way   80% One-way   Round Trip   80% Round Trip
Clearfield     $4.77     $3.97         $9.55        $7.95
Layton         $4.44     $3.74         $8.89        $7.49
Farmington     $4.61     $4.01         $9.22        $8.02
Woods Cross    $4.82     $4.32         $9.65        $8.65
North Temple   $5.77     $5.27         $11.54       $10.54
Salt Lake      $3.42     $3.42         $6.84        $6.84

It's clearly cheaper to drive all the way into the Salt Lake Central station, and take TRAX to the office, as that rail fare is in the "Free Zone", and no extra cost to me. However, if I don't want to drive all the way into Salt Lake, then driving to the Layton station 6.6 miles away is the most optimal.

I can't forget that once per week, I need to take TRAX up to the university to teach. This is a $5 round trip ticket, which is good for all destinations on TRAX, all day long, or $4 with FAREPAY. Including that in my final cost, what would my gas and UTA fare costs look like in a 5-day work week? Per month? Per year? (Note: the "80%" columns show the cost of gas plus the fare with the 20% FAREPAY discount applied; they are not 80% of each total.)

From Station   Weekly   80% Weekly   Yearly      80% Yearly   Monthly   80% Monthly
Clearfield     $52.74   $43.74       $2,742.38   $2,274.38    $228.53   $189.53
Layton         $49.45   $41.45       $2,571.17   $2,155.17    $214.26   $179.60
Farmington     $51.12   $44.12       $2,658.08   $2,294.08    $221.51   $191.17
Woods Cross    $53.24   $47.24       $2,768.45   $2,456.45    $230.70   $204.70
North Temple   $62.72   $56.72       $3,261.21   $2,949.21    $271.77   $245.77
Salt Lake      $39.18   $38.18       $2,037.47   $1,985.47    $169.79   $165.46

If I drove to the Salt Lake Central station 5 times per week, and paid for a round trip TRAX ticket to the university once per week, I'm looking at about $170 per month, or $165 with FAREPAY. If I take FrontRunner from my most cost-efficient station, it's $214 per month, or $180 per month with FAREPAY. From the closest station, I'm looking at $230 per month or $190 per month with FAREPAY. UTA charges $198 per month for a premium pass (which is required to use FrontRunner and express buses), which doesn't include my gas costs driving to each station.

$214 is over a 40% increase over commuting to work in my car (the FAREPAY discount ends Dec 31, 2014), and $198 is over a 30% increase. I will spend an extra $700 per year to the UTA for using their services. Over 10 years, that's almost half the cost of a base model Toyota Corolla, brand new. So, when purchasing a new car, if there are plans on using the UTA, realize that the effective cost of your new car went from $15,000 to $22,000.

As a frequent rider of the UTA system, watching people use their passes, and gauging who is riding by dress, demeanor, and conversation, I would dare say that 80-85% of ridership is only riding FrontRunner and TRAX because they have an education pass, or their pass is paid for or reimbursed by their employer. This means that most people are riding the UTA because they don't have to drive and they're not paying for it personally, which is probably a consequence of the astronomical fare prices. Of course, I don't have access to their ridership data, so this is just speculation based on observation while riding. I doubt I'm far off though.

I would like to say that the fare goes to quality services, buses, and trains, but it doesn't. The FrontRunner and TRAX stops have very little overhead protection from the sun and weather, and even fewer seats. The FrontRunner has three state-of-the-art Bombardier double-decker commuter cars, and 1 refurbished 1970s Comet car. Some TRAX trains are new, many are old and tattered. Service reliability is not great across the rail systems (I don't have any numbers, just personal experiences), and some stations are in need of repair (the Clearfield station was getting some tile replaced, but it's only half-finished, and has remained so for a few months, with no evidence of progress).

It is true that I don't have to deal with traffic jams on the freeway, but it has rarely taken me more than an hour to reach the office, even in the most deadlocked situation. I don't have to deal with bad weather driving, which is a bonus, and I can get work done while on the train, provided I can find a table to work on in the commuter car. Despite these benefits, I hardly think they are worth the extra $700 per year the UTA will get for these luxuries, given the shortcomings of their infrastructure.

As a former resident of Toronto, Canada, I find the UTA fares outrageous. I came from a city where fares are currently less than $135 per month for access to all TTC buses, street cars, light rail, and subways. The TTC rail stations are vastly superior to UTA stations, providing adequate overhead protection and generous seating, and they are well-maintained. The system is much more reliable (I didn't experience a single delay in over two years), and the train cars are more consistently up-to-date. I wish I were familiar with other transit systems to compare, but the TTC is the only other transit system I'm deeply familiar with. Notice also that while an individual ride on the TTC might be $3, the monthly pass is $133. Compare that to the UTA, where a single ride is $2.50 and the monthly premium pass is $198 (it's true that the TTC pass does not include GO Transit, the long-distance commuter rail, and GO Transit does not offer discounts for TTC card holders, so this may not be exactly apples-to-apples).

I hope the UTA comes down considerably in price (about half), because when the University of Utah decides it doesn't want to pay the UTA for my transit pass, I'll be back on the road commuting in my car.

UPDATE:
As a partial solution to reducing fares, aside from increasing the state gas tax, maybe the executive officers, directors, and managers of the Utah Transit Authority could take a pay cut. I'd wager that $511,000 puts the COO of the UTA in the top 1 or 2 percent of Utah salaries.