While writing a shell script the other day, I was redirecting some output to /dev/null, as usual, when something dawned on me: why not redirect my output to /dev/random instead? After all, both Linux random devices are writable by everyone on the system:
$ ls -l /dev/*random
crw-rw-rw- 1 root root 1, 8 Nov 13 15:14 /dev/random
crw-rw-rw- 1 root root 1, 9 Nov 13 15:14 /dev/urandom
Knowing what I know about the Linux cryptographic pseudorandom number generator (CSPRNG), I was aware that any bits put into the CSPRNG input pool are hashed with the SHA1 cryptographic hash function, 512 bits at a time. This includes any data you redirect to it from the shell, as well as the pool's own output. Whenever data is fed into the input pool, the RNG is reseeded.
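As a quick illustration, any unprivileged user can feed bytes into the pool straight from the shell (the string here is arbitrary):

# Any user can mix arbitrary bytes into the input pool; no special
# privileges are needed, and no entropy credit is given for the write.
$ echo "totally arbitrary data" > /dev/random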
To understand this concept of seeding an RNG, let's assume for a moment that the only source of input for the RNG is its own output. If this were the case, we would only need a starting value to "seed" the RNG, then let it run by hashing its own digests. In this scenario, each digest is chosen deterministically, and if we know the input that seeded the RNG, we can predict all of its future outputs.
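To make that concrete, here is a toy sketch of such a self-feeding RNG at the shell, built on sha1sum. This is an illustration only, not how the kernel actually implements its pool:

# Toy hash-chain RNG: each "random" output is just the SHA1 digest
# of the previous output. Anyone who knows the seed can reproduce
# the entire sequence.
digest="my secret seed"
for i in {1..5}; do
    digest=$(printf '%s' "$digest" | sha1sum | awk '{print $1}')
    echo "$digest"
done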
Think of this scenario like a progress bar. For the SHA1 cryptographic hash, there are 2^160 possible unique digests. Ideally, our RNG would work through all 2^160 digests exactly once before starting over, provided there were enough time to do so (we now know this isn't the case; collision attacks have shown SHA1 falls short of that ideal). However, when you change the input by providing something other than the next digest in the queue, you change the next starting point of the RNG. It's as though you've "skipped" to a non-sequential location in your progress bar.
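Extending the toy sketch above, mixing outside data into the state mid-stream is exactly what performs that "skip":

# Hashing the current state together with fresh outside input moves
# the chain to a new, non-sequential position on the "progress bar".
digest=$(printf '%s%s' "$digest" "unpredictable keyboard timings" | sha1sum | awk '{print $1}')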
Now, consider constantly reseeding your RNG. This is what your Linux system is actually doing. It's constantly processing timing events from disk IO, network packets, keyboard presses, mouse movements, and so on. All these inputs are collected into an "entropy pool", which is then hashed with SHA1, 512 bits at a time, as mentioned earlier. This input changes the sequential ordering of the digest outputs, making the result unpredictable and non-deterministic.
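You can watch this pool on any Linux box; the kernel exposes its entropy estimate through procfs:

# Current entropy estimate, in bits (bounded by the pool size below)
$ cat /proc/sys/kernel/random/entropy_avail
# Total size of the input pool, in bits (4096 on most systems)
$ cat /proc/sys/kernel/random/poolsize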
So, when working on the shell, redirecting your output to /dev/random reseeds the CSPRNG, meaning you have changed the digest output ordering to something different than it would have been had you not redirected those bits. In fact, the more data you send to the CSPRNG, the more often you reseed it, forever altering the path it takes on its "progress bar".
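In a script, the change is a one-word swap, where some_noisy_command below is a hypothetical stand-in for whatever program's output you would otherwise discard:

# Before: the bits are simply thrown away
$ some_noisy_command > /dev/null 2>&1
# After: the same bits reseed the CSPRNG on their way out
$ some_noisy_command > /dev/random 2>&1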
Now, you may ask, why not have some userspace daemon running in the background that is always reseeding the CSPRNG? Sure, you can. In that case, I would recommend running Haveged on your system. Haveged probes many more hardware events on the system than a default install does, keeping the entropy pool topped off at full, so the CSPRNG is constantly reseeded. However, for shell scripts, redirecting to /dev/random instead of /dev/null works.
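Getting it running is typically a one-liner (shown for a Debian-based system; package and service names may differ on your distribution):

# Install and start the haveged entropy daemon (Debian/Ubuntu)
$ sudo apt-get install haveged
$ sudo service haveged start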
My only concern with redirecting to /dev/random is throughput. Running a simple and crude benchmark comparing /dev/null to /dev/random, I get the following on my workstation:
$ for I in {1..5}; do dd if=CentOS-6.5-x86_64-bin-DVD1.iso of=/dev/random; done
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 81.3842 s, 54.9 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 76.4597 s, 58.4 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 74.6036 s, 59.9 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 75.4946 s, 59.2 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 74.375 s, 60.1 MB/s
$ for I in {1..5}; do dd if=CentOS-6.5-x86_64-bin-DVD1.iso of=/dev/null; done
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 59.325 s, 75.3 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 56.5847 s, 79.0 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 54.4541 s, 82.1 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 56.0187 s, 79.8 MB/s
8726528+0 records in
8726528+0 records out
4467982336 bytes (4.5 GB) copied, 57.0039 s, 78.4 MB/s
Seems I get slightly better throughput with /dev/null, which isn't surprising. So, unless you know you need the throughput, I would recommend sending your data to /dev/random over /dev/null.