
New Home

If you notice the blog feels a bit more snappy and responsive, that's because it is. It has moved to a new resting place on MUCH more capable hardware. Previously, it ran on an old HP Pavilion desktop, a repossession from back in 2005 that I purchased from the company for $20: a mere Celeron at 700 MHz with 512 MB of PC-133 RAM. It had been powering this blog, and many others, from my basement on a single non-redundant 10 GB hard drive.

Recently, I purchased and built some new hardware with much more impressive specs, complete with high availability and redundancy everywhere: a ZFS filesystem with SSD L2ARC read caches, gigabit networking (behind a new fat pipe), 20 Gbps InfiniBand, more beef behind the CPU and RAM, and even a fairly solid GPU for some general-purpose computing. Basically, this blog went from being a 1979 Buick LeSabre to a 2008 Shelby GT500.

However, there are some wrinkles that still need ironing out. Hopefully, I can get those taken care of soon. In the meantime, please pardon any dust, should you notice it. Thanks.

Ramadan, Take Two

Two years ago, I participated in the Islamic holy month of Ramadan. I blogged about my experiences, and you can read them here: Looking forward to Ramadan, Ramadan - Week One, Ramadan - Week Two, Ramadan - Week Three, An Open Letter to Pastor Terry Jones, and Ramadan - Week Four. Well, I intend to participate again this year, and I intend to blog about my experiences in the same format as I did two years ago: once per week, summarizing how the week went.

Two years ago, I had three reasons for participating:

  1. Raise awareness about the Islam faith and promote religious tolerance.
  2. Grow closer to my God.
  3. Turn my personal weaknesses into strengths.

This year will be no different for me. The same three reasons above will apply. However, instead of reading the Holy Quran, as Muslims typically do, I will be reading one of my holy books, The Book of Mormon, from cover to cover. As I mentioned two years ago, I am a Christian belonging to The Church of Jesus Christ of Latter-day Saints. We regard The Book of Mormon, along with the Holy Bible, as Holy Scripture. The Book of Mormon is quite a bit longer than the Quran, at approximately 270,000 words, whereas the Quran has approximately 80,000. I felt rushed reading the Quran in one month, so I can only imagine how I'm going to feel reading 3x the amount of literature in the same time span. Should be interesting. I will still make attempts to attend the local mosque in Salt Lake City at least once per week.

One interesting side note: when I fasted two years ago, I did it from sunrise to sunset. It wasn't until later that I learned that I am supposed to be fasting from dawn until sunset, which is about 30 minutes longer each day. Knowing this, I'll make sure to follow it more closely. Also, last time, I chewed gum during the month to keep my breath smelling fresh. I learned that this was breaking the fast, so no chewing gum this year. Because the month runs July 19 through August 18 this year, the days are longer than they were two years ago, which means this will certainly be more challenging. We start with a fast lasting roughly 16 hours on July 19.

See you then.

Libvirt, Tyan Motherboards, and UUID

I recently built two servers that I plan on using as a sandbox for various technologies (InfiniBand, ZFS, RDMA, GlusterFS, Btrfs, Ceph, LXC, KVM, etc.). While getting everything installed and running, I ran into a rather interesting bug. I installed KVM and libvirt, and started rolling out some virtual machines. I wanted to test live migration using GlusterFS+ZFS shared storage; however, I was met with the following error:

(kvm1) # virsh migrate --live --unsafe --verbose www-dev qemu+ssh://aaron@kvm2.example.com/system
aaron@kvm2.example.com's password:
error: internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

Curious, I checked the board's system UUID through DMI on both servers, and sure enough, they had the same UUID:

(kvm1) # dmidecode -s system-uuid
00020003-0004-0005-0006-000700080009
(kvm2) # dmidecode -s system-uuid
00020003-0004-0005-0006-000700080009

Apparently, this is a known issue with Tyan motherboards. I have the Tyan Thunder n3600b (S2927-E). Digging through the BIOS, there is no option to change it. Short of flashing the BIOS, which may or may not support assigning new UUIDs, or replacing the chip, I didn't know what to do. So, I started digging deeper, and I found that libvirt actually supports changing what UUID is reported (even though it's not actually changed in the BIOS). This can be done by editing the /etc/libvirt/libvirtd.conf file (notice it is libvirtd.conf, not libvirt.conf). So, you just need to generate a random UUID on each host, and edit the config:

(kvm1) # cat /proc/sys/kernel/random/uuid
c0118352-aceb-4632-b0fa-014264e85fe0
(kvm2) # cat /proc/sys/kernel/random/uuid
2bce9972-dd6a-4318-a2b0-9f93706decdc

The line you need to modify is the "host_uuid" line. Something like this:

On kvm1:

host_uuid = "c0118352-aceb-4632-b0fa-014264e85fe0"

On kvm2:

host_uuid = "2bce9972-dd6a-4318-a2b0-9f93706decdc"
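If you prefer to script the edit rather than open the file by hand, something like this can work. This is a sketch, assuming a stock libvirtd.conf whose host_uuid line is present but commented out:

```shell
# Splice a freshly generated random UUID into libvirtd.conf (run as root)
conf=/etc/libvirt/libvirtd.conf
uuid=$(cat /proc/sys/kernel/random/uuid)
# Replace the existing (possibly commented-out) host_uuid line in place
sed -i "s|^#\{0,1\} *host_uuid.*|host_uuid = \"$uuid\"|" "$conf"
grep "^host_uuid" "$conf"
```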

At this point, restart the libvirt-bin service, and you should be good to go:

# /etc/init.d/libvirt-bin restart
Restarting libvirt management daemon: /usr/sbin/libvirtd.
# virsh capabilities | grep uuid
c0118352-aceb-4632-b0fa-014264e85fe0

I can now successfully migrate my virtual machines with no error. However, it's unfortunate that motherboard vendors are not properly generating unique UUIDs for their boards, especially ones marked as "server" motherboards.

Aptitude Madness

I always use the "-R" or "--without-recommends" switch with aptitude, and this is why:

root@yin:~# aptitude install virtinst
The following NEW packages will be installed:
  acl{a} colord{a} consolekit{a} dconf-gsettings-backend{a} dconf-service{a} fontconfig{a} fontconfig-config{a}
  hicolor-icon-theme{a} libatk1.0-0{a} libatk1.0-data{a} libcairo-gobject2{a} libcairo2{a} libck-connector0{a}
  libcolord1{a} libcups2{a} libdatrie1{a} libdbus-glib-1-2{a} libdconf0{a} libdrm-intel1{a} libdrm-nouveau1a{a}
  libdrm-radeon1{a} libdrm2{a} libexif12{a} libfdt1{a} libfile-copy-recursive-perl{a} libfontconfig1{a}
  libgd2-xpm{a} libgdk-pixbuf2.0-0{a} libgdk-pixbuf2.0-common{a} libgl1-mesa-dri{a} libgl1-mesa-glx{a}
  libglapi-mesa{a} libgphoto2-2{a} libgphoto2-l10n{a} libgphoto2-port0{a} libgtk-3-0{a} libgtk-3-bin{a}
  libgtk-3-common{a} libgtk-vnc-2.0-0{a} libgudev-1.0-0{a} libgusb2{a} libgvnc-1.0-0{a} libieee1284-3{a}
  libjasper1{a} libjbig0 liblcms2-2{a} libltdl7{a} libpam-ck-connector{a} libpango1.0-0{a} libpolkit-agent-1-0{a}
  libpolkit-backend-1-0{a} libpolkit-gobject-1-0{a} libsane{a} libsane-common{a} libsane-extras{a}
  libsane-extras-common{a} libthai-data{a} libthai0{a} libtiff4{a} libv4l-0{a} libv4lconvert0{a} libvde0{a}
  libxcb-glx0{a} libxcb-render0{a} libxcb-shm0{a} libxcomposite1{a} libxcursor1{a} libxdamage1{a} libxen-4.1{a}
  libxfixes3{a} libxft2{a} libxinerama1{a} libxpm4{a} libxrandr2{a} libxrender1{a} libxxf86vm1{a} openbios-ppc{a}
  openbios-sparc{a} openhackware{a} policykit-1{a} python-libvirt{a} python-libxml2{a} python-pycurl{a}
  python-urlgrabber{a} qemu{a} qemu-system{a} qemu-user{a} sane-utils{a} ttf-dejavu-core{a} update-inetd{a}
  vde2{a} virt-viewer{a} virtinst
0 packages upgraded, 93 newly installed, 0 to remove and 0 not upgraded.
Need to get 96.3 MB of archives. After unpacking 301 MB will be used.
Do you want to continue? [Y/n/?]

Consider it without recommends:

root@yang:~# aptitude -R install virtinst
The following NEW packages will be installed:
  python-libvirt{a} python-libxml2{a} python-pycurl{a} python-urlgrabber{a} virtinst
The following packages are RECOMMENDED but will NOT be installed:
  qemu virt-viewer
0 packages upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,258 kB of archives. After unpacking 5,403 kB will be used.
Do you want to continue? [Y/n/?]

5 packages versus 93. 2 MB versus 96 MB. That is all.
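If you want this behavior by default rather than passing -R every time, APT's Install-Recommends option can be disabled globally; aptitude honors this APT setting (worth verifying on your version, and the file name here is just a convention):

```shell
# Persistently disable installation of Recommends (run as root;
# the 99no-recommends file name is arbitrary)
echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/99no-recommends
```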

Another Reminder About Passwords

Two things are prompting this post: first, the recent leak of LinkedIn passwords, and second, family and friends' email accounts getting hacked. It's amazing to me how many posts there are on the Internet about password security, and how little attention people pay to them. One could say that much of the weak-password demographic doesn't read tech blogs, and if they did, they wouldn't understand most of the post. Even then, I've had friends in the tech industry who should know better, and still ended up with hacked accounts. So, while I might be reaching a limited demographic, and some of those I am reaching won't care, I'm covering it anyway.

To prevent a compromise of your account because of your password, you only need to do two things:

1. Use different passwords for every account online

This is probably the most difficult step for most. Remembering 100 passwords or more can be a major pain in the butt. Everyone has their way of doing it too, but from what I've seen with most people, a single password is used on multiple accounts. This is especially critical for finance and corporate accounts. No one really cares if your personal email or fitness account is hacked, but you might care when your savings is emptied, or your boss might care if sensitive data is leaked.

So, I would recommend the following system for using different passwords on every account. First, generate and print a password card. I've blogged about this before. Essentially, your passwords are stored in plain text on the card itself. You pick a row color and column symbol on the card as the starting point for your password, then go from there. That becomes the password for your account. Second, I would install KeePass. For every password you create from your card, and add to your account, make note of it in the encrypted database, including where the password starts, the direction it takes, and how long it is. This way, should you forget your starting location, you have an encrypted database to get access to all the passwords you've created.

2. Use passwords with a great deal of entropy

I hate "password strength meters", because they are always completely arbitrary, and they don't really communicate to the user what that strength is or where it comes from. Usually, they just assign points to things like uppercase letters versus lowercase, extra points for symbols and numbers, points for length, and so on. Like playing Tetris, if you fit all the pieces of your password together, maybe you can get a high score. To me, these are pointless and not helpful. Instead, you should be concerned about the entropy of your password.

Think of entropy like a haystack, with your password as the needle. Aside from burning down the haystack, can you find the needle? Of course, the larger the haystack, the harder it will be to find the needle. I have also blogged about this in the past. Thankfully, Gibson Research Corporation has put together a web application that uses this analogy. Entropy can be defined with a simple equation: the length of your password times the log base 2 of the size of the character set being searched. In other words, it's not arbitrary points; it shows you the size of your haystack. The larger the haystack, the more difficult it will be to find your needle. Play with some passwords on that web site, and you'll get an idea of how this works.
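That equation is easy to check from a shell. As a sketch, here is the entropy of a 9-character password drawn from the full set of 94 printable ASCII characters:

```shell
# entropy in bits = password length * log2(size of the character set)
# awk's log() is the natural log, so divide by log(2) to get log base 2
awk 'BEGIN { printf "%.1f bits\n", 9 * log(94) / log(2) }'   # prints 59.0 bits
```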

The key point here, however, is to help people understand how password attacks work. Attackers don't start by incrementing through the alphabet, starting with 'a'. Instead, when brute forcing, they will start with common words in a dictionary, and popular modifications of those words (think "leet speak"). They will use common phrases, then append and prepend numbers to these dictionary words and phrases. Believe it or not, this is a very effective way to get a vast majority of passwords. Why? Because the haystack is small. Very small. If your needle is in that haystack, it will be found.

So how do you get a larger haystack? Well, first use uppercase and lowercase letters, numbers and symbols; we want a large character set to search through. But above all, make the password LONG. You would be amazed at how much bigger your haystack is with a 9-character password versus an 8-character password. Length will buy you much more hay than some convoluted, difficult-to-remember, pain-in-the-butt password. Length is key. Different character sets are also important, but length gets you so much more hay.
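To put numbers on that 8-versus-9-character claim, here is the haystack size for each, over the 94 printable ASCII characters:

```shell
# Search-space ("haystack") size for the 94 printable ASCII characters.
# One extra character multiplies the haystack by 94 (roughly 6.1e15 vs 5.7e17).
awk 'BEGIN {
    printf "8 characters: %.2e candidates\n", 94^8
    printf "9 characters: %.2e candidates\n", 94^9
}'
```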

Conclusion

Think. Think about your haystack. Think about being an attacker. Think about your data. If you would just sit down, and think your passwords through, you would be ahead in the game. Remember, different passwords for different accounts, and big haystacks.

Zombie Processes: What They Are and How To Handle Them

First off, a zombie process isn't really a process. At least it's not executing anymore. A zombie process is more of a "state", and that state is "defunct". However, we typically refer to them as "zombie processes", so I'll stick with convention here. Second, a zombie process on a Unix system is a child process that has not been waited on by the parent. In a typical scenario, when a child process is finished executing its task, the chain of events will go something like this:

  1. Child process issues the signal SIGCHLD to the parent.
  2. Parent receives SIGCHLD, issues the "wait()" system call.
  3. Parent now receives the exit code of the child.
  4. Parent reaps the child from the process table.

So, when the child process has finished execution of its task, it will report the exit code to the parent. At this point, the child process will remain in the process table until it receives further instruction from the parent. This wait is the defunct, or zombie state. So, in reality, child processes are in this state all the time. It's just that normally, the parent process acts on it immediately. When the parent does not respond, then we have the zombie state of that child process.
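You can demonstrate this from a shell. In this sketch, `exec` replaces the subshell with a plain `sleep`, which never calls wait() on the inherited child, so the child lingers in the defunct state:

```shell
# Fork a child that exits after 1 second, while the parent (replaced by
# 'sleep 10' via exec) never calls wait() -- the child becomes a zombie
# until the parent exits and init reaps it.
sh -c 'sleep 1 & exec sleep 10' &
sleep 2
# The exited child shows up with a state of "Z" (defunct)
ps -o pid,ppid,stat,comm --ppid $!
```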

You can check if there are any zombie processes on your system with the following command:

$ ps -eo pid,ppid,user,args,stat --sort stat

Any state of "Z" is a zombie state. So, the question becomes, how do you clean out the zombie, if it is causing issues with your system? Well, you have 3 options:

  1. Physically wait around. Sometimes, the parent is busy, and just hasn't acknowledged the child. When the parent is free, it could clean it up.
  2. Send the "SIGCHLD" signal to the parent process. The ps command above shows the parent's PID in the "PPID" column.
  3. Fully kill the parent process. Any child processes will be orphaned and adopted by init, which routinely reaps its children and will clear out any zombies.
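Option 2 can be scripted. A sketch that finds every zombie's parent and asks it to reap; this is harmless if the parent already handles SIGCHLD, but you need permission to signal it (run as root for system-wide cleanup):

```shell
# List any zombies along with their parents
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
# Ask each zombie's parent to wait() on its children
for ppid in $(ps -eo ppid=,stat= | awk '$2 ~ /^Z/ {print $1}' | sort -u); do
    kill -s CHLD "$ppid" 2>/dev/null || true
done
```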

Install ZFS on Debian GNU/Linux

Table of Contents

Zpool Administration
  0. Install ZFS on Debian GNU/Linux
  1. VDEVs
  2. RAIDZ
  3. The ZFS Intent Log (ZIL)
  4. The Adjustable Replacement Cache (ARC)
  5. Exporting and Importing Storage Pools
  6. Scrub and Resilver
  7. Getting and Setting Properties
  8. Best Practices and Caveats

ZFS Administration
  9. Copy-on-write
  10. Creating Filesystems
  11. Compression and Deduplication
  12. Snapshots and Clones
  13. Sending and Receiving Filesystems
  14. ZVOLs
  15. iSCSI, NFS and Samba
  16. Getting and Setting Properties
  17. Best Practices and Caveats

Appendices
  A. Visualizing The ZFS Intent Log (ZIL)
  B. Using USB Drives
  C. Why You Should Use ECC RAM
  D. The True Cost Of Deduplication

UPDATE (May 06, 2012): I apologize for mentioning that it supports encryption. Pool version 28 is the latest source that the Free Software community has, and encryption was not added until pool version 30. So, encryption is not supported natively by the ZFS on Linux project. However, you can use LUKS containers underneath, or eCryptfs on top of the filesystem, which would still give you all the checksumming, scrubbing and data integrity benefits of ZFS. Until Oracle gets their act together and releases the current sources of ZFS, crypto is not implemented.

Quick post on installing ZFS as a kernel module, not FUSE, on Debian GNU/Linux. The documents already exist for getting this going, I'm just hoping to spread this to a larger audience, in case you are unaware that it exists.

First, the Lawrence Livermore National Laboratory has been working on porting the native Solaris ZFS source to the Linux kernel as a kernel module. So long as the project remains under contract by the Department of Defense in the United States, I'm confident there will be continuous updates. You can track the progress of that porting at http://zfsonlinux.org.

UPDATE (May 05, 2013): I've updated the installation instructions. The old instructions included downloading the source and installing from there. At the time, that was all that was available. Since then, the ZFS on Linux project has created a proper Debian repository that you can use to install ZFS. Here is how you would do that:

$ su -
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_2%7Ewheezy_all.deb
# dpkg -i zfsonlinux_2~wheezy_all.deb
# apt-get update
# apt-get install debian-zfs

And that's it!

If you're running Ubuntu, which I know most of you are, you can install the packages from the Launchpad PPA https://launchpad.net/~zfs-native.

UPDATE (May 05, 2013): The following instructions may not be relevant for fixing the manpages. If they are, I've left them in this post, just struck out.

A word of note: the manpages get installed to /share/man/. I found this troubling. You can modify your $MANPATH variable to include /share/man/, or create symlinks, which is the approach I took:

# cd /usr/share/man/man8/
# ln -s /share/man/man8/zdb.8 zdb.8
# ln -s /share/man/man8/zfs.8 zfs.8
# ln -s /share/man/man8/zpool.8 zpool.8

Now, make your zpool, and start playing:

$ sudo zpool create test raidz sdd sde sdf sdg sdh sdi

It is stable enough to run a ZFS root filesystem on a GNU/Linux workstation as something to play around with. It is copy-on-write, and supports compression, deduplication, file atomicity, off-disk caching, encryption, and much more. At this point, unfortunately, I'm convinced that ZFS as a Linux kernel module will become "stable" long before Btrfs will be stable in the mainline kernel. Either way, it doesn't matter to me. Both are Free Software, and both provide the features we've long needed for today's storage demands. Competition is healthy, and I love having choice. Right now, that choice might just be ZFS.

Mount Raw Images

Just recently, I needed to mount a KVM raw image file, because it was depending on a network mount that was no longer accessible, and any attempts to interact with the boot process failed. So, rather than booting off a live CD, or some other medium, I decided to mount the raw image file. After all, it is ext4.

However, mounting an image file means knowing where the root filesystem begins, which means knowing how to offset the mount so you can access your data correctly. Here is what I did.

First, I set up a loopback device, so I could gather information about its partition layout:

# losetup /dev/loop0 virt01.img
# fdisk -l /dev/loop0

Disk /dev/loop0: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009bdb7

      Device Boot      Start         End      Blocks   Id  System
/dev/loop0p1        37943296    41940991     1998848   82  Linux swap / Solaris
/dev/loop0p2   *        2048    37943295    18970624   83  Linux

Partition table entries are not in disk order

In this case, the virtual machine disk is 21.5 GB in size, and reads and writes in 512-byte sectors. Further, swap occupies partition 1 at the tail end of the disk, while the ext4 root filesystem, partition 2, sits at the front, beginning at sector 2048, or byte 2048*512 = 1048576.
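The offset arithmetic can be checked from the shell:

```shell
# byte offset for losetup = start sector * sector size
echo $((2048 * 512))   # prints 1048576
```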

So, now I just need to tear down the loopback device, and recreate it with an offset of 1048576 bytes, at which point I should be able to mount the filesystem:

# losetup -d /dev/loop0
# losetup /dev/loop0 virt01.img -o 1048576
# mount /dev/loop0 /mnt
# ls /mnt
bin/   home/            lib32/       mnt/   run/      sys/  vmlinuz@
boot/  initrd.img@      lib64/       opt/   sbin/     tmp/  vmlinuz.old@
dev/   initrd.img.old@  lost+found/  proc/  selinux/  usr/
etc/   lib/             media/       root/  srv/      var/

At this point, I can edit my problematic /mnt/etc/fstab file to fix the troubled boot, and boot it up.

Tighten the Security of "Security Questions"

Some of you may remember the email hack of Sarah Palin's email by David Kernell in 2008. The Wikipedia article describes how this was done:

The hacker, David Kernell, had obtained access to Palin's account by looking up biographical details such as her high school and birthdate and using Yahoo!'s account recovery for forgotten passwords.

Ever since then, I decided to change how I answer these "security questions" on websites. Knowing what I know about security and cryptography, I applied what I knew to these security questions. Here's how I handle them now:

  1. Generate a random string of characters, known as a "salt". Something like "Ga0Au1Ieshea".
  2. Answer the question. If the question is "What is your mother's maiden name?", suppose the answer is "Smith".
  3. Apply MD5(salt+answer). In this case, it would be MD5(Ga0Au1IesheaSmith) which results in "28e03f4c2d90b8c1120bf541927976f1".

So, when the site is asking you "What is your mother's maiden name?", the answer you would provide is "28e03f4c2d90b8c1120bf541927976f1".
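The three steps can be reproduced in a shell with coreutils' md5sum, using the example salt and answer from above (the hash quoted in the steps can be checked the same way):

```shell
# Example salt and answer from the steps above
salt='Ga0Au1Ieshea'
answer='Smith'
# printf avoids a trailing newline, so this is MD5(salt + answer)
printf '%s%s' "$salt" "$answer" | md5sum | awk '{print $1}'
```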

Obviously, there are a couple of concerns that you should be aware of. First, the form field might have a character limit; adjust accordingly. You could provide just the first n characters, based on the restriction. Personally, I've never seen this restriction, but I certainly won't say that it hasn't been implemented. Second, it's critical that you generate a strong random salt, and that you keep the salt private. If the salt is known, or weak, then this whole scheme falls apart, and you're no better off than just providing the answer to the question.

But, if you do everything correctly, then you have tightened down these lame "security questions", and an attacker will be no more successful than with hacking your account password. And, because a hashing algorithm is deterministic, the output will always be the same, so you can regenerate the answer on demand. Feel free to use SHA-1 or some other hashing algorithm instead of MD5.

Hello ZNC

After nearly 6 years of running Irssi behind GNU Screen and tmux, I've ditched that setup in favor of ZNC. Don't panic, I'm still running Irssi locally, but this does allow me to try out different IRC clients without being disruptive to the channels I'm in (including giving yet another assessment to WeeChat), and it will actually deliver the back buffer, unlike the irssi-proxy module.

Setting it up was rather painless. I installed it using my operating system vendor's packaging system, ran ZNC, and it asked me some questions out of the gate. I accepted only the defaults during this first run, but made sure that I loaded the web admin module and bound it to a port via SSL. After the setup finished, I logged into the web interface (securely) and began configuring ZNC the way I would like it.

The web interface is easy to use. It is loaded with plenty of options and features, and the layout is clean and intuitive. One thing I learned quickly was how users are set up. Unlike Irssi, where you create network definitions for the networks you want to connect to, then assign nicknames based on those networks, with ZNC a username IS a network. So, because I connect to multiple IRC servers, I need a username (and password) for each. Set up a username, tell it the server you will be connecting to, set other options, such as the buffer backlog, and save. Do this for each server you wish to connect to. Lastly, poke a hole in your firewall for your clients to connect through, and you're set.

So far, I've been very pleased with ZNC. It's a solid IRC bouncer. And it's great to not need an SSH tunnel to bind to irssi-proxy, so I can connect local clients securely (this was a pain to set up on Android (which, BTW, Yaaic is a SOLID Android client)). Expect more posts about ZNC on this blog.

Encrypt Your Irssi Config

Actually, this can work for any config that you want to encrypt. Because I'm such an IRC addict (admittedly), and use Irssi as my client of choice, AND the fact that others have asked me about it after I blogged about encrypting your IMAP/SMTP passwords with Mutt, I figured this was an appropriate title.

The Problem
You are running Irssi on a shared shell provider. Many people also have logins to the provider. You worry that the administrators of the service could see your usernames and passwords in your software configs.

The Solution
In all reality, just don't put your login credentials in the configuration file, if the utility does not support encrypting them. Plain and simple. It sucks typing in your credentials every time you run the software, but it is the best solution. However, if you want the convenience of having your credentials automatically provided, yet you want them securely stored, then this may be the next best solution.

First, have the site administrator install the eCryptfs utilities:

% sudo aptitude install ecryptfs-utils

Now, create a private encrypted mount, mount it, move your Irssi config (or whatever) into the private directory, create a symlink, start the application, then unmount the encrypted mount:

% ecryptfs-setup-private
% ecryptfs-mount-private
% mkdir ~/Private/configs
% mv ~/.irssi/config ~/Private/configs/irssi-config
% ln -s ~/Private/configs/irssi-config ~/.irssi/config
% irssi
% ecryptfs-umount-private

There are a few drawbacks to this setup that you should be aware of. First, you won't be able to "/reload" or "/save" unless you remount the encrypted ~/Private filesystem. Second, anything else that Irssi is doing will not be encrypted on disk, such as autologging channels and queries. You could put those in the encrypted filesystem as well, but then you would not be able to unmount it. It would need to remain mounted, which means the site administrators would still be able to see the login credentials. Third, the encrypted data in ~/.Private/ could be removed or corrupted by the site administrators (at which point, I would stop using the service). Regardless, you would then be without an Irssi config entirely, so it's best to keep a backup.

Until Irssi provides a way to allow encrypting the server or NickServ passwords with GnuPG, OpenSSL, or some other utility, this seems to be the best way to do it.

Setup Network Interfaces in Debian

If you're not using NetworkManager or Wicd, or some other similar tool to automatically manage your network interfaces, this post is for you. In the Debian world, you have a single file that manages your network interfaces. It can manage VLANs, bonded interfaces, virtual interfaces and more. You can establish rules for what should happen before an interface is brought online, while it is coming up, and after it is up. The same kinds of rules can be applied when taking the interface down. Let's look at some of these.

First, let's look at the basic setup for getting an interface online with DHCP. The file we'll be looking at this entire time is the /etc/network/interfaces file:

auto eth0
allow-hotplug eth0
iface eth0 inet dhcp

The first line tells the kernel to bring the "eth0" interface up when the system boots. The second line tells the kernel to start the interface if a "hotplug" event is triggered. The third line defines the configuration of the "eth0" interface. In this case, it should use IPv4, and should request an IP address from a DHCP server. A static configuration could look like this:

auto eth0
allow-hotplug eth0
iface eth0 inet static
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0

The first two lines remain the same. In the third line, we have decided to use static addressing, rather than dynamic. Then, we followed through by configuring the interface. It's important to note that the indentation is not required. I only indented it for my benefit.

What about bonding? Simple enough. Suppose you have 2 NICs, one on the motherboard and the other in a PCI slot, and you want to ensure high availability should the PCI card die. Then you could do something like this:

auto eth0
iface eth0 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down

auto eth1
iface eth1 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down

auto bond0
iface bond0 inet static
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0

Technically, I don't need to tell the kernel to bring up interfaces eth0 and eth1 if I bring up bond0 and slave eth0 and eth1 to it. But this configuration illustrates some points. First, there are the pre-up, up, post-up, pre-down, down, and post-down commands that you can use in your interfaces(5) file. Each does something to the interface at a different point during configuration. Also notice I'm using the $IFACE variable. Others exist that allow you to create scripts for your interfaces. See http://www.debian.org/doc/manuals/debian-reference/ch05.en.html#_scripting_with_the_ifupdown_system for more information.

On the bonded interface, I'm putting in two slaves, then setting some bonding configuration that I want, such as using 802.3ad mode. Of course, the interface is static, so I provided the necessary information.

What if we wanted to add our bonded interface to a VLAN? Simple. Just append a dot "." and the VLAN number to the interface name, like so:

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    network 10.19.84.0
    gateway 10.19.84.1
    netmask 255.255.255.0
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

Bring the interface up, then verify that the kernel has assigned it to the right VLAN:

$ sudo cat /proc/net/vlan/config
VLAN Dev name    | VLAN ID
Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
bond0.42        | 42  | bond0

Notice that I specified "vlan-raw-device bond0". This is due to a bonding bug in the VLAN tools, where merely naming the interface after its VLAN is not enough; you must also tell the kernel which raw device the VLAN interface rides on.

How about bridged devices?

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet manual
    post-up ifconfig $IFACE up
    pre-down ifconfig $IFACE down
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto br42
iface br42 inet static
    bridge_ports bond0.42
    address 10.19.84.1
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1

The only new thing here is the "bridge_ports" option. In this case, our bridge contains the bond0.42 interface, which is in VLAN 42. Imagine having a KVM or Xen hypervisor with a guest that needs to be in several VLANs. How would you set up all those bridges? Simple: create a VLAN interface for each VLAN, then create a bridge on top of each VLAN interface.

Lastly, what about virtual IPs? I've heard that you can assign multiple IP addresses to a single NIC. How do you set that up? Simple. Just add a colon ":" then append a unique number. For example, say I have only one NIC, but wish to have two IP addresses, each in a different network:

auto eth0
iface eth0 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1

auto eth0:1
iface eth0:1 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0

It's important to note that you generally only need one default gateway to get out; your kernel will route packets accordingly. If you must use multiple gateways, you'll have to add the extra routes to the kernel's routing table manually.
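When the second network does need its own router, a static route added as the interface comes up is usually enough. A sketch, where the 10.20.30.0/24 remote network and the 10.13.37.1 router are hypothetical:

```
auto eth0:1
iface eth0:1 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
    # hypothetical: reach 10.20.30.0/24 through this network's router
    post-up route add -net 10.20.30.0 netmask 255.255.255.0 gw 10.13.37.1
    pre-down route del -net 10.20.30.0 netmask 255.255.255.0 gw 10.13.37.1
```

The default gateway on eth0 still handles everything else; this route only covers the one network reachable through the second interface.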

Of course, we could combine everything we learned here. See if you can make out what each interface is doing:

auto eth0
iface eth0 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto bond0.42:1
iface bond0.42:1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    # necessary due to a bonding bug in vlan tools
    vlan-raw-device bond0

auto br42
iface br42 inet static
    bridge_ports bond0.42:1
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0

Lastly, MTU. There is a lot of misinformation out there about frame size. In my professional experience, setting the MTU to 9000 bytes does not result in noticeably improved throughput. But it does have an effect on the CPU: a larger frame size can mean much lower CPU usage, both on the switch and in your box. However, some protocols, such as those running over UDP, might break with a 9k MTU. So, use appropriately. In any event, here is how I generally set my MTU when dealing with multiple interfaces:

auto eth0
iface eth0 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    mtu 9000

auto eth1
iface eth1 inet manual
    pre-up ifconfig $IFACE up
    post-down ifconfig $IFACE down
    mtu 9000

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    # LACP configuration
    bond_mode 802.3ad
    bond_miimon 100
    bond_lacp_rate fast
    bond_xmit_hash_policy layer2+3
    mtu 9000

auto bond0.42
iface bond0.42 inet static
    address 10.19.84.2
    netmask 255.255.255.0
    network 10.19.84.0
    gateway 10.19.84.1
    mtu 9000
    # necessary due to a bug in vlan tools
    vlan-raw-device bond0

auto bond0.43
iface bond0.43 inet static
    address 10.13.37.2
    netmask 255.255.255.0
    network 10.13.37.0
    mtu 1500
    # necessary due to a bug in vlan tools
    vlan-raw-device bond0

Note that I set the MTU to 9000 on all interfaces except for bond0.43, which is 1500. This is perfectly acceptable; in all reality, setting the MTU to 1500 on bond0.43 is just capping what bond0 can really do. But it is important to set the MTU on each interface, otherwise the default frame size of 1500 bytes will be used, and your packets will get fragmented anyway. You must also set the MTU to 9000 on the switch ports, as well as on any other servers and interfaces that you want jumbo frames on.
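One way to check that jumbo frames actually make it end-to-end is to send a ping that is not allowed to fragment. With a 9000-byte MTU, the largest ICMP payload is 9000 minus the 20-byte IP header and the 8-byte ICMP header. A quick sketch (the target host 10.19.84.3 is hypothetical):

```shell
# Largest ICMP payload that fits in a single 9000-byte frame:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes
payload=$((9000 - 20 - 8))
echo $payload

# With "do not fragment" set, this ping only succeeds if every hop
# to the target supports jumbo frames:
#   ping -M do -s $payload 10.19.84.3
```

If the ping fails with a "message too long" error, something along the path is still at 1500 bytes.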

Randomize First, Then Encrypt Your Block Device

This blog post is a continuation of the previous post, where I showed why you should not use ECB when encrypting your data. When putting down an encrypted filesystem, such as LUKS, you've probably been told that you should put random data down on the partition BEFORE encrypting the disk. This post will illustrate why, and it's simple enough to do on your own GNU/Linux system.

I'll be using bitmaps in this example, as I did in the previous, except I'll use a different image. First, let's create a "random filesystem". Encrypted data should appear as nothing more than random data to the casual eye. This will be our target image for this exercise.

$ dd if=/dev/urandom of=target.bmp bs=1 count=480054
$ dd if=glider.bmp of=target.bmp bs=1 count=54 conv=notrunc
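The byte count above isn't arbitrary. A 24-bit bitmap carries a 54-byte header followed by 3 bytes per pixel, so 480054 bytes is exactly the size of a 400x400 24-bit image. A quick sanity check:

```shell
# 54-byte BMP header + 400*400 pixels * 3 bytes each (24-bit color)
echo $((54 + 400 * 400 * 3))    # 480054
```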

Here is what my target "encrypted filesystem" should look like (converting to GIF format for this post). Click to zoom:

Plaintext image Target filesystem

Now let's create a file full of binary zeros. This file will be the basis for our block device, and imitates an unused hard drive quite well. I have chosen ext2 over other filesystems, mostly because of the size restrictions with these small files. Feel free to increase the file sizes, and use ext3, ext4, XFS, JFS, or whatever you want.

The file "400x400.bmp" is a white bitmap that is 400x400 pixels in size, rather than the 200x200 pixel "glider.bmp". This is to accommodate the larger filesystems used in this post, and to make the illustrations clearer. For your convenience, download the 400x400.bmp and glider.bmp for this exercise.

In these commands, "$" means running the command as an unprivileged user, "#" means running as root.

$ dd if=/dev/zero of=plain-zero-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 plain-zero-ext2.bmp
# mkfs.ext2 /dev/loop0
# mount /dev/loop0 /mnt
# cp glider.bmp /mnt
# umount /mnt
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=plain-zero-ext2.bmp bs=1 count=54 conv=notrunc

This should give us a reference image to see what a "plaintext" filesystem would look like with our file copied to it. Now, let's setup two encrypted filesystems, one using ECB and the other using CBC, and we'll compare the three files together:

First the ECB filesystem:

$ dd if=/dev/zero of=ecb-zero-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 ecb-zero-ext2.bmp
# cryptsetup -c aes-ecb create ecb-disk /dev/loop0
# mkfs.ext2 /dev/mapper/ecb-disk
# mount /dev/mapper/ecb-disk /mnt
# cp glider.bmp /mnt
# umount /mnt
# dmsetup remove ecb-disk
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=ecb-zero-ext2.bmp bs=1 count=54 conv=notrunc

Now the CBC filesystem:

$ dd if=/dev/zero of=cbc-zero-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 cbc-zero-ext2.bmp
# cryptsetup create cbc-disk /dev/loop0
# mkfs.ext2 /dev/mapper/cbc-disk
# mount /dev/mapper/cbc-disk /mnt
# cp glider.bmp /mnt
# umount /mnt
# dmsetup remove cbc-disk
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=cbc-zero-ext2.bmp bs=1 count=54 conv=notrunc

What do we have? Here are the results of my filesystems. Click to zoom:

Plaintext filesystem ECB filesystem CBC filesystem

How do they compare to our target filesystem? Not very closely, really. Even when using CBC mode with AES, we can clearly see where the encrypted data resides, and where it doesn't. Now, rather than filling our disk with zeros, let's fill it with random data, and go through the same procedure as before:

First the "plaintext" filesystem:

$ dd if=/dev/urandom of=plain-urandom-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 plain-urandom-ext2.bmp
# mkfs.ext2 /dev/loop0
# mount /dev/loop0 /mnt
# cp glider.bmp /mnt
# umount /mnt
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=plain-urandom-ext2.bmp bs=1 count=54 conv=notrunc

Now the ECB filesystem:

$ dd if=/dev/urandom of=ecb-urandom-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 ecb-urandom-ext2.bmp
# cryptsetup -c aes-ecb create ecb-disk /dev/loop0
# mkfs.ext2 /dev/mapper/ecb-disk
# mount /dev/mapper/ecb-disk /mnt
# cp glider.bmp /mnt
# umount /mnt
# dmsetup remove ecb-disk
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=ecb-urandom-ext2.bmp bs=1 count=54 conv=notrunc

Finally, the CBC filesystem:

$ dd if=/dev/urandom of=cbc-urandom-ext2.bmp bs=1 count=480054
# losetup /dev/loop0 cbc-urandom-ext2.bmp
# cryptsetup create cbc-disk /dev/loop0
# mkfs.ext2 /dev/mapper/cbc-disk
# mount /dev/mapper/cbc-disk /mnt
# cp glider.bmp /mnt
# umount /mnt
# dmsetup remove cbc-disk
# losetup -d /dev/loop0
$ dd if=400x400.bmp of=cbc-urandom-ext2.bmp bs=1 count=54 conv=notrunc

Check our results. Click to zoom:

Plaintext filesystem ECB filesystem CBC filesystem

Much better! By filling the underlying disk with (pseudo)random data first, then encrypting the filesystem with AES using CBC, we have a hard time telling the difference between it and our target filesystem, which was our main goal.

So, please, for the love of security, before putting down an encrypted filesystem on your disk, make sure you fill it with random data FIRST! The Debian installer, and many others, offer to do this for you. Let it run to completion, even if it takes a few hours.
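On a real disk, the fill is one long write of pseudorandom data over the whole device. A scaled-down sketch, using a small scratch file in place of the disk (since the actual device name is system-specific):

```shell
# Stand-in for a block device; on real hardware this would be
# something like /dev/sdX, and would take hours, not milliseconds.
dd if=/dev/urandom of=scratch-disk.img bs=1M count=1

# The "disk" is now 1 MiB of pseudorandom data, ready for cryptsetup.
wc -c < scratch-disk.img    # 1048576
```

On the real device, point `of=` at the disk you are about to encrypt, and only then run cryptsetup on it.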

ECB vs CBC Encryption

This is something you can do on your computer fairly easily, provided you have OpenSSL installed, which I would be willing to bet you do. Take a bitmap image (any image will work fine, I'm just going to use bitmap headers in this example), such as the Ubuntu logo, and encrypt it with AES in ECB mode. Then encrypt the same image with AES in CBC mode. Apply the 54-byte bitmap header to the encrypted files, and open up in an image viewer. Here are the commands I ran:

$ openssl enc -aes-256-ecb -in ubuntu.bmp -out ubuntu-ecb.bmp
$ openssl enc -aes-256-cbc -in ubuntu.bmp -out ubuntu-cbc.bmp
$ dd if=ubuntu.bmp of=ubuntu-ecb.bmp bs=1 count=54 conv=notrunc
$ dd if=ubuntu.bmp of=ubuntu-cbc.bmp bs=1 count=54 conv=notrunc

Now, open all three files, ubuntu.bmp, ubuntu-ecb.bmp and ubuntu-cbc.bmp, and see what you get. Here are my results with the password "chi0eeMieng7Ohe8ookeaxae6ieph1":

Plaintext ECB Encrypted CBC Encrypted

Feel free to play with different passwords, and notice the colors change. Or use a different block cipher such as "bf-ecb", "des-ecb", or "rc2-ecb" with OpenSSL, and notice details change.

What's going on here? Why can I clearly make out the image when it's encrypted with ECB? Well, ECB, or electronic codebook, is a block cipher mode that operates on each block independently. ECB does not use an initialization vector to kickstart the encryption, so every block is encrypted with the same key in exactly the same way. If any underlying block is the same as another, then the encrypted output is exactly the same. Thus, all "#000000" hexadecimal colors in our image, for example, will have the same encrypted output, per block (thus, why you see stripes).

Compare this to CBC, or cipher-block chaining. Here, an initialization vector must be used before the encryption can begin; in our case, OpenSSL derives both the key and the IV from the password. Each plaintext block is XORed with the previous ciphertext block before being encrypted, with the IV standing in for the "previous block" at the start of the file. This chaining continues to the end of the file, and it ensures that every "#000000" hexadecimal color will have a different output, causing the file to appear random (I have an attacking algorithm to still leak information out of a CBC-encrypted file, but that will be for another post).
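You can see this chaining effect without any images at all. Encrypt two identical 16-byte plaintext blocks with a raw hex key: under ECB the two ciphertext blocks come out identical, while under CBC the chaining makes them differ. The all-zero key and IV here are purely for the demonstration; never use them for real data:

```shell
# Two identical 16-byte plaintext blocks (32 bytes total)
printf 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' > two-blocks.bin

KEY=$(printf '0%.0s' $(seq 64))   # 256-bit all-zero key, hex-encoded
IV=$(printf '0%.0s' $(seq 32))    # 128-bit all-zero IV, hex-encoded

# ECB: both 16-byte ciphertext blocks are identical
openssl enc -aes-256-ecb -K $KEY -nopad -in two-blocks.bin | od -An -tx1

# CBC: the second ciphertext block differs from the first
openssl enc -aes-256-cbc -K $KEY -iv $IV -nopad -in two-blocks.bin | od -An -tx1
```

The `-nopad` flag keeps the output at exactly two blocks, so the comparison is easy to eyeball in the hex dump.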

Hopefully, this simple illustration convinces you to use CBC, or at least to not use ECB, when encrypting data that might be public.

Why I Cryptographically Sign My Email

Yesterday, I received a disturbing phone call. Someone very close to me, call him John, might lose his job, because a slanderous, offensive email was sent with forged headers, claiming to be from John. John certainly did not send the mail, and those close to John know that the tone of the mail does not sound like something John would send. The email made its way to John's boss, human resources, IT, and other departments. The director of IT said that whoever sent the email will get fired. Hopefully, they understand the principle of innocent until proven guilty, and all John has to do is cast reasonable doubt that he sent the mail. Examining the mail headers should deliver that doubt. I've told John that I would be willing to examine the headers, along with his IT department, to help in any way I can. Hopefully, this ends well.

I've never known anyone personally that this has happened to, until now. But, I've been cryptographically signing my email since 2004. Every single one. I have almost 10,000 emails in my Sent folder, all of which are signed. Further, I think I've been very clear to my friends and family, that it is their responsibility to verify the signature. Should they receive an email claiming to come from me, they should doubt the authenticity of the mail if it is not signed.

Of course, this does not prove anything about future email. I may wish to stop signing my mail at any time. But all I need to do is cast reasonable doubt that I sent the mail. A back history of over 7 years and 10,000 cryptographically signed emails should cast enough reasonable doubt on the message in question, should I be placed in that situation. Combine that with the fact that anyone can forge email headers, and the case falls apart. Unless you can clearly, logically, and rationally prove that I sent the mail, there is enough doubt surrounding it that I remain innocent.

I know others don't see email the same way I do, and treat their email experience differently, such as John. And in all reality, if setting up OpenPGP or S/MIME weren't such a major PITA, it might be more widely used. But for the time being, all I can do is continue to lead by example. For me, the 15 minutes it took for initial setup, and having to provide a passphrase every time I wish to send an email, is peanuts compared to threats such as this. Of course, if the organization John worked for required S/MIME on their email (I've worked for one such organization that made this requirement), then it would be clear that the mail was a fake.

UPDATE: Turns out that this organization has a utility to send messages to anyone in the organization. It's not email, but some custom, proprietary application. Further, it requires no authentication. Anyone can send messages to anyone pretending to be whoever they wish.
