
Bitlbee and OTR

I'm actually surprised that I haven't blogged about this before, seeing as I use it daily. Further, since I seem to be on a security blogging streak, it only seems fitting to discuss OTR support in Bitlbee now.

OTR, or Off-the-Record Messaging, provides encrypted and authenticated communication with full deniability and perfect forward secrecy. The idea is simple: just as you can deny anything you say to a journalist when speaking "off the record", you should have that same ability with instant messaging. It achieves deniability by not cryptographically signing each message; the message is simply encrypted, and your buddy decrypts it on the other end. Further, each message is encrypted with a one-time AES session key. This means that even if you successfully snatch the AES key for a given message, it only applies to that message; it reveals nothing about previous messages, and does nothing for you on future messages. Thus, you have perfect forward secrecy for your chat. However, I want to spend some time on the inner workings of how authentication and trust are established, if the messages aren't signed for validation.
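To make the forward-secrecy property concrete, here is a toy Python sketch. This is not OTR's actual protocol (OTR continually re-keys with fresh Diffie-Hellman exchanges, and the "shared secret" and cipher below are stand-ins): each message gets a one-time key derived by ratcheting a chain forward through a hash, so learning the current key tells you nothing about earlier ones.

```python
import hashlib

def ratchet(key):
    """Derive a one-time message key and the next chain key from the current key."""
    msg_key = hashlib.sha256(b"msg" + key).digest()
    next_key = hashlib.sha256(b"chain" + key).digest()
    return msg_key, next_key

def xor_stream(key, data):
    # Toy keystream cipher: NOT real crypto, only here to show how keys are used.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical shared secret; real OTR derives keys from Diffie-Hellman exchanges.
chain = hashlib.sha256(b"shared secret").digest()

k1, chain = ratchet(chain)            # key for message 1, then ratchet forward
c1 = xor_stream(k1, b"first message")

k2, chain = ratchet(chain)            # key for message 2
c2 = xor_stream(k2, b"second message")

# Stealing k2 today reveals nothing about c1: recovering k1 from k2 (or from
# the current chain key) would require inverting SHA-256.
assert xor_stream(k2, c2) == b"second message"
assert xor_stream(k1, c1) == b"first message"
```

Delete each message key after use, and a captured key only ever exposes one message.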

First, OTR is fully supported in Bitlbee version 3.0 and later. For Debian and Ubuntu packages, you will need to install both the "bitlbee" and "bitlbee-plugin-otr" packages if you want to take advantage of it. Of course, you could grab the source, and compile it in yourself as well. Once installed, you will want to generate master keys for each account you wish to use OTR with. You can get a list of your accounts with "account list" in the &bitlbee window of your IRC client:

15:20 <@aaron> account list
15:20 <@root>  0 (gmail): jabber, (connected)
15:20 <@root>  1 (twitter): twitter, aarontoponce (connected)
15:20 <@root>  2 (identica): twitter, eightyeight
15:20 <@root> End of account list

I wish to use OTR with my Gmail account. Thus, I need a key for the account. You can run "otr keygen <account-no>" for this:

otr keygen 0

It will take some time to generate the key, as it grabs entropy from /dev/random. You may want to add more entropy to the pool by running "du -sh /" from a different terminal to help speed things up. Wiggle the mouse, type in text editors, do other things to generate environmental noise. After it's finished, you can see the key fingerprint by running "otr info":

15:28 <@aaron> otr info
15:28 <@root> private keys:
15:28 <@root> - DSA
15:28 <@root>     8D57F662 D991493F 1084427E 31C7E1B9 2074B713
15:28 <@root>  
15:28 <@root> connection contexts: (bold=currently encrypted)
15:28 <@root>   (none)

Now, you're ready to communicate with your buddies via OTR. The question that will likely come up is: how do you know that the person you wish to communicate with secretly is really the person you expect? Well, for the years that I've used OTR, I've trusted the individual implicitly. I've communicated with their account enough that when we both decide to "go secure" with our chats, it's no big deal. I just trust their keys, they trust mine, and we go about our day. However, you may not want to do that, and instead explicitly verify that you are communicating with whom you think.

This can be solved a number of ways:

  1. You could call them on the phone, and verify fingerprints verbally.
  2. You could agree on a shared secret before initiating the communication electronically, and use that shared secret to trust the keys.
  3. You could issue a challenge/response question to the opposite party. If they answer correctly, trust the key.

Personally, I like the challenge and response system. It's not terribly difficult to think of a question that only your buddy would know, and issue the challenge for them to respond to. Of course, they need to do the same with you. This is known as the Socialist Millionaire Protocol. Use "otr smpq <nick> <question> <answer>". It could go something like this:

otr smpq foobar "What color pen did I use on your whiteboard yesterday? one word, lowercase" red
15:35 <@root> smp: initiating with foobar...

On the other end, the nick "foobar" would have seen this:

06:42 <@root> smp: initiated by aaron with question: "What color pen did I use on your whiteboard yesterday? one word, lowercase"
06:42 <@root> smp: respond with otr smp aaron <answer>

At this point, "foobar" wishes to respond, as he thinks he knows the question. He would use "otr smp <nick> <answer>":

otr smp aaron red
06:43 <@root> smp: responding to aaron...
06:43 <@root> smp aaron: correct answer, you are trusted

At this point, I now trust that foobar is who he says he is, so I can trust any OTR communication from him. However, he hasn't verified that I am who he thinks I am. After all, I could be someone pretending to be Aaron, trying to get sensitive information from him. So, he should issue the same sort of challenge to me, something that only Aaron would know, to thwart any potential baddies. Should I answer correctly, he can then trust my fingerprint, and we will have two-way trusted OTR communication.

When everything is said and done, Bitlbee will print out the following:

15:36 <@root> smp foobar: secrets proved equal, fingerprint trusted
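The real SMP is a zero-knowledge Diffie-Hellman exchange: neither side ever reveals its answer, even to offline guessing. As a rough illustration of the flow only (this naive sketch is NOT the SMP and is brute-forceable, since all names here are hypothetical), each side could send a MAC of its answer under a shared per-conversation nonce and compare:

```python
import hashlib, hmac, os

# A per-conversation nonce both sides share; hypothetical here.
session_id = os.urandom(16)

def commitment(answer, session_id):
    # Each side sends a MAC of its normalized answer instead of the answer itself.
    return hmac.new(session_id, answer.strip().lower().encode(), hashlib.sha256).digest()

# Aaron asked for "one word, lowercase"; normalizing makes minor slips harmless:
aaron = commitment("red", session_id)
foobar = commitment(" Red", session_id)
assert hmac.compare_digest(aaron, foobar)  # answers match; trust the fingerprint
```

The "one word, lowercase" hint in the question exists precisely because the comparison is exact: both sides must type byte-identical answers for the secrets to prove equal.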

You can also verify it with "otr info <nick>":

15:37 <@aaron> otr info foobar
15:37 <@root> foobar is; we are to them
15:37 <@root>   otr offer status: none sent
15:37 <@root>   connection state: cleartext
15:37 <@root>   fingerprints: (bold=active)
15:37 <@root>     13C45179 F2A8645B 466463E8 228DBE16 787D1A82 (affirmed)

As already mentioned, there are other ways for verifying that the person on the other end is who they say they are, and OTR in Bitlbee supports it. You will notice too, that once the fingerprints have been trusted on both sides, the text is green in the chat window. If not trusted, the text is red. As far as I know, there is no way to change the color, or have any other sort of visual clue on whether or not trusted encryption is taking place in the communication.

Also, it should be noted that Bitlbee OTR only covers conversations through Bitlbee, not your IRC client. This means that if you are using Irssi, WeeChat, or some other IRC client, Bitlbee OTR won't help you with IRC messages. You'll need an OTR plugin for that as well. This is definitely a drawback. For example, if you use Irssi like I do, there is an OTR plugin for Irssi. Not only will it handle IRC messages, but any private message through Irssi, including those to Bitlbee. However, the Irssi OTR plugin hasn't been updated in two years, and it has security holes that haven't been addressed. The Bitlbee OTR plugin is up-to-date and, in my opinion, much more streamlined. I don't use private IRC messages much, so the Bitlbee OTR plugin works just fine for me.

Now, start using OTR with Bitlbee. If you love your Bitlbee as much as I do, you will. 🙂

The Sad State of Hashcash

So today, I received an email from one of the readers of this blog. He wanted to get into OpenPGP with his email, and asked if I could help him get started with some tutorials, how-tos, etc. I was flattered that he valued my opinion. So, I responded to each of his questions and discussion points the best I could. However, during the reply, I reminded myself of Hashcash.

Hashcash is a really slick concept. The motivation is to combat spammers by using your CPU to find a SHA1 hash whose first 20 bits are zeros; basically, a "proof-of-work" system. If the random number, combined with the timestamp and the recipient's email address, doesn't hash to those 20 leading zero bits, a new random number is chosen and hashed. Once a suitable number is found, it's attached to the header of the email, and the mail is sent. The recipient of the email can then hash the full string, and check whether the first 20 bits of the resulting SHA1 are zeros. If so, along with some other validity checks, the stamp is considered valid, and you can rest assured that the email was not sent by a spammer.

A valid stamp:

You can verify this with:

% echo -n '' | sha1sum
00000d0a92fad840b05009c8692f43593c692589  -

Notice how the first 20 bits of the SHA1 hash are zeros? This is a valid hash based on the random number, the email address, and the timestamp.

How does this defeat the spammers? Finding the right random number, one that produces a hash with 20 leading zero bits, is computationally expensive: on average, only one in every 2^20 hashes qualifies, and there is no shortcut in the search. A modern computer can find one in under a second. But, if you wish to send bulk emails all at once, you need to create a hash "token" or "stamp" for each one. Because the algorithm is deliberately expensive, this seriously hinders your ability to send bulk mail efficiently. Spammers be damned.
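The whole mint-and-verify loop fits in a few lines. Here's a Python sketch with a simplified, hypothetical stamp layout (a real X-Hashcash stamp also carries version, date, extension, and random-salt fields in a colon-separated format; the empty resource string below stands in for the recipient's address):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest):
    """Count how many bits at the front of the digest are zero."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint(resource, bits=20):
    """Sender's work: try counters until the SHA1 has `bits` leading zero bits."""
    for counter in count():
        stamp = "1:{}:060722:{}::{}".format(bits, resource, counter)
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp

def valid(stamp, bits=20):
    """Recipient's check: a single hash, versus the sender's ~2^bits hashes."""
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits

# 16 bits keeps this demo to a fraction of a second; real stamps use 20.
stamp = mint("", bits=16)
assert valid(stamp, bits=16)
```

The asymmetry is the whole point: verification is one hash, while minting is roughly a million (2^20) for a real stamp, per recipient.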

The proof-of-work idea is slick. The sender of an email has done the work necessary, and you can validate that the work has been done much faster than it took to create. However, it appears from the website that all activity on the software has been abandoned since 2006, including updates to the web page. Software is hard to find, and the software that does exist is only compatible with very specific versions of other software, leaving old or new software out of the game.

A couple of examples:

Penny Post is an extension for Thunderbird. The SourceForge page houses an extension that is only compatible with Thunderbird versions earlier than 3.0. There is a GitHub project working on a Thunderbird 3.0-compatible extension, but it is not compatible with Thunderbird 3.1. The extension page has also been pulled from Mozilla Add-ons.

Hashcash-sendmail is a Perl script for Mutt. It works by coupling the hashcash binary and the sendmail binary together to deliver your stamped mail. However, if you are using Mutt's built-in SMTP support, the Perl script isn't usable. Also, it appears that the author of the script has lost his domain, and there haven't been any updates since 2004.

Lastly, I have a Windows XP laptop for work with Thunderbird 3.1 installed. Of course, Penny Post isn't compatible with 3.1, so I'm already out of luck. However, even the Windows Hashcash binaries are out of date by several years.

Never mind trying to implement it in more popular MUAs, like webmail clients (Gmail, Yahoo!, MSN, AOL, etc.), Outlook, GroupWise, and so on. Support just doesn't exist. Yet, SpamAssassin has full support for identifying the "X-Hashcash" email header.

No matter where I look, I reach dead ends. This is unfortunate, because I see Hashcash as a slick way of beating spam, yet it appears that the spammers are laughing all the way to the bank. They're not worried because no one is using it. And I have a feeling that no one is using it, because no one is developing anything for it. Everything out there is at least 5 years old and aging.

Maybe another slick proof-of-work system will come along. Who knows? I would like to work it into my daily email routine, but it appears that doing so with the current state of affairs is futile. I guess I could work on developing scripts and such that could be easily implemented into various MUAs, but with the amount of stuff I want to do after graduation, I'm guessing it's fairly low on the priority list.

Elliptic Curve Cryptography in OpenSSH

I've been meaning to add this as a post, as it's light and quick: as of the release of OpenSSH 5.7, Elliptic Curve Cryptography has been implemented. Why should you care? The generated keys are substantially smaller than equivalent RSA or DSA keys, the algorithm is faster and lighter (giving a break to slower CPUs), and cryptanalysis so far hasn't shown any substantial weaknesses.

To generate an ECC SSH key for your host, you need to use the "ecdsa" key type. The available bit sizes are 256, 384, and 521. Generally speaking, ECDSA provides far more security per bit than finite-field cryptosystems: a 256-bit ECDSA key is commonly rated as comparable in strength to a 3072-bit RSA or DSA modulus.
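Exact equivalences vary by source; the commonly cited, rounded guidance from NIST SP 800-57 can be summarized in a small lookup table:

```python
# Comparable strengths per NIST SP 800-57 (rounded guidance, not exact math):
# symmetric-equivalent bits -> (RSA/DSA modulus bits, ECDSA key bits)
equivalents = {
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 521),  # 521 is not a typo for 512; it matches the P-521 curve
}

for sym, (modulus, ecdsa) in sorted(equivalents.items()):
    print("{}-bit symmetric ~ {}-bit RSA/DSA ~ {}-bit ECDSA".format(sym, modulus, ecdsa))
```

The takeaway: even the smallest supported ECDSA size buys you strength that would take a multi-kilobit RSA or DSA modulus to match.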

Pull up your terminal, and type:

% ssh-keygen -t ecdsa -b 256

Go through the prompts, and you should have your generated private and public keys. Then, copy the key over to your remote server, and start using:

% ssh-copy-id -i ~/.ssh/ user@server.tld

Of course, the remote server does need to support ECC in order to take advantage of ECDSA keys, which means it too needs to be running OpenSSH 5.7 or later. Here's a result of the key sizes:

% ls -l ~/.ssh/*.pub
-rw-r--r-- 1 aaron aaron 604 Feb 17 20:05
-rw-r--r-- 1 aaron aaron 176 Feb 17 19:41
-rw-r--r-- 1 aaron aaron 220 Feb 17 19:42
-rw-r--r-- 1 aaron aaron 268 Feb 17 19:42
-rw-r--r-- 1 aaron aaron 228 Feb 17 20:07
-rw-r--r-- 1 aaron aaron 398 Feb 17 20:08

As you can clearly see, ECDSA keys are substantially smaller compared to their DSA counterparts and a bit smaller than equivalent RSA keys. Also, it should be mentioned that when setting up the OpenSSH server on a new host for the first time, you can also choose to have ECDSA host keys generated for the server, rather than the standard RSA or DSA keys.

I don't recommend wiping your existing RSA or DSA keys in favor of ECDSA quite yet. Plenty of OpenSSH and proprietary SSH servers exist that do not support ECC. Thus, your newly generated ECDSA key won't work, even if you copy it to the authorized_keys file. However, if you have the servers that support it, then why not give it a go, and see what you think?

Twitter and Identica support with Bitlbee

I just discovered this today, and anything that makes my Irssi and Bitlbee experience better is always worth a blog post.

First off, before beginning, make sure you have the latest version of Bitlbee. The options I'll be going through in this tutorial require version 1.2.8 or later. If you don't have at least this version, parts of this tutorial might not work for you. Twitter support was added in 1.2.6, but getting Identica working on versions that old might not be possible. I'll outline both Twitter and Identica, as they have different setup requirements.

Twitter support
Twitter no longer supports the username/password combination to authenticate your client with the service. Instead, they use OAuth for the authentication. Bitlbee supports Twitter's OAuth. So, when asked to provide a password with the account, just give something totally bogus. Your password won't actually be used. So, in your Bitlbee root window, type the following, replacing "username" with your actual Twitter account username. Also, make sure to replace the account number with the appropriate number from "account list". I'll assume it's "1".

account add twitter username bogus_password
account list
account set 1/mode chat
account on 1

Notice that I set the mode to "chat". This makes the Twitter account behave like a standard multi-user chat room. As a result, Bitlbee will create a new room in your IRC client, and add all accounts that you are following to that room. The great thing with this is you have full nickname tab completion, if your IRC client supports tab-completing nicks. No need to use Twirssi, or other services/plugins, if you're already using Bitlbee. Personally, I dig it. However, the mode also supports "one", meaning you get a single nickname that you interface with to send all tweets through (like an IM bot), or "many", in which every account you follow gets its own "buddy" in your buddy list that you can message directly. Personally, I like "chat" best. Play with whatever works for you.

Identica support
I'm told that the BZR branch of Bitlbee (current stable version is 1.2.8, development 1.3) supports Identica natively. However, I only have 1.2.8 installed, so we get to hack up a solution (a rather elegant one at that). The Identica developers pride themselves on keeping their API 100% compatible with the Twitter API. This means that even though Bitlbee doesn't have native Identica support, it supports it indirectly through the Twitter API. So, we'll make a slight change, and we'll be good to go.

A couple of notes on this method are worth mentioning. Identica doesn't support OAuth, so you do need to set your actual username and password. Unfortunately, these are sent over the Internet in PLAIN TEXT. If you're at all concerned about the security of your Identica account, I would recommend interfacing with the Jabber bot instead, as it uses SSL for its connection, and your username and password are never actually sent. However, with the Jabber bot, you don't get nickname tab completion, which you do get if you set up Identica the same way we just set up Twitter.

So, weigh your options. Personally, I prefer using the Jabber bot, with a small script doing some light text munging for me to make it rather slick. Not only for the SSL support, but also because I really like the "reply #[message_id]" feature of the Jabber bot; you lose the ability to send a reply to a specific message with this method.

With that said, let's see how to set it up. In your main Bitlbee root window (remember, you must use your Identica username and actual password here as OAuth isn't supported), type:

account add twitter username password
account list
account set 2/mode chat
account set 2/oauth false
account set 2/base_url
account on 2

I assumed that setting up Twitter was account "1", so I'm assuming setting up Identica will be account "2". Replace as needed for your instance. Also, notice that we set the mode to chat, just like we did with Twitter. However, we changed the OAuth setting to false, because Identica doesn't support it, and we changed the "base_url" to the Identica API endpoint. This is needed so you're not sending your messages to Twitter, obviously.

With both Twitter and Identica (or either one), you should have chat windows where you can interface with the service directly using the API. The great thing here is full nickname account tab completion, and the chat window looks great, because it's using your IRC native theme and controls, rather than setting up some weird theme (I'm looking at you Twirssi).

If Bitlbee is running in Irssi, as is the case for me (just search for Irssi on my blog :)), then this makes Irssi a full-blown messaging suite, with some pretty extensive scripts that make your IM/IRC/microblogging experience versatile, flexible, and complete. For me, this takes Irssi + Bitlbee to a whole new level. World domination of messaging is nearly complete: Jabber, Facebook (which is really Jabber), Twitter, Identica, and IRC, all in a single client.

Yeah. I'm happy.

Email Netiquette - Part 2

This is the second in a series of four. Continuing our discussion from the previous post, I'll expound on points four through six in this post.

  1. Use plain text (preferred) or HTML
  2. Top-post only when forwarding. Bottom-post otherwise.
  3. Trim your replies.
  4. Keep your signature under five lines, and use the signature separator "-- " (dash, dash, space).
  5. Do not attach unnecessary files, keep attachments small, and don't attach proprietary formats.
  6. Keep the width of your message under 80 characters
  7. Use a client that sends threading headers.
  8. Reply only to the necessary people (don't abuse CC: or "reply to all").
  9. Be short and concise. Don't ramble (stay on topic).
  10. Break up your paragraphs.
  11. Use proper spelling, grammar and punctuation (avoid CAPS).
  12. Don't answer spam, and don't send out spam.

Keep your signature under five lines and use the signature separator "-- " (dash, dash, space).
Email signatures can be a great way to communicate to your target audience a little bit about yourself. Generally, email signatures are used for contact information, in case someone wants to get in contact with you outside of email. Other email signatures might include some art, or fancy font, or just an abstract representation of something completely off the wall. Whatever the case may be, there are a few things to keep in mind with email signatures.

First, keep your signature short and concise. No one wants to see a lengthy signature, in width or in length. A good rule of thumb is to keep it under five lines. When email signatures get lengthy, they begin to distract the reader from the message, especially if loud colors and font sizes are used in an HTML signature. Remember, it's the content of your email, not what's in your signature, that is most important. So, keep the signature light, small, and concise. You can use nearly anything in your signature; that's up to you. It can be contact information, such as a cell phone or business fax number, a random quote you cherish, or something abstract. I use the first 5 generations of the glider from John Conway's Game of Life. It's plain text, it's not noisy, and it's only 3 lines. Plus, I get a reply every so often asking what the signature means. Great conversation starter.

Second, when using signatures, it's important to use the signature separator, which has been standardized as "-- ", or "dash, dash, space". Most email clients that I'm aware of will prepend this to your signature by default. However, if you are unsure, check your email settings or preferences to make sure this is set appropriately. The reason for this is that some mailing list managers will trim signatures out of view, so only the body of the text is visible. Some mail clients can be set up in this manner as well. Because it's the body of the text that is important, and not the signature of the sender, many people prefer to have their client chop the signature entirely. By making sure "-- " is configured correctly, you are being considerate to those who don't wish to be bothered by the noise a signature can create.
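Clients and list software key on that separator mechanically. A minimal sketch of what a trimming client does (exact behavior varies by client; the message below is made up):

```python
def strip_signature(body):
    """Drop everything at and after the standard '-- ' separator line."""
    lines = body.split("\n")
    if "-- " in lines:
        lines = lines[:lines.index("-- ")]
    return "\n".join(lines)

# Hypothetical message; the name and phone number are made up.
mail = "Thanks for the patch!\n-- \nAaron\n555-0100"
assert strip_signature(mail) == "Thanks for the patch!"

# A separator missing the trailing space ('--' alone) will NOT be trimmed:
assert strip_signature("Hi\n--\nsig") == "Hi\n--\nsig"
```

The second assertion is why the trailing space matters: "--" without it is just another line of text to a conforming client.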

Do not attach unnecessary files, keep attachments small, and don't attach proprietary formats.
On technical mailing lists, email attachments are generally frowned upon. The reason is that usually the message can be conveyed without an attachment. If a screenshot is needed to help clarify the message, then there are many free image hosting services that would be appropriate for displaying the image. A simple reference to the URL of the hosted image can then be provided in the mail. This keeps the email itself light on bandwidth for those reading your message.

The biggest complaint about email attachments is the size of the attachment. My mother will send me videos she finds hilarious, emotionally moving, or whatever. These videos are usually 10-20 MB in size, so I get to sit for a few minutes, waiting for my email to load, because the attachment is downloading. If she instead provided a URL reference to the video online, I could parse the email much faster. So, if you must provide an email attachment, try to keep the size to a minimum. Zip it up, if necessary, to help decrease the size. I understand this is not always possible, but a good rule of thumb would be to keep attachments under 100 KB. This means the email would load for most people in a second or two. Even for those who are still on a dial-up account, the message could be received in 30 seconds at the worst.
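The 100 KB rule of thumb is easy to sanity-check with back-of-the-envelope arithmetic. The link speeds here are assumptions; real dial-up throughput is often around half the nominal 56k rate, which is roughly where the 30-second worst case comes from:

```python
size_bits = 100 * 1024 * 8  # a 100 KB attachment, in bits

# Assumed link speeds, in bits per second:
for label, bits_per_sec in [("56k dial-up (nominal)", 56_000),
                            ("56k dial-up (realistic ~28k)", 28_000),
                            ("1 Mbit/s broadband", 1_000_000)]:
    print("{}: {:.1f} seconds".format(label, size_bits / bits_per_sec))
```

At nominal 56k that's about 15 seconds, at a realistic 28k about 29 seconds, and under a second on broadband.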

Lastly, don't assume that I have a license to view your attachment. While Microsoft Office might be nearly ubiquitous on most computers, sending a DOC or PPT file is usually in bad form. Instead, use standards-based formats, such as PDF, HTML or plain text. I once received an email that contained an XPS attachment. I literally had no clue what that was, and I did not have a program to open it. Going to the Google Machine, I found that this is an "XML Paper Specification" format designed by Microsoft to be the "PDF killer". I found that it was exported from Microsoft Word 2007, and that I needed Microsoft's XPS Viewer to view the file. But that utility is only available for Windows operating systems, and at the time, I was using my Debian GNU/Linux laptop. Long story short, I couldn't open the file. So, I had to reply to the sender, asking him to please send me a PDF version of the attachment, as I had no ability to open an XPS file. I was polite, and in return, he was polite in accommodating my request.

Keep the width of your message under 80 characters.
This might sound like an odd netiquette rule, but wrapping your message at 80 characters makes it easier for the recipient to read. In fact, the psychology department at Wichita State University did a study on this very thing: which is better for reading, long columns of text or shorter columns? The results showed that people could read faster, with greater accuracy and better comprehension, with two-column justified text than with three-column (too short) or one-column (too long) text.

Translating this to email, people don't want to read lengthy columns of text. When you wrap your text to a shorter justification, but not too short, as the study shows, it's easier for the reader to comprehend what you're talking about, and they can read through the text quicker. Major publishers know this as well. Pick up your favorite novel, and count the number of characters on a single line. I have a paperback copy of Frankenstein, by Mary Shelley, and each line is wrapped at exactly 60 characters. I have another paperback copy of Macbeth, by Shakespeare. Each line wraps at exactly 50 characters. Looking through all my novels, I'm actually struggling to find a book that has more than 85 characters on a single line. The Debian System, written by Martin Krafft, wraps at 85 characters.

The standardized, accepted practice for email is to wrap your text at 72-75 columns. This gives enough room for others to reply to your message, which will usually prepend the two characters "> " to each line of your original text, and still keep the width of the mail under 80 characters. As might be expected, Microsoft Outlook seems to struggle with this when composing new emails, but it can be configured to wrap at 80 characters for replies.
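If your client doesn't wrap for you, it's trivial to do yourself; a sketch with Python's textwrap module showing why 72 leaves quoting headroom:

```python
import textwrap

paragraph = ("Translating this to email, people don't want to read lengthy "
             "columns of text, so wrap your paragraphs at 72 columns before "
             "sending, leaving room for a level or two of quoting.")

wrapped = textwrap.fill(paragraph, width=72)
assert all(len(line) <= 72 for line in wrapped.splitlines())

# A reply quotes each line with "> " and still stays well under 80 characters:
reply = "\n".join("> " + line for line in wrapped.splitlines())
assert all(len(line) <= 74 for line in reply.splitlines())
```

Each level of quoting adds two characters, so 72-column text survives a few rounds of replies before anything hits the 80-column edge.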


Facebook Chat In Bitlbee

It's no surprise that Bitlbee is my chat client of choice. After all, I've blogged about it before. So, when I heard rumors that Facebook would be releasing their chat to outside clients over XMPP, I was excited to see the day when I could add it to my running Bitlbee instance. Lo and behold, that day has come.

Adding your Facebook account to Bitlbee is rather painless, as it is with any other account. The only catch is that you have to have a Facebook username set before you can continue. Once that is set, in Bitlbee, from your "&bitlbee" status window, you can add the account:

account add jabber <username> <password>
account on

That's it! You should be up and running with a new XMPP connection to Facebook chat. However, you'll quickly notice that the usernames in your "blist" roster are Facebook user identification numbers, rather than names; something like "u123456789". Who is that, you wonder? Well, in your "&bitlbee" window, you could run:

info u123456789

Then, using that information, you could rename them one-by-one as they log in. However, this is a pain. Fortunately, if you're running Bitlbee with Irssi, there is an Irssi Perl script that renames them automatically for you as they log in. You can find that script here. Save it in your "~/.irssi/scripts" directory, create a symlink in the autorun directory, load it, and you're set. Here's what you would do at the command line:

wget -O ~/.irssi/scripts/
ln -s ~/.irssi/scripts/ ~/.irssi/scripts/autorun/

Now in Irssi, load it:


Now, each of your Facebook buddies will have their user ID number renamed to "FirstnameLastname" format. The script only works for Facebook chat, so no worries about it mucking up other XMPP connections, and it only renames buddies that haven't already been renamed. It also saves it to your Bitlbee config (which is /var/lib/bitlbee/username.xml) every time it renames a buddy.

So, there you go. Bitlbee, XMPP and now Facebook, married together. What a beautiful relationship.

OFTC, SSL, NickServ and Irssi

I'm on a bit of an IRC kick with the blogging lately, mainly because it seems I'm usually fine-tuning my settings, and I like to share what I find. Hopefully, someone finds these posts useful. For today's post, I've picked setting up an SSL connection to OFTC and securely identifying to NickServ when connecting with Irssi. Before beginning, it should be noted that the instructions for this tutorial can also be found on the OFTC site. I'm merely taking that tutorial, and posting only the Irssi instructions here, more or less. However, if you use another client, you should read over that tutorial instead.

Generating an OpenSSL Certificate
OFTC runs a fork of ircd-hybrid that they call oftc-hybrid. (Hyperion, for comparison, was the ircd Freenode ran on its servers before the switch to ircd-seven.) It supports IPv6, SSL, and CertFP with NickServ, which I'll cover later in this post.

Before connecting to OFTC, we want to generate an OpenSSL certificate. This certificate will be used for authenticating to NickServ, and really isn't related to setting up an SSL connection to OFTC. However, when you connect, you will be presenting OFTC with the generated certificate, and at that point, you will be able to add it to NickServ, because it's been presented. If you already have your own personal certificate you want to use, then you can skip this step, and move on to connecting with SSL.

I'm going to assume you have OpenSSL installed. If you're running any modern Unix-like operating system, such as GNU/Linux or one of the BSDs, chances are very high that it's been installed by default. If not, install it, and continue with the rest of the post.

In this step, we're going to generate our own self-signed personal OpenSSL certificate. So, fire up a terminal, type in the command below, and follow the on-screen instructions. The values you put here do not matter to OFTC in the least, so fill them in any way you wish. In my case, I'll fill in the data for my personal certificate, but you fill in the values as you see fit. Replace "nick.key" and "nick.crt" with your IRC nick that you use for this connection.

cd ~/.irssi
mkdir certs
cd certs
openssl req -nodes -newkey rsa:2048 -keyout nick.key -x509 -days 365 -out nick.crt
Generating a 2048 bit RSA private key
writing new private key to 'nick.key'
Country Name (2 letter code) [US]:US
State or Province Name (full name) [Texas]:Utah
Locality Name (eg, city) [San Antonio]:Ogden
Organization Name (eg, company) [Stealth3]:eightyeight
Organizational Unit Name (eg, section) [ISP]:OFTC
Common Name (eg, YOUR name) []:Aaron Toponce
Email Address []

Now, if you look, you'll have two newly generated files: "nick.key", which is your private key and "nick.crt" which is your public self-signed certificate. Because "nick.key" is your private key, you want to guard it appropriately. Its permissions should be modified to only be readable (and maybe even writable) by you, and you alone. Also, because we have both the private and public key set, let's go ahead and combine the files into one PEM file that we'll present to NickServ when connecting:

cat nick.crt nick.key > nick.pem
chmod 0400 nick.key nick.pem

Connecting with SSL
At this point, we are ready to connect to OFTC, and present our certificate to the server. So, fire up Irssi if you haven't already:

irssi -!

Now that we're in Irssi, we want to setup our SSL connections to OFTC. You should have ~/.irssi/certs/nick.pem available to send.

We will need to retrieve the CA certificate for verifying the server certificate. I personally like to put all my CA certificates in my certificates store, and this is where I deviate a little from the tutorial on the OFTC site. OFTC has their certificate signed by Software in the Public Interest, so we'll need their CA certificate. Fortunately, Debian, Ubuntu, and many other GNU/Linux operating systems provide this certificate for us, so we just need to identify the location of the certificate, and plug that into Irssi. If you don't have that certificate, refer to the tutorial on the OFTC site for obtaining it and installing it on your system.

So, with Irssi waiting for our command, let's tell it how to connect to OFTC:

/network add oftc
/server add -auto -ssl -ssl_cert ~/.irssi/certs/nick.pem -ssl_verify -ssl_cafile /etc/ssl/certs/spi-cacert-2008.pem -network oftc 6697
/connect oftc

When we successfully connect, we should see that OFTC has accepted our self-signed certificate in the MOTD, and it should also show that we are connected securely to the network with SSL:

16:20 [oftc] Irssi: Looking up
16:20 [oftc] Irssi: Connecting to [] port 6697
16:20 [oftc] Irssi: Connection to established
16:20 [oftc] []: *** Looking up your hostname...
16:20 [oftc] []: *** Checking Ident
16:20 [oftc] []: *** Found your hostname
16:20 [oftc] []: *** No Ident response
16:20 [oftc] []: *** Connected securely via TLSv1 AES256-SHA-256
16:20 [oftc] []: *** Your client certificate fingerprint is 4A5463CE416649F72818B22945681D28250C1ACA
16:20 [oftc] >>> Welcome to the OFTC Internet Relay Chat Network eightyeight

Sweet! We're connected securely, and OFTC accepted my client certificate by printing the fingerprint for me. If the fingerprint is not displayed, then OFTC has not accepted my certificate, and I need to review the steps outlined above, or on their site.

Authenticating to NickServ with SSL
From here on out, our connection is secured, and we can enjoy the safety of encrypted packets. So, at this point, we need to register our nick with NickServ, if we haven't already. It should be pointed out that Services provides a complete help document. Messaging "help" to any of the Services bots will give you a breakdown of the available commands and their syntax. I would highly recommend becoming familiar with how to use the provided documentation.

/msg NickServ help
04:22 [notice(NickServ!] *** NickServ Help *** 
04:22 [notice(NickServ!] ACCESS: Maintains the nickname ACCESS list.
04:22 [notice(NickServ!] CERT: Maintains the nickname client certificate list.
04:22 [notice(NickServ!] DROP: Releases your nickname for use.
04:22 [notice(NickServ!] ENSLAVE: Enslave a nickname to this master nickname.
04:22 [notice(NickServ!] HELP: Shows this help.
04:22 [notice(NickServ!] IDENTIFY: Identify your nickname.
04:22 [notice(NickServ!] INFO: Get information on a nickname.
04:22 [notice(NickServ!] LINK: Link this nickname to a master nickname.
04:22 [notice(NickServ!] LIST: Shows a list of nicknames matching a specified pattern.
04:22 [notice(NickServ!] RECLAIM: Release your nickname for you to use.
04:22 [notice(NickServ!] REGAIN: Release your nickname for you to use.
04:22 [notice(NickServ!] REGISTER: Registers a nickname for your usage.
04:22 [notice(NickServ!] SENDPASS: Send a password reset request.
04:22 [notice(NickServ!] SET: Set nickname properties.
04:22 [notice(NickServ!] STATUS: Shows the identified status of a nickname
04:22 [notice(NickServ!] UNLINK: Unlink this nickname from a master nickname.
04:22 [notice(NickServ!] *** End of NickServ Help ***

In this case, we're interested in registering and then identifying. The syntax for registering is providing your password and email as arguments. Providing the correct email is key: should you lose your password and no longer be able to authenticate with SSL, you can have NickServ email you your password. So, I would recommend putting in a valid email here, and not some throwaway string:

/msg nickserv register password
/msg nickserv identify password

At this point, our nick is registered and we are identified to NickServ. If the nick you're wishing to register is already registered, then you'll either need to pick a different nick, or join #oftc on the network, and see if staff can assign that nick to you. In any event, you must be identified with a valid nick before you can proceed with the next steps.

Now that we are identified to NickServ, we need to add our hostmask to the access list. This is beneficial because we won't be asked to identify every time we connect, which is what we want. So, we need to find our hostmask. This is simple enough: run a /WHOIS on yourself, and identify your host string. It might look something like this:

/WHOIS eightyeight
04:29 [oftc]      nick  | eightyeight 
04:29 [oftc]      host  |
04:29 [oftc]     gecos  | eightyeight
04:29 [oftc]    server  | [Freemont, CA, USA]
04:29 [oftc]      info  | user has identified to services
04:29 [oftc]  hostname  | 
04:29 [oftc]      info  | is connected via SSL (secure link)
04:29 [oftc]      idle  | 0d 0h 1m 48s [signon: Sat Jan 30 16:20:12 2010]

In this case, my host is "". This is what I want to provide to NickServ:

/msg nickserv access add *

Good. Now we're ready to add our self-signed certificate. Remember, when you connected, at the beginning of the MOTD, your certificate fingerprint was displayed. You will want to copy that output and paste it here. You can also use the "openssl" command to get the fingerprint of your cert; you will just need to remove the colons from the string when providing it to NickServ. In my case, my fingerprint was 4A5463CE416649F72818B22945681D28250C1ACA, so I'm going to add that. Change the string for your fingerprint:

/msg nickserv cert add 4A5463CE416649F72818B22945681D28250C1ACA
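If you'd rather pull the fingerprint from the command line than from the MOTD, something like the following works; it's demonstrated on a throwaway certificate so the commands run as-is, but in practice you would point -in at ~/.irssi/certs/nick.pem.

```shell
# Throwaway demo cert (substitute your real nick.pem).
dir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 -subj "/CN=demo" \
    -keyout "$dir/nick.key" -out "$dir/nick.pem" 2>/dev/null
# SHA-1 fingerprint, with the colons stripped as NickServ expects.
fp=$(openssl x509 -noout -fingerprint -sha1 -in "$dir/nick.pem" \
    | cut -d= -f2 | tr -d ':')
echo "$fp"
rm -r "$dir"
```

The resulting 40-character hex string is what you paste into "/msg nickserv cert add".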

That's it! At this point, if you were to connect again (you must /disconnect and then /connect; /reconnect apparently doesn't apply the SSL settings, which may be a bug), you will find that you automatically identify to NickServ with your cert, and without a password. As you can imagine, this is highly secure. If you are not taking advantage of SSL, then your password can be sniffed on the wire, and your account could be compromised. In this case, we're neglecting the password, and using public key cryptography instead to authenticate. I don't know at this point if a MITM attack would be successful. So, bonus.

To see that it has succeeded, when you connect you should see the following at the end of the MOTD:

05:14 [oftc2] >>> You have set user mode +i
05:14 [oftc2] >>> You have set user mode +R
05:14 [oftc2] [notice(NickServ!] You are connected using SSL and have provided a matching client certificate
05:14 [oftc2] [notice(NickServ!] for nickname eightyeight. You have been automatically identified.

Congratulations! You are now connected securely to OFTC via SSL and you have identified to NickServ successfully with your self-signed certificate. A major benefit of this method is that you can ditch any client-side identification scripts, which is always a bonus. Further, your packets are now encrypted between you and the OFTC servers. OFTC also utilizes server-to-server encryption, so if you're physically connected to a server in the United States, and someone you're in private message with is physically connected to a server in Europe, not a single plaintext packet is sent on the wire between your client and his (assuming he's connected via SSL as well).

A final note on SSL connections: the clock on the computer running Irssi needs to be accurate. I would recommend using NTP to keep your clock in sync with Internet time servers. If your clock is too far off, you won't be able to complete the SSL handshake, and as a result, won't be able to take advantage of the encrypted traffic. Also, SSL certificates need to be valid. If your certificate expires, then you will not be able to present it to the server, and as a result, not be able to authenticate to NickServ. The expiration dates are up to you, but validity is important. You can see the dates of your certificate with the following command:

openssl x509 -noout -in nick.pem -dates
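For a scriptable version of the same check, openssl's -checkend option exits zero if the certificate is still valid N seconds from now. This sketch uses a throwaway certificate so it runs anywhere; use your real nick.pem in practice.

```shell
# Throwaway demo cert (substitute your real nick.pem).
dir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 365 -subj "/CN=demo" \
    -keyout "$dir/nick.key" -out "$dir/nick.pem" 2>/dev/null
# Print the validity window, then test it programmatically.
openssl x509 -noout -in "$dir/nick.pem" -dates
if openssl x509 -noout -in "$dir/nick.pem" -checkend $((30*24*3600)) >/dev/null; then
    valid=yes
    echo "certificate valid for at least another 30 days"
fi
rm -r "$dir"
```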

Enjoy the security.

Freenode, SSL and SASL Authentication with Irssi

Last night, Freenode made the migration from hyperion-ircd to a fork of charybdis-ircd they're calling ircd-seven. There are a few notable changes in the new ircd code that are worth mentioning here that are of benefit to end users and clients. They are the ability to use OpenSSL encryption between client and server and the ability to use SASL authentication for authenticating to Services. Of course, as is standard, I'll document this with Irssi, but the general rules apply to most IRC clients.

Connecting with SSL
Freenode is listening for SSL connections on ports 6697, 7000 and 7070. I don't know what the logic here is for that, but does it matter? A port is a port is a port. So, for Irssi, setting this up is rather simple.

/server add -auto -ssl -network freenode 6697

Boom! Done.

Now, if you want to verify the Freenode server SSL certificate against a certificate authority (CA), then you'll need to download the CA certificate from the authority that signed the server certificate. In this case, that's Gandi, and their CA certificate file can be found on their site. However, using the file in its native DER format with Irssi wasn't working for me. So, using openssl, I converted the binary DER data file to PEM format, at which point the Freenode certificate would properly verify:

cd /usr/share/ca-certificates
openssl x509 -inform der -outform pem < /usr/share/ca-certificates/ > GandiStandardSSLCA.pem
ln -s /usr/share/ca-certificates/ /etc/ssl/certs/GandiStandardSSLCA.pem
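The conversion itself can be demonstrated on a throwaway certificate without touching the system CA store (the paths above are from my system): DER is the raw binary encoding, and PEM is the base64-armored form that Irssi's SSL options expect.

```shell
# Make a throwaway cert, round-trip it PEM -> DER -> PEM.
dir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=demo" \
    -keyout "$dir/k" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -in "$dir/cert.pem" -outform der -out "$dir/cert.der"
openssl x509 -inform der -outform pem < "$dir/cert.der" > "$dir/back.pem"
# PEM certificates always begin with the BEGIN CERTIFICATE banner.
first=$(head -1 "$dir/back.pem")
echo "$first"
rm -r "$dir"
```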

With the CA certificate installed in the standard CA certificates store, I modified my server string in Irssi:

/server add -auto -ssl -ssl_cacert /etc/ssl/certs/GandiStandardSSLCA.pem -network freenode 6697

Unfortunately, as much as I would like this to work, it doesn't. I kept ending up with this error:

[freenode] Irssi: Connecting to [] port 7070
Irssi: warning Could not verify SSL servers certificate:
Irssi: warning   Subject : /OU=Domain Control Validated/OU=Gandi Standard Wildcard SSL/CN=*
Irssi: warning   Issuer  : /C=FR/O=GANDI SAS/CN=Gandi Standard SSL CA
Irssi: warning   MD5 Fingerprint : F8:40:2C:D9:D6:46:1F:D0:38:5D:ED:21:69:8B:17:C4

Digging deeper, it appears it's failing with:

2 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT: unable to get issuer certificate
the issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found.

After a bit of hacking, and help from Bazerka in #irssi, we found that my specific version of OpenSSL doesn't like the certificate chain. Because Irssi uses the OpenSSL libraries, it took a bit of mucking about to establish that you need to be running an extremely recent SVN build of Irssi (there's a bug with some SSL certificate verifications that affects us here), along with OpenSSL version 0.9.8k or later. I am not running either on Debian stable, so am I stuck not being able to verify the certificate Freenode gives me?

Well, not quite. The Gandi certificate is signed by UTN-USERFirst-Hardware, which in turn is signed by AddTrust External Root (if your browser has a CA certificates store, you can get the details of the certificate there, or use "openssl s_client" to download it and examine the details). So, if you have the USERFirst and AddTrust CA certificates, then you can verify those instead with older versions of OpenSSL or Irssi, and you'll be golden. So, if you have a CA certificate store, as most GNU/Linux distributions do, you can set the following instead:

/server add -auto -ssl -ssl_verify -ssl_capath /etc/ssl/certs -network freenode 6697

This will succeed, and when connected, you'll see usermode "+Z" meaning you're using a secure connection, and you've properly verified the server certificate Freenode is handing out. Notice the difference with "-ssl_capath" here and "-ssl_cacert" from above. This is key to making this work.
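The reason a CA *path* behaves differently from a CA *file* is that openssl looks certificates up in the directory by their subject hash. A sketch with a throwaway self-signed CA shows the mechanics (the c_rehash tool automates the hash-named copy):

```shell
# Throwaway CA certificate.
dir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=ca-demo" \
    -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# A CA path directory contains certs named <subject-hash>.0, <subject-hash>.1, ...
hash=$(openssl x509 -noout -hash -in "$dir/ca.crt")
cp "$dir/ca.crt" "$dir/$hash.0"
# Verification via -CApath now finds the CA by its hash.
result=$(openssl verify -CApath "$dir" "$dir/ca.crt")
echo "$result"
rm -r "$dir"
```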

Authenticating with SASL
Okay, after setting up SSL with Freenode, the next task for me was using SASL authentication rather than a server password to authenticate to NickServ. It should be noted that using SASL authentication is entirely optional! You don't have to use this method if you don't want to. However, the SASL authentication script I'm going to point to in a second has one nice feature that might be of interest to you: using Blowfish encryption on your password, and sending that to NickServ, should you not be using an SSL connection at all. If you're not interested in using an SSL connection, at least you can encrypt your password on the wire when authenticating using SASL.

Anyway, setting this up means getting Irssi in shape for SASL. Irssi doesn't support SASL authentication out of the box, so we need a Perl script to make it happen. You can find that Perl script here. After downloading the script, put it in your ~/.irssi/scripts directory, and symlink to it from the autorun directory. Something like this:

cd ~/.irssi/scripts/
cd autorun
ln -s ../

Now, you just need to load it in Irssi, and setup your username and password for authentication. A word of note here: when setting up SASL authentication, you need to be using your primary nick with NickServ, not any nick that you've linked against, or it will fail. I don't know why this is, but that's the case. So, in my case, my primary nick is "atoponce" and my secondary nick is "eightyeight". I use my secondary nick for all my IRC sessions, but when using the SASL command below, you must use your primary nick. While we're at it, we'll save everything we've done up to this point in the config:

/sasl set freenode primary-nick password DH-BLOWFISH
/sasl save

First, if you haven't noticed already, you need some Perl libraries in place before you can run this script, namely Blowfish, DH and BIGNUM. If you're on Debian or Ubuntu, you can install them with:

aptitude install libcrypt-blowfish-perl libcrypt-dh-perl libcrypt-openssl-bignum-perl
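Since package names vary between distributions, a quick way to confirm the modules actually load is to ask Perl directly:

```shell
# Try loading each required module; report ok/missing for each.
checked=0
for m in Crypt::Blowfish Crypt::DH Crypt::OpenSSL::Bignum; do
    checked=$((checked + 1))
    if perl -M"$m" -e 1 2>/dev/null; then
        echo "$m: ok"
    else
        echo "$m: missing"
    fi
done
```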

Notice I'm using DH-BLOWFISH in my example. "PLAIN" is also completely valid there for your mechanism. Also, notice I'm using "/sasl save" to save the settings to disk. You'll want this so that, should you need to restart Irssi, everything will be in place, and you won't have to go through this procedure again.

If you've followed this tutorial rather closely, when you connect, you should see something like the following at the beginning of the connection:

16:05 [freenode] Irssi: Looking up
16:05 [freenode] Irssi: Connecting to [] port 6697
16:05 [freenode] Irssi: Connection to established
16:05 [freenode] []: *** Looking up your hostname...
16:05 [freenode] []: *** Checking Ident
16:05 [freenode] []: *** Found your hostname
16:05 [freenode] []: *** No Ident response
16:05 [freenode] Irssi: CLICAP: supported by server: identify-msg multi-prefix sasl 
16:05 [freenode] Irssi: CLICAP: requesting: multi-prefix sasl
16:05 [freenode] Irssi: CLICAP: now enabled: multi-prefix sasl  
16:05 [freenode] >>> eightyeight!88@oalug/member/pdpc.supporter.monthlybronze.eightyeight atoponce You are now logged in as atoponce.
16:05 [freenode] Irssi: SASL authentication successful
16:05 [freenode] >>> Welcome to the freenode Internet Relay Chat Network eightyeight

You want to see "SASL authentication successful" in the output. If it fails, then you will still need to provide your password manually to NickServ, and you will likely need to review the steps outlined above to find anything you might have missed. Remember, you're authenticating with your primary NickServ nick, not any others linked to it. In the output, you can see I'm authenticating with "atoponce", but using "eightyeight" when I actually connect.

One last word about SASL authentication: you no longer need a server password if you're utilizing this. Before, Freenode supported a server password that you could append to the end of your "/server" string for authentication. Freenode still supports this, although in "username:password" syntax rather than just "password". But, SASL authentication overrides the need for a server password, so you can take that out of your settings. It's not hurting anything if you leave it, but it's not doing anything beneficial either.

With all that out of the way, I want to point out one major change that I welcome. That is the ability to join more than 20 channels simultaneously. Previously, with hyperion-ircd, you had to get Freenode staff to grant you usermode "+u" which gave you the ability to sit in more than 20 channels with one connection. If you're an IRC addict like I am, 20 is pretty freaking limiting. However, ircd-seven now supports the ability to connect to 120 simultaneous channels. You can see this in the MOTD output when you connect (emphasis placed):

16:05 [freenode] >>> CHANTYPES=# EXCEPTS INVEX CHANMODES=eIbq,k,flj,CFLMPQScgimnprstz CHANLIMIT=#:120 PREFIX=(ov)@+ MAXLIST=bqeI:100 MODES=4 NETWORK=freenode KNOCK STATUSMSG=@+ CALLERID=g are supported by this server
16:05 [freenode] >>> SAFELIST ELIST=U CASEMAPPING=rfc1459 CHARSET=ascii NICKLEN=16 CHANNELLEN=50 TOPICLEN=390 ETRACE CPRIVMSG CNOTICE DEAF=D MONITOR=100 are supported by this server
16:05 [freenode] >>> FNC TARGMAX=NAMES:1,LIST:1,KICK:1,WHOIS:1,PRIVMSG:4,NOTICE:4,ACCEPT:,MONITOR: EXTBAN=$,arx WHOX CLIENTVER=3.0 are supported by this server

Very nice!

So, there you have it. SSL connectivity with SASL authentication and the ability to join up to 120 channels simultaneously on the new IRCD at Freenode. I personally welcome all these changes, and it's nice to see that every IRC server I'm currently connected with provides a secure connection. Call me paranoid, but I'm enjoying SSL.

for Irssi and Other Script Goodies

So, I was browsing A Guide to Effectively Using Screen and Irssi, and I came across this little gem:

Hilight Window

See the irssi screenshot above. The section labeled "1" is a split window called "hilight". Anything that is hilighted (set using the /hilight command) will be logged to that window.

To do this, first load the script. The script I use is a modified version of cras's that logs timestamps as well. It is available here:

Put the script in ~/.irssi/scripts/autorun/ and type /run autorun/ in irssi.

Next, create the split window. This is done with the /window command. See /help window for details on how this works.

  /window new split
  /window name hilight
  /window size 6

The above commands will create a new split window (as opposed to a "hidden" window, which privmsg, channel, and status windows are by default), call it hilight (so the script knows where to send the information) with a height of 6 lines.

Now, have someone address you in a channel using "yournick: hello". If you did everything correctly, it should be logged to the split window. If you want to have all lines containing your nick hilighted, type /hilight yournick. See /help hilight for advanced features. Use /layout save to save your layout settings and have irssi automatically recreate the split hilight window on startup.

For me, irssi is more than just an IRC client. It's a complete messaging center. I access IRC and Jabber, and push to microblogging sites such as Facebook and others. Because I'm running behind GNU screen, I want to be aware of any messages while I'm away. Of course, Irssi does this for you automatically, by putting your hilights in the status window. For me, that's a busy window, and it's easy to lose hilights if they sit long enough. So, I'd rather have the hilights go to a separate window. Enter the script listed above, along with splitting the window for immediate access.

Now you have a split screen window that your highlighted messages are going to. However, they're also going to your status window when you're away. This is known as your "awaylog". You can change that setting if you want. By default, it logs 'msgs hilight'. If you want to disable it, now that you have a new hilight window, you can set:

/set awaylog ""

Note that your new hilight window will only log hilights, not msgs. For me, this is no big deal, because msgs are already in their own window by default anyway, and the point of this is to keep all the messages in one place. So, this is a win/win for me.

Along with this script, there is other script goodness that I take advantage of with this fabulous client. Listed below:

  • Query the server to see who is away and who is not by running /anames. Prints out a list similar to /names, but gives those who are marked as away a different brightness to their nick.
  • Because I'm a microblogging nerd, I like to know in my status bar how many characters I've currently typed before I hit enter, to know whether or not I'm under the 140 character limit. Works like a charm, character-by-character.
  • This script rocks! Allows me to do search and replace with text in my client. I use it mainly for the Jabber bot. See this post for more details.
  • You can put the number of people in a channel, including the number of ops, halfops, voices, etc in your status bar.
  • This puts a trackbar on your window where you last were in the conversation last time you were watching it. Very useful to pick back up where you left off in a conversation when you return to that window.
  • Using GNU screen is solid with irssi. However, when I am away, I want to mark myself away with the server. So, all I have to do is detach screen, and this script will mark me away with my configured away message. No public announcements either.

Of course, I use a few others, such as the bitlbee scripts, but these are the heavy hitters. Must-haves for any serious irssi user/hacker.

How Travelers Can Protect Their Data

I used to travel quite extensively around the country, and even had the opportunity to leave the country and go abroad. My laptop was always with me. As a result, I was very concerned for the integrity and safety of my data. As such, I took the necessary precautions that travelers can take when their laptops are with them. This post is hopefully informational should you decide to travel with your faithful friend (I call my laptop "Kratos", the Greek god who always did Zeus' will and bidding).

First, a disclaimer. This post is not meant to be a sure method for defeating attackers. Rule number one in computer security is that if an attacker has physical access to your machine, all bets as to data integrity and physical safety are off. However, that doesn't mean you can't make the process so tedious and time-consuming for the attacker that he will likely not bother, and move to another victim. This post is about those methods. If they're going to attack you, why not at least make it challenging for them?

Be aware that this post requires wiping your disk and starting from scratch. So, if you have data on that disk, you should probably back it up first. If it's a new laptop, and you're not invested in the operating system, then maybe you don't need to worry about it. Just realize that from this point on, if you decide to "follow along" with your own equipment, this will wipe your data, and if you didn't back up your data first, you're the moron, not me.

Okay, with that out of the way, shall we continue?

Step One: Prepare your hard drive.
The goal of this step is to install an encrypted filesystem. So, before we do that, we need to do some preparation. In order to get to that point, you will need to write random or pseudorandom data to the entire disk. This will take some time. My experience has shown that laptop drives usually operate around 30MBps, so if you have a 300GB drive, this will take you just under 3 hours. The reason for doing this is to confuse the attacker as to exactly where the encrypted filesystems reside. If the entire disk is underlined with random or pseudorandom data (it doesn't necessarily need to be cryptographically secure here), then at the drive level, it will be practically impossible to determine where the encrypted filesystem starts and where it ends. If you skip this step, then it's quite obvious, and rather than waste his time on the entire disk, the attacker can focus his efforts on just the obvious encrypted portions of the disk.

Now, some tools for installing encrypted filesystems will already have this step built in, such as the Debian installer, but some won't. You'll need to consult your vendor's documentation to see if this is the case. I would say it doesn't hurt to be safe, and take this step anyway, but it's up to you.

There are many utilities for writing random or pseudorandom data to the drive. Probably the best tool will be DBAN, or Darik's Boot and Nuke. This utility is generally used for destroying data, but in this case, we'll use it for preparing data. Download the live CD, burn it, and reboot your machine. I would recommend selecting the "PRNG Stream" from the menu. This will normally write pseudorandom data to the disk 4 times. However, it shows a progress report on the number of passes, so after it completes its first pass, you can reboot. It's important to note that selecting "Quick Erase" will do a single pass of zeros. This isn't what we want. We're trying to deter attackers by not giving them the boundaries of our encrypted filesystems. If you choose "Quick Erase", then you'll be clearly showing them where those boundaries exist. As tempting as it may be, don't select it.

If you're familiar with Linux live CDs, you can boot into a live environment, such as KNOPPIX, pull up a terminal and run the following, assuming the drive you're preparing is "/dev/sda":

dd if=/dev/urandom of=/dev/sda
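If you want to experiment with the command shape first without endangering a real disk, point it at a scratch file instead (on the real run, of= would be the device itself and everything on it is destroyed):

```shell
# Write a few megabytes of pseudorandom data to a temporary file.
img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=4 2>/dev/null
size=$(wc -c < "$img")
echo "wrote $size bytes of pseudorandom data"
rm "$img"
```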

The point is getting random or pseudorandom data down on the entire disk. However you accomplish that, is up to you.

After a few hours pass (depending on the size of your drive, and if you cancel the operation after a single pass of PRNG Stream), you are now ready to reboot into your operating system installer if it provides the ability to encrypt the filesystems, or into a separate utility for doing so.

Step Two: Set up volumes or partitions and encrypt
With the Debian installer, and most GNU/Linux installers, you can set up your partitions or logical volumes, then tell the installer to encrypt them, even with some options on the cryptography. When you've defined your filesystem boundaries (I'm not going to cover that here), and you're ready to encrypt, you'll inevitably be required to type in a username and passphrase. Some encryption utilities will use this passphrase as a seed for the encryption algorithm, so the stronger the passphrase, the stronger the seed, and thus the more unlikely an attack will be successful on the filesystem. So, choose wisely and choose securely.

Step Three: Install the operating system
Whether it be Windows, Mac, Linux or whatever operating system that supports encrypted filesystems, you're now ready to install it. Follow the operating system's installer to the end, reboot, and make any additional final preparations to your computer before putting down the data. You should at this point be able to boot the computer, provide the necessary username and passphrase, and use your operating system as normal. If not, you'll need to spend some time with your operating system's documentation or encrypted filesystem documentation to get to that point. This post isn't about that, so Google might be your friend here.

Okay, so now we have a usable operating system running on top of a fully encrypted drive. If we were to stop here, we wouldn't make things very challenging for the attacker. We want to do that. So, we're going to start adding some hurdles along the way. If the attacker has the stamina, then so be it. I'm guessing that most attackers, when faced with each of these hurdles, likely won't bother, and move to their next victim, rather than waste time trying to figure out how to get from Point A to Point B.

Step Four: Password protect your BIOS
This will vary widely on hardware, so consult your vendor's documentation on how to boot into your laptop BIOS and set an administrator password. However, this functionality should be provided on most modern BIOSes. When found, go ahead and set the password. It can be whatever you want. I would recommend making it hard to guess, but it doesn't really need to be on the same level as the encryption passphrase you provided earlier. Just don't make it successful to a dictionary attack, and you should be good. Don't reboot. Stay in your BIOS for the next step.

Step Five: Change your boot order to boot off the hard drive first
The reason for setting the administrator password in the BIOS was so we can tell the BIOS that we always want it booting from the hard drive first, rather than from the floppy, CDROM, network or USB. This step is necessary to hopefully avoid the Evil Maid attack, something I've already blogged about here. In summary, the Evil Maid attack is booting your computer from a USB or CDROM, replacing your bootloader by installing a custom bootloader with a keylogger, and powering down. Then, when you boot your machine, and enter the encryption passphrase, it gets stored on disk, or sent over the network to a remote server. After you leave your laptop a second time, the attacker comes back to your computer, boots off the hard drive, provides the newly discovered encryption credentials, and steals your data.

So, if your laptop is BIOS password protected to only boot from the hard drive, this is a good deterrent. Why? Well, in order to remove the password from the BIOS, so the attacker can boot from some other medium, they will need to disassemble the laptop to get to the motherboard, and flash the BIOS. This is easier said than done on laptops. Have you ever taken your laptop apart? I have. I've taken apart both my old HP Pavilion and my current ThinkPad T61. They're a royal pain, and extremely time consuming.

A good attacker will be paranoid about time. They don't want to get caught. If it means spending 3 hours disassembling a laptop just to flash the BIOS, so they can install their custom bootloader and keylogger, chances are high he'll move on to another victim. Now, that's not to say that every attacker can't do this, or they know they have the time, and your data is that valuable to them. Maybe the attacker is skilled at disassembling Dell, Lenovo and HP laptops, so it's only a 30 minute inconvenience that he knows he can make. But, maybe not. At least this is a moderately challenging task, and I'd be willing to bet most attackers won't bother.

Step Six: Physically lock down your laptop or take it with you
Again, just another deterrent, but locking your laptop down to a secure location could provide enough of a challenge to deter physical theft, should all efforts being made at getting to your data fail. After all, there is value in the hardware itself. eBay is probably making a killing off such scenarios without knowing the specifics. This doesn't mean the attacker isn't skilled at lock picking or doesn't have a strong set of bolt cutters with them. However, if the time it takes to remove the laptop from the premises is a challenging effort, the attacker likely won't bother, and move on.

With that said, I had my car broken into once. They were after my stereo. Thankfully, they were caught in the act, and found guilty in court of seven counts of theft and property damage, among other things. However, in the car before mine, they couldn't successfully remove the deck from the dash. It was bolted down. So, out of frustration, they physically destroyed the deck and the dash. Not out of failing to remove it, but out of anger for not succeeding. Your laptop may fall victim to such physical damage.

So, if you can carry it with you, you probably should. When I was on the road, I took my laptop with me everywhere I went for fear of physical damage or theft. I would take it with me to dinner. I would take it with me to events. I would take it with me sight seeing. I was paranoid. Sure, I run the risk of damage while traveling with it, but I know how to treat my bag carrying the laptop. At least then I'm somewhat in control. Further, an attacker can't attack what isn't there. But, when I couldn't take it with me, I would lock it down securely, and hope it remained intact when I returned.

Step Seven: Remove the data and/or encrypt it a second time
Many operating systems support encrypting directories and files on top of the filesystem itself. This means you can have an encrypted directory in your home folder, where the valuable data resides. Should the attacker successfully get access to your encrypted filesystem, if you chose a different passphrase for your encrypted directory, hopefully, they won't get access to that.
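One sketch of that second layer, using GnuPG symmetric encryption on a hypothetical file (encrypted directories via eCryptfs or similar are alternatives; the file name and passphrase here are made up for illustration). The passphrase should of course differ from your disk passphrase.

```shell
# Hypothetical sensitive file, encrypted then recovered with GnuPG.
dir=$(mktemp -d)
echo "secret notes" > "$dir/notes.txt"
gpg --batch --yes --pinentry-mode loopback --passphrase "demo passphrase" \
    --symmetric --cipher-algo AES256 "$dir/notes.txt"
rm "$dir/notes.txt"            # keep only the encrypted copy
recovered=$(gpg --batch --quiet --pinentry-mode loopback \
    --passphrase "demo passphrase" --decrypt "$dir/notes.txt.gpg" 2>/dev/null)
echo "$recovered"
rm -r "$dir"
```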

But, keeping that sort of sensitive data on the drive might not be wise, even if it is encrypted. So, it would be best to have that data on an encrypted USB disk. Your only concern should be making sure you don't lose that drive. Even if it's not stolen data, lost data still sucks. Backups here help.

At my place of employment, we're developing a virtualization solution where all the developers will have virtual desktops in our datacenter. The idea is to keep the data off of the developer's laptop. So, when they login to their laptop, they must then login to the VPN, then use RDP or SPICE (yeah, we're deploying RHEV) to login to their remote desktop, and work from there. At this point, the laptop becomes a mere dummy terminal, not storing a single piece of data, not even email. There are concerns, like if the developer doesn't have Internet access, or if the datacenter is compromised, but from a traveling perspective, keeping the data off of the traveling laptop is a net win. Some hotels might have crappy Wi-Fi, but at least security has come first, and the data is safe.

Appendix A: Learn how to remove and restore your bootloader
This is a crucial skill, I think. It doesn't really fit into the above steps per se, so I've added it as an appendix. The idea is simple. When traveling from another country to the United States, the Department of Homeland Security thinks it's fun to ignore the Constitution, and seize and search your laptop without a warrant. Bruce Schneier has covered this extensively, so I'll let you read up on his posts about the topic. If you're running an encrypted filesystem, they can detain you until you provide them with the passphrase, at which point they can then image your drive, keeping your data. This is wrong on so many levels, but you have a good deterrent- wipe your bootloader before landing.

When I traveled to Canada for training, I was already aware of the DHS doing this at customs. So, before being required to turn off my laptop during landing, I wiped the bootloader, and prepared a script in my mind should the DHS want me to power on my laptop. I was resolved that I wouldn't lie, as that would be perjury, but I would dance around the issue as best I could. The script would go something like this:

Agent: Can you power on your laptop please?
Me: Sure, but while on the road, something happened, and it will no longer boot. It says it's missing an operating system. I'm hoping to get it fixed when I get back to the office.
Agent: Will you power it on anyway please?
Me: Sure.
(I power on the computer, at which point, it behaves exactly as described.)
Agent: Okay, thank you. Carry on.

When I was returning from my Canada trip, and passing through customs, the agent asked me to remove the laptop from my bag and open it. I was already prepared with a removed bootloader, and my heart was racing to go through the script. When I opened the laptop, he swabbed it, looking for traces of explosives. When he was satisfied, he said thank you, I put the laptop back in my bag, and was on my way. I was a bit bummed that I didn't get to defeat the DHS at their own game, but was relieved at the same time that I didn't miss my flight home.

After I was on US soil, I booted off a rescue CD, restored my bootloader, and was able to boot back into my Debian install without trouble. This takes some practice and know-how, but I think it's really quite worth it should that scenario ever present itself. Of course, who knows what would happen? Maybe I would be detained until they could fix the problem with my laptop, at which point I would still be required to turn over the passphrase, and they would image the disk. Who knows? It's still worth a shot, and it's easy to do, if you know what you're doing. Just don't lie.
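For the curious, the wipe-and-restore itself boils down to a couple of dd commands against the first 446 bytes of the MBR (the boot code, leaving the partition table at bytes 446-511 alone). Here is a sketch, practiced against a scratch image file standing in for /dev/sda, so nothing valuable is at risk:

```shell
# Create a scratch image standing in for the real disk.
dd if=/dev/zero of=disk.img bs=512 count=2048 2>/dev/null
printf 'FAKEBOOTCODE' | dd of=disk.img conv=notrunc 2>/dev/null

# Back up the boot code (first 446 bytes only; the partition
# table stays put).
dd if=disk.img of=mbr-boot.img bs=446 count=1 2>/dev/null

# Wipe it; a real machine would now report a missing operating system.
dd if=/dev/zero of=disk.img bs=446 count=1 conv=notrunc 2>/dev/null

# Back home, restore from the backup (or re-run your bootloader
# installer from a rescue CD).
dd if=mbr-boot.img of=disk.img bs=446 count=1 conv=notrunc 2>/dev/null
```

Keep the backup file somewhere inside the encrypted filesystem, obviously, or the whole exercise is pointless.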

Appendix B: Stay with your belongings through metal detectors
Again, this is something that doesn't really fit in the steps above, so it's in the appendix as well. When you are entering an airport, and your belongings have to go through X-ray, there is an attack to steal laptops that is rather trivial and easy to set up. All it requires is three people: two attackers and the victim.

The attackers find a victim with a laptop (or a bag obviously carrying a laptop) they want. They both position themselves immediately in front of the victim when standing in line to go through security. By the time the first attacker reaches the metal detector, the victim has likely placed their personal belongings on the belt to go through the X-ray machine. The first attacker goes through the metal detector without a problem. He waits at the end of the conveyor belt to get his belongings, as well as snatch the laptop. The second attacker, however, causes problems going through. Every time he attempts to walk through, something in his pockets, or otherwise, sets the detector off. Now, generally, it only takes two or three attempts before the agent will just get his magic wand, and swipe him down from head to foot. But two or three attempts is all the time that is needed for the victim's bag or laptop to clear the X-ray machine, at which point the first attacker takes the computer, and disappears into the crowd before the victim even has an opportunity to get through. It's sneaky, it's effective, it's fast and it's clean. Further, the TSA isn't keeping track of whose belongings belong to whom. For all they know, that was the attacker's laptop, not yours.

How do you avoid this attack? When I traveled, I stood at the X-ray machine with my hand on my laptop bin, and I sent it through at the same time I went through. I never gave it a chance to get ahead of me. This would slow down the line a bit sometimes. In fact, I would let people go ahead of me while I waited. I took no chances. I'll go through metal detection faster than my laptop will go through X-ray, so I can wait for it to come down the belt right into my own hands. It requires a bit of patience and stubbornness, but I think it's worth it. You'll likely never bump into the cranky people behind you again, so no biggie.

So, there you have it. Those are the procedures and steps I would take when traveling with my laptop. I would recommend the same to you. Really, it boils down to determination, knowledge and a bit of luck. You can avoid the worst if you are sufficiently paranoid. There's nothing wrong with taking the extra precautions to protect your data and your laptop from theft or damage. Of course, these steps aren't bulletproof, and everything comes at a cost. There might be a slight inconvenience to the traveler to jump through some of these hoops. But, what is it worth? If the cost of the inconvenience outweighs the cost of the data, then some or all of these steps might not be necessary. If the cost of the data outweighs the cost of the inconvenience, then I would say stick to each step religiously. That's just me.

Keeping Time In Debian With Virtualbox

I've been encountering an interesting issue recently with Debian running as a guest inside of VirtualBox on Windows XP. When I initially installed Debian, I told it to adjust the hardware clock to UTC, and that was my mistake. Windows operating systems want the hardware clock set to local time, so the software clock can read the time directly without changes. Historically, Unix and Linux operating systems set the hardware clock to UTC, then offset the time based on your timezone. However, the setting doesn't seem to have worked, as my hardware clock has stayed on local time.

Why is this a problem? Well, when booting Debian in VirtualBox, it wants to mount the volumes (just a file residing in Windows), but the last mount date timestamp shows a date in the future. This is because I'm 7 hours behind UTC. So, on every boot, I am dropped to a sulogin prompt, where I need to provide the root password to fix the system. Because the last mount date timestamp is in the future, I need to run:

# e2fsck -fy /dev/work/root

This will update the timestamp to the current hardware clock time, at which point I can reboot, and remount the drives. However, when init loads, it runs a script out of /etc/init.d/ that either sets or does not set the hardware clock, based on a setting in /etc/default/rcS. Pulling up that file, this is what I found:

# /etc/default/rcS
# Default settings for the scripts in /etc/rcS.d/
# For information about these variables see the rcS(5) manual page.
# This file belongs to the "initscripts" package.
UTC=yes

Notice the setting "UTC=yes". This means to change the hardware clock to UTC time when booting. Of course, this also means setting the timestamp on the mounted filesystems to the UTC date. Because Windows is my host operating system, I don't want to do this. So, changing the value to "no" fixes the issue I'm having with the last mount time on my volumes being in the future. I probably should have mentioned that this init system is good old-fashioned SysV init. I haven't upgraded to Upstart yet, although that's on the TODO list. I'm not sure how this post would change with Upstart in the picture, but when I upgrade, I'll likely post the solution if I'm faced with the same issue again.
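The fix, then, is a one-line change. Sketched here against a local copy of the file so nothing on your system is touched; on the real box you would run the same sed against /etc/default/rcS as root, and optionally resync the hardware clock with hwclock:

```shell
# Work on a sample copy of /etc/default/rcS for illustration.
cat > rcS.sample <<'EOF'
# /etc/default/rcS
UTC=yes
EOF

# Flip the setting so the boot scripts treat the hardware clock
# as local time instead of UTC.
sed -i 's/^UTC=yes/UTC=no/' rcS.sample
grep '^UTC=' rcS.sample

# On the real system (as root):
#   sed -i 's/^UTC=yes/UTC=no/' /etc/default/rcS
#   hwclock --systohc --localtime   # write local time to the hardware clock
```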

Hope this post helps someone in the future who has also had this problem.

More ZSH Prompt Love

Ever since discovering ZSH three years ago, I've been addicted, but it wasn't until a good two years of using the shell on a daily basis that I decided to do some radical work on my prompt. I've blogged about this before a couple of times, making improvements along the way: post 0, post 1, post 2, post 3. Check out those posts if you're interested in what I've done to the prompt, and for extra screenshots.

At the Utah Open Source Conference, I gave a BOF on Unix shells. The turnout was good, and we had a great discussion. I presented on my default prompt for ZSH, showing all the hidden features of the prompt. However, I had forgotten that I had removed battery status from my prompt, because I was depending on APM, which is no longer compiled in the kernel. A couple people have asked me since then why I'm depending on APM and not ACPI. I don't have an answer, other than that was just what I coded. So, last night, I put up an ACPI implementation, and it works great. As with the APM implementation, if the battery percentage is less than 15%, the percentage display is red. If it's less than 50% but greater than 14%, it's yellow, and if it's less than 100% but greater than 49%, it's blue. If it's 100%, or the tool "acpi" is not installed, then it doesn't show up. Here's a screenshot below:

Battery Percentage in ZSH prompt

While hanging out in our local LUG channel for the Ogden Area Linux Users Group, I got talking with Seth about prompts. He decided to change his, including adding the dog from Nethack randomly "moving" in the prompt. He also mentioned changing the color of the path if the present working directory was not writable. I really liked this idea, and decided to implement it in my prompt. Here's a screenshot of that in action:

Path color change in ZSH prompt

I change the path color to yellow if the present working directory is not writable, as it's noticeable enough to catch your attention, but subtle enough to not get in the way, and be distracting.
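The full prompt is linked below, but the color logic itself is just threshold checks. Here is a rough sketch in plain shell; this is not the actual prompt code, and the acpi parsing and function name are made up for illustration:

```shell
# Map a battery percentage to a prompt color, mirroring the
# thresholds described above: <15% red, <50% yellow, <100% blue.
battery_color() {
    pct=$1
    if   [ "$pct" -lt 15 ];  then echo red
    elif [ "$pct" -lt 50 ];  then echo yellow
    elif [ "$pct" -lt 100 ]; then echo blue
    else echo none   # 100%, or acpi not installed: show nothing
    fi
}

# In the prompt itself, the percentage would come from something like:
#   pct=$(acpi -b | grep -o '[0-9]\+%' | tr -d %)
battery_color 42    # yellow

# Path color: yellow when the present working directory isn't writable.
[ -w "$PWD" ] && echo "path: default color" || echo "path: yellow"
```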

As usual, if you want the source, here it is. Yes, it's public domain, as mentioned in the code, so have at it.

Mobile LVM

Today, as my wife and I were headed into Target, I thought of the cheap USB thumb drives they usually have on sale, and I was tempted to purchase some. Then I got to thinking: what if I could use those thumb drives as one disk, using LVM, and have the ability to take that LVM structure from computer to computer? For example, say I have 6 2GB USB thumb drives. I have 12GB of storage total. Maybe I want to fit a DVD ISO or two on the disks. LVM would be perfect for this, if it remains on one computer. Wouldn't it be nice if I could take those 6 drives to another computer, scan for the LVs, and mount them, keeping all my data in perfect order? Well, after a bit of hacking about, I figured it out, and it's cleaner than you would think.

I'm not going to bother teaching you about the concepts behind LVM here. Suffice it to say, that LVM provides complete flexibility and control over your disk pools, where editing and manipulating partitions would be troublesome. The idea behind LVM is to create a pool of disk space, whether it comes from one drive, or many, and have the ability to chop up that pool to create mount points easily, as well as resizing the volumes, either larger or smaller.

So, to get started, let's keep it simple. I have two 32MB USB thumb drives with me right now for this post. When I plug them into my computer, my Linux kernel might recognize them as /dev/sdy and /dev/sdz, for example. You can find these results by running "fdisk -l" as root, checking the end of the dmesg output, or checking the end of /var/log/messages.

If they have a filesystem on them, and your desktop mounts them automatically, like GNOME or KDE will traditionally do, then you'll need to unmount the devices. Once unmounted, we'll need to partition the devices, and label the partitions as "Linux LVM". I'll leave that step up to you. Some good utilities for making this happen are fdisk, sfdisk or parted. You will only need one partition on each drive. Make sure the partition covers the whole disk, and make sure the partition is labeled as "Linux LVM". If the partition is not labeled appropriately, it could cause problems for you later down the road.

Now that you have your disks partitioned, and labeled correctly, let's start building the LVM structure. This is done by creating physical volumes first, then adding them to a disk pool, and chopping up the disk pool as needed for our mount points. Caution: This next step will erase any filesystem, and as a result, any data on the drives.

Pull up a terminal, type as root, and pay attention to the output:

# pvcreate /dev/sd{y,z}1
  Physical volume "/dev/sdy1" successfully created
  Physical volume "/dev/sdz1" successfully created

Now, time to add these two physical volumes to a drive pool. This next step is important, because you will give a name to the volume group. This name must be unique! Reason being: if you take this LVM structure to another computer, and it already has LVM implemented with a volume group that has the same name as yours, you'll run into snags. So, for me, I used my GnuPG key ID. I figure that is unique enough that I shouldn't encounter it on any computers I plan on using this with. But you can name it whatever you want; just name it something that is useful to you, and as unique as possible.

So, continuing in your terminal, type as root and watch the output:

# vgcreate 8086060F /dev/sd{y,z}1
  Volume group "8086060F" successfully created

Cool, at this point, I have about 64MB of space that I can chop up any way I see fit. Maybe I want a 50MB volume and a 14MB volume. Maybe I want one massive 64MB volume. Maybe I want 64 1MB volumes. The point is, you decide. When I create my logical volumes, I'll be using the "lvcreate" command, which is rather detailed, so spending some time in the man pages will be of value.

Before continuing, we need to find out exactly how much space I have in my pool. LVM is keeping some metadata on the disks, so I will be losing some space. But how much? This is important to know when I start creating my logical volumes. I can get this data by running the "vgdisplay 8086060F" command:

# vgdisplay 8086060F
  --- Volume group ---
  VG Name               8086060F
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               52.00 MB
  PE Size               4.00 MB
  Total PE              13
  Alloc PE / Size       0 / 0   
  Free  PE / Size       13 / 52.00 MB
  VG UUID               F0pWrc-030s-03Uo-SoLl-7Tvf-ZETc-3hcxfG

"Free PE/Size" is what we're looking at. In this case, LVM is using 12MB of metadata stored on the disks for its operations. If each extent is 4MB and I have 52MB of space, then that means I have 13 physical extents that I can use. This is the "PE" number. So, I'm going to use that number when creating my logical volume. I'm also going to name it something personal; something that has some meaning to me. Because this will be holding my personal data, I'll name it "personal".

Pull up a terminal, and as root:

# lvcreate -n personal -l 13 8086060F
  Logical volume "personal" created

Sweet! I have a logical volume that I can now put a filesystem on, mount, and start moving data to. So, let's get to it:

# mke2fs -j /dev/8086060F/personal
... [Output snipped] ...
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Next, let's mount it:

# mount /dev/8086060F/personal /mnt
# echo "Testing file on LVM" > /mnt/file.txt

At this point, we have our LVM structure created, formatted, and mounted, with data on it. Now, the key is to take these thumb drives out of the computer, take them to a separate computer, and rebuild the exact LVM structure, keeping the data intact. After all, that's what we're after, right? Mobile LVM?

Unmount the device:

# umount /mnt

If you get an error here, run fuser, with its various options, to find why the umount is failing.

Now with the logical volume unmounted, we need to deactivate it. This effectively takes the volume offline, so it can't be accessed for data retrieval or storage. This can be handled with the "lvchange" command. Looking at the man page, in order to activate or deactivate a logical volume, you need to pass the "-a" switch. "-a y" would activate it, and "-a n" would deactivate it.

In your terminal:

# lvchange -a n /dev/8086060F/personal

There will be no output, but the device "/dev/8086060F/personal" should no longer exist. Now, we need to do the same thing with the volume group, telling LVM that we are finished with this group, and we no longer need its data. Surprise, surprise, this is done with the "vgchange" command, and we pass the same switch with its argument:

# vgchange -a n 8086060F
  0 logical volume(s) in volume group "8086060F" now active

At this point, it is safe to unplug the drives from your computer, and plug them into the new computer.

It's typically best practice to notice how the Linux kernel identifies the drives when plugging them into a new machine. Knowing this information won't necessarily be of vital importance to us during this tutorial, but it could be of importance when troubleshooting. Let's say the kernel recognized the drives as /dev/sdk and /dev/sdl.

In any event, we need to have LVM2 and ext3 support installed on this new machine, if they aren't already. Once those are installed, all we need to do is run pvscan to search the system for any new physical volumes. It should find our newly plugged-in thumb drives, with all their metadata:

# pvscan
  PV /dev/sdk1   VG 8086060F   lvm2 [24.00 MB / 0    free]
  PV /dev/sdl1   VG 8086060F   lvm2 [28.00 MB / 0    free]
  Total: 2 [52.00 MB] / in use: 2 [52.00 MB] / in no VG: 0 [0   ]

Cool. It found them, and it's telling me that they belong to a volume group called "8086060F". If this volume group already exists on the new computer, LVM will let me know. This is why we needed to create a volume group with a unique name.

All that's left, is to activate the volume group, then activate the logical volumes, and I should be able to mount the volume, and access the data. Let's give it a try:

# vgchange -a y 8086060F
  1 logical volume(s) in volume group "8086060F" now active

Sweet! So far so good. Notice too that I passed "-a y" to activate the group, where previously, I passed "-a n" to deactivate it. Now the logical volume:

# lvchange -a y /dev/8086060F/personal

No output, but can I mount it and access the data?

# mount /dev/8086060F/personal /mnt
# cat /mnt/file.txt
Testing file on LVM

YES! WE DID IT! We've rebuilt the LVM structure on a completely different computer, and our data remained untouched. At this point, I can modify, add, and remove data on the LVM to my heart's content. When I'm finished, as you're already aware, I can unmount the volume, deactivate the LV, deactivate the VG, and remove the drives for the next computer.

This process, as you have figured out, has quite a few steps to it, and it requires some knowledge about how LVM works. However, it pays off, I think, and it's rather straightforward.

Not all is peaches and cream. You might have made a mistake during the process. Maybe you pulled out the drives before deactivating, and when you get to the new computer, it won't build the LVM structure, or something equally as troublesome. LVM keeps a cache of all its operations in "/etc/lvm/cache/.cache". You can safely remove this file if it gets in your way; LVM will recreate it as necessary. That might fix your problem, it might not, but it's worth pointing out.

I currently have 10 USB thumb drives, each of differing sizes as well as 3 mobile external hard disks. I've got roughly 200GB of raw storage at my disposal. With just flat filesystems, I can't put down a 100GB file, unless I have a drive large enough to support it. The largest drive in my collection is a mere 80GB, so LVM fits the bill perfectly in making this possible, by combining all the disks. And because I can tear it down and rebuild it regardless of the computer I'm sitting at, as long as LVM2 and the Ext3 filesystem are supported, I can access the data.

Of course, you can choose any filesystem you want here. Just remember, however, that XFS does not support shrinking the filesystem. But, it's your drives, so do what you want.

Further, if you really wanted to have fun, because you have multiple disks, you could totally take advantage of Linux software RAID. Because the structure we outlined above doesn't cover redundancy, if you lose a disk, your data could be corrupted. So, RAID would make sense; however, it complicates the mobility, by making sure Linux software RAID is also installed on the target machine, and it adds an extra step to activating the drives by rebuilding the RAID array first, then rebuilding the LVs. And of course, if you're paranoid, you could add encryption on top of it with cryptsetup and LUKS. Again, though, that's another step in getting to your data when tearing down and rebuilding. All thoughts for another post.

I don't care what you say, this is just too cool for school.

Rescue LILO On LVM With Ubuntu

I was faced with an interesting challenge tonight. But first, why I was faced with the challenge to begin with.

I had been teaching Linux to system administrators for more than a year before taking my current job as a system administrator myself. During a couple of the courses, the students had the chance to learn about the boot procedure from pressing the power button to the login screen and everything in between. This meant getting a deep, personal understanding of your MBR, the Linux kernel and the System V init scripts.

During the lecture on the MBR, I always taught that GRUB is the de facto standard, and that LILO has basically been replaced. I would then teach the students possible situations that they might face should their MBR be corrupt or missing, and how to fix it. Of course, fixing it meant troubleshooting, and learning to use the rescue media that ships with your distribution, be it RHEL or SLES. Well, I never thought that I would face the situation personally on my own machine, as I tend to be much more careful with my own machines than someone else's (like a training center's).

First, when I installed my desktop, I wanted to take advantage of LVM. I have two disks in the machine, so taking full advantage of the space was important. LVM fits the bill nicely. However, I didn't plan well enough, and put my boot partition on a logical volume. This means that GRUB doesn't get installed by default, and instead, you get LILO. Further, this also means that there is no nice pretty splash screen while booting; it's back to the old kernel and init script output. Oh well. This system remains up 90% of the time anyway, so no big deal.

Then, earlier today, I thought to myself "GRUB 2 is supposed to handle the boot partition on a logical volume". I surely would rather have GRUB than LILO, plus I miss the slick boot splash screen. So, I pulled up a terminal, installed the 'grub2' package and all its dependencies, ran 'grub-install /dev/sda', then rebooted to a black screen saying it couldn't find my boot partition. GREAT! Here I am, thinking this would be no sweat, and I'm left without a bootable computer. No worries, I thought. I've taught my students over, and over, and over again how to get out of this jam; surely I can do it myself. So, I grabbed an Ubuntu LiveCD, and went to work.

First things first. I need to make a decision about GRUB or LILO. Do I want to figure out why GRUB puked on me, or should I just stick with LILO and be done with it? Either way, I need to make a decision. I decide to stick with LILO. So, I boot into the live environment, pull up a terminal, and get to work. As mentioned, every last bit of disk space is on a logical volume. The LiveCD doesn't come with LVM support by default, so I need to install it, and load the module:

sudo aptitude install lvm2
sudo modprobe dm_mod

Now with LVM installed, and the module loaded, I can mount the volume and get to work. Hold on though. Not so fast. In order to mount the volume, I need to call the volume by device. It's not there, if I search under /dev. Get back in your terminal to find out why by running 'lvscan':

sudo lvscan

This will scan the volumes, detecting any available. Also, 'lvscan' will let us know if the volumes are active or inactive. In my case (and same with you if you're following along with this tutorial), the volumes are inactive. So, we need to make them active:

sudo lvchange -a y /dev/janus/root

/dev/janus/root is my volume that contains my root filesystem, including /boot, which is needed to bring my box into working order. Now with my volume active, I can reference it, and mount it. But that's not all I need to mount. I need to mount /proc for the kernel, and bind-mount /dev for all the correct devices that the kernel sees on my system. So, back into the terminal we go:

sudo mount /dev/janus/root /mnt
sudo mount -t proc none /mnt/proc
sudo mount -o bind /dev /mnt/dev

Okay. We're getting there, slowly but surely. Now that everything is mounted, it's time to change to that filesystem, and start fixing stuff. This is done with the 'chroot' command. I won't go into vast detail about 'chroot'. Basically, it just changes your root filesystem to the directory you specify. In this case, we're going to change our root filesystem to /mnt, where my logical volume is mounted: the actual filesystem that resides on my disk, not the LiveCD:

sudo chroot /mnt

Cool. Our prompt should show that we are on the '/' filesystem, meaning that any files we alter, we're altering on disk. So, back at the beginning of the post, I mentioned that I installed GRUB, and it failed to boot, so I decided to stick with LILO rather than figure out why it failed (I can do that later). I uninstalled LILO when I installed GRUB, so I need to reinstall it. Remember that because we ran 'chroot' just now, we are operating on disk, as that's where we reside. So, installing any packages will be persistent across boots. If this is the first time you're installing LILO, then you'll need to take an extra step: run 'liloconfig' to create /etc/lilo.conf. If you're just rescuing an existing LILO system as I am, it won't be needed. Otherwise, just run 'lilo' itself, then reboot:

sudo aptitude install lilo
sudo liloconfig # Answer yes to everything
sudo lilo

LILO should now be installed in the MBR of the disk, and you should have a bootable box at this point. So, the only things left to do are leave the chrooted environment and reboot, hoping everything works. If you receive any warnings from LILO when installing to the MBR, ignore them. If you receive any FATAL errors, you won't have a bootable box, and will have to troubleshoot further from there.

exit # leave the chroot
sudo reboot

At this point, my box booted fine, and I'm typing this post right after the rescue. Everything is in place. I hope this helps someone in a similar boat, fixing their LILO on LVM.

rm -rf /

DISCLAIMER: This works on Debian testing, Debian unstable, Ubuntu 8.04 and Ubuntu 8.10. I have not verified it to work on other systems. If you hose your box, because you gave it a try, and it didn't work, don't blame me. You're the stupid one for trying it out on a production machine. If you're curious, but unsure, just take my word for it, or install a virtual machine.

I came across an interesting post today, so I thought I'd give it a go on a virtual machine that I didn't mind thrashing. The subject of the post is in the title, namely, as root, running 'rm -rf /'. Have you tried this on Ubuntu or Debian? It won't work:

root@host ~# rm -rf /
rm: cannot remove root directory `/'
root@host ~# echo $?
1

If you're nervous about running the above command, then pass the interactive switch to rm (rm -ri /) to force rm to ask you about every last item to remove (you can answer no, or cancel with Ctrl-c). Why is rm refusing to remove the root directory? From the man page:

--no-preserve-root
       do not treat ‘/’ specially

--preserve-root
       do not remove ‘/’ (default)

Running 'rm -rf /' is the same as running 'rm -rf --preserve-root /', which of course makes no sense. Has this always been the case for rm? No. First off, Solaris made this the standard behavior in Solaris 10. Second, --preserve-root has been the default in Ubuntu since 8.04, as it came upstream from Debian, and I'm guessing further upstream from GNU coreutils (probably v6.10, although I can't verify).

Preserving root as default prevents easy mistakes, such as missing the assignment of variables:

root@host ~# FOO="/home/aaron/tmp" rm -rf $FOO/
rm: cannot remove root directory `/'

Notice, I forgot to end my variable assignment with a semicolon, so FOO never got assigned in the shell; the shell expanded $FOO to nothing, and rm proceeds with removing / instead of /home/aaron/tmp like it should have. How about another example:

root@host ~# rm -rf / tmp/*
rm: cannot remove root directory `/'

In this case, I wish to delete all the contents of the /tmp directory, but I typed too fast, and put a space between / and tmp/*, and thus, rm attempts to remove the root directory, which is not what I wanted at all!
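The mechanics of the first mistake are worth seeing up close. With no semicolon, FOO= becomes a prefix assignment that lands only in the command's environment; the shell expands $FOO (still empty) before the command ever runs. Demonstrated safely here with echo standing in for rm:

```shell
# Prefix assignment: FOO is set for echo's environment only, so the
# shell expands $FOO to nothing and the argument collapses to "/".
unset FOO
FOO="/home/aaron/tmp" echo "would remove: $FOO/"
# prints: would remove: /

# With the semicolon, the assignment happens first, as intended.
FOO="/home/aaron/tmp"; echo "would remove: $FOO/"
# prints: would remove: /home/aaron/tmp/

# A ${VAR:?} guard aborts outright if the variable is empty or unset:
#   rm -rf "${FOO:?FOO is not set}/"
```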

Good to see this implemented in the latest versions of Debian and Ubuntu.