
More Filesystem Foo

Well, not exactly "benchmarking" in the strictest sense, but interesting data nonetheless, I find. Setting out on my voyage to learn more about the filesystems that the Linux kernel supports, I went looking for which filesystem does the best job of managing space. No speed tests. No data integrity. No feature comparisons. Just space conservation. Of course, I plan on investigating these filesystems further on those fronts, and will report my findings, but for now a comparison of space utilization will suffice.

First, I have six 2GB USB thumb drives for this test. Unfortunately, two of them are slightly smaller than the other four. As such, I felt that LVM would be a good way to make sure each filesystem was put on a storage container of exactly the same size.
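
Roughly, that setup looks like the following (the device names are placeholders, and the volume group is named test to match the df output below):

pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
vgcreate test /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Allocate by extent count rather than size, so every volume is identical
for fs in ext2 ext3 vfat xfs jfs reiser; do
    lvcreate -l 486 -n $fs test
done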

The result? Six logical volumes of exactly the same size, each with 486 PEs at 4MB per PE. Each filesystem was mounted to its own directory under /mnt:

aaron@kratos:~ 4149 % df -h /dev/mapper/test-*
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/test-ext2   1.9G  2.9M  1.8G   1% /mnt/ext2
/dev/mapper/test-ext3   1.9G   35M  1.8G   2% /mnt/ext3
/dev/mapper/test-jfs    1.9G  376K  1.9G   1% /mnt/jfs
/dev/mapper/test-reiser 1.9G   33M  1.9G   2% /mnt/reiser
/dev/mapper/test-vfat   1.9G  4.0K  1.9G   1% /mnt/vfat
/dev/mapper/test-xfs    1.9G  288K  1.9G   1% /mnt/xfs
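
For completeness, creating and mounting the filesystems would go something like this (I'm assuming default mkfs options; the confirmation-skipping flags vary a bit by version):

mkfs.ext2 /dev/mapper/test-ext2
mkfs.ext3 /dev/mapper/test-ext3
mkfs.vfat /dev/mapper/test-vfat
mkfs.xfs  /dev/mapper/test-xfs
mkfs.jfs -q /dev/mapper/test-jfs          # -q answers the "continue?" prompt
mkfs.reiserfs -f /dev/mapper/test-reiser  # -f forces past the prompt

for fs in ext2 ext3 vfat xfs jfs reiser; do
    mkdir -p /mnt/$fs
    mount /dev/mapper/test-$fs /mnt/$fs
done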

Next, I needed to populate these filesystems with some data. I ran the following for-loop:

for i in ext2 ext3 vfat xfs jfs reiser; do
    # write ~488MB of zeros into a single file on each filesystem
    dd if=/dev/zero of=/mnt/$i/foo.img bs=1024 count=500000
done

Let's see how they fared:

aaron@kratos:~ 4166 % df -h /dev/mapper/test-*
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/test-ext2   1.9G  492M  1.3G  28% /mnt/ext2
/dev/mapper/test-ext3   1.9G  524M  1.3G  29% /mnt/ext3
/dev/mapper/test-jfs    1.9G  489M  1.5G  26% /mnt/jfs
/dev/mapper/test-reiser 1.9G  521M  1.4G  27% /mnt/reiser
/dev/mapper/test-vfat   1.9G  489M  1.5G  26% /mnt/vfat
/dev/mapper/test-xfs    1.9G  489M  1.5G  26% /mnt/xfs

VFAT, XFS, and JFS all seem to do fairly well on space conservation. Knowing that the FAT filesystem isn't very robust or feature-packed, and looking at just this data, I would be willing to spend some further time with JFS and XFS. To be fair, though, I'll still give FAT a good look with respect to features.

It is a pity, however, that Sun Microsystems' ZFS is licensed under the CDDL. I would rather enjoy working with that filesystem, I think, as it supports a great set of features. Unfortunately, unless ZFS is relicensed under the GPL, it's unlikely that we'll see it in kernel space, and I'm not really interested in an implementation of it under FUSE.

5 Comments

  1. Roger Binns | April 24, 2008 at 2:36 am

    Note that Unix filesystems typically under-report free space by 5%, as that much is reserved for root. In any event, a better test is to create a directory structure with numerous small files, such as by untarring the Linux kernel source (directories can take up quite a bit of space), and then see what the largest file is that you can create in the remaining space (dd if=/dev/urandom of=/mnt/$i/foo.img). Using /dev/zero allows the filesystem to create sparse files taking up no space, hence urandom.
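
    A sketch of the test Roger describes (the kernel tarball version is an assumption, and note that vfat will reject the symlinks in the kernel tree):

    for i in ext2 ext3 vfat xfs jfs reiser; do
        tar xjf linux-2.6.24.tar.bz2 -C /mnt/$i      # thousands of small files and directories
        dd if=/dev/urandom of=/mnt/$i/foo.img bs=1M  # runs until "No space left on device"
        df -h /mnt/$i
    done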

  2. hk47 | April 24, 2008 at 5:01 am

    Try:
    $ tune2fs -m 0 /dev/mapper/test-ext2

    That should give you roughly the same amount of free space on (a freshly created) ext2 as on vfat or xfs. Likewise for ext3 minus journal data.

    You might be interested in reading this article, though it's from 2006:
    http://www.debian-administration.org/articles/388

  3. Sacha | April 24, 2008 at 4:31 pm

    You do realise that the data you wrote using 'dd' took exactly the same space on each filesystem, right? So what was the comparison?

    The only reason some filesystems show less used space than others is the 5% reserved for root on the ext2/ext3 and reiser filesystems. You can disable this using -m 0 when creating or tuning.

    You attempted a comparison, but somehow your conclusion is that all the filesystems are equal. Try enabling the most compression possible on each filesystem (since you don't care about speed), and don't use 'dd' to add data. Try copying the Linux kernel source onto it or something.

    You'll find that reiserfs wins in such a benchmark (I have seen many of these before). It has better compression.

  4. textshell | April 25, 2008 at 8:24 am

    I think your test is not very good. Some filesystems reserve space for metadata at filesystem creation and count it all as used, while others account for metadata only as it is used.
    So if you want to test how well they manage space, add files to the FS (as root, to avoid the 5% root reserve and such things) with a realistic average size (or size distribution) until you get "no space left on device", then compare the total size of all the saved files (using du, without accounting for filesystem overhead).
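
    A sketch of textshell's method; the fixed 1MB file size here stands in for his "realistic size distribution":

    for i in ext2 ext3 vfat xfs jfs reiser; do
        n=0
        # as root: keep writing 1MB files until the device fills up
        while dd if=/dev/urandom of=/mnt/$i/file.$n bs=1M count=1 2>/dev/null; do
            n=$((n + 1))
        done
        du -sh /mnt/$i   # total size of everything that was saved
    done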

  5. Danyel Lawson | May 9, 2008 at 11:38 am

    For reiserfs, you may want to make sure you haven't enabled notail in the mount options. Tail packing allows reiserfs to use variable block sizes, filling in the gaps between large files.
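
    A quick way to check, sketched with the device and mount point from the df output above:

    grep reiser /proc/mounts                    # "notail" in the options means tail packing is off
    umount /mnt/reiser
    mount /dev/mapper/test-reiser /mnt/reiser   # a default mount leaves tail packing enabled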
