
ZFS Administration, Part XVII - Best Practices and Caveats

Table of Contents

Zpool Administration:
  0. Install ZFS on Debian GNU/Linux
  1. VDEVs
  2. RAIDZ
  3. The ZFS Intent Log (ZIL)
  4. The Adjustable Replacement Cache (ARC)
  5. Exporting and Importing Storage Pools
  6. Scrub and Resilver
  7. Getting and Setting Properties
  8. Best Practices and Caveats

ZFS Administration:
  9. Copy-on-write
  10. Creating Filesystems
  11. Compression and Deduplication
  12. Snapshots and Clones
  13. Sending and Receiving Filesystems
  14. ZVOLs
  15. iSCSI, NFS and Samba
  16. Getting and Setting Properties
  17. Best Practices and Caveats

Appendices:
  A. Visualizing The ZFS Intent Log (ZIL)
  B. Using USB Drives
  C. Why You Should Use ECC RAM
  D. The True Cost Of Deduplication

Best Practices

As with all recommendations, some of these guidelines carry a great amount of weight, while others might not. You may not even be able to follow them as rigidly as you would like. Regardless, you should be aware of them. I'll try to provide a reason for each. They're listed in no specific order. The idea of "best practices" is to optimize space efficiency and performance, and to ensure maximum data integrity.

  • Always enable compression. There is almost certainly no reason to keep it disabled. It hardly touches the CPU and hardly touches throughput to the drive, yet the benefits are amazing.
  • Unless you have the RAM, avoid using deduplication. Unlike compression, deduplication is very costly on the system. The deduplication table consumes massive amounts of RAM.
  • Avoid running a ZFS root filesystem on GNU/Linux for the time being. It's a bit too experimental for /boot and GRUB. However, do create datasets for /home/, /var/log/ and /var/cache/.
  • Snapshot frequently and regularly. Snapshots are cheap, and can keep a plethora of file versions over time. Consider using something like the zfs-auto-snapshot script.
  • Snapshots are not a backup. Use "zfs send" and "zfs receive" to send your ZFS snapshots to external storage.
  • If using NFS, use ZFS NFS rather than your native exports. This can ensure that the dataset is mounted and online before NFS clients begin sending data to the mountpoint.
  • Don't mix NFS kernel exports and ZFS NFS exports. This is difficult to administer and maintain.
  • For /home/ ZFS installations, set up nested datasets for each user. For example, pool/home/atoponce and pool/home/dobbs. Consider using quotas on the datasets.
  • When using "zfs send" and "zfs receive", send incremental streams with the "zfs send -i" switch. This can be an exceptional time saver.
  • Consider using "zfs send" over "rsync", as the "zfs send" command can preserve dataset properties.
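Several of the practices above boil down to one-line zfs(8) commands. A rough sketch follows; the pool name "pool", the usernames, and the "backup" host are placeholders for illustration, and these commands require root on a machine with an actual pool, so treat this as a template rather than something to paste verbatim:

```shell
# Enable compression on the pool's root dataset; children inherit it.
zfs set compression=on pool

# Nested per-user home datasets, with a quota on each.
zfs create pool/home
zfs create pool/home/atoponce
zfs set quota=100G pool/home/atoponce

# Export over NFS via ZFS itself, rather than /etc/exports.
zfs set sharenfs=on pool/home

# Snapshot, then replicate incrementally to external storage.
zfs snapshot pool/home/atoponce@monday
zfs snapshot pool/home/atoponce@tuesday
zfs send -i pool/home/atoponce@monday pool/home/atoponce@tuesday | \
    ssh backup zfs receive backuppool/atoponce
```

The "zfs send -i" invocation sends only the blocks that changed between the two snapshots, which is what makes incremental replication such a time saver compared to a full stream or an rsync walk.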


Caveats

The point of the caveat list is by no means to discourage you from using ZFS. Instead, as a storage administrator planning out your ZFS storage server, these are things that you should be aware of, so they don't catch you with your pants down, and without your data. If you don't heed these warnings, you could end up with corrupted data. The line may be blurred with the "best practices" list above, but I've tried to make this list about data corruption if not heeded. Read and heed the caveats, and you should be good.

  • A "zfs destroy" can cause downtime for other datasets. A "zfs destroy" will touch every file in the dataset that resides in the storage pool. The larger the dataset, the longer this will take, and it will use all the possible IOPS out of your drives to make it happen. Thus, if it takes 2 hours to destroy the dataset, that's 2 hours of potential downtime for the other datasets in the pool.
  • Debian and Ubuntu will not start the NFS daemon without a valid export in the /etc/exports file. You must either modify the /etc/init.d/nfs init script to start without an export, or create a local dummy export.
  • Debian and Ubuntu, and probably other systems, use a parallelized boot. As such, init script execution order is no longer prioritized. This creates problems for mounting ZFS datasets on boot. For Debian and Ubuntu, touch the "/etc/init.d/.legacy-bootordering" file, and make sure that the /etc/init.d/zfs init script is the first to start, before all other services in that runlevel.
  • Do not create ZFS storage pools from files in other ZFS datasets. This will cause all sorts of headaches and problems.
  • When creating ZVOLs, make sure to set the block size as the same, or a multiple, of the block size that you will be formatting the ZVOL with. If the block sizes do not align, performance issues could arise.
  • When loading the "zfs" kernel module, make sure to set a maximum size for the ARC. Doing a lot of "zfs send" or snapshot operations will cache the data. If no maximum is set, RAM will slowly fill until the kernel invokes the OOM killer and the system becomes unresponsive. I have set "options zfs zfs_arc_max=2147483648" in my /etc/modprobe.d/zfs.conf file, which is a 2 GB limit for the ARC.
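The arithmetic behind that module option is simple byte math. A small sketch, using the quarter-of-RAM rule of thumb discussed in the comments (the 16 GiB figure is just an example; substitute your own RAM size):

```shell
# Compute a zfs_arc_max value as 1/4 of system RAM.
# Example machine with 16 GiB of RAM -> 4 GiB ARC cap.
ram_bytes=$((16 * 1024 * 1024 * 1024))
arc_max=$((ram_bytes / 4))

# Emit the modprobe option line; place it in /etc/modprobe.d/zfs.conf,
# then reload the zfs module (or reboot) for it to take effect.
echo "options zfs zfs_arc_max=${arc_max}"
# -> options zfs zfs_arc_max=4294967296
```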

{ 9 } Comments

  1. Asif using Firefox 17.0 on Windows 7 | January 8, 2013 at 12:09 am | Permalink

    Awesome series! You just helped me to learn and plan deployment of ZFS for my Home NAS in a day. I have gone through the whole series and it's been easy to follow while also providing details on necessary parts! Thank you!

  2. aasche using Firefox 18.0 on GNU/Linux | January 30, 2013 at 3:57 pm | Permalink

    18 Parts - enough stuff for a small book. Thank you very much for your efforts :)

  3. ovigia using Firefox 18.0 on GNU/Linux 64 bits | February 25, 2013 at 12:27 pm | Permalink

    great tips...

    thank you very much!

  4. Michael using Firefox 19.0 on Windows 7 | March 20, 2013 at 2:03 pm | Permalink

    Nice series !!!

    I looked and my rhel6/oel6 box doesn't have a "/etc/modprobe.d/zfs.conf " file anywhere. Is that something you added & just put that one command in (options zfs zfs_arc_max=2147483648)?

    I was also curious how you came up with 2GB as your limit & how much RAM your storage box has and whether you are using the box for anything else?

    My box is currently dedicated to just ZFS & currently has 16GB and I was considering expanding to 32GB. If that scenario any idea what a good arc max is?

    Thanks again !!!

  5. Aaron Toponce using Google Chrome 25.0.1364.160 on GNU/Linux 64 bits | March 20, 2013 at 2:39 pm | Permalink

    Yeah. the /etc/modprobe.d/zfs.conf is manually created. The 2 GB is just an example. It's up to how much RAM you have in your system. You should keep it under 1/4 your RAM size, IMO.

  6. Mike using Safari 8536.25 on Mac OS | April 9, 2013 at 10:48 am | Permalink

    Just want to add my thanks for a great series and all the obvious effort that went into it. While I have enough desktop experience, I am a complete newbie to servers in general and ZFS in particular. You've given me the confidence to proceed.

  7. Graham Perrin using Safari 537.73.11 on Mac OS | December 2, 2013 at 11:48 am | Permalink

    Please: is the ' zfs-auto-snapshot script' link correct? Unless I'm missing something, it doesn't lead to the script.

  8. Aaron Toponce using Debian IceWeasel 24.1.0 on GNU/Linux 64 bits | December 4, 2013 at 4:23 pm | Permalink

    Fixed. Sorry about that. I don't know what caused it to change.

  9. Joshua Zitting using Google Chrome 31.0.1650.63 on Mac OS | January 7, 2014 at 9:32 pm | Permalink

    This is an AWESOME Tutorial!!! I read every word and added it to bookmarks for safe keeping! Great work!!! My next project is Postgres... You havent done a Tutorial on it have you?? if so you should start charging!

