Comments on: ZFS Administration, Part VIII- Zpool Best Practices and Caveats https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/ Linux. GNU. Freedom.

By: Martin Zuther https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-270805 Sun, 25 Jun 2017 21:45:39 +0000 http://pthree.org/?p=2782#comment-270805 Hi Aaron,

thanks for the great ZFS tutorial! I do have a question though. Where does the following recommendation come from?

"Do not mix disk sizes [...] in a single VDEV. In fact, do not mix disk sizes [...] in your storage pool at all."

You can find it all over the net, but there seems to be no one who ever explains it or points to the ZFS documentation. I'd like to exchange a 2 TB disk for a 3 TB one in a two-mirrored-disk setting (utilising the "autoexpand" property) if that matters.

Martin

By: asmo https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-270717 Wed, 21 Jun 2017 17:18:33 +0000 http://pthree.org/?p=2782#comment-270717 @ pdwalker

I guess he meant that you can use /zpool when you created a pool without creating any datasets.

By: c0x https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-269782 Mon, 27 Mar 2017 04:59:45 +0000 http://pthree.org/?p=2782#comment-269782 ~# zpool list
NAME      SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
storage   31,9T  18,1T  13,8T  -         16%   56%  1.00x  ONLINE  -
zds       14,2T  6,85T  7,40T  -         27%   48%  1.00x  ONLINE  -
ftp       7,16T  5,66T  1,49T  -         33%   79%  1.00x  ONLINE  -

How can I defrag this?

By: Brian Lachat https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-268750 Mon, 12 Dec 2016 02:17:48 +0000 http://pthree.org/?p=2782#comment-268750 First, thanks so much for such a great write-up. You state, "Email reports of the storage pool health weekly for redundant arrays, and bi-weekly for non-redundant arrays." Perhaps I overlooked it, but I don't see where it states how I can automate this. Would you please elaborate?
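Is something like the following cron job what you had in mind? (Just a rough sketch on my part - it assumes a pool named "tank", mailx installed, and a working local mail setup.)

# /etc/cron.d/zpool-report -- sketch: mail the pool status every Monday at 08:00
0 8 * * 1  root  /sbin/zpool status tank | mail -s "weekly zpool health report" admin@example.com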

Thanks,
Brian

By: Sebastian https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-261965 Tue, 08 Mar 2016 09:34:06 +0000 http://pthree.org/?p=2782#comment-261965 Just to add something to my previous comment....
I made a mistake about the SATA Express connection on my motherboard: it's not separate from the 6 SATA ports, so I can't use it, since I need all 6 SATA ports to connect the WD Red disks.
Can I use a similar setup with just one SSD in the M.2 slot, and how would I partition that to have SLOG and L2ARC for two pools?

Or should I look into buying a PCIe card to get additional SATA ports for the second SSD device?
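To make the question more concrete, I imagine something along these lines for a single SSD (the pool names "rpool" and "storage" and the partition paths are just placeholders on my part):

# Partition the single M.2 SSD into four pieces (two small ones for SLOG,
# two larger ones for L2ARC), then attach them to the two pools:
zpool add rpool   log   /dev/disk/by-id/nvme-ssd-part1
zpool add storage log   /dev/disk/by-id/nvme-ssd-part2
zpool add rpool   cache /dev/disk/by-id/nvme-ssd-part3
zpool add storage cache /dev/disk/by-id/nvme-ssd-part4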

By: Sebastian https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-261963 Tue, 08 Mar 2016 09:29:20 +0000 http://pthree.org/?p=2782#comment-261963 Very nice articles altogether; they have helped a lot! Thanks.

I have one question.
My setup is a box running Proxmox with a RaidZ 10 setup of 4 USB 3 sticks (each 32 GB) as the root file system.
For storage I used 6x 3 TB WD Red drives in a second zpool (RAIDZ-2). I will use different datasets in the storage pool to store all my different data (VM disks, movies, personal documents, and so on).
My motherboard only has 6 SATA ports (currently used by the 6 WD Red disks). That's why I went for the USB stick install of Proxmox (it's working very well, no issues so far).

Now I want to add a ZIL and L2ARC to my setup. Would I add both of them to both pools? For me it makes sense that both pools should get a ZIL and L2ARC to enhance performance.
My Mainboard has one M.2 (Socket 3) and one additional SATA Express connector.

My idea was to purchase 2 SSDs (both 64 or 128GB) and connect them to the M.2 and SATA Express ports.
Then I would partition both to have 2 partitions of 1 GB and 2 partitions sharing the rest of the space (e.g. a 64 GB SSD would be partitioned into sda1 (1 GB), sda2 (1 GB), sda3 (31 GB), sda4 (31 GB), in theory).
After that I could mirror the SLOG (ZIL) over both devices for both pools and stripe the L2ARC for both pools over both SSDs.
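In zpool terms, I picture it roughly like this (the pool names "rpool" and "storage" and the partition paths are just placeholders on my part):

# Mirrored SLOG per pool across both SSDs, striped L2ARC per pool across both SSDs:
zpool add rpool   log   mirror /dev/disk/by-id/ssd-a-part1 /dev/disk/by-id/ssd-b-part1
zpool add storage log   mirror /dev/disk/by-id/ssd-a-part2 /dev/disk/by-id/ssd-b-part2
zpool add rpool   cache /dev/disk/by-id/ssd-a-part3 /dev/disk/by-id/ssd-b-part3
zpool add storage cache /dev/disk/by-id/ssd-a-part4 /dev/disk/by-id/ssd-b-part4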

Hopefully my explanation was clear.
Would that make sense, or is it bad to use the same SSD Device for two different pools?

By: John Mac https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-261078 Mon, 08 Feb 2016 19:46:30 +0000 http://pthree.org/?p=2782#comment-261078 The caveat/recommendation below is in most of the ZFS zpool best practices guides, but I can't find an explanation as to why. Information on why using mixed disk counts across VDEVs is a bad practice would be appreciated.

"Do not mix disk counts across VDEVs. If one VDEV uses 4 drives, all VDEVs should use 4 drives."

By: Kai Harrekilde https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-261072 Mon, 08 Feb 2016 14:04:42 +0000 http://pthree.org/?p=2782#comment-261072 Aaron,

May I suggest making a list of "Current Best Practices" with respect to attributes at pool creation time?
I would add "compression=lz4" and "xattr=sa" to such a list along with ashift=12/13, autoexpand and autoreplace.
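In other words, something along these lines at creation time (just a sketch - the pool name and device paths are placeholders):

zpool create -o ashift=12 -o autoexpand=on -o autoreplace=on \
      -O compression=lz4 -O xattr=sa \
      tank raidz2 \
      /dev/disk/by-id/disk0 /dev/disk/by-id/disk1 \
      /dev/disk/by-id/disk2 /dev/disk/by-id/disk3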

It also seems that the L2ARC is a rather dubious win, according to /u/txgsync on https://www.reddit.com/r/zfs

By: Ed https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-255240 Thu, 26 Nov 2015 14:24:24 +0000 http://pthree.org/?p=2782#comment-255240 >> Ron Fish wrote;
>> Can a pool be spanned across multiple JBODs? In other words once i fill up my current pool am I dead ended and will have to create a new pool.

Yes. Zpools can be made up of anything you want: USB drives, various hard disks, probably even floppy drives (shudder). Of course, you are only as fast as your slowest device, so it's better to be consistent. Just remember: if you 'zpool add' a device, it's going to be there forever.

>>I am using a ZFS pool for data storage in an environment that generates tons of data and the pool is nearly full with no more disk slots left in the array case.

your options:
1) Increase the size of the hard disks. You'll have to play games moving your data around to free a disk. Remember, you cannot remove a disk from a zpool (anything you did a 'zpool add' on), but you can remove a disk from a mirror (see 'zpool attach'/'zpool detach').

2) If you are not using ZFS compression, then you are sitting on a gold mine of capacity. Make a ZFS filesystem, enable compression, and move your data from the non-compressed FS to the compressed one. I had a filesystem with 1.1 TB of data compress down to 300 GB. As a bonus, all the nightly roll-up jobs making compressed archives could be turned off, and my historical data was immediately available.
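Something like this is all it takes (a sketch - the dataset names are just examples):

# Create a compressed dataset, then copy the data into it:
zfs create -o compression=lz4 tank/archive-lz4
rsync -a /tank/archive/ /tank/archive-lz4/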

I recommend reading the man page and looking at the examples for 'zfs send | zfs receive'. That's how you can pull your data from one zpool into another. Then you can play games with creating a file on NAS-mounted storage, making it into a zpool, and transferring ZFS filesystems out to free up space (think of it like archiving data).
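The basic pattern looks like this (a sketch - pool, dataset, and snapshot names are just examples):

# Snapshot the filesystem, then send it into another pool:
zfs snapshot tank/projects@migrate
zfs send tank/projects@migrate | zfs receive otherpool/projects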

By: Ed https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-255239 Thu, 26 Nov 2015 14:09:08 +0000 http://pthree.org/?p=2782#comment-255239 >The storage pool will not auto resize itself when all smaller drives in the pool have been replaced by larger ones. You MUST enable this feature,

100% true

>and you MUST enable it before replacing the first disk. Use "zpool set autoexpand=on tank" as an example.

It is easier to set autoexpand=on first and then change to a larger drive, but it is not mandatory. If you forget (as I have done in the past), you can run 'zpool set autoexpand=on tank' after the fact, then run 'zpool online -e tank devicename'.
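In other words, something like this (pool and device names are just examples):

zpool set autoexpand=on tank        # even after the larger disk is already in place
zpool online -e tank sdb            # expand the pool onto the new capacity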

By: Ron Fish https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-254161 Fri, 06 Nov 2015 18:26:50 +0000 http://pthree.org/?p=2782#comment-254161 Can a pool be spanned across multiple JBODs? In other words, once I fill up my current pool, am I dead-ended and will I have to create a new pool?

I am using a ZFS pool for data storage in an environment that generates tons of data and the pool is nearly full with no more disk slots left in the array case.

By: Xavier https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-236091 Thu, 02 Jul 2015 09:31:09 +0000 http://pthree.org/?p=2782#comment-236091 * Don't trust df for monitoring; use the zfs command (or zpool).
* Be careful with snapshots because they can fill the pool (and df is not aware of this).
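For example (the pool name is just an example):

# Per-dataset accounting, including snapshot usage that df can't see:
zfs list -o space tank
zpool list tank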

By: Magnus https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-230357 Sun, 29 Mar 2015 13:47:47 +0000 http://pthree.org/?p=2782#comment-230357 Regarding the 12 x 6 TB = 49 TB question: 6 TB is actually 5.457 TiB (not 5.859). So that gives a total of 53.72 TiB of usable space (with 1/64 removed), which is "only" missing 4.72 TiB (and quite close to 10%). I know Linux reserves 5% for root by default - I don't know if that translates to ZFS and/or whatever OS you are running, but it might be worth looking into.
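For reference, the arithmetic (a quick sketch with bc, assuming 10 data disks in the 12-disk RAIDZ-2):

echo '6*10^12 / 2^40' | bc -l            # one 6 TB drive ~= 5.457 TiB
echo '10 * 5.457 * (1 - 1/64)' | bc -l   # ~= 53.72 TiB usable after the 1/64 reserve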

By: Von Hawkins https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-229311 Mon, 16 Mar 2015 19:51:27 +0000 http://pthree.org/?p=2782#comment-229311 Minor edit.
>>Email reports of the storage pool health weekly for redundant arrays, and bi-weekly for non-redundant arrays.

It seems that you meant semi-weekly, or am I reading this wrong? It makes sense to me that redundant arrays would need half the reporting frequency, but you stated that they need twice the frequency.

Very helpful series. I am in the middle of purchasing a private cloud lab for home and plan to use FreeNAS. There is quite a lot to learn, but it seems most doable. --thanks

By: pdwalker https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228599 Fri, 06 Mar 2015 06:57:03 +0000 http://pthree.org/?p=2782#comment-228599 Hi Aaron,

Great guide. It's tremendously helpful.

One question though: under Caveats, the second-to-last point is,
"Don't put production directly into the zpool. Use ZFS datasets instead."

Could you explain what that means? I'm afraid I don't follow what you mean by that.

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228502 Tue, 03 Mar 2015 19:01:52 +0000 http://pthree.org/?p=2782#comment-228502 Hmm. 11 TB seems excessive. How are you arriving at that number? The best way to see what is available versus what is used is with "zfs list". Add the "USED" and the "AVAIL" columns to get a better idea of what's available. Also, remember:

ZFS uses 1/64 of the available raw storage for metadata. So, if you purchased a 1 TB drive, the actual raw size is 976 GiB. After ZFS uses it, you will have 961 GiB of available space. The "zfs list" command will show an accurate representation of your available storage. Plan your storage keeping this in mind.

So, first 6 TB according to the drive manufacturer is 6*1000^4 bytes. The actual raw size for the filesystem is 5.859 TiB. For a 12-disk RAIDZ-2, this means you'll have 10 * 5.859 TiB = 58.59 TiB. Then, 1/64 of that is used for metadata, so you have roughly 57.67 TiB usable. So, the question now remains, where is that other 8 TiB going? Not sure, but see what you get from summing the "USED" and "AVAIL" columns in "zfs list", and see where that puts you.
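For example, something like (the pool name is a placeholder):

# Sum the USED and AVAIL columns to see the real usable total:
zfs list -o name,used,avail tank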

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228501 Tue, 03 Mar 2015 18:51:24 +0000 http://pthree.org/?p=2782#comment-228501 Cool! Glad that works. I would put it through more testing with IOzone3 and Bonnie++, just to get a better representation of what the pool can do. But it sounds like "ashift=12" is the right option for your pool.

By: John Naggets https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228415 Sat, 28 Feb 2015 18:11:15 +0000 http://pthree.org/?p=2782#comment-228415 I have one remark though: I am using a 12-disk RAIDZ-2 pool with 6 TB disks, which should in theory give me 60 TB of usable storage space. In the end I see only 49 TB of usable space, so somehow 11 TB get lost. Is this normal? I would expect some loss of space, but 11 TB sounds like quite a lot to me.

By: John Naggets https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228414 Sat, 28 Feb 2015 18:01:04 +0000 http://pthree.org/?p=2782#comment-228414 Well, I now tried out the ashift=12 option, and yes, I can also see a general performance gain of around 30% using simple "dd" read and write tests.
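For reference, tests of this sort (paths and sizes are just examples):

dd if=/dev/zero of=/tank/ddtest bs=1M count=8192 conv=fdatasync   # write test
dd if=/tank/ddtest of=/dev/null bs=1M                             # read test
# (export/import the pool or drop caches first, or the read will come from ARC)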

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228350 Fri, 27 Feb 2015 22:30:39 +0000 http://pthree.org/?p=2782#comment-228350 I would. Some people swear that ashift=12 leads to substantially better performance, even when using the full disk. I personally haven't seen it, and I've worked with a good deal of AF disks. Shrug. It can't hurt, at least. So, I guess, why not?

By: John Naggets https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-228333 Fri, 27 Feb 2015 14:06:23 +0000 http://pthree.org/?p=2782#comment-228333 Regarding the advanced format: my disks are 6 TB disks from Seagate (model: ST6000NM0034), and I checked that they are using the 512e category of advanced format. Do you recommend setting ashift=12 for this category of disks as well?

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-227899 Tue, 17 Feb 2015 20:29:42 +0000 http://pthree.org/?p=2782#comment-227899

> What do you mean with "ZFS does not restripe data in a VDEV"? I'm assuming here that if a disk gets broken and replaced, the resilvering process will rewrite the stripes to re-ensure redundancy, correct? I can understand when adding a new VDEV to a ZPOOL, that it may not start spreading the data out across the VDEVs. But since you can't grow a VDEV, I don't quite get the statement about intra-VDEV restriping.

What is meant is that ZFS doesn't automatically rewrite data stripes when a new VDEV is added to the pool. For example, consider the following pool:

# zpool status pthree
  pool: pthree
 state: ONLINE
  scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	pthree           ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    /tmp/file1   ONLINE       0     0     0
	    /tmp/file2   ONLINE       0     0     0
	    /tmp/file3   ONLINE       0     0     0

If you were to add another 3 disks in a RAIDZ1, thus creating a RAIDZ1+0, the newly created "raidz1-1" VDEV won't be automatically balanced with the data that resides on the "raidz1-0" VDEV:

# zpool status pthree
  pool: pthree
 state: ONLINE
  scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	pthree           ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    /tmp/file1   ONLINE       0     0     0
	    /tmp/file2   ONLINE       0     0     0
	    /tmp/file3   ONLINE       0     0     0
	  raidz1-1       ONLINE       0     0     0
	    /tmp/file4   ONLINE       0     0     0
	    /tmp/file5   ONLINE       0     0     0
	    /tmp/file6   ONLINE       0     0     0
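
(For reference, the second VDEV above would have been added with something along these lines, using the same example files:)

# zpool add pthree raidz1 /tmp/file4 /tmp/file5 /tmp/file6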

ZFS will favor the "raidz1-1" VDEV for new writes until the overall pool is balanced. And as data is modified, the data in the "raidz1-0" VDEV will eventually be balanced with the data on the "raidz1-1" VDEV, but that doesn't happen automatically just because the new VDEV was added.

That is what is meant. I guess I could have been clearer about that.

> Also, with modern home-NAS systems with SATA disks being around 6TB and higher, would a weekly scrub even be possible? A ZFS scrub takes longer than a MDADM-check, and that already takes me three days.

It depends entirely on how much data is stored in the pool, how the pool is built, what type of drives make up the pool, and how busy the pool actually is. It may be possible, it may not. Your mileage will certainly vary here. On some production servers, weekly scrubs work fine on 15x2TB pool arrays. On other storage servers, sometimes the scrub doesn't complete for a few weeks. So, I would say it's "best practice", but it might not actually be doable in some situations. I'll update the post.
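If you do want to schedule scrubs, a cron entry along these lines works (a sketch - the pool name is a placeholder):

# /etc/cron.d/zfs-scrub -- kick off a scrub of "tank" every Sunday at 02:00
0 2 * * 0  root  /sbin/zpool scrub tank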

By: Mark https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-227887 Tue, 17 Feb 2015 14:22:16 +0000 http://pthree.org/?p=2782#comment-227887 What do you mean with "ZFS does not restripe data in a VDEV"? I'm assuming here that if a disk gets broken and replaced, the resilvering process will rewrite the stripes to re-ensure redundancy, correct? I can understand when adding a new VDEV to a ZPOOL, that it may not start spreading the data out across the VDEVs. But since you can't grow a VDEV, I don't quite get the statement about intra-VDEV restriping.

Also, with modern home-NAS systems with SATA disks being around 6TB and higher, would a weekly scrub even be possible? A ZFS scrub takes longer than a MDADM-check, and that already takes me three days.

By: Sanjay https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-208158 Sun, 12 Oct 2014 13:38:55 +0000 http://pthree.org/?p=2782#comment-208158 Aaron, kudos for this sweeping overview of zfs!

>> Use whole disks rather than partitions. ZFS can make better use of the on-disk cache as a result.
>> If you must use partitions, backup the partition table, and take care when reinstalling data into
>> the other partitions, so you don't corrupt the data in your pool.

Most recommendations (including yours) state that the inability to use the on-disk cache is the only reason for this recommendation. Are there any other reasons at all?

I see more advantages in using slices than in using a whole disk (especially with today's large-capacity disks).

By carving each drive into n slices, an unrecoverable error on any one slice will require a rebuild of only that slice and not the whole drive.

Potentially, this can significantly reduce the rebuild time required to restore the zpool back to a healthy
state, and based on the specific nature of the slice failure, one can then decide whether to preemptively
migrate the other 7 slices as well, or continue using them a little longer.

Of course, this advantage will not be available when there's a whole disk failure; but then, aren't partial
failures more common than whole disk failures?

The other advantage, from a data integrity point of view, is that I'm guaranteed that the on-disk cache cannot be used. (Given that the on-disk cache on HDDs is volatile, does ZFS actually use it? And if it does, how is write integrity preserved?)

I realise that this would necessitate an SLOG on a low-latency non-volatile device, but when ZFS is the choice based on data integrity requirements, I don't see any other possibility that meets the requirement.

By: Mark Moorcroft https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-132164 Thu, 17 Apr 2014 00:33:47 +0000 http://pthree.org/?p=2782#comment-132164 Follow-on question to my last: if using mpath to achieve load balancing/failover, how would you go about getting ledctl or zfswatcher to work with mpath device names?

By: Mark Moorcroft https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-132162 Thu, 17 Apr 2014 00:11:05 +0000 http://pthree.org/?p=2782#comment-132162 Do you address SAS expanders with HBAs and Linux multipath devices anywhere here? My SAS drives appeared twice in /dev because the expander has multipath/failover. It wasn't clear if I should use mpath devices to build my zraid2, or what the best practice is in this case. I suspect this falls under "don't use mdadm/LVM", but I'm not sure. How do you get multipath load balancing with ZFS without using multipathd? My SAS drives ARE dual-channel, 512k block. LSI suggested just unplugging one of the cables to the backplane OR springing for their "new" hardware that they offer multipath support for. Of course, I doubt that works with ZFS.

--help 🙁

By: Mark https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-132082 Wed, 12 Mar 2014 10:56:26 +0000 http://pthree.org/?p=2782#comment-132082 "For the number of disks in the storage pool, use the "power of two plus parity" recommendation. "

What is the rationale behind this? I have 5x4TB drives, which given the drive size means I should be using RAIDZ-2, but that doesn't fit within the guidelines. What kind of performance hit can I expect to take? Is it mainly a CPU-bound issue?

"Don't put production directly into the zpool. Use ZFS datasets instead. Don't commit production data to file VDEVs. Only use file VDEVs for testing scripts or learning the ins and outs of ZFS."

Can you elaborate on this? I don't quite understand what you are saying in the above two points.

By: Thumper advice https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-130934 Sun, 01 Dec 2013 09:37:04 +0000 http://pthree.org/?p=2782#comment-130934 Hi, I'm currently using a Thumper (Sun X4500) and I'd like to give ZFS a try on my SL64 x86_64. I'd like to export 22 x 1 TB hard drives through NFS. I know that there are a lot of options, so basically I wanted to set up a RAIDZ-1 with 21 HDDs plus 1 spare drive. What do you think of that? What about dedicating drives to ZIL and so on?
Thanks.
François

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-130461 Wed, 06 Nov 2013 01:40:15 +0000 http://pthree.org/?p=2782#comment-130461 Restore from backup. That's probably the best you've got at this point. Corrupted data, and too many lost drives out of a RAID are very likely not recoverable.

By: JohnC https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-130430 Mon, 04 Nov 2013 14:29:50 +0000 http://pthree.org/?p=2782#comment-130430 The Linux kernel may not assign a drive the same drive letter at every boot. Thus, you should use the /dev/disk/by-id/ convention for your SLOG and L2ARC. If you don't, your zpool devices "could end up as a SLOG device, which would in turn clobber your ZFS data."

I think this has just happened to me. I had a controller fail, and after a series of reboots, I acquired a new controller. Now the disks on the new controller are fine, but the other disks are "FAULTED" with "corrupted data". I am sure the data is on them, but the order may be different. Loss of 8 of the 16 x 3 TB drives in a RAIDZ-3 configuration is fatal.

The status is UNAVAIL with "the label is missing or invalid". How does one recover from this? Can it be done?

Is there a fix for this?

By: Ghat Yadav https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-129863 Wed, 16 Oct 2013 11:00:28 +0000 http://pthree.org/?p=2782#comment-129863 Hi,
Very nice and useful guide... however, as a home user, I have a request for you to add one more section on how to add more drives to an existing pool.
I started with a 4x4TB pool with RAIDZ-2. I have a 12-bay device and just populated 4 slots for budget reasons; when I set it up, I figured I would buy more disks in the future as they become cheaper... but it looks like that's not possible.
Once you create a zpool, you cannot expand it (or am I wrong)...
If I get 2 more 4 TB disks now, budget allowing, how do I best use them?
Ghat

By: Dzezik https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-129610 Thu, 12 Sep 2013 20:20:24 +0000 http://pthree.org/?p=2782#comment-129610 The raw size of a 1 TB drive is 931 GiB:
-> 1 TB is 10^12 bytes, 1 GiB is 2^30 bytes
-> (10^12)/(2^30) ~= 931

So, after the 1/64 metadata reserve (931 x 63/64), ZFS gives you about 916.77 GiB.

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-123123 Mon, 14 Jan 2013 16:13:06 +0000 http://pthree.org/?p=2782#comment-123123 Yes. Typo. You know how when you get a word in your head, it seems to get applied to everything you type? Yeah. That happened here. Thanks for the notice. Fixed.

By: Roger Hunwicks https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-123091 Sat, 12 Jan 2013 18:00:04 +0000 http://pthree.org/?p=2782#comment-123091 When you say "zpool set auoresize=on tank"

Do you really mean "zpool set autoexpand=on tank"?

I get an "invalid property" for both "set autoresize" and "set auoresize".

Great series - thanks 🙂

By: Aaron Toponce https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-122328 Sat, 29 Dec 2012 13:24:44 +0000 http://pthree.org/?p=2782#comment-122328 Fixed the numbers in the post. Thanks!

By: boneidol https://pthree.org/2012/12/13/zfs-administration-part-viii-zpool-best-practices-and-caveats/#comment-122317 Sat, 29 Dec 2012 03:10:17 +0000 http://pthree.org/?p=2782#comment-122317 "For the number of disks in the storage pool, use the “power of two plus parity” recommendation. This is for storage space efficiency and hitting the “sweet spot” in performance. So, for a RAIDZ-1 VDEV, use three (2+1), five (4+1), or nine (8+1) disks. For a RAIDZ-2 VDEV, use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks. For a RAIDZ-3 VDEV, use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks. For pools larger than this, consider striping across mirrored VDEVs."

This differs from http://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ (and agrees with my math and your instructions 🙂 ).
