Comments on: ZFS Administration, Part II- RAIDZ https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ Linux. GNU. Freedom.

By: xaoc https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-271668 Tue, 22 Aug 2017 09:04:19 +0000 http://pthree.org/?p=2590#comment-271668 I have a strange situation and can't explain it. I would appreciate your comment on the setup below:
zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
test_3x3s 327T 1.11M 327T - 0% 0% 1.00x ONLINE -
dmadm@s1349014530:~$ sudo zpool status
pool: test_3x3s
state: ONLINE
scan: none requested
config:

NAME           STATE     READ WRITE CKSUM
test_3x3s      ONLINE       0     0     0
  raidz3-0     ONLINE       0     0     0
    sdc        ONLINE       0     0     0
    sdd        ONLINE       0     0     0
    sde        ONLINE       0     0     0
    sdf        ONLINE       0     0     0
    sdg        ONLINE       0     0     0
    sdh        ONLINE       0     0     0
    sdi        ONLINE       0     0     0
    sdj        ONLINE       0     0     0
    sdk        ONLINE       0     0     0
    sdl        ONLINE       0     0     0
    sdm        ONLINE       0     0     0
    sdn        ONLINE       0     0     0
  raidz3-1     ONLINE       0     0     0
    sdo        ONLINE       0     0     0
    sdp        ONLINE       0     0     0
    sdq        ONLINE       0     0     0
    sdr        ONLINE       0     0     0
    sds        ONLINE       0     0     0
    sdt        ONLINE       0     0     0
    sdu        ONLINE       0     0     0
    sdv        ONLINE       0     0     0
    sdw        ONLINE       0     0     0
    sdx        ONLINE       0     0     0
    sdy        ONLINE       0     0     0
    sdz        ONLINE       0     0     0
  raidz3-2     ONLINE       0     0     0
    sdaa       ONLINE       0     0     0
    sdab       ONLINE       0     0     0
    sdac       ONLINE       0     0     0
    sdad       ONLINE       0     0     0
    sdae       ONLINE       0     0     0
    sdaf       ONLINE       0     0     0
    sdag       ONLINE       0     0     0
    sdah       ONLINE       0     0     0
    sdai       ONLINE       0     0     0
    sdaj       ONLINE       0     0     0
    sdak       ONLINE       0     0     0
    sdal       ONLINE       0     0     0

errors: No known data errors
df -h
Filesystem Size Used Avail Use% Mounted on
udev 189G 0 189G 0% /dev
tmpfs 38G 850M 37G 3% /run
/dev/md0 103G 1.9G 96G 2% /
tmpfs 189G 0 189G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 189G 0 189G 0% /sys/fs/cgroup
tmpfs 38G 0 38G 0% /run/user/1002
test_3x3s 231T 256K 231T 1% /test_3x3s
##########################################################################################################
zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
test_3x3s 326T 1.11M 326T - 0% 0% 1.00x ONLINE -
dmadm@s1349014530:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 189G 0 189G 0% /dev
tmpfs 38G 858M 37G 3% /run
/dev/md0 103G 1.9G 96G 2% /
tmpfs 189G 0 189G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 189G 0 189G 0% /sys/fs/cgroup
tmpfs 38G 0 38G 0% /run/user/1002
test_3x3s 230T 256K 230T 1% /test_3x3s
zpool status
pool: test_3x3s
state: ONLINE
scan: none requested
config:

NAME           STATE     READ WRITE CKSUM
test_3x3s      ONLINE       0     0     0
  raidz3-0     ONLINE       0     0     0
    sdc        ONLINE       0     0     0
    sdd        ONLINE       0     0     0
    sde        ONLINE       0     0     0
    sdf        ONLINE       0     0     0
    sdg        ONLINE       0     0     0
    sdh        ONLINE       0     0     0
    sdi        ONLINE       0     0     0
    sdj        ONLINE       0     0     0
    sdk        ONLINE       0     0     0
    sdl        ONLINE       0     0     0
    sdm        ONLINE       0     0     0
    sdn        ONLINE       0     0     0
    sdo        ONLINE       0     0     0
    sdp        ONLINE       0     0     0
    sdq        ONLINE       0     0     0
    sdr        ONLINE       0     0     0
    sds        ONLINE       0     0     0
    sdt        ONLINE       0     0     0
  raidz3-1     ONLINE       0     0     0
    sdu        ONLINE       0     0     0
    sdv        ONLINE       0     0     0
    sdw        ONLINE       0     0     0
    sdx        ONLINE       0     0     0
    sdy        ONLINE       0     0     0
    sdz        ONLINE       0     0     0
    sdaa       ONLINE       0     0     0
    sdab       ONLINE       0     0     0
    sdac       ONLINE       0     0     0
    sdad       ONLINE       0     0     0
    sdae       ONLINE       0     0     0
    sdaf       ONLINE       0     0     0
    sdag       ONLINE       0     0     0
    sdah       ONLINE       0     0     0
    sdai       ONLINE       0     0     0
    sdaj       ONLINE       0     0     0
    sdak       ONLINE       0     0     0
    sdal       ONLINE       0     0     0

In a few words... if I understand it correctly:
2 VDEVs of RAIDZ3 should use 6 disks for parity (3 for each VDEV)
3 VDEVs of RAIDZ3 should use 9 disks for parity (3 for each VDEV)
So it would be logical to have less usable space with 3 VDEVs than with 2 VDEVs, but in practice it seems that the 2-VDEV configuration gives me less usable space?
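
For reference, a quick back-of-the-envelope count of data disks under that reasoning (nominal counts only; the df figures presumably reflect ZFS's own usable-space estimate, which also accounts for parity and allocation overhead):

3 VDEVs x (12 - 3) disks = 27 data disks
2 VDEVs x (18 - 3) disks = 30 data disks

So on raw disk counts alone, the 2-VDEV layout should have more usable space, not less, which is what makes the slightly smaller df figure surprising.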

By: TMS https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-269559 Sun, 05 Mar 2017 17:47:10 +0000 http://pthree.org/?p=2590#comment-269559 Very nice article, but you are incorrect when you say a mirror is ALWAYS faster. No, it isn't. For sequential reads, RAIDZ is faster; same with writes. IOPS are always faster on a mirror.

By: gsalerni https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-269076 Fri, 13 Jan 2017 18:08:49 +0000 http://pthree.org/?p=2590#comment-269076 Re: Alvin's post (9) about trying to assemble a RAIDZ pool made up of 1TB vdevs which were in turn a variety of single disks, mirrors and stripes. Although you can't nest vdevs (other than disks and files), could he not use mdadm to construct the various 1TB metadisks using md mirrors and stripes as required, and then create a ZFS RAIDZ out of those? I imagine that wouldn't perform great, but would it work? ZFS wouldn't care that the raw disks were in fact metadisks, would it?
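
Purely as an illustration of that idea (device names are made up, and this is only a sketch of the suggested workaround, not a recommendation), it would look something like:

# mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# mdadm --create /dev/md11 --level=0 --raid-devices=2 /dev/sdd /dev/sde

and so on until there are enough ~1TB md metadisks, then:

# zpool create mycoolpool raidz1 /dev/md10 /dev/md11 /dev/md12 /dev/md13

ZFS would simply see the md devices as ordinary block devices; whether layering it this way is wise is a separate question.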

By: Eric https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-266418 Sat, 10 Sep 2016 10:17:40 +0000 http://pthree.org/?p=2590#comment-266418 How come it seems like most documentation says mirrors are always faster than raidz(n), but benchmarks always seem to show the opposite? (https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/) (https://calomel.org/zfs_raid_speed_capacity.html)

By: Frank https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-261828 Wed, 02 Mar 2016 02:25:46 +0000 http://pthree.org/?p=2590#comment-261828 Hi Aaron,
Thank you once again for your great summaries.
I'm embarking on building a large array for scientific data storage (actually I'm building two identical arrays, one for backup).
I wonder if the plan is sane:

The arrays will require around 100TB of storage eventually but I'm starting with 8x HGST 8TB SAS disks.

So I was thinking of doing a striped set of two RAIDZ1 vdevs.
If my calculations are right, this gives 64TB-(2x8TB)=48TB Storage
The data will be backed up to a clone server using zfs send nightly
and also to tape.

NAME
bigpool
raidz1-0
8TB disk1
8TB disk2
8TB disk3
8TB disk4
raidz1-1
8TB disk5
8TB disk6
8TB disk7
8TB disk8
logs
mirror-1
1.2TB SSD
1.2TB SSD

In due course, I'd add another 8 disks (again as 2x striped RAIDZ1 vdevs)
for a total storage capacity of 96TB (close enough to 100TB).
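
For what it's worth, a rough sketch of the commands that would build and later grow that layout (device names are placeholders, not a tested recipe):

# zpool create bigpool raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh log mirror ssd1 ssd2

and later, when the next 8 disks arrive:

# zpool add bigpool raidz1 sdi sdj sdk sdl raidz1 sdm sdn sdo sdp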

I'm a little worried that recovering from a disk failure on a vdev with 4x 8TB disks may be a bit risky. The other two options I considered were:
- 8 disk RAIDZ3 array (eventually striped with another 2 of these)
- striped mirrors (but the capacity loss is expensive)

I'd be curious to hear your recommendations

By: Jim https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-237842 Wed, 22 Jul 2015 17:07:44 +0000 http://pthree.org/?p=2590#comment-237842 Thanks for the ZFS docs. What's the best practice for creating a zpool for use in a RAID1+0 or RAIDZn array from the point of view of future drive replacement?

What is the likelihood of a replacement drive having a slightly smaller actual capacity than the drive it's replacing? Since we cannot shrink a zpool once created, what would happen if a replacement drive is found to be 1 sector smaller than the failed drive? I assume ZFS issues an error saying that it cannot populate the new drive?

Is it best practice to manually partition drives prior to adding to the initial zpool so that all members of the pool are a known, precise size? Or is this generally a non-issue?
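
To make that last question concrete, the practice being asked about usually looks something like the following (sizes and device names are examples only, and this is just a sketch of the idea, not a statement of best practice):

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary 1MiB 3995GiB

Repeat for every member with the same fixed end point, then build the pool from the partitions rather than the whole disks:

# zpool create tank mirror sdb1 sdc1 mirror sdd1 sde1

That way every member is a known, slightly undersized partition, so a replacement disk that comes up a few sectors short still fits.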

By: Rares https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-236030 Wed, 01 Jul 2015 05:52:04 +0000 http://pthree.org/?p=2590#comment-236030 Amazing documentation. If I ever meet you, the beer is on me :D

By: Ben https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-233723 Sat, 30 May 2015 00:14:41 +0000 http://pthree.org/?p=2590#comment-233723 Thank you so much for putting all this information up. I had spent a good week reading up on ZFS and how to use it, but was still confused beyond belief. I am a long-term Windows user and I'm just getting into Linux and what it can do. Your postings here are laid out so well that they helped me understand how everything works. Thank you again!

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-227947 Wed, 18 Feb 2015 20:03:10 +0000 http://pthree.org/?p=2590#comment-227947

would you still recommend having 12 RAIDZ-1 vdevs of 3 disks each like you mention in your answer?

That depends on what you plan on doing with your pool, how much space you need, and how you expect it to perform. If it's just a backup server, that runs nightly backup jobs, and is guaranteed to finish before the next cycle, then performance probably isn't that big of a deal (until you need to do a restore at least).

Regardless, I can't fully answer that. I would build the pool multiple ways, benchmark it, stress test it, fill it and refill it, fail drives, and overall put it through a stringent series of tests, and see which configuration would be best for you. Parity-based RAID can be a performance killer, but it can be worth it in some scenarios.

By: John Naggets https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-227945 Wed, 18 Feb 2015 18:26:19 +0000 http://pthree.org/?p=2590#comment-227945 Thanks for your extensive answer! Actually I was planning to do a RAIDZ-2 of 12 disks because my server can host up to 36 disks. So my plan would be to start with one RAIDZ-2 vdev of 12 disks and then grow the storage 12 disks at a time, ending up with 3 RAIDZ-2 vdevs of 12 disks each.

Or would you still recommend having 12 RAIDZ-1 vdevs of 3 disks each, as you mention in your answer? The thing is that with your setup I would be "losing" a total of 12 disks (to parity), whereas with my config I would only be "losing" 6 disks.

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-227907 Tue, 17 Feb 2015 20:46:44 +0000 http://pthree.org/?p=2590#comment-227907

I am still hesitating about the size of my RAIDZ-2 array. Would it be ok to use 12 disks on a RAIDZ-2 array? isn't that too much? and what about using 9 disks in a RAIDZ-2 array? does the rule of an even number for RAIDZ-2 still apply nowadays?

I wouldn't do RAIDZ2 personally. With 12 disks, I would do RAIDZ1 of 3 disks each. Thus, I would have 4 RAIDZ1 VDEVs:

# zpool status pthree
  pool: pthree
 state: ONLINE
  scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	pthree           ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    /tmp/file1   ONLINE       0     0     0
	    /tmp/file2   ONLINE       0     0     0
	    /tmp/file3   ONLINE       0     0     0
	  raidz1-1       ONLINE       0     0     0
	    /tmp/file4   ONLINE       0     0     0
	    /tmp/file5   ONLINE       0     0     0
	    /tmp/file6   ONLINE       0     0     0
	  raidz1-2       ONLINE       0     0     0
	    /tmp/file7   ONLINE       0     0     0
	    /tmp/file8   ONLINE       0     0     0
	    /tmp/file9   ONLINE       0     0     0
	  raidz1-3       ONLINE       0     0     0
	    /tmp/file10  ONLINE       0     0     0
	    /tmp/file11  ONLINE       0     0     0
	    /tmp/file12  ONLINE       0     0     0

errors: No known data errors

At least then, you can keep your performance up while tolerating one disk failure in each VDEV (a total of 4 disk failures maximum, one per VDEV). It only comes at the cost of losing 1/3 of the raw disk space, which IMO isn't that bad.
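
For anyone wanting to experiment with that layout before committing real disks, it can be mocked up with file-backed VDEVs, which is roughly how the output above was produced (a sketch, using sparse files):

# for i in $(seq 1 12); do truncate -s 1G /tmp/file$i; done
# zpool create pthree raidz1 /tmp/file1 /tmp/file2 /tmp/file3 raidz1 /tmp/file4 /tmp/file5 /tmp/file6 raidz1 /tmp/file7 /tmp/file8 /tmp/file9 raidz1 /tmp/file10 /tmp/file11 /tmp/file12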

By: John Naggets https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-226613 Fri, 30 Jan 2015 17:43:35 +0000 http://pthree.org/?p=2590#comment-226613 I am still hesitating about the size of my RAIDZ-2 array. Would it be ok to use 12 disks on a RAIDZ-2 array? isn't that too much? and what about using 9 disks in a RAIDZ-2 array? does the rule of an even number for RAIDZ-2 still apply nowadays?

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-139575 Sat, 05 Jul 2014 15:12:11 +0000 http://pthree.org/?p=2590#comment-139575 Fixed! Thanks for the edit.

By: TK https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-139557 Sat, 05 Jul 2014 06:21:11 +0000 http://pthree.org/?p=2590#comment-139557 Typo:
# zpool create tank raidze sde sdf sdg sdh sdi
should read:
# zpool create tank raidz3 sde sdf sdg sdh sdi

That aside, these are all very informative ZFS articles, as are most of your others on the variety of topics you cover.
Regards,
--TK

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-132125 Sat, 12 Apr 2014 21:36:34 +0000 http://pthree.org/?p=2590#comment-132125 I haven't seen anything regarding maximum drive size. Of course, you need to benchmark your own system, but bigger drives simply mean more storage. Generally speaking, too, the more spindles you have, the better performance will be.

By: Chris https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-132115 Thu, 03 Apr 2014 20:01:55 +0000 http://pthree.org/?p=2590#comment-132115 Great articles! Thanks a lot. I was wondering if you have any source for the comments on maximum drive size for the various raidz types? I am very interested why someone thinks maximum 2TB for raidz-2 (as I want to create an array of 8 disks, each 4TB large in a raidz-2 configuration).

By: Heny https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-131887 Tue, 14 Jan 2014 16:34:24 +0000 http://pthree.org/?p=2590#comment-131887 ZFS RAIDZ as declustered RAID: how to achieve it?

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-131300 Sat, 21 Dec 2013 06:02:34 +0000 http://pthree.org/?p=2590#comment-131300 I've updated the image (finally) to reflect the inconsistencies I had before.

By: Aaron Toponce : ZFS Administration, Appendix B- Using USB Drives https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-127472 Tue, 09 Jul 2013 04:08:30 +0000 http://pthree.org/?p=2590#comment-127472 […] RAIDZ […]

By: Aaron Toponce : ZFS Administration, Part XIII- Sending and Receiving Filesystems https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-127233 Tue, 02 Jul 2013 13:24:37 +0000 http://pthree.org/?p=2590#comment-127233 […] RAIDZ […]

By: Veniamin https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-125562 Tue, 30 Apr 2013 06:54:46 +0000 http://pthree.org/?p=2590#comment-125562 Thanks for the article.
I wonder how RAIDZ will work with two or more parity stripes.
I think that in the case where the data is longer than recordsize x n_data_disks, RAIDZ splits it into several writes.

By: Aaron Toponce : ZFS Administration, Appendix A- Visualizing The ZFS Intent LOG (ZIL) https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124829 Fri, 19 Apr 2013 11:03:23 +0000 http://pthree.org/?p=2590#comment-124829 [...] RAIDZ [...]

By: Aaron Toponce : ZFS Administration, Part IX- Copy-on-write https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124814 Fri, 19 Apr 2013 10:57:49 +0000 http://pthree.org/?p=2590#comment-124814 [...] RAIDZ [...]

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124601 Wed, 27 Mar 2013 19:44:30 +0000 http://pthree.org/?p=2590#comment-124601 Correct. The image isn't 100% accurate. I may fix it, but yes. If you lose too much of a single stripe, then you can't recreate the data. For each stripe written, and this is where my image needs to be updated, a parity bit is written. So, if a stripe crosses the disks twice, then there will be extra parity bits.

Thanks for pointing this out.

By: ssl https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124583 Tue, 26 Mar 2013 17:54:04 +0000 http://pthree.org/?p=2590#comment-124583 I don't quite understand how ZFS could recover from certain single-disk failures in your example (picture). Say, for example, you lost the last drive in your RAIDZ-1 configuration as shown. For the long stripe (A) you lose the parity bit as well as the data in block A4... How could this possibly be recovered, unless ZFS puts additional parity blocks in for all stripes whose length exceeds the number of disks?

By: Aaron Toponce : ZFS Administration, Part VII- Zpool Properties https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124248 Wed, 20 Feb 2013 15:33:45 +0000 http://pthree.org/?p=2590#comment-124248 [...] RAIDZ [...]

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124202 Thu, 07 Feb 2013 17:24:28 +0000 http://pthree.org/?p=2590#comment-124202 No, this is not possible. Other than disks and files, you cannot nest VDEVs. ZFS stripes across RAIDZ and mirror VDEVs, and there's no way around it. You need to rethink your storage.

By: Alvin https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-124011 Sun, 03 Feb 2013 04:53:11 +0000 http://pthree.org/?p=2590#comment-124011 Okay, here's one for you: I can't find ANY documentation ANYWHERE for using brackets (parentheses) to describe which drives to select when creating a zpool. For example, I am in a VERY sticky situation with money and physical drive constraints. I have figured out a method to make the best use of what I have, but it results in a pretty unorthodox (yet completely redundant and fail-proof [1 drive]) way of getting it all to work AND maximizes the use of my motherboard's ports to make it completely expandable in the future. I am basically creating a single-VDEV pool containing a bunch of different RAID levels, mirrors, and stripes.

HOWEVER, this is how I have to do it, because of hardware constraints.
If you were to imagine how to use zpool create, this is how it would look USING BRACKETS. BUT THERE IS NO MENTION OF HOW TO USE BRACKETS PROPERLY in any ZFS documentation. Basically either brackets, commas, &&s, etc. - anything that would give me the desired effect.

zpool create mycoolpool RAIDZ1 ((mirror A B) (mirror C D) (mirror E F) (G) (stripe H, I) (stripe J, K, L) (M))

Yes I have 7 1TB 'blocks' or 'chunks' in a RAIDZ1, each consisting of different configurations.

You see, if I were to do this without the brackets, it would create this mess:
zpool create mycoolpool RAIDZ1 mirror a b mirror c d mirror e f g h i j k l m
^^ Basically you see here that I would end up with a RAIDZ1 across 3 mirrors, the third of which would consist of so many drives that 8 of them could fail... not what I want.

And yes, I have indeed seen all the warnings and read countless people say "you shouldn't", but NEVER have I seen anyone deny that it could be done, and NEVER have I seen anyone actually answer HOW to do it.

I've made up my mind that this is the method and approach that I need to take, so please keep the warnings to a minimum; they would be said in vain.

Thank you very much in advance for a response!!!

By: Aaron Toponce : ZFS Administration, Part V- Exporting and Importing zpools https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122838 Tue, 08 Jan 2013 04:25:44 +0000 http://pthree.org/?p=2590#comment-122838 [...] RAIDZ [...]

By: Aaron Toponce : ZFS Administration, Part XI- Compression and Deduplication https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122831 Tue, 08 Jan 2013 04:23:55 +0000 http://pthree.org/?p=2590#comment-122831 [...] RAIDZ [...]

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122327 Sat, 29 Dec 2012 13:24:12 +0000 http://pthree.org/?p=2590#comment-122327 Fixed. Thanks!

By: boneidol https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122315 Sat, 29 Dec 2012 02:42:56 +0000 http://pthree.org/?p=2590#comment-122315 "Instead, in my opinion, you should keep your RAIDZ array at a low power of 2 plus parity. For RAIDZ-1, this is 3, 5 and 9 disks. For RAIDZ-2, this is 4, 8 and 16 disks. For RAIDZ-3, this is 5, 9 and 17 disks"

Hi, I don't understand these numbers above:

Z1 = 2^1 + 1, 2^2 + 1, 2^3 + 1 = 3, 5, 9
Z2 = 2^1 + 2, 2^2 + 2, 2^3 + 2 = 4, 6, 10
Z3 = 2^1 + 3, 2^2 + 3, 2^3 + 3 = 5, 7, 11

Sorry!

By: boneidol https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122314 Sat, 29 Dec 2012 02:36:05 +0000 http://pthree.org/?p=2590#comment-122314 "In relatiy" <- trivial typo

By: Aaron Toponce : ZFS Administration, Part VIII- Zpool Best Practices and Caveats https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122025 Thu, 20 Dec 2012 15:07:20 +0000 http://pthree.org/?p=2590#comment-122025 [...] RAIDZ [...]

By: Aaron Toponce : ZFS Administration, Part XII- Snapshots and Clones https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-122021 Thu, 20 Dec 2012 15:06:19 +0000 http://pthree.org/?p=2590#comment-122021 [...] RAIDZ [...]

By: Aaron Toponce : ZFS Administration, Part VI- Scrub and Resilver https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-121820 Thu, 13 Dec 2012 13:07:10 +0000 http://pthree.org/?p=2590#comment-121820 [...] RAIDZ [...]

By: Aaron Toponce : Install ZFS on Debian GNU/Linux https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-121815 Thu, 13 Dec 2012 13:05:13 +0000 http://pthree.org/?p=2590#comment-121815 [...] RAIDZ [...]

By: Aaron Toponce : ZFS Administration, Part I- VDEVs https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-121811 Thu, 13 Dec 2012 12:59:16 +0000 http://pthree.org/?p=2590#comment-121811 [...] RAIDZ [...]

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-120670 Sat, 08 Dec 2012 14:43:39 +0000 http://pthree.org/?p=2590#comment-120670 "A bad idea", no. However, it's also not optimized. My hypervisors are using RAIDZ-1 with 4 disks, as I needed the space. My motherboard does not have enough SATA ports for 5 disks, and I need more space than what 3 disks would give. Thus, RAIDZ-1 on four disks it is. You do what you can.

By: Mark https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-120505 Sat, 08 Dec 2012 06:00:26 +0000 http://pthree.org/?p=2590#comment-120505 Aaron, I've enjoyed reading the article. Is it really a bad idea to use 5 disks in a RAID-Z2 arrangement? I have 5 x 2TB disks that I want to use in my FreeNAS box, and prefer to have dual parity (rather than RAID-Z1).

By: Aaron Toponce : ZFS Administration, Part III- The ZFS Intent Log https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-119744 Thu, 06 Dec 2012 13:00:33 +0000 http://pthree.org/?p=2590#comment-119744 [...] The previous post about using ZFS with GNU/Linux concerned covering the three RAIDZ virtual devices .... This post will cover another VDEV- the ZFS Intent Log, or the ZIL. [...]

By: David https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-119448 Wed, 05 Dec 2012 21:58:09 +0000 http://pthree.org/?p=2590#comment-119448 Very helpful articles! I've been using ZFS for the past year, and have been extremely impressed by it. Looking forward to your L2ARC and ZIL article, as that's something we'll definitely be wanting to add in the near future.

By: Aaron Toponce https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-119341 Wed, 05 Dec 2012 16:40:06 +0000 http://pthree.org/?p=2590#comment-119341 Np. However, the 3rd post will be covering more VDEVs (there is an order to my chaos). In this case, I'll be covering the L2ARC and the ZIL. Hope to have it up tomorrow morning. Might be a day late though.

By: Jon https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/#comment-119332 Wed, 05 Dec 2012 16:11:34 +0000 http://pthree.org/?p=2590#comment-119332 Thanks for the pair of articles. I've started messing around with ZFS on one of the scrap servers that sits next to my desk. I've read the docs and FAQs, but it's good to see a different perspective on the basic setup.

I look forward to your next article since, as of last night, one of the drives in the test server has started racking up SMART errors at an alarming rate. I guess I'll get to test resilvering in a real case and not just by faking a drive failure. :O
