Comments on: ZFS Administration, Part XIV- ZVOLS
Linux. GNU. Freedom.

By: Anonimous Mon, 20 Feb 2017 15:26:23 +0000 Just a point here, not meant to start any argument.

I have seen some apps that refuse to work if there is no swap area defined (some even refuse to start, showing a "no swap" error message, etc.).

I have also seen some apps that cause the swap area to be used even though there is plenty of free RAM at the same time (they are not that common, thankfully); by the way, how can an app force data into swap when there is free RAM available?

But the worst case is when you cannot add more RAM to your motherboard (when I personally buy a motherboard, I also buy the most RAM it can support); some motherboards (quite old, or not so old) only allow 2 GiB of RAM (talking about PCs here, not laptops, etc.).

And there is also the cost side: what if adding RAM multiplies the cost of the entire PC by four or five times? Example: a convertible laptop with a touch screen (a TabletPC) with 3 GiB of RAM and a vendor-stated maximum of 4 GiB, though some people have tested it with 8 GiB and even 16 GiB (the real maximum, since there are no modules bigger than 8 GiB and it only has two slots). Now the costs: each 4 GiB module (2x4 GiB = 8 GiB) costs around 300 euros (the tablet itself cost 300 euros with 3 GiB of RAM), so going to 8 GiB triples the price of the TabletPC; the 8 GiB modules (2x8 GiB = 16 GiB) cost more than a thousand euros each, around 2500 euros for the pair, so the total would be more than eight times the cost of the TabletPC... and much, much more than a new computer.

Sometimes adding more RAM is simply not an option: some machines cannot hold more than 2 GiB, and for others it is far too expensive.

So could you explain a little how ZFS would behave on a system with 2 GiB of RAM and 4 GiB of swap, with only one 500 GiB disk (a laptop)? Assuming no dedup is being used, of course; and the most important point: how to configure it so that it is not painfully slow!
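
I have read that one can at least cap the ARC with the zfs_arc_max module parameter so it does not fight with applications for RAM; something like this is what I had in mind (the 512 MiB value is just a guess for a 2 GiB machine, untested):

    # /etc/modprobe.d/zfs.conf -- cap the ARC on a low-RAM machine
    options zfs zfs_arc_max=536870912    # 512 MiB, in bytes

    # or apply immediately without a reboot:
    echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max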

I mean: Ext4 is great (I have had no losses that I noticed), but I cannot trust it against silent file changes... I do not mind if the HDD breaks, I have offline backups...

Let me explain a little: if I use Ext4 for backups on external media, silent changes can occur; if I use ZFS they will be detected (at least most of them). Since I use 3 to 7 external HDDs, only one powered on at a time because I am really paranoid about losing my data (tutorials made by me), all with Ext4, I can suffer from silent corruption (never seen it yet, but it is not impossible); ZFS would be great for detecting it if it occurs.

Until I can use ZFS, I like to think my method of avoiding silent corruption is good: I use 7-Zip to compress one directory or file with LZMA2, then I put that 7z file on one external disk, unplug it, then on another, ... up to 7 disks. 7-Zip has an internal checksum, but how can I be sure all 7 copies have not suffered silent corruption at the same time, leaving me unable to recover the data from inside the 7z files (all copies bad)? To reduce that risk as much as possible, I check the 7z integrity before copying it to the 7 external disks... but that is no guarantee that corruption cannot occur afterwards.
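
What I do today is roughly this (file names are just examples):

    # test the archive's internal checksums before copying it anywhere
    7z t tutorials.7z

    # record an independent checksum and store it alongside every copy
    sha256sum tutorials.7z > tutorials.7z.sha256

    # later, on any of the external disks:
    sha256sum -c tutorials.7z.sha256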

If I could just put ZFS on each of those 7 external disks, I would have another level of trust.

By the way... many times the Ext4 filesystem has been powered off by brute force (after a freeze)... but I have been really lucky: I never lost anything, nor have I seen any of those silent changes... but I am paranoid, they can happen, so better safe than sorry.

To sum up: how would you configure ZFS for the rootfs (I do not like creating separate partitions for /home, etc., since I am so paranoid that I periodically make full clones of the whole system to external media) on a laptop (only one HDD) with only 2 GiB (3 GiB at most) of RAM and a 500 GiB HDD, of which only 64 GiB is for the rootfs and 64 GiB for a data partition, the rest being used by other OSs? Even better if you can explain it for the SolidXK distro, thanks. I am hoping for a setup that responds similarly to Ext4 over LUKS over LUKS over LUKS on a logical partition (I hate primary partitions)... and of course with encryption enabled (ideally ZFS encryption with a cascade of Twofish and Serpent algorithms, since I collaborate on work on breaking AES-128 up to AES-8192).
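
What I am imagining is ZFS on top of a single LUKS container, roughly like this (device names, pool name and options are only examples, untested):

    # create and open the encrypted container on the partition reserved for ZFS
    cryptsetup luksFormat /dev/sda5
    cryptsetup open /dev/sda5 zfscrypt

    # build a single-disk pool on the mapped device
    zpool create -o ashift=12 rpool /dev/mapper/zfscrypt

    # a dataset for the root filesystem, with lightweight compression
    zfs create -o compression=lz4 -o mountpoint=/ rpool/ROOT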

Thanks in advance for any help, and also thanks for your great tutorial, which I am reading with pleasure.

By: Mael Strom Tue, 07 Feb 2017 08:20:35 +0000 It may be necroposting, but it can make a difference...

@Ekkehard: in your case you need to use sparse ZVOLs (the -s flag) and configure the iSCSI export to act like an SSD, with rpm=1 and unmap=on, and use Windows 8 or above (XP, Vista and 7 are unable to send the UNMAP command over iSCSI). That way only blocks that are actually in use (or held by a snapshot) are kept; the others are discarded.
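
On the ZFS side it is roughly this (pool and volume names are examples; the rpm/unmap knobs above are target-specific, I believe from FreeBSD's ctl.conf, so names will differ on other iSCSI targets):

    # create a sparse (thin-provisioned) 100 GiB zvol
    zfs create -s -V 100G tank/winbackup

    # then export /dev/zvol/tank/winbackup over iSCSI, advertising it as
    # non-rotational with UNMAP/TRIM enabled, using your target of choice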

By: Ekkehard Mon, 25 Apr 2016 08:55:31 +0000 Thanks for your awesome series; it was the only resource I studied before diving into ZFS, and I feel like I have quite a deep understanding now. Creating and deploying my first (home) NAS with ZFS (published over Samba and rsync) was a snap.

I have tried iSCSI as well (I use a FreeBSD-based NAS); it works beautifully for providing partitions to Win7 as "native" NTFS drives.

Obviously, some of the benefits of datasets do not apply to ZVOLs (e.g., ZFS cannot know which blocks are actually in use and which are not once the file system has been "cycled" a few times), so I wonder whether you have practical experience with how ZVOLs evolve over time.

For example, say I use a 100GB NTFS-formatted ZVOL to back up a Windows partition (using robocopy, some Windows imaging software, or whatever other tool), so as to keep all the Windows permissions that do not survive rsync- or Samba/CIFS-based copying. I don't know much about NTFS internals, but I assume that NTFS will, sooner rather than later, have touched every block at least once; at that point, ZFS will see the whole 100GB as active. Every further block change will directly lead to more usage (at least when there are snapshots around). When I delete or "overwrite" files (in the NTFS world), ZFS will not notice, etc.

Compared to a dataset, where ZFS knows about actual files, the ZVOL will thus have a drawback as long as the dataset is smaller. But if I compare a 100G ZVOL with a dataset that actually *uses* 100G of data, it should pretty much be the same (in terms of using up storage in the pool), no matter what file operations I do, right? (All with deduplication off, of course).
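
In case it helps, this is roughly how I have been watching it on my own pool (names are just examples):

    # what the zvol advertises vs. what it actually consumes in the pool
    zfs get volsize,used,referenced,usedbysnapshots,refreservation tank/winbackup

    # pool-wide view, including snapshots
    zfs list -o name,used,refer,avail -t all -r tank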

Regarding compression, there should not be a noticeable difference between a ZVOL and a dataset, right?

I have an hourly-daily-weekly-monthly snapshot scheme; would you say ZVOLs will eat up space quicker than a comparable dataset when many snapshots are in use?

By: John Naggets Sat, 10 Oct 2015 11:57:35 +0000 How do you create a ZVOL using 100% of the available size with its -V option?
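
The closest I have come up with is sizing it from the pool's "available" property, roughly like this (pool and volume names are examples, untested):

    # pool's free space in bytes (parsable, no header)
    AVAIL=$(zfs list -Hp -o available tank)

    # volsize must be a multiple of the volblocksize (8K assumed here)
    AVAIL=$(( AVAIL / 8192 * 8192 ))

    zfs create -V "${AVAIL}" tank/fullvol
    # note: a fully reserved zvol this large may still fail because of
    # metadata/refreservation overhead; creating it sparse with -s avoids that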

By: Luca Tue, 08 Sep 2015 11:49:21 +0000 Thanks for these very interesting articles on ZFS!
I want to set up some VMs with Xen on ZFS, and it is not clear to me what the best solution is for guest disk images: with LVM I use one LV and, when it fills up, I have to extend it. On ZFS I suppose I must create a ZVOL of some size, but what do I do when it is full? What is the smartest way to manage VMs on ZFS?
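
From what I have gathered so far, a ZVOL can be grown in place much like an LV, something like this (names are only examples, untested):

    # grow the guest's zvol from its current size to 40G
    zfs set volsize=40G tank/xen/guest1-disk

    # then grow the partition/filesystem inside the guest, e.g. for ext4:
    # resize2fs /dev/xvda1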

By: Mark Tue, 17 Feb 2015 20:43:28 +0000 Since ZFS is so vulnerable to fragmentation, would BTRFS on ZVOLs be a workable combination? You'd miss the shared space usage between datasets, but you'd gain the advantages of the now-stable BTRFS filesystem while maintaining the reliability of RAID using ZFS. BTRFS RAID is still unstable, but it can defragment. And one can grow a ZVOL and BTRFS when needed. BTRFS is still evolving, while ZFS is stable, but Oracle will likely never release the new code (goodbye encryption) and the v5000 code isn't evolving much either. I wonder where compression would be more effective, though. What's your opinion?
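
A sketch of what I mean (names are examples, untested):

    # a zvol to hold the BTRFS filesystem
    zfs create -V 100G tank/btrfsvol
    mkfs.btrfs /dev/zvol/tank/btrfsvol
    mount /dev/zvol/tank/btrfsvol /mnt/btrfs

    # later, grow both layers when space runs low
    zfs set volsize=150G tank/btrfsvol
    btrfs filesystem resize max /mnt/btrfs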

By: cbu Mon, 20 Oct 2014 16:31:13 +0000 Hi,
First thank for this incredible tutorial.
I created a ZVOL and I am trying to attach it to a VM, but I cannot. Every time I try to create a new block-device storage pool from virt-manager I get:

RuntimeError: Could not start storage pool: internal error: Child process (/usr/bin/mount -t auto /dev/zvol/tank/rhel/disk1 /var/lib/libvirt/images/sss) unexpected exit status 32: mount: /dev/zd0 is write-protected, mounting read-only
mount: unknown filesystem type '(null)'

How can I get a block device available for Qemu? Thanks!

P.S.: I am using CentOS 7 and zfs-0.6.3-1.1
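
P.P.S.: I am also going to try skipping the storage pool entirely and attaching the ZVOL directly as a block device, something like this (the guest name is just a placeholder, untested):

    # attach the zvol to an existing guest as a raw disk
    virsh attach-disk rhel7-guest /dev/zvol/tank/rhel/disk1 vdb --persistent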

By: sammand Wed, 04 Jun 2014 03:26:31 +0000 @Zyon

Yes, ZVOLs can be replicated using DRBD, and we support it in the ZBOSS Linux distribution.
You are free to check it out.

By: Andy Sat, 12 Apr 2014 12:58:26 +0000 @Jack

kpartx is probably what you'd need for getting at partitions within a ZVOL holding a VM's disk. Certainly it works for images taken from entire real disks using dd.
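
For example, something along these lines (device and mapping names will vary; untested):

    # map the partitions inside the zvol to /dev/mapper entries
    kpartx -av /dev/zvol/tank/vm0

    # kpartx prints the mapping names it created (e.g. something like vm0p1);
    # mount the one you want, inspect it, then clean up
    mount /dev/mapper/vm0p1 /mnt/vmroot
    umount /mnt/vmroot
    kpartx -d /dev/zvol/tank/vm0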


By: Thoughts and feelings on data storage implementations after four years of immersion. | EpiJunkie Sat, 22 Mar 2014 17:56:11 +0000 […] finally booted I would have to reconfigure the iSCSI LUNs due to the encryption. After the LUNs/zvols were reconfigured they were presented to the ESXi machine via iSCSI as a datastore which contained […]

By: Zyon Tue, 11 Mar 2014 01:17:14 +0000 When you say "you cannot replicate them across a cluster", does that mean I cannot use DRBD on top of a ZVOL?

By: Jack Relish Mon, 26 Aug 2013 21:25:46 +0000 Excellent guide. I was wondering if you had any insights on mounting partitions that exist on a ZVOL?

For example, let's say that I have a ZVOL /dev/tank/vm0, which was used as the root device for a VM. At some point the VM breaks, or for whatever other reason I want to be able to access the contents of its filesystem. Is it possible to expose the internals of the ZVOL? I'm sure it could be done manually and tediously by getting the start offset of the partition and then mounting it the same way you would a raw image file, but if there is a slicker way to do so, that would be incredible.

By: Aaron Toponce Wed, 07 Aug 2013 16:08:33 +0000 Not sure what you're saying. When you put ext4 on top of a ZVOL, ext4 is just a standard, run-of-the-mill application wishing to store data on ZFS, just like anything else. So the data is pooled into a TXG just like anything else. TXGs are flushed in sequential order to the ZIL. The contents of the ZIL are flushed to disk synchronously. So the data is always consistent.

Suppose you have a VM that is using that ZVOL for its storage. Suppose further that your VM crashes. In the worst case, the ext4 journal is not closed. So, at the next boot, you will be forced to fsck(8) the disk. What's important to know is that the data on ZFS is still consistent, even if ext4 may have lost some data as a result of the crash. In other words, closing the journal did not happen before the rest of the data blocks were flushed to disk.

By: Ahmed Kamal Mon, 22 Jul 2013 20:56:09 +0000 When you put ext4 on top of a ZVOL and snapshot it, you say it's "consistent". I guess it's only crash-consistent; there is no FS/ZVOL integration to ensure better consistency, right?
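
The workaround I have seen suggested elsewhere is to quiesce the filesystem yourself before taking the snapshot, roughly like this (mount point and names are examples, untested):

    # freeze ext4 so its on-disk state is clean, snapshot the zvol, then thaw
    fsfreeze --freeze /mnt/guestdata
    zfs snapshot tank/vol@clean
    fsfreeze --unfreeze /mnt/guestdata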

By: Aaron Toponce : ZFS Administration, Appendix B- Using USB Drives Thu, 09 May 2013 12:00:50 +0000 [...] ZVOLs [...]

By: Aaron Toponce : ZFS Administration, Part XI- Compression and Deduplication Tue, 08 Jan 2013 04:24:10 +0000 [...] ZVOLs [...]