What the fuck is this thing? I keep hearing people talk about it like it's the best filesystem in the universe. Can someone explain to a brainlet what the hype is about?

Other urls found in this thread:

lwn.net/SubscriberLink/747633/ad7f94ed75c8779e/
phoronix.com/scan.php?page=news_item&px=Btrfs-Data-Bug-Hole-Read-Comp
btrfs.wiki.kernel.org/index.php/RAID56
theregister.co.uk/2017/08/16/red_hat_banishes_btrfs_from_rhel/
news.ycombinator.com/item?id=14907771
zfsonlinux.org
open-zfs.org/wiki/Main_Page
insights.ubuntu.com/2016/02/18/zfs-licensing-and-linux/
blog.mthode.org/posts/2018/Feb/native-zfs-encryption-for-your-rootfs/
github.com/zfsonlinux/zfs/commit/ae76f45cda0e0857f99e53959cf71c7a5d66bd8b
github.com/zfsonlinux/zfs/commit/7da8f8d81bf1fadc2d9dff10f0435fe601e919fa
wiki.archlinux.org/index.php/LVM#RAID
jodybruchon.com/2017/03/07/zfs-wont-save-you-fancy-filesystem-fanatics-need-to-get-a-clue-about-bit-rot-and-raid-5/
hgst.com/sites/default/files/resources/HGST-Power-Disable-Pin-TB.pdf
youtube.com/watch?v=fE2KDzZaxvE

It's more than a file system; it's a volume manager and a file system combined, which makes it very convenient to build software RAIDs. The other big strength of ZFS is that it is a copy-on-write (CoW) file system, which makes it amenable to efficient snapshotting.
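
For a rough idea of how little ceremony that involves, here's a minimal sketch (the pool, disk, and dataset names are made up): two disks become a mirrored pool, you carve out a dataset, and snapshots are nearly free because of CoW.

  # build a mirrored pool and a dataset on it
  zpool create tank mirror /dev/sdb /dev/sdc
  zfs create tank/documents
  # point-in-time snapshot; CoW makes this nearly instant
  zfs snapshot tank/documents@2018-02-25
  # list snapshots, or roll the dataset back to one
  zfs list -t snapshot
  zfs rollback tank/documents@2018-02-25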

If you need to ask though, you probably don't need it.

it's a file system that's got some neat features

don't use mysql with it though

That's just due to misconfiguration, though. You can disable compression (and fix whatever the other issue was) on the dataset you store your MySQL database on.
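
Something along these lines on a dedicated dataset is the usual advice; treat it as a sketch, not gospel (the dataset name is invented, and the 16K recordsize is only a sensible default because it matches InnoDB's page size).

  zfs create tank/mysql
  zfs set recordsize=16k tank/mysql       # match the InnoDB page size
  zfs set compression=off tank/mysql      # or leave lz4 on; it's cheap
  zfs set logbias=throughput tank/mysql   # common tuning for database workloads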

sage because this could have been in the sticky.

Literally the only reason people used FreeBSD over Linux for a while.

Object storage systems are better.

a meme

ZoL has now overtaken them in features, which is quite funny.

t. (You)

Hell, even XFS is being extended to have some of the goodies you get with ZFS: lwn.net/SubscriberLink/747633/ad7f94ed75c8779e/

I formatted all of my storage drives to XFS in Linux.

Dead wife.

That's ReiserFS you dumb fuck.

They haven't implemented that yet

An almost related question: is CoW enabled by default on btrfs, and how do I take advantage of it?
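
For what it's worth, CoW is simply how btrfs writes everything (unless you mount with nodatacow or chattr +C a file), so there's nothing to turn on; the user-visible ways to exploit it are reflink copies and subvolume snapshots. A rough sketch, with made-up paths:

  # instant, space-sharing copy within the same filesystem
  cp --reflink=always bigfile bigfile.copy
  # snapshot a subvolume; only blocks changed afterwards consume new space
  btrfs subvolume snapshot /mnt/data /mnt/data_snap
  # read-only snapshot, handy as a source for btrfs send/receive backups
  btrfs subvolume snapshot -r /mnt/data /mnt/data_snap_ro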

By not storing your files on an FS that has been known to lose data. You actually have to be retarded to use it over ZFS. If you need the features ZFS provides, use ZFS; it won't lose your data.

I used butterfs for like 2 years without any troubles on openSUSE. Didn't utilize any raid functionality or similar though

I don't think fucking faceberg of all companies would use a failing FS, do you?
If you can't answer a question, stay silent; don't spout retarded memes.

Number of bugs causing data loss/corruption in ZFS: 0
Number of bugs causing data loss/corruption in BTRFS: more than ZFS [1][2][3]
[1]phoronix.com/scan.php?page=news_item&px=Btrfs-Data-Bug-Hole-Read-Comp
[2]btrfs.wiki.kernel.org/index.php/RAID56
[3]theregister.co.uk/2017/08/16/red_hat_banishes_btrfs_from_rhel/

It's Oracle, so I won't touch it. Use Btrfs if you don't want to get sued.

This.
Only recommended for systems with thousands of terabytes, aka datacenters.

True, but btrfs isn't funded by the CIA.

What did they mean by this?

...

There's some background on the decision by Red Hat to discontinue BTRFS. It's as simple as not having enough employees familiar with it.
news.ycombinator.com/item?id=14907771

Does Linux have decent ZFS drivers yet? If not, is anyone seriously working on them? Now that Larry is bringing the guillotine down on Sun, what is the future of the Sun tech like ZFS, and various Solaris derivatives like Illumos & OpenIndiana?


whynotboth.spic
t. eternally butthurt resource fork nostalgic

ZFS is paranoid: it assumes every hard drive will lie, do bad things, and needs to be kept in check. The ZFS devs have actually dealt with hardware vendors whose drives did shit like telling the OS a write was on disk when it had only reached the cache; ZFS finds those kinds of bugs.
Bitrot stopped being a meme after ZFS detected it happening.
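
In practice the paranoia surfaces when you scrub: ZFS re-reads everything, verifies it against the checksums, and repairs from redundancy if a drive lied. A sketch, pool name made up:

  zpool scrub tank
  # the CKSUM column counts blocks a device returned that failed verification
  zpool status -v tank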


What's even worse about some of those btrfs data-loss bugs is that they were discovered by a guy going through the FreeBSD Handbook's ZFS section using the btrfs equivalents.

Yes. To the point where it is now laughable that people claim FreeBSD is the ZFS platform.
zfsonlinux.org
open-zfs.org/wiki/Main_Page

Not really, since ZFS on Linux will always involve out-of-tree modules because of "muh incompatible license" cancer. I build them in on Gentoo, but distros can never ship binary Linux kernels with built-in ZFS drivers. On top of that, ZFS on Linux is still behind on features; for example, they only got the built-in encryption feature this month, if I'm not mistaken.

That landed late last year; it was even in the latest macOS release of OpenZFS. FreeBSD, to my knowledge, has yet to port encryption.

Also Ubuntu ships ZFS binaries, insights.ubuntu.com/2016/02/18/zfs-licensing-and-linux/

Didn't FreeBSD nix XFS support recently?

I think I am incorrect. I got it from the following
blog.mthode.org/posts/2018/Feb/native-zfs-encryption-for-your-rootfs/
He appears to be referencing one of these two commits:
github.com/zfsonlinux/zfs/commit/ae76f45cda0e0857f99e53959cf71c7a5d66bd8b
github.com/zfsonlinux/zfs/commit/7da8f8d81bf1fadc2d9dff10f0435fe601e919fa
This looks like it only applies to zroot disks, which still looks to suffer from the Linux/FreeBSD problem of not encrypting the kernel. Unrelated, but someone really should write a proper bootloader for Linux that can handle this the way the OpenBSD one can.
Interesting, though I assume it has to remain a module for that to apply. I still don't think the SPL and the actual ZFS code will ever be shipped, even as a normally-disabled feature, in the kernel. Which in turn will continue to hurt the adoption of ZFS on Linux.

That literally changes nothing. The irony is that Illumos is the ZFS platform, not FreeBSD.

ZFS is the best RAID solution out there, but it's got steep hardware requirements: a large amount of ECC RAM for L1 read caching, a large solid-state drive for L2 caching, and a high-throughput SSD for write caching. This is in addition to your hard drive array, which should have (at least!) two parity drives for redundancy. It's quite expensive to build a decent ZFS array, but it performs very well and has an impressive feature set.

On the software side, it's pretty easy to set up on Linux if you're comfortable on a command line. I wouldn't recommend FreeNAS (a customized ZFS-centric FreeBSD distro), because the GUI configs are not bidirectional with those on the command line. So if you set up everything in the GUI and need to drop down to the terminal for fine tuning, it's not gonna work so well.

It's reasonable: I've lost data to CRC failures more than once.

Are ZFS and BTRFS the only filesystems that offer stronger hash checks?

I started messing around with ZFS for fun on a 3 TB drive, and was considering just keeping it and expanding it to a mirror vdev, then adding another mirror vdev later for RAID10 (can't imagine using any more space than that; I don't store too much stuff).

But my desktop only has 16 GB of non-ECC RAM, and I don't have separate drives for caching/logs.

So what might be a better setup for a home user? mdadm with XFS? How does XFS handle power loss? (No UPS.)
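
If you do stay on ZFS, the mirror-now, stripe-of-mirrors-later plan is only two commands apart; a sketch with placeholder device names:

  # start with a single mirror vdev
  zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
  # later: add a second mirror vdev, giving the RAID10-style layout
  zpool add tank mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4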

ZFS would definitely not be worth it; way too much overhead. I would grab a cheap, used LSI SAS/SATA card from ebay for hardware RAID. I've no knowledge on how XFS handles power loss, sorry.

Those aren't hardware requirements, friendo.

If you want to reap the actual benefits of using ZFS, they are. But strictly speaking, you are right.

Not him, but you only learn by asking, you know.
You might not need it now, but you will after learning about it.

That expression isn't to be taken literally, just as "If you have to ask, you can't afford it" only means the price is high.

This sounds like the use case for DKMS.

Sounds like a fun way to lock yourself out of your own files.

It's a meme (just like zsh, void and other things).

aka, you know, petabytes and such

It's a GOOD meme.

Yeah I've heard about those hardware requirements. Is this an issue for btrfs as well?

No, ZFS makes sense on desktop & SOHO file storage too, even if just for the snapshotting functionality and quota management. Raw device-mapper snapshots (used by LVM) are lame, and using volumes for space management is clumsy and obtuse.
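
The quota side really is just a property per dataset, which is most of why it beats juggling volumes; a sketch with invented dataset names:

  zfs set quota=200G tank/home/anon        # counts the dataset plus its snapshots and children
  zfs set refquota=150G tank/home/anon     # counts live data only
  zfs get quota,refquota,used tank/home/anon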

Those hardware requirements are a meme. They come from the fact that ZFS was designed to be used on very large storage arrays, so by default it uses something like 90% of your RAM as ARC. You can tune that down to one or two GB, or even disable it, and still reap the other benefits of ZFS. The same requirements would equally apply to btrfs, if anyone were dumb enough to actually trust large amounts of data to it.
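
On ZoL the cap is just a module parameter; a sketch (the 2 GiB figure is arbitrary, pick whatever fits your box):

  # persistent, in /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=2147483648
  # or change it live, without reloading the module
  echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max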

Yes. That is an issue for ANY STORAGE CLAIMING TO BE RELIABLE.
ZFS on a no-ECC, low-memory, single-parity system is shit, because anything will be shit in those conditions.
Also, the ECC requirement is overrated; the scrub-of-death is a mythical situation. You have a larger chance of the power supply failing and burning your drives with a surge than of memory failing in that specific way that could cause it.

ZFS is the systemd of file systems. It's a dumpster fire. Sure, ZFS has some nice features, but it's unstable.

What?
What are you talking about?
Implying you even understand those.
You have absolutely no idea what you're talking about. You deserve to be mocked.

Hi bot-kun!

That would be btrfs you moron.

I've used ZFS for years now, starting on FreeBSD with (3) 2 TB drives, then built a big boy (for you) filer with (12) 4 TB drives. On the new filer I use Oracle Solaris as the OS, because they created and maintain the ZFS file system, so I went straight to the source. The downside is you have to know how to use Solaris, which is a bit different from Linux. You can do a search for ZFS versions and read about the different features that are available.

First off, I would recommend ZFS if you are concerned about your data AND you can afford the additional hardware. By additional hardware, I mean at least 2 hard drives of the same make and model. The reason is that ZFS offers the ability to do RAID in many different forms, but also, with this feature, the added benefit of checksumming your data when it's written, with the ability to rebuild your data if it's found to have been corrupted. You can read more about what causes corruption; it's somewhat rare and probably wouldn't affect you, but it does happen and is real.

With at least 2 drives, you can create a software mirror, where a copy of the data resides on each hard drive. If a file becomes corrupt, or if an entire hard drive fails, you don't lose your important data. If a file is corrupt, it can be rebuilt by running a scan against the drive and repaired from the second copy on the second hard drive. If a drive fails, again, you still have a full single copy on the working drive. This is just an example; there are other very cool drive configurations you can run: RAID 5 (single parity), RAID 6 (double parity), hot spares, or two RAID 5s or RAID 6s with data striped across them. You can even create RAIDs out of USB drives to use as a more reliable backup (you still have the data if one USB drive fails).

And you get the super cool snapshot ability. I configure mine to create a snapshot weekly, and if I lose a file, screw up a file, or just want to go back in time on a file, I can do that easily. There are a lot of cool things you can do with ZFS.

Like I mentioned, I like running ZFS on Solaris, but I don't use that as my main computer; it's a dedicated file server. FreeBSD would be my recommendation if you want to use it on your workstation, and after that, any Linux distro of your choice that makes it available. I know Debian and Arch both have support for it. Oracle maintains a great information library online for using the file system. If you just want to try it, you can install the file system on most Linux distros, then try it using a couple of USB drives, or even using several image files created with dd or fallocate.

One last thing: what's nice about ZFS is that it's not hard drive controller dependent. If you have a hardware RAID array and the drive controller fails, you'd better be able to find another of the exact same model, or you can't read the data on your hard drives. With ZFS, if your computer's mobo goes tits up, just unplug your hard drives, throw them in another system, rediscover them with ZFS, and your data is back online.
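
To make the mirror/repair part concrete, the whole lifecycle is a handful of commands; a sketch, with pool and device names invented:

  zpool create tank mirror /dev/ada0 /dev/ada1   # two-disk software mirror
  zpool scrub tank                               # verify every block against its checksum
  zpool status tank                              # shows repaired data and per-disk error counts
  # if ada1 dies, drop in a new disk and resilver from the surviving copy
  zpool replace tank /dev/ada1 /dev/ada2
  # the weekly snapshot habit the poster describes
  zfs snapshot -r tank@weekly-2018-02-25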

Sounds like some interesting features, mostly related to doing software RAID. My question is: what is the advantage of doing this over using some other filesystem + Linux's Logical Volume Management, which can handle software RAID as well? Keep in mind that LVM can do snapshots and whatnot too.
wiki.archlinux.org/index.php/LVM#RAID
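
For comparison, the LVM side of RAID and snapshots looks roughly like this (volume group, sizes, and names are placeholders):

  # mirrored logical volume on an existing volume group
  lvcreate --type raid1 -m1 -L 500G -n data vg0
  # snapshot with a fixed CoW area; it becomes invalid if that area fills up
  lvcreate --snapshot -L 20G -n data_snap vg0/data
  # ask dm-raid to verify that the mirror halves agree
  lvchange --syncaction check vg0/data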

I don't know if I can fully explain the pros and cons of each, as I'm not as experienced with LVM. I would say, after reading the link, that ZFS would be more robust, although like I said, I haven't used LVM much.

One main benefit, at least to me, is the checksumming of your data when it's written, to validate the integrity of the data. There are additional attributes you can assign or tweak on the volumes you create, such as encryption, compression, and checksumming as mentioned above. You can probably do the same thing with LVM (I know encryption is an option), but with ZFS it's all rolled in.

While LVM says it handles snapshots, how many can it handle? I have mine take a snapshot every week, and I've had them going back 8+ months before I go and clean them up. With some filesystems, having that many snapshots would essentially grind the system to a halt; VMware's built-in snapshots, for example.

Also, with RAID on ZFS it's fairly easy to replace a failed drive and rebuild the array. I assume you can do that with LVM, but I'd have to search it. Does LVM support hot spares? That's another nice insurance policy to have: if one of your drives fails, the rebuild can start right away.

I hope that provides some additional thoughts on the subject; all just my two cents. If you're worried about the protection of your data, give ZFS a look, as I believe it offers a more robust solution in that area than LVM does.
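
On the ZFS side, for reference, a hot spare is a single command; a sketch with invented names (automatic activation on failure is handled by the ZED/FMA agent, depending on platform):

  zpool add tank spare /dev/sdf     # attach a standby disk to the pool
  zpool status tank                 # spares are listed in their own section
  # or swap it in by hand against a failing member
  zpool replace tank /dev/sdb /dev/sdf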

With ZFS being a multilayered solution, you've got many opportunities for tighter integration. What happens in LVM if an error is detected in RAID? Does it just flag a failure, or does it try to automatically correct it?

jodybruchon.com/2017/03/07/zfs-wont-save-you-fancy-filesystem-fanatics-need-to-get-a-clue-about-bit-rot-and-raid-5/

Dropped, everything after that isn't worth reading.
The meatgrinder of hardware firmware is known to be horrible, made by barely functional autists on cocaine.
"Just trust the hard drives" is one of the dumbest things I will hear this week.

The cognitive dissonance is strong with this one.

This faggot also wrote the equivalent of winzip which makes him think he knows what's best for a filesystem designed to run on mixed and untrustworthy media.

Watch out! Looks like we found ourselves another liberalist!

It is the best filesystem. It is also neither perfect nor fully automagic. You need to understand the options and choose them appropriately for your use case, or you could get bad performance. It could use some updating to fix some areas where it is starting to trail behind.

And no, it doesn't fucking require ECC any more than any other filesystem does. Just giving you a heads up: the myth that ZFS must use ECC exists entirely because of a very autistic forum user who browbeats everyone he talks to into agreeing with him. Hell, ZFS even has an unsupported function that error-checks ordinary memory anyway to better catch problems [see the ZFS_DEBUG_MODIFY flag (zfs_flags=0x10)]. ZFS does not need expensive hardware; quite the opposite, it was built so you could survive using shitty hardware.
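
On ZoL that flag should be settable through the module parameter; a sketch (16 decimal is the 0x10 ZFS_DEBUG_MODIFY bit named above, and it costs CPU, so it's a debugging aid rather than something to leave on):

  # enable in-memory checksumming of buffers (ZFS_DEBUG_MODIFY)
  echo 16 > /sys/module/zfs/parameters/zfs_flags
  # or persistently: put "options zfs zfs_flags=16" in /etc/modprobe.d/zfs.conf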

The only major issue with ZFS are the fanboys that very loudly broadcast their opinions while overestimating what zfs is.

No fucking kidding. Hell, the SATA standard was even changed because firmware/hardware is so shit. Over the next few years, the 3rd SATA power pin will start being used to force the drive completely off, so it can be reset without having to pull it (important for servers). Basically, servers which have a shitton of drives were RMA'ing drives as dead when all they really needed was to lose power and be reset. Some Western Digital and HGST drives already have this.

If you have a new drive that won't spin up (but might work somewhere else, like a USB enclosure or a different set of power cables), then you may need to tape the power pins. The first three pins, which are 3.3 V, can all be taped off, as none of them are normally used.

As in pin number three or what?

So what you're saying is that the 3.3V rail might stop a disk from booting up if connected?

This guy is a perfect example where a little bit of knowledge can be a dangerous thing.

The CDDL version of ZFS is maintained by the Illumos project. It's copyleft free software. Oracle can't go back and un-CDDL the old versions of OpenSolaris' software, so you're fine; otherwise the project would have been raped already.

I heard somebody say that it can access files on HDDs without spinning the disks. I don't know how the fuck that would work though, and when I installed OpenIndiana I definitely heard my hard drive spinning, so that guy might have just been retarded.

Yes, power pin 3. You can just tape all three for simplicity.

Yes, it's a hard OFF switch, basically the same as physically disconnecting the drive. Expect to see this popping up in newer drives, especially drives meant for "enterprise". With drives using the standard, some SATA power cables don't give any problems, as they may not even bother supplying 3.3 V, but ones that always supply 3.3 V will keep the drive from powering on.

HGST already has some drives that use the new standard, and it supplies a short Molex-to-SATA power adapter along with them. Obviously that won't help if you are using a backplane, and you'll need to tape. hgst.com/sites/default/files/resources/HGST-Power-Disable-Pin-TB.pdf

It's not a big deal unless you aren't aware of it. Then you'll get a nice new drive and spend the next hour(s) trying to figure out why it won't fucking turn on in some situations. HGST really should have used a jumper or something for this transition period, though.

You're probably thinking of a read cache.

Sounds like ZFS serving files from its cache. I don't know enough about its behavior to know whether it will serve things from the cache without spinning up disks, but if it already has the data in a cache it doesn't technically need to.

How does one take advantage of the new standard? Supply the 3.3V rail with power and put a switch in line?

Exactly. There are no available consumer options as far as I am aware. I know it's being used in enterprise, but I'm not sure how exactly. So DIY is the only way to go right now.

That bit is extra hilarious: he THINKS the hardware is working as expected because he has no other mechanism to tell him otherwise.
He trusts that hardware isn't lying, but it does, all the time.

Vid related:
youtube.com/watch?v=fE2KDzZaxvE

Tell me about XFS. I heard it's good with large files and ideal for HD cartoons.

Just stick with EXT4.

ext4 loses a lot if the power goes out

That's why you get a UPS.

XFS will soon get extent CoW, though. Makes it pretty attractive while waiting 100 years for btrfs to become stable.
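
The reflink/CoW bits are already there behind an experimental mkfs switch if you want to poke at them; a sketch, device and mountpoint made up:

  mkfs.xfs -m reflink=1 /dev/sdb1      # still marked experimental
  mount /dev/sdb1 /mnt/scratch
  # shares extents with the original until either copy is modified
  cp --reflink=always /mnt/scratch/big.iso /mnt/scratch/big-copy.iso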

If you want CoW, just use zfs.

wtf would you do that for? You need to add a windows partition? go back to reddit

>>>/apple/

The fact that you've never needed to do this for reasons not associated with Windows suggests you are new.

You'll have better luck waiting for HAMMER2 to be ported to Linux, even though its latest release is still only "experimental".

And things like writing the correct data to the incorrect location. Which is why ZFS always tries to store the checksum away from the data.

FreeBSD doesn't need it because you could put your zpool on GELI devices since day one.

FreeBSD never had serious / working XFS or ext[234]fs support.

Not cross platform though.

Those are absolutely not requirements. You don't need to have an L2ARC or a separate zlog device.

It's recommended to have a 64-bit system because 32 bits of virtual address space isn't quite enough for the way its cache works, but even that's not a hard requirement.

True. If you're constantly moving your disks between machines with different OSes then yeah it's going to be a problem for you.

It can run on 512 MB of RAM on a 32-bit OS, at least ZoL can. I was an early adopter who found some very strange bugs caused by doing some very odd things.
You might have to flush the caches every few minutes, but it's possible.
A few early ZoL updates destroyed pools.
I remember having panic attacks when shit fucked up during pool updates.

I managed to get it working on a dumpster P3 with 1G RAM and garbage disks.
Would I do it again? Not unless I absolutely have to.

BSDfag here. Set vfs.zfs.arc_max appropriately in /boot/loader.conf and ZFS will be rock solid with whatever amount of RAM you have.
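
Something like this, tuned to taste (the 512M figure is just an example for a small box):

  # /boot/loader.conf
  vfs.zfs.arc_max="512M"
  vfs.zfs.prefetch_disable="1"   # optional: keep prefetch off on low-memory machines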

How does snapshotting work anyways? I never understood the magic behind it.
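
For what it's worth: because the filesystem is copy-on-write, a snapshot is essentially just a note saying "don't free any block that was live at this instant." New writes always go to fresh blocks anyway, so taking the snapshot is nearly instant, and it only starts consuming space as the live data diverges from it. You can watch that happen (pool and dataset names made up):

  zfs snapshot tank/data@before
  zfs list -t snapshot -o name,used,refer       # USED is ~0 right after creation
  # overwrite some existing data in the live dataset
  dd if=/dev/urandom of=/tank/data/file bs=1M count=100 conv=notrunc
  zfs list -t snapshot -o name,used,refer       # USED grows: old blocks are kept for the snapshot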