Is it good yet?



oy vey


It works, and it's decent enough in my limited experience, but it doesn't just werk. It's recommended to manually add the "nossd" option if you use an SSD. That should tell you a lot.

implying oracle isn't more like that

Your advice is outdated.
>Currently, using the ssd mount option has a negative impact on usability and lifetime of modern SSDs. This will be fixed in 4.14, see this commit for more information:
This is only the first paragraph of the commit message. You should read the whole thing, though. It goes into great detail about what -o ssd actually does.

How did they make such a disaster out of this fs? It's been in development forever, it's a layering violation that duplicates a ton of work between itself and LVM, it has a history of data loss bugs(!), it's still struggling to support SSDs almost a decade after datacenters converted to them, and its transparent compression is incredibly bad compared to squashfs. It's the Star Citizen of filesystems: all promise and no delivery.

Thanks. I'll adjust my fstab.
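For anyone else doing the same, a sketch of what the entry could look like (UUID and mount point are placeholders, check btrfs(5) for the options your kernel supports):

```
# /etc/fstab -- hypothetical btrfs entry; nossd overrides the SSD autodetection
UUID=<your-uuid>  /home  btrfs  defaults,noatime,nossd  0  0
```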

How much worse is it? Can it be justified by squashfs being write-once?

It's awful compared to it. When I tried it on our firmware, the btrfs image came out double the size. squashfs can compress better in theory since it doesn't have to worry about a lot of things, but for a clean fs with no activity on it I didn't expect the gap to be so big. I was also using pretty light compression on squashfs.

Speaking of filesystems: is there any reason to move off ext2, aside from the larger file size limit and the larger overall partition size? I've used ext4, but I disable journaling so secure delete works (yeah, I know, encrypt the whole system etc., but I want to be able to actually overwrite files if I want).

What's the advantage of ext4 over ext2, apart from the limit increases, when journaling is disabled?

Did you have a lot of small files that compress well? I think btrfs compression can't span multiple files, because it works per extent, while squashfs doesn't have that limitation.
I'm not an expert, but I think that large of a difference might be explained purely by the different use cases.

It's a base Debian plus bits and pieces, so about 600MiB of mundane.

The last time I tested BTRFS it killed itself after two weeks. That was a year ago. This is just sad.

ZFS/BTRFS compression isn't really meant to compete with squashfs. As far as I know they just lz4/lzo/whatever compress files transparently.

I did a test with my 766MB Debian nspawn container.
zip compresses files individually, like btrfs.
Compressed tar (both .tar.gz and .tar.zip) spans files, like squashfs.
766M  stable.tar
358M  stable.tar.gz
358M  stable.tar.zip
582M  (the tree zipped directly, no tar)
Zipping the tree directly without tarring it first makes it almost twice as large. It otherwise uses the exact same compression method each time.
And that's without taking into account that short compression times are much more important for btrfs than for squashfs.
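The effect behind those numbers is easy to reproduce in miniature with Python's tarfile/zipfile modules (toy data here, not the actual Debian container; the identical 8 KiB blobs just exaggerate cross-file redundancy):

```python
import io, os, tarfile, zipfile

base = os.urandom(8192)                            # one incompressible 8 KiB blob
tree = {f"file{i}.bin": base for i in range(30)}   # 30 identical copies of it

# .tar.gz: tar merges everything into one stream, then gzip compresses
# across file boundaries -- the 29 repeats become cheap back-references
tgz = io.BytesIO()
with tarfile.open(fileobj=tgz, mode="w:gz") as t:
    for name, data in tree.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        t.addfile(info, io.BytesIO(data))

# .zip: every member is deflated independently, so each copy of the
# random blob stays ~8 KiB -- same limitation as per-extent compression
zf = io.BytesIO()
with zipfile.ZipFile(zf, "w", zipfile.ZIP_DEFLATED) as z:
    for name, data in tree.items():
        z.writestr(name, data)

print(f"tar.gz: {len(tgz.getvalue())} bytes, zip: {len(zf.getvalue())} bytes")
```

The .tar.gz comes out a fraction of the .zip's size, because gzip's 32 KiB window can "see" the previous copies while zip starts fresh for every member.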


How many kWh of electricity would we get by throwing you live into a furnace? That's your contribution to society. Keep your mouth shut.

I think he was pointing at the contributors as a comment on the quality. The filesystem is adequate, but as pointed out, it doesn't always "just werk" like EXT3/4 or VFAT would.

No. If you have the need to use it, just use a BSD with ZFS.

ext2 will have generally better support on other *NIX compared with ext4 if you have to work with different operating systems.

Found the Jew.

Can you shrink it offline?
Can you grow it offline?
Are there any major bugs that cause permanent data corruption?
It's 66% ready.

Nearly every other filesystem is better than BTRFS as of today.
XFS is shit because you can't resize it offline.


I am using it and think it's adequate, not great. Good features are the COW-related things: reflink copies, snapshotting with send|receive, subvolumes. Compared to ZFS, you can boot Linux from it, and it supports balancing (lets you do ad-hoc RAID-1 with whatever drives you have on hand).
Downside is it could blow up in mysterious ways. The performance also sucks.

how bout no

I'm so glad they focused on duplicating features found in LVM rather than waste time on these minor quibbles.

Related question:
What is the chance of somebody writing a decent Linux ZFS driver?
Without one, I'm honestly not sure what the future of ZFS is supposed to be.

Can it now pass the FreeBSD Handbook's ZFS testing section? IIRC just trying that test suite could crash the fs.

Illumos is far from abandonware; zfsonlinux and Illumos are still progressing, and Canonical is helping, if that makes you feel better.

Probably OpenZFS will replace it. The license autism is pointless, as even Oracle mixes GPL code with CDDL.

Yeah, it's a great idea to position yourself so that Oracle can mess with you. Oracle can mix CDDL code because they are the copyright holders, you fucking retard.

Red Hat has Oracle by the balls through their Linux base. Also, OpenZFS has no Oracle code in it, and if APIs are copyrightable, the Illumos devs can sue Oracle too. It's a bit too risky to fuck with the giants behind Linux.
Feel free to keep pretending you're smarter than the lawyers of RH and Canonical.

So what you're saying is that Linux' shitty implementation of ntfs is safer to use than btrfs?

What a fucking retard you are.
No they don't? It's GPL and as long as CentOS can exist, Oracle Linux can too.
The only thing RedHat can do (and already does) is to make it as inconvenient as possible to use (e.g. huge monolithic patches).
This isn't about Oracle code, it's about license incompatibility you idiot.
They can't. Oracle owns the "mainline" code by inheriting it from Sun. Illumos/OpenZFS is a continuation of the open-source release by Sun. From all standpoints except historical, those are two different code bases.
Oracle is:
1/ a giant
2/ known for suing everyone they possibly can
Feel free to continue being retarded. Law, and especially the English tradition, has many mutually exclusive interpretations that are, if needed, resolved in court. Unless such resolution occurs, the compatibility of OpenZFS with Linux is just an opinion of RedHat and Canonical.

What's the point of Linux supporting like 15 file systems? What was wrong with ext{2,3,4}?

I've been using it the past month and so far it's as reliable as ext4 was. backups and the removal of needing lvm is nice too.

Various reasons.
FAT and NTFS for interoperability with DOS and Windows.
HFS for interoperability with OS X.
XFS was ported over from IRIX by SGI. I don't think anyone from outside specifically asked for it, but it's nice to have (after it has been properly stabilized which took more than a decade).
JFS was ported by IBM from OS/2 (itself a port of JFS2 from AIX). Same as XFS, no one really asked for it, but it's stable and has low CPU usage.
ISO9660 and UDF for optical disks (UDF also works on flash drives and HDDs).
Various cluster filesystems to be used with clusters.
Miscellaneous filesystems also for interoperability (MINIXFS, some version of UFS, SysV filesystem, etc.). I wonder how many of them still remain in the kernel.
Reiser3 used to be pretty advanced in its time, although it had its share of problems. Used to be the biggest ext2/3 competitor, but Reiser4 was never merged to the kernel after the lead developer killed his wife, his company disintegrated, and no one picked up the work that still needed to be done.
BTRFS as an improved substitute to ZFS which has license problems.
F2FS by Samsung for better handling of flash-based storage.
NILFS because someone still believes in log-structured filesystems.

I replaced all my Ext4 partitioned disks with Btrfs.

Due to dynamic inode allocation you get more room back; on a 3TB disk with a lot of large files it can amount to as much as 200GB.



Pro tip:
If you have to send a similar archive to a windiot who can't handle any format his system doesn't support out-of-the-box, zip it twice: first with no compression, to merge the files into a single stream (like tar), then zip the resulting .zip with full compression.
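The same trick, sketched with Python's zipfile module (function names are mine, not from any library):

```python
import io, zipfile

def zip_twice(files: dict) -> bytes:
    """Store-then-deflate: compression ends up spanning files, like tar.gz."""
    # Pass 1: no compression, just merge everything into one stream (like tar)
    inner = io.BytesIO()
    with zipfile.ZipFile(inner, "w", zipfile.ZIP_STORED) as z:
        for name, data in files.items():
            z.writestr(name, data)
    # Pass 2: deflate the merged stream as a single member
    outer = io.BytesIO()
    with zipfile.ZipFile(outer, "w", zipfile.ZIP_DEFLATED, compresslevel=9) as z:
        z.writestr("inner.zip", inner.getvalue())
    return outer.getvalue()

def zip_once(files: dict) -> bytes:
    """Plain zip: every member deflated on its own."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED, compresslevel=9) as z:
        for name, data in files.items():
            z.writestr(name, data)
    return buf.getvalue()
```

With lots of similar files, zip_twice comes out much smaller than zip_once; the recipient just has to extract twice.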

Because TRIM can have negative effects on many HW configs and you want it to be a separate configuration knob.

The discard mount option is too aggressive, since it sends TRIM commands for every delete. Many older or shittier SSDs would lag or corrupt data. Running an fstrim cron job instead is recommended.
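On systemd distros, util-linux already ships a weekly fstrim.timer you can just enable; failing that, a classic crontab entry does the job (schedule and path here are illustrative):

```
# /etc/crontab -- trim all mounted filesystems every Sunday at 03:00
0 3 * * 0  root  /sbin/fstrim -av
```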

Basically only RAID5/RAID6 is still a problem due to the write hole, but RAID5/RAID6 is bad in general on modern drives due to the insane parity rebuild times on multi-TB disks. I use 4x6TB in Btrfs RAID10 for my home server and it works great.

You're not even producing opinions at this point. You're pattern-matching on the same level as a fucking vb6 script.