RAID

I've never done RAID before. I was planning to get started by building a backup server. Speed is not the main goal; what matters is being sure the data on it won't get corrupted. So maybe BTRFS on a spare machine or something, with 3 or 4 large HDDs in RAID 5. Since I've never done it, I thought I'd ask.

What do you use to keep it low-power while running 24/7? (A single-board computer would be ideal, but none have more than one SATA port or even USB 3.0... speeds would be abysmal.)

Also, what happens when one drive fails? I'd like to set it up to light a red LED or something and just swap the disk (I could code that much, I guess).

If you have some RAID server at home, how did you do it? Do you just buy some hardware controller?

Get a Xeon D board and use ZFS with raidz2 across 6 HDDs (one for each SATA port), plus one M.2 SSD for the OS and programs. Also, get 64 GB of RAM.
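Pool creation is a one-liner, something like this (pool name and device names made up; use /dev/disk/by-id paths on a real box):

zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zfs set compression=lz4 tank    # cheap compression, basically free on modern CPUs
zpool status tank               # check the pool layout and health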

Ironic shitposting is still shitposting.

Xeon D boards are low power and have 10Gb Ethernet ports, which is nice for a NAS.
The server can then double as a firewall and personal webserver without breaking a sweat.

Also, I only suggested M.2 because I suggested he use up all 6 SATA ports.
A thumbdrive could be used too, but it wouldn't be useful as a data cache for frequently requested files.

Finally, the 64GB of RAM is because ZFS is notoriously RAM hungry.
I've heard the rule of thumb is ~1GB per TB in your array, so if he has, say, six 8TB drives, that's 48GB of RAM needed, and since RAM comes in powers of two on most boards, that rounds up to 64GB. I suppose 32GB might work.

That's the config I'd make for a small company's backup server. I'm talking about home use.

Pic related is my current backup server. I'm almost fine with the speed; I just want the peace of mind of "my backups will surely still be there, and an LED will blink if I need to swap a bad disk".

If all you want is peace of mind just get another USB HDD and mirror it.
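Something like this in a nightly cron job is all the mirroring it takes (paths made up):

# exact one-way mirror of /data onto the USB drive
rsync -aHAX --delete /data/ /mnt/usb_backup/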

This.

RAID is a fucking massive pain in the arse when you actually have to recover things. ZFS likewise.

Complicated filesystem techniques can be appropriate when there's a highly organized and proven system administration regimen behind them. They're for the sort of places where telling the staff to put 400 man-hours into creating perfectly identical redundant hardware can be done "for free".

If you're someone at home, and looking at putting together a system like that, you don't have any of that organization behind you.

DO NOT, I REPEAT, DO NOT FALL FOR THE ZFS MEME

What is good?

I thought with RAID, a drive would fail, my backup box would be offline for a while as it "recovers" onto a spare disk, and then I could use it again.

Why isn't this the case? Isn't this what RAID is for?

PS I got the point that I should "just mirror it", but how/why is that easier than RAID 5?

How bad/good are devices like pic related? Has anyone tried them?

In particular: how does data recovery work with those things when a drive dies?

I have a QNAP and I like it. QTS is friendly while still being Linux-based (though not as RMS-friendly). If a drive fails, I pull it and replace it, the RAID rebuilds, and the array stays accessible during the rebuild, just slower. I went with RAID 6 for two parity drives: up to two drives can fail at a time without any data loss. I have it on a UPS just in case.

ha ha oh wow. You want to believe you need RAID.

It's a full-blown computer, right? With a closed-source OS, all of your data on it, 24/7 access to the internet, and very convenient to set up all of your "cloud"/"smart home" with, right? The botnet potential of such a thing is scary; I'd love something that only talks USB3 or PCIe...


OK, so you have your Samba share and one of the files is corrupted; you run a disk check and indeed that file is gone. With RAID this wouldn't happen; this is why I want RAID.

Plus if an HDD dies I want to just suffer economically and not in headaches too.

Is it possible to put like half a dozen M.2 NVMe SSDs in RAID 0 to make the computer boot super fast?

ZFS is RAM hungry if you don't turn down the aggressive caching settings. Also, the deduplication option is notorious for exhausting RAM.
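On ZFS-on-Linux you can cap the ARC with a module parameter, e.g. at 4 GiB (value is in bytes; pick your own number):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296

And just leave dedup off; it's off by default, and almost no home dataset actually benefits from it.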

ZFS isn't a meme you idiot.

Storage arrays (RAID, ZFS) are not, by themselves, a backup solution. Backup is where you put data on cold media that's not going to get touched should your network get infected, you experience a power spike, or you accidentally wipe the volume.

That said, a low-power FreeNAS box is pretty much exactly what the OP is looking for. I think the new releases can even act as a host for VirtualBox VMs.

This literally has nothing to do with RAID.

Don't run it 24/7. Enable wake-on-LAN, or get a mobo or machine with IPMI, and write a PowerShell/bash script to log in to the storage server and power it on.
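The wake/suspend round trip can be as dumb as this (the MAC, hostname, and the wakeonlan utility are assumptions; etherwake works too):

#!/bin/bash
# send the magic packet, then wait until the box answers pings
wakeonlan AA:BB:CC:DD:EE:FF
until ping -c1 -W1 nas.local >/dev/null 2>&1; do sleep 2; done
# ...do your backups, then put it back to sleep
ssh nas.local 'sudo systemctl suspend'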

ITT fags who can't afford a real RAID card and have to use fake RAID instead.

You don't know what queue depth is, do you?


They're all shit unless you're willing to spend $1k on an Areca 5066. They all use shitty JMicron controllers.

Mirror them how? Is there anything wrong with RAID1?

NEED:
ECC RAM
RAID card with integrated battery backup and ECC RAM
HGST HDDs

HGST is the most reliable drive manufacturer:
https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/

I use RAID 1 on my home servers.
I have 128GB of RAM per node for daily use.
cp -r /hot_directory /dev/shm/

Working from RAM reduces drive wear and is faster than your shit.
Writing a script that flushes back to disk at regular intervals is convenient.
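E.g. a dumb flusher loop (paths and interval made up):

#!/bin/bash
# sync the ramdisk copy back to spinning rust every 5 minutes
while true; do
    rsync -a --delete /dev/shm/hot_directory/ /hot_directory/
    sleep 300
done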

quiet, the adults are talking
NVMe SSDs are nice if you cannot afford more RAM,
but watch board controller temps if you're running a GPU cluster alongside NVMe drives.

RAID 5 best practice is to replace drives frequently, since the failure rate during a rebuild is high.
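With Linux md, for example, the swap itself is mechanical (device names made up):

mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
# physically swap the disk, then:
mdadm /dev/md0 --add /dev/sdc
cat /proc/mdstat    # watch the rebuild and pray the other drives hold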

If you have ever tried to recover RAID 5, then you can also comment.
All other neckbeards should learn something from LinusTechTips on data recovery first.

OP
this guy knows his shit as well

How much more reliable is that than software RAID1? Does RAID1 ever fuck up?

Software "raid1" is the same as
cp -r /file /sda1
cp -r /file /sda2

This is only useful for devices running HDDs in adverse conditions.

Without ECC RAM, a Bluetooth keyboard/mouse/dildo can corrupt ALL copies.

RAID cards can fail,
but are designed to isolate the failure to only one drive
and throw errors immediately, warning you of the failure

MoBo SATA controllers lack those features,
and many disregard/disobey cache write-through,
so you do not know what was really written to disk.

MoBo SATA controller:
/tmp -> SATA -> /sda
/tmp -> SATA -> /sdb

If you do not flush the cache (reboot) after the write, what you really have is:
symlink /tmp /sda
symlink /tmp /sdb

RAID card, on write:
/tmp -> /dev/RAID/tmp -> /sda /sdb
if /dev/RAID/tmp != /tmp or /sda != /tmp or /sdb != /tmp
then
    echo "SHIT FUCK!!!"    # error thrown immediately
fi

RAID card, on read:
/sda -> /dev/RAID/tmp
/sdb -> /dev/RAID/tmp
if /sda != /sdb
then
    echo "SHIT FUCK!!!"    # mirror mismatch caught
else
    cp /sda /tmp
fi

Filtered.

You can get an ITX board with a low-power Atom or Jaguar CPU. Both should be fast enough for a home server. If you feel adventurous you could also look for an SBC with PCIe (some expose WiFi or 3G modems on a mini-PCIe slot; adapters to full-size PCIe exist) and connect a SATA controller to that. CPU performance could be a bottleneck though, especially if you want RAID 5 or 6.

Whatever you want. You can have mail sent to you or even have an arbitrary script run. It's yours to configure.
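With mdadm, for example, it's two lines of config (the LED-blinking script is hypothetical):

# /etc/mdadm/mdadm.conf
MAILADDR you@example.com
PROGRAM /usr/local/bin/blink_red_led.sh

mdadm --monitor running as a daemon will then mail you and run that program on every Fail event.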

Fuck hardware RAID. Nothing beats the flexibility of a software solution unless your hardware solution costs massive $$$$. Make sure your CPU has fast vector units for the parity calculations though, otherwise CPU load can get ugly (modern x86 CPUs are all fine, but it's hit-and-miss on low-power ARM SoCs).
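For the record, an md RAID 5 comes together in a handful of commands (device names made up):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat    # watch the initial sync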

Yes, and for damn near all uses it's retarded. With even a single pleb-tier TLC SATA SSD, storage is no longer the bottleneck when starting the system or apps.

Unless you're a professional who needs to process uncompressed 4K video in real time or something, don't bother.

ZFS is fucking overkill for a home server in the first place.


Who cares, a modern box can pull negligible power when idle. And those torrents ain't gonna ever be completely idle.

And that's what MPIO is for.

That's what enterprise-class SSDs with built-in supercaps are for.


if it is a shitbox

if by
you mean
then yes.

BUT Linus can teach you what NOT to do


user gets it

used enterprise > flagship consumer

Read the OP. A cheap shitbox is exactly what he wants to build.

Indeed, for a backup server I just want the data to be 100% safe. I don't see why you need to spend money on ECC memory when btrfs checksums your data on each write (that's what it's doing, right?)
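From what I've read, the setup I'm picturing is roughly this (devices made up; raid1 profile because btrfs raid5/6 still has known issues):

mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/backup
# a scrub re-reads everything and repairs bad checksums from the good copy
btrfs scrub start /mnt/backup
btrfs scrub status /mnt/backup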

So with RAID 6 I don't see how it could go wrong, except:
1) 3 drives die at once
2) house fire or other massive SHTF scenario
The first case is extremely unlikely, and in the second I have bigger problems than data loss.


How come you can feed two datacenter-type servers with that?

Clarification: I mean a simple storage server.