Data Loss & Bitrot

I am fatally retarded.

Can someone here give me the straight dope on bitrot, and non-traumatic data loss in general?

I have over 2 TB of Chinese cartoons, transferred onto a brand-spankin' new drive. After some research, it was the most stable drive I could find.

All of the data is just sitting there. I don't plan to move or rewrite it until the drive begins to fail. There are some currently airing shows and ongoing manga that I follow, so new data is intermittently written to the disk.

But what about loss? Brace yourselves for my retarded questions:

- Does any data loss whatsoever occur when simply copying files from one drive to another?

- Is bitrot a real problem? Does it occur significantly within the span of an average human lifetime? If so, what can I do to avoid it happening with a veritable mountain of data on a single drive?

no unless the transfer is interrupted somehow, obviously.

Your hard drive is more likely to break before you can concern yourself with this stupid tinfoil shit.

yes. that is why you mirror your shit on multiple disks.


you are either trolling or are retarded

- Does any data loss whatsoever occur when simply copying files from one drive to another?
Usually no, but unless you use a program that calculates the hash of both copies of a moved file before deleting the original, you can't be sure. There are also all sorts of things that can theoretically happen, but rarely ever do.
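As a rough illustration of that hash-and-compare idea (my sketch, not from the thread; the paths and the copy step are placeholders):

```python
# Minimal sketch: copy a file, then refuse to trust the transfer unless the
# source and destination hash identically. Paths are made-up examples.
import hashlib
import shutil

def sha256_of(path, chunk_size=1024 * 1024):
    """Hash the file in chunks so multi-gigabyte videos don't eat all the RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

src = "/mnt/old_drive/anime/episode01.mkv"   # placeholder source
dst = "/mnt/new_drive/anime/episode01.mkv"   # placeholder destination

shutil.copy2(src, dst)                       # the actual copy
if sha256_of(src) != sha256_of(dst):
    raise RuntimeError("copy does not match source; do NOT delete the original")
print("hashes match; safe to delete the source if this was a move")
```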

- Is bitrot a real problem? Does it occur significantly within the span of an average human lifetime? If so, what can I do to avoid it happening with a veritable mountain of data on a single drive?
Cosmic rays and variances in the "strength" of the written bit are real. Hard drives use magnetism and witchcraft; SSDs basically use pockets of electrons which like to tunnel out over time, plus more witchcraft. The rate at which hard drives silently correct errors for you goes up with their density and usage. Correctable errors happen often. Last I checked, you may or may not encounter a real uncorrectable error (one not caused by outright drive failure) in your lifetime with normal home usage. Data centers with petabytes of drives do have to deal with this as a fact of life. Memory corruption is a thing as well, which means that if you run a server filesystem like ZFS because "it's safe", you'll also have a bunch of data perpetually sitting in RAM waiting to be corrupted, which is why ECC RAM and a compatible motherboard are recommended if you go that route.


Also, if you plan on keeping things you like on a single drive, you might as well toss it in the garbage right now to save yourself the effort of saving things.
If you want to protect your data, you must have multiple drives, with (preferably incremental) backups stored in a place where fire/lightning-strikes/flood/thieves won't get to them. There is no way around this reality.
You have a small probability of data errors occurring in a normal lifetime. You have a 100% probability of your hard drive dying in a normal lifetime. Don't be the guy who posts those "my only hard drive died, help me rebuild my memes!" threads.

The easiest way to do this is to build a budget NAS: have two drives in RAID 1 that you archive things to (and back up your computer to) on your local network. Then, once a month or so, you back the RAID array up to another disk and hide that disk away again. If you like extra error checking, you can look into SnapRAID as an ad-hoc backup and a way to periodically check and correct your data for errors; a rough sketch of that routine is below.
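For the "periodically check and correct" part, here is what a SnapRAID maintenance pass looks like as a script (my sketch, not from the thread; it assumes snapraid is installed and /etc/snapraid.conf already lists your data and parity disks):

```python
# Sketch: update SnapRAID parity after new files land, then scrub to verify
# existing data against that parity. Assumes a working snapraid setup.
import subprocess

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["snapraid", "sync"])    # bring parity up to date with new/changed files
run(["snapraid", "scrub"])   # re-read part of the array and verify it against parity
```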


Have an unrelated picture from 2006.

Mechanical HDDs die or eat your data frequently. Powered on or powered off, the clock is ticking. I'd not save anything I wanted to keep on a HDD without RAID. Good SSDs (basically only Intel and Samsung's SATA drives) on the other hand might outlive you. We've still got a bunch of servers on X25-Ms from nearly a decade ago because those fuckers are immortal.

I've never heard of "bitrot" breaking or degrading literally anything at all ever.

why would you openly admit that you are retarded?

To warn others. So they won't make the same mistakes you made.

Nice ad hominem, too bad about your argument though.

i didn't make an argument

It's called a URE (unrecoverable read error). Most of the time it can't be corrected; a small amount of the time it can.
Manufacturers' specs usually put it at around one error per 10^14 bits read, which works out to roughly one every 12 TB or so of reading.
There's filesystems to lessen the chance, like ZFS.
Then of course there's the drive straight up dying.
If you're paranoid about integrity, use ECC memory and do remote backups. I use borg backup. Fork of attic.
borgbackup.readthedocs.io/en/stable/

You have some choices:
a) The easiest and most transparent "in-use" option is ZFS. No, not ButterFace, ZFS. Ultimately, for a Wangblows/Crapbuntu desktop setup, I'd suggest building a secondary headless system that sits somewhere wired directly to the router, filling it with a shitton of HDDs and SSDs, and using SmartOS or OpenIndiana as the operating system (you can't beat ZFS on its native kernel). Beware the hardware you choose to run Illumos kernel-based solutions on, as Illumos is a fair bit pickier than Loonix.

b) is "less easy" in that you'd need to do something each time you store or modify a file, so it's more appropriate for archiving. Basically, you'd grab MultiPAR and use it to create .PAR parity files. If you set a base directory, it recursively processes all files in all folders under it. This won't prevent bitrot, but it will give you a recovery mechanism for when it does occur.
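A rough sketch of the same PAR2 idea driven from a script instead of the MultiPAR GUI (mine, not from the thread; it assumes the par2cmdline tool is on PATH and the directory is a placeholder):

```python
# Sketch: create ~10% recovery data for a folder, then verify it later.
# `par2 repair` would rebuild damaged files from that recovery data.
import subprocess
from pathlib import Path

archive_dir = Path("/mnt/new_drive/manga/some_series")   # placeholder
files = sorted(str(p) for p in archive_dir.glob("*.cbz"))

subprocess.run(["par2", "create", "-r10",
                str(archive_dir / "recovery.par2"), *files], check=True)

subprocess.run(["par2", "verify",
                str(archive_dir / "recovery.par2")], check=True)
```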

exactly.

so?????????

For chink cartoons it doesn't matter, you have the torrent files or whatever and can use them for verification anyway.

sha1 is shattered though

- Keep multiple backups in different safe locations
- Make file checksums and check them periodically
- sha256 recommended

/thread

shamoanjac pls go.

...

For collision, yes, not for preimage. Also, this is about accidental corruption, not intentional.

bitrot is far less of a problem than this

Neuron-rot is far more likely than bit-rot in this case. Make multiple checksummed copies of your brain as a precaution.

lol. sha1 was deprecated decades ago. only retards who know nothing about crypto still use sha1. unfortunately linus is such a retard

I can't believe people are still perpetuating shit like this.

I can't believe people are still perpetuating shit like this.

t. the actual retard here

Create backups/use RAID 6 and use ZFS.
Alternatively instead of ZFS, use .torrent files to check torrented files and generate checksums for non-torrented files.

I have 21 TB of anime, manga, and visual novels. OP is a casual.

Isn't that the guy who was the mod on the Holla Forums Minetest server that got absolutely buttblasted when I tipped everyone's houses over?

What are the recommended brands for storage devices? Or just recommended types of storage devices in general? For long-term backup, that is. (´・ω・`)

I capture gook cartoons off of laserdisc, shit that hasn't been reissued on dvd. Download and preserve, the Japs don't give a fuck. Otakubell.com for the stuff I've done so far.

WD Reds for NAS

Thanks for the replies, everybody.

So, from what I've gathered:

#1 - Back up EVERYTHING (I already do this for my most important files, not large media)

#2 - Preferably mirror animango collection on at least another drive in a RAID setup.

Oh, there's something pretty important I neglected to mention: way more often than not, the drive is powered down. It's not connected to my tower at all at the moment. When torrenting stuff, yeah; otherwise, no.

An old laptop HDD lasted for around 45,000 power-on hours before crapping out. I only power on the collection's drive when absolutely necessary. This does significantly help with longevity, right?

Is what was said earlier about drives "eating data" true? If I left the drive alone and never powered it on for, say, a year, would there be plenty of errors or lost data?

#3 - Look into ZFS in particular, possibly ECC RAM as well. I'm not at all interested in NAS. I'm the only one who will ever access the data willingly, and if I want fault protection I'd go straight for RAID.

I'm not sure why mechanical drives start losing sectors, but it tends to just happen over time, even to sectors you don't write to. I think it's almost always head/motor wear, followed by the click of death. If you go mechanical, RAID is mandatory (not RAID 0) due to the high failure rate. You also need monitoring, like SMART set to email you on errors, and you need to be diligent about replacing drives (everyone drags their feet, and then guess what happens).
I'd recommend not trying to build your own array. Ask yourself if you really need your 2DQTs local or if you could stick them on one of Amazon's storage services (S3, Glacier) and stream. It's less headache, more reliable, and can be cheaper than a RAID array if you properly figure TCO. Note that you're handing your data to Amazon, so you risk having it mined, but if your account stays low-key it should be fine. Also be extremely careful with how much you open that up, as they have infinite scalability and you can get a breathtakingly massive monthly bill if you aren't diligent (a mistake at work cost us $50k). If you aren't terrified of it, you aren't in the right mindset and shouldn't use it.

Yeah, I've already used CrystalDiskInfo for a while now: once it flagged said old laptop drive as "Bad", I decommissioned it and used my backup machine till it was replaced.

I think in the endgame I'd still be really determined to set up my own array. As you said, Amazon is a goliath, and I'd hardly trust them to store my CC info, let alone all this. Besides, I already have a Windows machine for vidya, and lord knows what they do with user data. They can do anything and everything.

SENSEI!

Bitrot is real, but so is PSU and ESD related damage. Do not expect your shitty Corsair modular power octopus to save your arse. Get a decent unit made by Super Flower or Seasonic, and use your anti-static clips.
Earlier posters have already posted the best info for protecting files (i.e. RAID and ZFS).
also, video files should be pretty safe from complete loss due to bitrot, as missing blocks are skipped/approximated by most ffmpeg based players.
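If you want to actually spot damaged video files instead of waiting to notice glitches during playback, a common trick is to let ffmpeg decode the whole file and report anything it logs at error level. A rough sketch (mine, not from the thread; assumes ffmpeg is installed, and the path is a placeholder):

```python
# Sketch: decode a file to the null muxer and collect decoder errors.
import subprocess

def decode_errors(path):
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-i", path, "-f", "null", "-"],
        capture_output=True, text=True)
    return result.stderr.strip()

errors = decode_errors("/mnt/new_drive/anime/episode01.mkv")
print(errors if errors else "no decode errors reported")
```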


I have heard wonders about Blu-ray discs and tape drives.

Also, I have a question. With flash bitrot, especially NAND, I know that filesystem integrity degrades relatively quickly. However, I'm not sure about the effect of time on NAND integrity. Does time simply force the transistors in the NAND flash to lose their charge and flip bits, or do the cells physically degrade as well, preventing reuse? I.e., does time kill the SSD's data or the SSD's hardware?
(AFAIK, NAND needs to be rewritten every 4-5 years or the cells lose charge and you end up with lost files.)

Opinion discarded.
See the advice above. Everything else is denial.

Rarely.


On modern HDDs/SSDs it isn't, as they're going to crap themselves before significant data loss due to bitrot occurs. Magnetic tape, on the other hand, is very vulnerable to it, but it's rarely used as a backup medium by the average joe.

Pic related.
Do that with software that makes it easy for you to hash entire directories and check them later. You keep a tiny file that will tell you if any file is corrupted.
Use this (you're surely a GUI normie using Windows)
code.kliu.org/hashcheck/

Other than that learn what RAID is, do backups, and don't worry that much about an autistic collection of chinese cartoons.
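The hash-the-whole-collection-and-check-later idea from the post above, as a rough script sketch (mine, not from the thread; the root path and manifest name are placeholders):

```python
# Sketch: build a SHA-256 manifest of every file under a base directory,
# then re-run with "check" to find files that changed, rotted, or vanished.
import hashlib, json, sys
from pathlib import Path

BASE = Path("/mnt/new_drive")            # placeholder collection root
MANIFEST = BASE / "checksums.json"

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan():
    return {str(p.relative_to(BASE)): sha256_of(p)
            for p in sorted(BASE.rglob("*"))
            if p.is_file() and p != MANIFEST}

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "check":
        old = json.loads(MANIFEST.read_text())
        new = scan()
        for name, digest in old.items():
            if new.get(name) != digest:
                print("CHANGED OR MISSING:", name)
    else:
        MANIFEST.write_text(json.dumps(scan(), indent=2))
        print("manifest written to", MANIFEST)
```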

Never been an issue for me on my testbed WD 1TB HDD I bought in 2005.

Still powers on, no data loss, not a single bad sector, nothing. Beautiful machine, Red Rider.

By the way, anybody who uses ZFS, here's a bit of advice. Don't fall for the RAIDZ2 meme for medium-sized servers. Stripes of RAIDZ (basically RAID 50) are more than enough to keep your data safe from drive failures while giving you much higher IOPS. I have two 5-disk RAIDZs with 8TB drives, and under less-than-ideal assumptions (145k-hour MTBF and 100 MB/s rebuild speed) I have a 1% chance of data loss after 7 years. That number assumes my pool is 100% full, but since ZFS rebuilds only used data instead of the entire disk like traditional RAID does, my real chance of failure is lower. I will end up upgrading my disks way before data loss is ever an issue.

This is a very good tool to estimate how reliable your setup should be.
servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/
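For the curious, the simple MTTDL model that calculators like this use boils down to a few lines. A back-of-envelope sketch (mine, not from the thread) using roughly the numbers from the post above; it's illustrative only, and the calculator also factors in URE risk during rebuild, so the results won't match exactly:

```python
# Simple MTTDL model for a single-parity group: data is lost if a second
# drive dies while the first failure is still rebuilding.
import math

mtbf_hours = 145_000
disks_per_group = 5
groups = 2
drive_bytes = 8e12
rebuild_hours = drive_bytes / 100e6 / 3600       # ~22 h at 100 MB/s, whole disk
years = 7
hours = years * 365 * 24

mttdl = mtbf_hours ** 2 / (disks_per_group * (disks_per_group - 1) * rebuild_hours)

p_group = 1 - math.exp(-hours / mttdl)           # loss probability for one RAIDZ group
p_pool = 1 - (1 - p_group) ** groups             # either group failing loses the pool
print(f"P(data loss in {years} years) ~ {p_pool:.2%}")
```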

What's the best storage medium for long-term, off-site storage?
M-discs? magnetic tape?

also, any reason not to use ZFS?
I've been reading up on some of its features, like deduplication, and i'm very interested.
is ZFS suitable for single drives (not in RAID)?

Don't buy into the autistic LTO tape meme. For small amounts of data high quality DVDs or BDs with DVDisaster for ECC is your best bet. Otherwise redundant hard drives are the most practical way to archive data. Preferably you would also make ECC for them using ZFS or Snapraid to put your paranoid mind at ease.
Reminder that archiving does not mean making a time capsule. Yearly checkups are advised.


I see you haven't done your homework on ZFS. Read more into it. Deduplication is not meant for home use, as it eats all your RAM. Single drives are fine as long as they're each in their own pool.
Reminder that ECC isn't required any more than it is for any other filesystem, and that you don't need 1 GB of RAM per TB of storage. Don't take the FreeNAS forum's word as gospel.

ZFS is a meme. It's a goofy filesystem that only works well for certain uses and all its 'benefits' are either layering violations that are handled in better ways in Linux or are no longer relevant with modern storage (SSDs).

Good threads about it here

lainchan.org/res/2173.html

endchan.xyz/tech/res/10460.html

lainchan is also full of reddit spacing
go back

When did proper formatting become Reddit? another retarded meme.

keep telling you that, faggot

By your own idiocy you committed Reddit spacing, kys cuckchanner.

...

...

Kek get rekt dumb dork.

all me

Physical longevity, yes. So, bitrot is entropy: data is stored using some physical configuration which slowly undoes itself over time. For spinning platter disks, reading and rewriting the data is a defense against bitrot, because the magnetic field that constitutes the bits gets refreshed by the write. I don't know about SSDs, though.

I don't actually know about the timescales of bitrot. While I realize that, in a modern HDD, the field of polarized magnetic media that constitutes a bit is measured in atoms, I don't know the rate at which the atoms lose their polarization. My understanding is that the drive will probably experience physical failure first.

At the end of the day, duplication is your only protection against entropy and the laws of physics.

bitrot is a metaphor for code that fails because it wasn't updated when circumstances changed. You targeted version # and in version #+7 they changed the API slightly and now all of your shit is on fire.

data doesn't have bitrot.

You want copies of your important data because the *medium* holding your data may break down over time. Optical media gets scratched. Spinning disks fail. SSDs... if you're not writing to them constantly, I'm not sure how bad they get.

If an SSD is not powered up for a certain period of time, the electrons can leak out of the cells, causing irreparable data corruption.

Why not use ZFS along with multiple backups on different disks? At worst, you're not getting any more protection. At best, you're getting added protection from any potential bitrot or other problems associated with disks and file transfer. Another thing: if not ZFS, then what?

Replace HD once every 6 months.

Magnetic data on traditional hard drives is fairly stable and could last anywhere from 10-30 years without corruption, depending on the drive. The biggest non-mechanical corruption problem for high-density platters is cosmic rays. As long as the corruption is limited enough, the firmware reading and writing the data will detect the problem and fix it. I'm not sure if disk drives refresh the data, though.

SSDs use what are essentially pockets full of electrons. Electrons do this thing called tunneling, where they decide to just fucking pass through whatever barrier is present. The thinner the barrier, the higher the probability of this occurring. This leads to a loss of electrons from that pocket over time, and eventually it gets read as a 0 rather than a 1.
I wouldn't leave any SSDs unpowered for over a year if you value what's on them. As long as a drive has some power, the firmware will actively refresh the data.
Data retention on SSDs is much higher when they are new than when they are heavily used.

bitrot does happen. I have images that are 15 to 20 years old that suddenly show small visual errors. That's even with backing them up often. When a bit flips and ruins half a JPEG, that change gets synced to your RAID 1 mirror, so RAID is not backup at all. Like someone said above, take incremental backups, so you have a version of each change to every file. If bit corruption happens, you can just revert. The Time Machine feature on macOS works well for automating this.

Copy drive, transfer to new drive. Never use HDs over 250GB. Repeat process once every 8-14 months.

Feels good man. Almost time to make new backups too.

t. data hoarder

Contrary to what mega man toddlers will believe, arguments are absolutely worthless. Debating endlessly on the internet is a waste of time and calling someone out for not sautéing, but deep frying his chicken for breakfast is pedantic. That first part was an ad hominem and the last part a metaphor.

If you start using "orthogonal" you will have reached the pinnacle of internet discussion wastefulness matching only SlackerNews. That was a minor word transformation to express how incontinent HN is. I'm sure there is a word for it, but I haven't felt inclined to purchase the entire Sadlier Vocabulary Workbook collection and work through them.

To be so devoid of unoriginal thought or self control to move with your life that you feel the need to point out logical fallacies shows that grammer nazis never die, but evolve into even larger wastes of bits. You are a mental infant and you should take some time off from your intellectual crusade to realize the reason you don't have a gf is because your personality is like a roll of one ply toilet paper. You're abrasive and when people try to work with you, all you do is break down and smear shit on them.

Saged.

lol

...

Tape is superior

Then ECC is also a meme, and Google should save money on their critical database servers by using regular memory, because I've never run a single integrity check on my Chinese cartoons stored over 6 years ago on my 250 GB Seagate drive.

Thanks again for the replies, everybody. I didn't expect my chit thread to live this long.

I plan on buying a number of large drives for the upcoming holidays, in order to consolidate my media collections and provide generous free space for future growth. When all is said and done, I plan to own ~7TBs of data, +/- 500GB.

Needless to say, I won't be able to mirror all of that. The largest single storage size currently available at retail is 12TBs, and those drives go for about $450.

Reportedly, Seagate is slated to raise that maximum capacity in 2018 or 2019 by releasing larger individual drives. I plan on waiting until then to buy an 'ultimate' backup/mirror drive. I want the drives I buy to last a very long time, and I don't want to fragment backups across multiple drives.

Borscht?

I don't know how much use the drives saw in the server, but I have 13+ year old HDDs still alive and well. They've been moved carelessly countless times, etc. No problems whatsoever. As I'm writing this message they are hanging freely in my server, without a bracket or anything. WD is my recommendation for HDDs.

ZFS is a meme.

opinion discarded

Is it a bad one though? Works fine on my shitposting machine.

BitRot is a meme, your data can't magically disappear

How long will an SD card survive?

Don't rely on magnetic media for long-term storage, people.

...

Actually, thanks to M-Discs, optical media might actually be your best option for long-term data storage

INCORRECT
you should use datacrystals petapixel.com/2016/02/16/glass-disc-can-store-360-tb-photos-13-8-billion-years/

You can use ZFS with a capped ARC size. Honestly, desktop users of ZFS who don't set zfs_arc_max are asking for trouble.
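A rough sketch of what capping the ARC looks like on ZFS-on-Linux (mine, not from the thread; the 4 GiB figure is just an example). The runtime knob is the zfs_arc_max module parameter; to make it stick across reboots you'd instead put `options zfs zfs_arc_max=<bytes>` in /etc/modprobe.d/zfs.conf:

```python
# Sketch: cap the ZFS ARC at runtime by writing the module parameter (needs root).
from pathlib import Path

arc_max_bytes = 4 * 1024 ** 3    # example cap: 4 GiB, tune to taste
param = Path("/sys/module/zfs/parameters/zfs_arc_max")

param.write_text(str(arc_max_bytes))
print("zfs_arc_max now:", param.read_text().strip())
```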

no
yes

ZFS (RAIDZ minimum, RAIDZ2 or 3 ideal) + ECC + weekly scrub + LTO backup
I started with a non-ECC build and was getting checksum errors on my pool. That was years ago.
ZFS needs ECC to behave properly.
Don't be a little bitch. ECC is cheap.


github.com/chrsigg/par2tbb

there are hard drives with glass platters. most of them are still up and running.

I can't see how this is true. Glass is actually a fluid, deformed by gravity over long periods of time.

Never mind, guess that was just an old wives' tale. My high school chemistry teacher had it wrong.

Apparently it isn't fully wrong (though glass isn't a liquid); the "flow" is just so slow that even medieval glass isn't measurably thicker at the bottom because of it (where old panes are thicker at the bottom, that comes from how they were made).
scientificamerican.com/article/fact-fiction-glass-liquid/

They have platters made of glass, coated with a thin layer of magnetic media.

Think: cassette tape glued to glass

The card itself? A long time. The data is another matter - it's flash, so it lasts until the charge leaks.
Still, years at least. Unless it's exposed to enough penetrating radiation that Japanese cartoons will not be your greatest problem.

use tape

flash has low write/read

You can call me a troll, but I have 40 TB of WD Red NAS drives in my home setup that are allegedly rated at a URE rate of 1 in 10^14 bits read, yet I have wholly rebuilt my 'fault-tolerant/striped' FS twice (capacity upgrades, not failures) without a single URE. ECC memory is, IMO, the only prerequisite to a reasonably resilient personal system as long as your FS needs are met. I will probably go RAID 10 when I can afford a $300/mo electricity bill for this hoarding hobby.

I replicate critical application data and certain user data (private cloud) to an unlimited Google Drive account after encrypting it with TrueCrypt. You can all fight me if you'd like and call me a normie, but the setup is comfy and cost-efficient, and I personally believe UREs are an absolute meme in the real world. You won't lose an entire disk; at worst you lose one sector. I have read out and rewritten thousands of terabytes in an enterprise setting (various filesystems, but always ECC RAM) and have never personally encountered a URE. Rather than thinking myself extremely lucky (what's the binomial probability of avoiding a URE across a few petabytes in the long run?), I just think everyone who bitches about this vastly overestimates the incidence and magnitude of the issue.
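To put numbers on that "binomial probability" aside: taking the spec-sheet URE rate at face value, a few petabytes of reads should have produced plenty of UREs, which is exactly why real-world experience makes the spec numbers look pessimistic. A quick sketch (mine, not from the thread; the petabyte figure is illustrative):

```python
# Sketch: expected UREs and the chance of seeing at least one, at the
# commonly quoted spec rate of 1 unrecoverable error per 1e14 bits read.
import math

ure_per_bit = 1e-14                  # typical consumer/NAS drive spec
petabytes_read = 3                   # illustrative amount of lifetime reads
bits_read = petabytes_read * 1e15 * 8

expected_ures = bits_read * ure_per_bit
p_at_least_one = 1 - math.exp(-expected_ures)    # Poisson approximation of the binomial
print(f"expected UREs: {expected_ures:.0f}")
print(f"P(at least one URE): {p_at_least_one:.6f}")
```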

Oh, I also use Btrfs (inb4 broken) because ZFS's licensing situation, although understandable, is just more of the same dysfunction that plagues the community and prevents it from moving forward.

Bullshit. Your bad RAM was the problem before; switching to ECC only "fixed" it because you replaced the faulty sticks. It would have been solved just as well by buying new regular RAM.
I've run ZFS on Prescott CPUs with no problems, never once seeing a checksum error, and this was several years ago when ZFS 0.6.1 and 0.6.2 were "9999" in Portage.

Why? They are tiny by today's standards. Keeping track of and control over terabytes of data dispersed across dozens of small HDDs is hell. Unless you're extremely pedantic about what you save where, it always leads to confusion and duplication of data. That's surely why OP said he wants to consolidate his data onto a drive as large as possible (it's about the only sane way of getting a grip on a data collection accumulated over long periods of time on many separate drives).

Man TBH I have HDDs from like 5, 6 years ago and they work fine. You okay?

Shitpost detected

How likely are archival-grade (gold) DVD-Rs to degrade over time? It seems like a good option for backing up my Chinese cartoons.

Overkill, I would not recommend doing this more frequently than once a month.

I heard that gold optical media is a scam, as it's objectively worse than the usual kind.

M-DISCS, stored in the dark at 30-40% relative humidity and at 50-60°F, will long outlast you and any descendants who remember you.


I was wondering whether someone was going to point out that everyone in this thread has been using the word "bitrot" wrong.

to be fair, it is more fitting this way

The problem isn't just the media itself. You also need a drive to read it, a motherboard to connect it to, an operating system and drivers to detect it, and software to finally read the files off the medium and make sense of them. The stack of abstraction layers is staggeringly high, while the technology in each layer keeps becoming obsolete and thus contributes to effective data rot. With an ancient papyrus you only needed to preserve the papyrus (it was all the "hardware"); the "software" (i.e. the understanding of the glyphs on it) was lost to time and had to be reverse-engineered to make sense of. Unfortunately, it won't be as easy with digital data.

Print out a copy of Linux's and gcc's source code
Print out a copy of a free program that reads the files and the rest of GNU while you are at it
They should be able to figure the rest out from there.

...

Disk to disk (NAS RAID) to tape taken offsite is the safest way.
3-2-1: 3 copies. 2 on site. 1 off site.

RAID is not for backup. It is only for keeping large amounts of data "hot" and intact. To do backups correctly, you still need to put the data onto removable media that is stored off site. Tape is the only thing right now that is viable for multiple TBs. Hard drives are not designed to sit powered off for years, and HDD interfaces come and go.
LTO drives are backwards compatible, but only within a couple of generations: a drive can typically read tapes two generations back and write one generation back, so don't expect an LTO-11 drive in 10 years to read an LTO-5 tape; plan on migrating tapes as you upgrade.
And with a hot RAID you could end up with a software fuckup that takes everything out. You still need one copy "offline".

>>>/reddit/

Bitrot wasn't a thing when SpinRite was created, but it can usually correct it by using its "level 3" scan to refresh the surface and push a waning charge back to full strength. Can confirm data resurrection from dropped laptop HDDs and 15-year-old HDDs. Also works on SSDs. Steve Gibson talks about it on his podcasts.
grc.com

Is the way HDD platters are magnetised inherently different from how the tape is magnetised (making the former less stable and more prone to changes which will result in corruption of data)? In recent years one always kept hearing that SSDs are unreliable and lose data integrity if left unpowered, but that HDDs are safe in that regard. What orders of magnitude of average safe offline (unpowered) storage time would we be talking about when comparing tape, HDDs, and SSDs (possibly considering additional factors if applicable)?

How credible is this guy, really? Many sources which themselves seem quite credible appear to have been more or less sceptical of him in the past (SpinRite specifically has been called everything from placebo or snake oil to an outright scam).

...

seclists.org/nmap-announce/2001/25
apparently we have to choose just one of the above

Yeah, that's definitely not true; all of the things SpinRite does are pretty well-documented phenomena.

How about HDD Regenerator (which seems to be a similar kind of software)?

Laser discs and other optical media degrade over time. Cheap ones have a very shitty protective layer over the data film, which can literally rot in non-climate-controlled storage; expensive ones fare better but are still susceptible. It's piss easy to destroy data by damaging the top side of the disc. Both are physical damage and can be avoided.

Hard drive platters lose magnetic charge over time, to the point that data tracks become unreadable; the same goes for magnetic tape (which also gets fucked up by moisture). Solid-state storage leaks electric charge and eventually becomes unreadable as well. High-energy cosmic rays can, on rare occasion, flip bits in solid-state drives; if the drive has built-in error correction it should be fine most of the time, but otherwise the bit stays damaged and you're shit out of luck.

Damage to individual bits will instantly corrupt any losslessly compressed files, such as archives, PNGs, and FLACs. Lossy codecs are built around the assumption that data integrity is compromised to start with, so they usually survive broken data: you get a few broken pixels or a wrong-sounding piece of audio. Broken bits in the headers usually mean the whole file is unreadable. Raw, uncompressed data can't be wholly lost to a single flip, but there will still be broken pixels/samples if individual bits get fucked up.

Back up your data on solid gold laser disks if you want it to last.

Also, no, data is never lost during transmission.

So what does it do essentially? Forces data it reads to be written back so it is "refreshed"? What if the data read is corrupted already?

It could be both. It's well known that platters, like any permanent magnets sitting next to opposing permanent magnets, lose magnetization, and some materials tend to just spontaneously lose magnetization slowly. It's also a known fact that writing data to a platter sets its tracks to full magnetization whenever you do it. Of course, chances are you will replace the whole drive sooner than you need to re-magnetize the data, as drives are simply not designed to operate for decades, and are not designed to be serviceable.

If it's already busted then you're shit out of luck.

This is not accurate. It's kind of like RAM errors: you hear "oh, the chance of a bit error is very low", and then you scale it up to 64 gigabytes worth of RAM and all of a sudden the chance is meaningful.

According to Seagate itself, their archival drives have an unrecoverable error rate of 1 per 10^14 bits read, meaning you get one such unrecoverable read every 12 terabytes of reading or so. This is data coming from the company itself, which has an incentive to downplay the problem.

Programs like spinrite can help minimize this.

Why would you store files in RAM? It's very susceptible to cosmic rays, and non-ECC RAM can't even correct bits flipped due to high energy beam impacts (or any other reason). Same with SSDs but they normally have error correction facilities. Shielding won't do shit to protect from these, they're basically hard X-rays.

Did you even read the fucking post? It was an example of a similar problem, not the problem itself.

Does ECC RAM protect against rowhammer?

What if you want to use HDDs for reliable long-term offline (unpowered) storage? Should you once in a while power up the HDD and rewrite data on it? The HDAT2 program seems to have a feature called "Renewal sectors data", described as "Data are read out from the sector and write back" (the program's author isn't a native English speaker hence the odd grammar, but it should be clear what it means nevertheless).

If person A calls person B a "charlatan", then how could both be credible at the same time? If B is indeed a charlatan, then B is not credible by virtue of what the word "charlatan" means; and if, contrary to what A says, B is *not* a charlatan, then A is not credible for saying untrue things about B.

How do you do that given that the smallest new HDDs available are 500 GB?

Bitrot is real. I have backup drives from '07 with an untouched copy of Darker than Black, and the copy I actively use, which has been passed from computer to computer, looks and sounds significantly worse. If you want to get into serious discussions about it, join #8/a/ on Rizon. Or you can look on Bakashots and see a shit ton of comparison shots of the same frame from the same file from dozens of people, and see exactly how it fucks up your archival stuff.

So why the actual fuck is ECC not a standard feature on RAM? Hard disks have ECC, SSDs have ECC, even fucking processor caches have ECC. Why is RAM ECC some kind of esoteric hardware feature present only on expensive server shit?

...

Do you know how HDDs work? The surface of a disk platter is not a pristine storage medium like DRAM; HDDs are dependent on ECC to even be able to work and preserve data integrity. An HDD without ECC would be inherently broken, while ECC in DRAM is an additional protection against errors which are usually signs of faulty memory.

So you need at least a dozen or so tapes if you want to back up weekly to tape?

You're fucking dumb

nice reading comprehension

What is the best thing to do with data which you probably won't use for a long time (or maybe ever) but still kinda don't want to outright delete? Put everything on one HDD, then disconnect it and store it away? Is going through the ordeal of archiving stuff to DVDs still any kind of viable option, given that HDD storage is basically as cheap (or even cheaper once you count the time you'd spend burning a drive's worth of data to DVDs)?
Yeah, no: the initial investment in a tape drive is rather high; it would make much more sense to spend that kind of money on HDDs instead (maybe tape would be worth it for archiving important stuff rather than stuff that's just barely worth keeping).

M disc

M disc is pretty cool. I have never used one. Very expensive per gigabyte though.

...