Did you back up today, user?

I keep seeing
Are you prepared for shit like: rm -rf /home, mechanical failure, house fire, burglary, meth-head mother selling your stuff?

Attached: hard-drive-failure.jpg (359x275, 35.07K)


I like to stay light. I don't actually store anything important.
The last time I backed up anything was ~6 months ago.

Only the Seagate disk that started making loud noises. It was always making clicking noises, but they got louder recently.

Yup, I back up every day! Here's a Windows equivalent of an rsync-style mirror backup, for any reformed *nix anons:
Robocopy.exe "C:\Users\user\Documents\memes" "Z:\backup\memes" /DCOPY:DA /MIR /FFT /Z /XA:SH /R:0 /TEE /XJD
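For anons on neither Windows nor rsync: the /MIR idea (copy new or changed files, delete anything that no longer exists in the source) boils down to a few lines. A toy Python sketch, not a replacement for either tool — the size/mtime change check is a rough analog of what /FFT tolerates:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """Toy /MIR: make dst an exact copy of src, deleting extras."""
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Delete anything in dst that no longer exists in src.
    for p in dst.iterdir():
        if p.name not in src_names:
            if p.is_dir():
                shutil.rmtree(p)
            else:
                p.unlink()
    # Copy entries that are new or look changed (size/mtime heuristic).
    for s in src.iterdir():
        d = dst / s.name
        if s.is_dir():
            mirror(s, d)
        elif (not d.exists()
              or s.stat().st_size != d.stat().st_size
              or int(s.stat().st_mtime) > int(d.stat().st_mtime)):
            shutil.copy2(s, d)  # copy2 preserves timestamps
```

Don't actually use this for real backups; it's just to show what the flags do.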

this
all i have on my drive is some anime and music and shit. literally nothing lost if something happened to my hard drive

I store all my data securely in the Cloud :)

If it is your own vps, and you encrypt your shit, I have no problem with that.

VPS is still someone else's computer, usually rented with real bank info. I see nothing good in that.

A-at least data storage is in RAID5.
A-at least I back up really important stuff like passwords and scripts.

Attached: 13226151977166.jpg (374x588, 62.97K)

Nothing beats local airgapped storage of course, but a VPS is less botnet than Googledrop Boxdrive: you can always move your data with ease if the provider loses your trust, and good crypto means you don't really need to trust the provider that much to begin with.

As long as you encrypt the data, does it really matter what external storage you use? Be it Google, Goybox or Commiedex.
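That's the whole point of client-side encryption: the provider only ever stores ciphertext, so the choice of provider becomes a price/reliability question. A toy Python illustration of the principle — a one-time pad, chosen because it's trivially correct, not because anyone should use it (in practice you'd use gpg, age, or whatever duplicity wraps):

```python
import secrets

def encrypt(plaintext: bytes):
    """One-time pad: without the key, the ciphertext reveals nothing.
    The key must be as long as the data and must never be reused."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key  # upload the ciphertext; keep the key local

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The provider sees only `ciphertext`; whether it's Google or Commiedex holding it changes nothing about confidentiality.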

REEEEEEEEE

Attached: MagStor_TRB-HL7_with_cartridge.jpg (3600x2700, 4.67M)

With your own VPS you can use bog-standard Unix tools for data transfer, while with Google, Goybox or Commiedex (don't remember this one) you need to install weird proprietary and unreliable clients.
Also, VPS providers don't make their money by snooping on you.

Then why do they charge less money than AWS?

DV could only store around 14GB without redundancy.

storing things on hard drives is literally the cheapest option though, and i'm pretty sure a hard drive plugged in every once in a while and stored somewhere else is going to last way longer than a tape.

Basically this. Hell, refurbished drives are so fucking cheap that offsite and local redundancy out the ass is easily achievable.

I'm not that retarded.
I'm very confident that this won't happen.
I live outside of the big cities in Germany. Unless I go into the cities I won't see anything non white.
I do :(. But my mother isn't a meth head.

So no, I don't backup shit.

No, but I plan on eventually migrating to btrfs and creating a backup server to send my snapshots to.

Backups saved me a couple weeks ago. I backed up the same day my hard drive failed. Started a full system backup before work, came home, and my dad handed me an SSD he had laying around that he decided he wasn't gonna use. Turned off my computer to add the new SSD into my last open SATA port, turned it back on, booted into Gentoo, and my home drive didn't mount. Drive wouldn't show up at all. Simply moved the user folder out of /home and onto the root of the drive, added the fstab entry, and I was back up and running. Ordered a new drive, and now I've got another full backup ready in case the same thing happens again. That drive that died lasted quite a while, so it wasn't too surprising that it died.

Raw MiniDV capacity varied up to 22GB depending on tape length and mode. Much like 8mm or VHS-C, the tapes were extremely cheap, and such consumer formats spawned a number of proper mid-range dedicated backup formats with reasonable mass-market prices, like D8.


HDDs are currently 20¢/GB, enterprise-scale BD-RE is half that, and LTO-8 is sliding under 1¢/GB. If normal people like us could buy drives and media like that at sane prices, don't you think that might put a pretty huge dent in the cloud meme? As for reliability, archival formulations for both have always outperformed HDDs by enormous margins.

It's nice that big HDDs are so cheap now, but things could be so much better.
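Back-of-envelope math on those per-GB figures (using the rates quoted above; real media prices drift, and this ignores the cost of the tape drive itself, which is the actual barrier for normal people):

```python
# Rough one-time cost to store 10 TB at the quoted per-GB rates.
RATES_CENTS = {"HDD": 20, "BD-RE": 10, "LTO-8": 1}  # ¢/GB, from the post above
CAPACITY_GB = 10_000  # 10 TB, decimal

costs = {medium: CAPACITY_GB * cents / 100  # dollars
         for medium, cents in RATES_CENTS.items()}
# e.g. HDD $2000 vs. LTO-8 media around $100 for the same 10 TB
```
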

I use deja-dup.
It's encrypted, it's periodic, it's automatic, it's incremental, it supports a variety of remote storage options, and it's backed by nested layers of tools that you could use without it.
Above all, it's braindead simple to use. It takes less than a minute to set up. If you've been procrastinating on setting up a backup solution, just use this; you can always switch to duplicity later if you want something more elitist.

Attached: bloat.png (822x511, 37.12K)

Why aren't programs like this scheduled to run prior to shutdown instead of boot up? Or even better, give the user the option to choose.

Deja-dup is just scheduled to run once every n days.
deja-dup-monitor is started to keep track, not to immediately perform a backup.

I do understand that. But couldn't it do that on shutdown? Or at least not start on boot?

I just checked, and deja-dup-monitor is taking up a grand total of 1.5 MB of RAM here. What's the problem?
The reason it starts on session launch (when you get to your desktop, so not exactly on boot) is that all desktop environments support launching a process when they start, and people who are too autistic for that can just add it to their .xinitrc, but there's no good way to automatically start a process at any other moment. If you want to do that you need to have your own daemon. Which is what deja-dup-monitor is.

Whole operating systems used to be less than this.

Sure, but the laptop I bought for under a hundred yurobucks came with over two thousand times that.
How much RAM do you have? How much do you value your free RAM? And how much do you value having properly automated backups?

Thank you.

The real question is whether or not it spends most of its time dormant, and thus paged out of RAM.

It calculates how long it takes before it's supposed to activate, and then sets a timer, so it should be dormant.
git.launchpad.net/deja-dup/tree/deja-dup/monitor/monitor.vala#n204
My current instance has been running since February 10th and has used 1.26 seconds of CPU time.
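The linked logic boils down to "compute how long until last_backup + period, sleep that long, fire" — which is why it burns essentially no CPU between backups. A rough Python rendering of that idea (the real thing is Vala using GLib timeouts; names here are illustrative):

```python
import time

PERIOD = 7 * 24 * 3600  # example: back up every 7 days, in seconds

def seconds_until_due(last_backup, now=None):
    """Seconds to sleep before the next backup is due; 0.0 if overdue."""
    if now is None:
        now = time.time()
    return max(0.0, last_backup + PERIOD - now)
```

Set one timer for that duration and the process stays dormant (and paged out) the whole time.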

I just use RAM as a proxy for code quality.

This program is 300 lines and contains very little state.
The process is relatively large because it's written in a language that's mainly intended for desktop applications (which is reasonable, because it's a desktop application). It transpiles to object-oriented C.

digint.ch/btrbk/
Check this out for btrfs send/receive automation.

My strategy:
Sadly btrfs performance sucks and it doesn't handle bad sectors properly (but you'll know about it after a scrub). The trade-off was worth it for me, since this saved my arse.

Attached: Literaly the first disk I found on duck duck go images.jpg (1500x1327, 104.77K)

What do you mean by this, are you talking about btrfs or btrbk?

There's NOTHING that could prepare you for that.
speaking from experience

My files are backed up on gratis remote storage (encrypted, of course).

I only back up a couple of really important files, like my password database. The stuff on my ssd is not too important. And then I have about 300gb of irrelevant shit on the 750gb drive connected to my router that I absolutely do not care about.

Daily reminder that hoarding is a disease.

You'll change your tune the day your internet goes out.

I like tarsnap.com/

Chances of a no-warning mechanical failure are super small.
Requires off-site backup. Which is gonna be le Cloud for personal use, something botnet noided Holla Forums users won't have.
Use a non-shit OS.

Other reasons
git or other version control
Legit good reason.
Either highly unlikely or use a non-shit OS.

So basically we need backups in case of a virus, or if you're running a shit OS. And that's why I back up my Windows 7 data once a month and my GNU/Linux data once a week.

Use tarsnap, locally encrypted, open source, cheapish, etc.

Or use any other storage provider, because you can do local encryption either way.

A smart rabbit should have three burrows.

Btrfs. It doesn't track bad blocks and will try to reuse them.
The problem is in how hard disk firmware handles bad sectors. From what I understand, there is a reserved zone on the platter that the firmware remaps bad sectors to when they fail.
A bad sector is detected on a read when the ECC fails to recover it. The firmware reports the content as zeros and remaps that sector to the reserved area for the next read. Btrfs is actually very good in this case because it checksums all blocks and can recover during a scrub if you are using RAID1 (btrfs RAID1, not mdraid).
When the disk accumulates enough bad sectors that the reserved area is depleted, those bad blocks are no longer abstracted away and the filesystem becomes aware of them. This is where btrfs fails. Ext2+ filesystems, being designed before hard drives did this, have built-in bad-block remapping so they skip over known bad blocks. Btrfs lacks this ability.
I read one suggestion to find the bad sector and make partitions before and after it for btrfs to span across.
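The scrub mechanism described above — checksum every block when it's written, re-verify on read — is simple in principle. A toy Python sketch (btrfs actually uses crc32c or xxhash per block and stores the checksums in a dedicated tree; SHA-256 here is just for illustration):

```python
import hashlib

BLOCK = 4096  # bytes, matching btrfs's usual 4 KiB block size

def checksum_blocks(data: bytes):
    """Checksum each block at 'write' time (toy stand-in for the csum tree)."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def scrub(data: bytes, sums):
    """Return indices of blocks whose contents no longer match their checksums."""
    return [i for i, s in enumerate(checksum_blocks(data)) if s != sums[i]]
```

With RAID1 the scrub can then rewrite a failed block from the good mirror; without redundancy, it can only tell you which block is toast.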

The real solution is to buy a new disk when this happens.

spinics.net/lists/linux-btrfs/msg40909.html

not unless you kill her
trust nobody

I'll bet that's what people used to say about electricity two hundred years ago, or landline phones a century ago.

Everything we've ever built lasts forever, or until it's deprecated by newer technology.