YouTube archival thread

As we all know, (((YouTube))) is now screwed, and a lot of the videos are being deleted.

Things to do:
1. Auto-download videos from YT and put them on IPFS
2. Use APIs to allow multi-uploads to Vimeo, Dailymotion and Vid.me

My work for #1
# helper: fetch the first link matching $2 from page $1 and untar it
C() { curl -sL "$1$(curl -s "$1" | grep -oe "$2" | head -1)" | tar xvz; }

U="/usr/local/bin/ipfs"
D="$HOME/.config/systemd/user"   # note: ~ does not expand inside quotes
mkdir -p "$D"

# install the go-ipfs binary
C "dist.ipfs.io/" "go-ipfs.*linux-amd64.*\.gz"
mv go-ipfs/ipfs "$U"

# a systemd unit needs its [Unit]/[Service]/[Install] section headers
cat > "$D/ipfs.service" <<EOF
[Unit]
Description=IPFS daemon

[Service]
ExecStart=$U daemon
Restart=on-failure

[Install]
WantedBy=default.target
EOF

ipfs init
systemctl --user enable --now ipfs.service

# daily noon job: grab everything in list.txt, then add it to IPFS
x='for i in $(cat list.txt); do youtube-dl -citw -v "$i"; ipfs add *.mp4; done'
crontab -l > mycron; echo "0 12 * * * $x" >> mycron; crontab mycron; rm mycron

How to use the script:
1. sudo -s
2. paste in the script and run
3. Create a file called "list.txt" containing all the YouTube channels you would like to subscribe to
4. Sit back and relax

Related:
gist.github.com/skat-delayed/0c39949c3bdb0105d6a9
reddit.com/r/DataHoarder/comments/672t9r/my_youtubedl_script_for_incremental_channel_backup/
nathansalapat.com/blog/automatically-downloading-youtube-videos-linux

Lol @ all the little Adolphs running for cover. Too funny.

PSA: youtube-dl executes non-free javascript

Counter-sage
Also, they are coming for you next

No, fuck you. GNUnet is a saner protocol.

Which videos?

Please add your entry on how to use GNUnet.
I would like to know how easy it is to use compared to IPFS.

Your script is unnecessarily overcomplicated and very limited. The solution is really simple.
Make a daily crontab for youtube-dl that reads a list of channels and downloads them to a directory.
Make a second daily crontab that adds newly found files in that directory to IPFS.
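A minimal sketch of that two-cron setup. It assumes youtube-dl and a running IPFS daemon; the script path, `~/yt-archive`, and `channels.txt` are made-up names for the example:

```shell
#!/bin/sh
# Two cron jobs in one script: "fetch" downloads, "add" publishes new files.
# ARCHIVE_DIR and LIST are hypothetical paths -- adjust to taste.
ARCHIVE_DIR="$HOME/yt-archive"
LIST="$HOME/channels.txt"

# Job 1: fetch new videos only; --download-archive records what we already have,
# so nothing is redownloaded on later runs.
fetch_videos() {
    mkdir -p "$ARCHIVE_DIR"
    youtube-dl -ciw \
        --download-archive "$ARCHIVE_DIR/.seen" \
        -o "$ARCHIVE_DIR/%(uploader)s/%(title)s.%(ext)s" \
        -a "$LIST"
}

# Job 2: add anything that appeared since the last run to IPFS.
add_new_to_ipfs() {
    stamp="$ARCHIVE_DIR/.last-add"
    [ -f "$stamp" ] || touch -t 197001010000 "$stamp"  # first run: add everything
    find "$ARCHIVE_DIR" -type f -newer "$stamp" ! -name '.*' |
    while read -r f; do ipfs add "$f"; done
    touch "$stamp"
}

# Example crontab entries (crontab -e): download at 03:00, publish at 05:00.
#   0 3 * * * /path/to/yt-backup.sh fetch
#   0 5 * * * /path/to/yt-backup.sh add
case "${1:-}" in
    fetch) fetch_videos ;;
    add)   add_new_to_ipfs ;;
esac
```

Splitting the two jobs keeps a slow download run from blocking publication of files that already finished.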

The sudo and curl parts are just there to install IPFS in the first place.
The rest is a fair point, though. Crontabs are better.

Not very.

It's basically as easy to set up, though the GUI is absolute crap. Now for a comparison of GNUnet vs IPFS:

IPFS can actually run dynamic content now with CRDTs and pubsub channels; GNUnet will never have that capability.
On the other hand, GNUnet has a pretty cool distributed DNS system going on, and comes with built-in anonymity (through p2p routing, similar to Tor et al., but using its own protocol), so it's safer by default.
However, work is going on to make IPFS run over I2P. Additionally, a DNS TXT record (DNSLink) can point a domain name at an IPFS object, which allows friendly names in IPFS as well.
GNUnet comes with a search function by default, allowing you to find files by filename as well as by user-defined tags. Unfortunately, you get no peer information on your results, so you can't tell if a file is dead (no seeders, only metadata remaining).
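The friendly-names part works by publishing a TXT record on an ordinary domain that points at an IPFS object, so the name resolves under /ipns/. A hypothetical DNS zone fragment (domain and hash are placeholders):

```
; DNSLink: map example.com to an IPFS object via a TXT record
example.com. 300 IN TXT "dnslink=/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG"
```

With that in place, /ipns/example.com resolves to the referenced hash, and the record can be updated to re-point the name at new content.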

Finally, IPFS is extremely fast, and has a JS implementation (js-ipfs) that allows embedding straight in the browser for people who don't have the desktop Go client. However, it has no C client.
On the other hand, GNUnet has no JS implementation, only the desktop client. It is also ridiculously slow (often in the bytes-per-second range), and there's no way to tell which files are still alive (which, compounded with the slowness, is very grating). It's not clear whether that's down to a lack of nodes, a scaling issue, or both.

Overall, I believe IPFS is the right way to go. Most content does not need anonymity to access anyway, and I believe that as long as the data is in the network, once i2p support is live, everything will be safe.

Bonus: the IPFS people are also working on Filecoin, a cryptocurrency that will give people an incentive either to lend their hard drive to store IPFS data, or to lend their network connection to move data from storage to a requester as quickly as possible. This should mean you won't unwittingly host illegal content, while at the same time it will always be possible to get someone to host your unpopular content without the possibility of censorship.

you realize it's 2017, right?

fuck off idiot. GNUnet I'm actually willing to put up with, because it at least has some bearing on something. IPFS is just noise.

literally u wot m8?

I've been saving YouTube videos I like for 10 fucking years, ever since I realized half my bookmarked videos kept getting deleted. Look at this: you can't find it on YouTube anymore. I know, because they have bots that auto-delete this very video if you so much as try to re-upload it.

>As we all know, (((YouTube))) is now screwed, and a lot of the videos are being deleted.

No, we don't (((all know)))

Maybe you should start growing up and realize that knowledge is in books, not in fucking hour-long YouTube videos.

I already keep an archive of Youtube videos that I think are important, or that I just like. I've been doing this for a few years now.

Would IPFS function well for a decentralized tracker that can't be taken down like so many of the amazing private trackers were?

YouTube has censored the audio of the "THE INTERNET on April 4th, 1998" video. Thankfully the Wayback Machine archived it completely intact.

Could you please whip up a script that archives webpages to several sites at once, such as the Wayback Machine, Archive.is, and others, for redundancy with ease?

Yes

Archive.org book burnings (see WLP vids)

Do I need a specific Linux distro? I'm thinking of making Qubes my first.

I was talking about physical books, not the books you read on a proprietary ebook reader.
The ones on your shelf can never be censored.

What would I need to do to have it check for newly uploaded videos and download only those, without redownloading everything?

This IPFS thing was too complicated when I was a NEET with all the time in the world. Now I have a job so unless they come up with some kind of browser bundle I can't be bothered. If you'd like content, I suggest asking for it.


Mind posting a link, friend?

youtube.com/watch?v=hWq4DWfrpu8

archive.org/details/THEINTERNETOnApril4th1998

Thank you.

I've never used IPFS. What website will the content be uploaded to?

Oh no they removed some shitty cartoon the world is over

Yeah. There's a good write-up of how you could do it at github.com/DistributedMemetics/DM/issues/2

Trackers are easy; what you're looking for is an index. That's harder, since you need some way to classify the content. It could be done with a web of trust and an existing database, though, like AniDB + a nyaa dump + Freenet's WoT (or reinvent the wheel, since it's fucking disgusting to duct-tape shit together like that).

IPFS is to BitTorrent what HTTP is to FTP. What it does is create a "torrent" of the video and start seeding it.
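The "torrent of the video" part comes from content addressing: the address is derived from the file's bytes, so anyone adding the same file ends up seeding the same object. A toy illustration using sha256sum (real IPFS uses a multihash/CID, not a bare SHA-256 hex digest):

```shell
# Two copies of the same bytes get the same address, no matter who added them.
tmp=$(mktemp -d)
printf 'the same video bytes' > "$tmp/copy-from-alice"
printf 'the same video bytes' > "$tmp/copy-from-bob"
addr_a=$(sha256sum "$tmp/copy-from-alice" | cut -d' ' -f1)
addr_b=$(sha256sum "$tmp/copy-from-bob"   | cut -d' ' -f1)
[ "$addr_a" = "$addr_b" ] && echo "same content => same address"

# Change a single byte and the address changes completely:
printf 'the same video bytes!' > "$tmp/copy-from-bob"
addr_c=$(sha256sum "$tmp/copy-from-bob" | cut -d' ' -f1)
[ "$addr_a" != "$addr_c" ] && echo "different content => different address"
```

That property is why there is no "website it gets uploaded to": peers request the hash itself, from whoever has it.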

There's also BitChute, which apparently uses IPFS or something similar to serve videos. A lot of folks seem to be moving in that direction.

this is bait

BitChute uses WebTorrent. Different technology, but the same concept. IPFS is much better and more full-featured, though. For example, BitChute (like other WebTorrent-based sites) has to rely on centralized web infrastructure for a lot of things (the site itself is centralized, users act as an ad-hoc CDN rather than really hosting the site per se, there's no distributed DNS and we all know what ICANN did recently, etc.)

Also, WebTorrent cannot natively communicate with regular torrent clients, splitting the network in two. Only about two desktop clients can cross-seed between the two networks.

ftfy

JavaScript is not compiled, you know?

No. FTP is similar to HTTP technologically, but HTTP is better for most use cases.

twice the bait

bump for importance

They'll find some way you don't agree with them, then they'll come after you and your friends, buddy.

bump.

does this download the content to your computer as well,or just straight to IPFS?

It downloads the videos onto your computer, then sends them to IPFS.

If you want to back up vids, back them up now. EME is supported almost across the board and Google's Widevine is ready to be rolled out.

So basically DRM is happening... WHAT?

no. it interprets


what does this have to do with anything??????? also btw:
en.wikipedia.org/wiki/Chrome_V8
en.wikipedia.org/wiki/SpiderMonkey

bump

...

You attempt to pull this shit in every thread where youtube-dl is mentioned. Do you have a scraper or something? Are you a Markov chain?

Ah, nevermind, it's shamoanjac.

no. i just visit Holla Forums relatively often and i really hate badly written software. especially if they execute nonfree code straight from jewgle without telling me.


no im not a kool faggot. im steve klabnik.

Both are client/server protocols that use plain text messages and map directory structures to URLs.

Good riddance!

No it's not retard

They look like they are similar when the underlying protocols have completely different messaging systems, commands and other things. For instance FTP without PASV is batshit insane. There are many stupidities in the FTP protocol other than the example I gave above. FTP is also a keep-alive protocol, meaning that you can send multiple commands and get responses on the same connection. HTTP is send request->get response->end connection (unless Connection: keep-alive has been specified, in which case you can send multiple requests within a short timeframe).