IPFS THREAD

ipfs.io/

"IPFS is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository"

Share your cool ipfs hashes

Other urls found in this thread:

github.com/ipfs/notes/issues/37#issuecomment-171154379
greasyfork.org/en/scripts/14837-ipfs-hash-linker
addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/
chrome.google.com/webstore/detail/ipfs-gateway-redirect/gifgeigleclkondjnmijdajabbhmoepo
blog.neocities.org/its-time-for-the-permanent-web.html
github.com/fazo96/ipfs-boards
github.com/ipfs/notes/issues/84#issuecomment-164048562
ipfs.io/ipfs/QmUSgfC3RsXKzKJuUNtpzDK2Wm4ehPxhQ1SxJMcgUqStxg
github.com/ipfs/go-ipfs/issues/875
ipfs.io/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn
github.com/ipfs/go-ipfs/issues/1386
github.com/ipfs/go-ipfs/pull/713
github.com/ipfs/go-ipfs/pull/687
chop.edu/health-resources/where-get-speech-therapy
gateway.ipfs.io/ipfs/Qmbh27bjj6dsExXWvn8UNpuyn5K7hRHzQPCj8JrCKkxcKt
gateway.ipfs.io/ipfs/QmfC4vMDLWb8DqmJavE9wYFzMrrcrpLNuiRofupTGKrknJ
127.0.0.1:8080/ipfs/QmYt8G153xE6jaPxPKxa6mcWQBgVWGdz1BRVbxymdALCTF
8ch.net/pol/res/4958276.html
github.com/ipfs/specs/blob/master/overviews/implement-ipfs.md
github.com/ipfs/specs
ipfs.io/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU
ghostbin.com/paste/b7gkz
github.com/ipfs/go-ipfs/issues/2334#issuecomment-195046511
8ch.net/tech/res/499795.html
filecoin.io/
en.wikipedia.org/wiki/PhotoDNA
twitter.com/search?f=tweets&vertical=default&q=pic.twitter.com
github.com/ipfs/examples/tree/master/webapps/play
ipfs.io/ipfs/QmVc6zuAneKJzicnJpfrqCH9gSy6bz54JhcypfJYhGUFQu/play#/ipfs/QmTKZgRNwDNZwHtJSjCp6r5FYefzpULfy37JvMt9DwvXse
github.com/ipfspics/server
ipfs.io/blog/14-ipfs-0-4-0-released/
infoq.com/presentations/data-ipfs-ipld
github.com/ipfs/specs/blob/master/ipld/README.md
github.com/ipfs/js-ipfs/blob/master/ROADMAP.md#milestone-1---js-ipfs-on-the-browser
dslreports.com/forum/r27666267-ONT-issues
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md
github.com/ipfs/go-ipfs/blob/master/roadmap.md
github.com/ipfs/go-ipfs/pull/2600
github.com/ipfs/notes/issues?utf8=✓&q=is:issue is:open pubsub
github.com/ipfs/specs/tree/master/iprs-interplanetary-record-system
github.com/ipfs/ipfs/issues/103
github.com/pr0d1r2/ipfs-gentoo-overlay/blob/master/net-fs/go-ipfs/go-ipfs-9999.ebuild
github.com/ipfs/go-ipfs/#build-from-source
github.com/roperzh/jroff
en.wikipedia.org/wiki/Opentracker
github.com/ipfs/go-ipfs/pull/2634
ipfs.io/ipfs/QmbyDhoDsefHFTRVUEzneyZHsktwX8q4HeL74ojWv2TMVw
github.com/haadcode/orbit
youtu.be/UOC_QqtEJtg?t=24m15s
github.com/ipfs/go-ipfs/issues/2234
github.com/ipfs/faq/issues/4
github.com/ipfs/faq/issues/56
github.com/ipfs/go-ipfs/issues/1633
github.com/c-base/ipfs-ringpin
github.com/ipfs/refs-denylists-dmca
github.com/ipfs/js-ipfs/
youtu.be/6vvgxjdmAug
github.com/ipfs/go-ipfs/blob/master/core/corehttp/metrics.go
github.com/ipfs/go-ipfs/blob/master/core/corehttp/metrics.go#L54
gateway.ipfs.io/ipfs/QmNzU35KGvttKMHsUU2MeG6apjkAGsAyCJbitNN8TXLrC5/
github.com/VictorBjelkholm/ipfscrape
gateway.ipfs.io/ipfs/QmedWxrKQZ9VRL7uahrnuQraP81YZg1NRXA3e8KmaWcZhD/
github.com/ipfs/specs/tree/master/merkledag
github.com/ipfs/specs/blob/ipld-spec/merkledag/ipld.md
mediachain.io/
schema.org/CreativeWork
bigchaindb.com/
localhost:8080/ipfs/QmciZetSaqjeinRv2Ck6Pv2iLWmQy8DPe5RFPchGCkBjYj/browser.html
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.6369&rep=rep1&type=pdf
github.com/whyrusleeping/gx
ipfs.pics/
ipfs.pics/static/common.js
ipfs.io/ipfs/hash
earth.ipfs.io/ipfs/hash
docs.google.com/forms/d/1fXchtQEQ6WDjedSk46ZZiye8vl4UxvyRZiltSOv2huc/viewform
github.com/ipfs/ipfs/blob/master/LICENSE
github.com/ipfs/faq/issues/18
ipfs.io/docs/install/
privacytools.io/#vpn
github.com/ipfs/go-ipfs/issues/2815
github.com/ipfs-filestore/go-ipfs/wiki/Landing
orbit.libp2p.io/
github.com/ipfs/go-ipfs/issues/2875
github.com/ipfs/community/blob/master/code-of-conduct.md
github.com/ipfs/notes/issues/124
ipfs.io/ipfs/QmfSZ7iK7hzaLKwe5iaYErK8PcxUuDwNbE9rCDBkSbE9V1
github.com/ipfs/notes/issues/154
torrentproject.se/bc0b911654e2795536370f8cae59d123db4b95b4/The-Greatest-Story-Never-Told-torrent.html
github.com/Kubuxu/ipns-pub
archive.is/VzKIH
twitter.com/NSFWRedditImage

WARNING: THIS SHIT IS WRITTEN IN GO.

THAT'S RIGHT, WHOEVER IS MAKING THIS PROJECT IS CLINICALLY INSANE.

AVOID AT ALL COSTS.

What do you recommend? Fucking assembly?

of course it's going to be shit

Considering what issues the project addresses, it makes perfect sense for it to use .io domain.

You know, because I/O. And the project is about the structure of the Internet at the moment and how everyone has to connect to a server to get content instead of distributing the content, etc.

io is a fucking meme domain used by script kiddies at this point

...

IPFS is a filesystem you retard, you can write it in anything you want. There are other implementations planned like js-ipfs or py-ipfs.

...

Here's a question. Why didn't it start there?

Answer: project maintainer(s) is/are insane.

Last I heard it's been updated to version 0.4.0, what has been fixed/added since last version and what can we expect for the next version?

Are you saying that Python and Javascript would be a saner choice for this?

Are you saying Go is a saner choice than those?

plz, user

I think Go is nice. What is it you don't like about it?

The thing is, scripting languages are not that good for anything intensive. They have their uses. Javascript runs in web browsers, and Python is easily portable. But for serious use, you want these things to run fast.

waiting for tor and i2p support. dev says he'll focus on it in February. github.com/ipfs/notes/issues/37#issuecomment-171154379
IPFS NEVER


it's just the reference implementation of the protocol. and there are some projects for other languages.

yes, you fucktard

Lack of anonymity is the only thing keeping ipfs from becoming popular.

OP you should really include more links.

This script changes all ipfs hashes into links to the ipfs gateway (on page loads)

greasyfork.org/en/scripts/14837-ipfs-hash-linker

These redirect all gateway links to your local daemon, it works well with the previous script.

addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/

chrome.google.com/webstore/detail/ipfs-gateway-redirect/gifgeigleclkondjnmijdajabbhmoepo

Here's some general propaganda (I genuinely think this is a good article that conveys some good things about IPFS)
blog.neocities.org/its-time-for-the-permanent-web.html


Can Go complaints be directed to the Go thread >>493136

We've had some nice IPFS threads in the past months with lots of tech talk and file sharing going on.


A lot of performance stuff: it uses less memory, does a lot of operations faster, and apparently fixes some network issues; some people were saying they couldn't get some hashes in the previous thread but could with .4. Outside of that I think there are some functionality things like more arguments and some more config options.


It's being worked on iirc, so it's at least planned. I believe they want to interoperate with a lot of different things so that people can use what they want for what they need, like i2p for privacy; if someone doesn't trust i2p they could use whatever they wanted, like Tor or something else.

So, IPFS booru, how could that work? The simplest way I can think of would be to save hashes and tags to a central website, and just retrieve the images with ipfs, but could this be completely decentralized?

What you said seems to be the easiest right now, handling dynamic content seems to be something they're working on with ipns and some other thing (ipld?) but it's not all finalized yet.

I've seen people use typical http for all the dynamic stuff and ipfs for all the more permanent things like the html, js, images, etc. There are also examples of people using Ethereum with IPFS to handle dynamic content, but I don't know much about that. Someone had a site hosted via ipns that displayed a number and a form entry box; you could submit a new number and the site would change (the ipns record would update), and the new number would be displayed for everyone who visited the ipns hash. It used Ethereum to handle the number somehow. If someone has the hash to that please post it.

This guy wants to make some kind of imageboard-like thing too so maybe it's worth looking into.
github.com/fazo96/ipfs-boards

I forgot to mention >>>/hydrus/ is planning on integrating IPFS. The hydrus network is essentially a local booru that syncs up with repositories; the repos can contain tags, files, and more. This seems like the best option we'll get in the short term and maybe even the best long term. You'll run a client with your media, sync remote tags over ipfs, and distribute files via ipfs as well.

Nice. Has it always supported automatic tagging based on hash?

That's the basis of the project, it takes in media, hashes it, and you can assign tags to that hash and have relations with tags (parent and sibling).

The tagging is not automatic, it's just shared if you use the public tag repository: if you and I have the same file and one of us tags it publicly (local and remote tags are kept separate), then that tag will show up for both of us eventually (after you sync the repo).

You can automatically assign tags based on things like filenames and other factors, but it's not magic. However, there is a planned feature that is actually automatic tagging:
>>>/hydrus/1553

You can already use it over Tor and CJDNS. The i2p thing is just creating a pure TCP mode.

There's a site called Hiddenbooru on i2p

seems like an advantage to me

still no Tor or I2P support and no one uses it. IPFS will forever be a meme.

Go > C > *

I just got my personal file host working which adds all files to ipfs while also giving me a http link to share with normies.

Has anyone done a proper comparison of 0.3 vs 0.4?

If you hate Go so much, go organize an IPFS implementation in a real language. Until then stop bikeshedding and go start another browser thread or something.

If I leave my computer off for, say, two weeks and I turn it back on after a node runs a cleanup, do I have to do anything to re-enable the content I've added? Or does the daemon auto-pin the content to the node again?

When you add content yourself you're also pinning it, so it is still pinned whenever you start your node again.
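
Rough sketch of checking that, assuming the daemon is already running and the filename is just a made-up example:
ipfs add some-file.mkv            # adding also pins it recursively by default
ipfs pin ls --type=recursive      # the new hash shows up here and survives restarts and gc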

Go is one of the best languages. Ah those nice, statically compiled binaries. And it has a great toolset. Finally the compilation process is invoked with a simple command and not 30 automake scripts.

I think the more important aspect will be the JS impl.

Once that's out, people will start using it as a matter of course.

It's shit fam.

ipfs.io/ipns/gindex.dynu.com

I just ran into this, so theatrically a file can never be deleted in the file system ... theatrically.

I laughed.

fuck i should stop using my phone for this shit

not really; keep reading

wot? i2p doesn't mandate TCP, tor does.

Hey guys, I am thinking of implementing this as a fuse mount and read it as a p2p trivial deb repo. Anyone tried this? Any problems you can foresee? Are there any more mature projects as an alternative to this?

jej. thanks for the warning. and it seems to be written by a bunch of nobodies. I still haven't got an answer to why I should study this instead of Freenet.

what is the link?

Coming this summer
user Shares A File
An ancient evil is about to be uploaded so you better hang on to your keyboards if you want to keep your daemons running!

Tried once adding the gentoo portage tree, but it would have taken too long.

Adding lots of files will probably be slow, you should try with 0.4-dev which should improve performance a lot.
One of the nice things that can be done is calling apt-get clean regularly or putting /var/cache/apt/archives/ on a tmpfs, since files are already cached by IPFS

No, unless you count apt-torrent, debtorrent, apt-p2p, which were even more unstable than IPFS and had plenty of drawbacks.

Pick your poison

Tight stuff. Thank you user.

because freenet is for websites and this is something completely different


the feds will soon control this

Arch can be easily configured to pull packages from IPFS, I bet it wouldn't be hard to do the same for Debian repos.
github.com/ipfs/notes/issues/84#issuecomment-164048562
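
Rough sketch of what that could look like, assuming somebody publishes the repo tree under an IPNS name (placeholder below) and you run the local gateway on the default 127.0.0.1:8080:
# /etc/pacman.d/mirrorlist
Server = http://127.0.0.1:8080/ipns/<repo-ipns-hash>/$repo/os/$arch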

Could a chan be made to work with IPFS?

My thought was to have a normal SQL database, etc, with post content (or alternatively, have each post stored as an IPFS address that is loaded via JS or some shit if that's expensive to store too) and then the users themselves could contribute to the hosting of image/video content.

As IPFS caches the files it loads, the users of any particular board would then be contributors to the hosting of that board's content.

Could something like this work?

InfinityIPFS. We could get Josh to build it.

fucking retards dont know how to use tor :)

tor isn't made for sharing large files you fag

Fucking faggot.

See
There's a handful of other ideas floating around of how to do it, search for discussion around how to use IPFS with/as a database or other existing systems.

Hotwheels mentioned before that he was lurking one of the IPFS threads and may be interested in looking into it, granted this is still alpha so I highly doubt he would (and I don't recommend he does) use it now.

There are big things that need to be done before this rolls out for something as big and dynamic as a popular imageboard. It's mostly for long term static content right now but the more dynamic things are in progress; IPNS with pubsub as well as clientside resource limits would have to be finished before this should be deployed on that scale. Another thing would be figuring out how to accommodate non-ipfs users: 8ch could host its own gateway and have IPFS users just redirect to localhost like the addons do for the official gateway (or use a hostfile). A more practical solution would probably be to use the javascript implementation when that's done, but I know nothing about that; I'm making a baseless assumption that it would be resource heavy and slow to run some kind of instance for each tab. Maybe it doesn't have to, I have no idea how browsers/js really work.

Bump,
here's all 3 episodes of Boku no Pico
Boku No Pico Episode 1 - ipfs/QmbTWVLtUhdLJws4reyJ7CnkVwwivR4FTM3Jnj9YebNhBu
Boku No Pico Episode 2 - ipfs/QmeTUFENeJJjcN617m3Twd5kCdcTnoyZKHANkZ7NnYe2de
Boku No Pico Episode 3 - ipfs/QmXy6yZAwwtmQc44t7sy7ivndqdHMCBUHu3vbrsih5WzjG

single directory link instead: ipfs/QmeCq2H2w2tJ9Yr8AmLi7bjkopdNpaB5LaG9fZRy52Q4Ts

my shitty upload is triple of download from the very start. seeeeeeeeed plz!!!!11

also yes, please put them in folders and name them properly because mpv couldn't figure out what the fuck those are. that's
after all.

wrong. making shitloads of connections to different addresses is what causes problems on tor.
you can download files as big as you want using things like http or ftp without any extra load.


you're still cancer tho and your priority will automatically lower if you put too much load.


i tried it here it worked:
/ipfs/QmdG8zpEQErMiAzEqnvEU29yk9DzZ6PK13tvuDLWdj5unv
/ipns/Qmeg1Hqu2Dxf35TxDg18b7StQTMwjCqhWigm8ANgm8wA3p


pulling packages is trivial: just add an ipfs address as a repo. the issue is how to best publish and update the repo. there are several ways.

s/>>510557/>>510525/


it already can do that.

IPNS seems like it would be the best way but they're still working on that. IPNS works now but there are limitations being worked out.

I just have really shitty internet

kill yourself. Literally wat. No typing in Javascript. Poor performance for infrastructure needing performance.

what is type inference you unadaptable cunt

Deb guy here, this is why I mentioned the fuse mount. If the packages appear as a local directory which anyone can add to, then a package db (I think it's a db) can be made locally to reflect available files.

So the /apt/sources.list would read
deb file:/usr/local/ipfsfuse/debs ./

Then the repo update script can be edited to include
dpkg-scanpackages /usr/local/ipfsfuse /dev/null

So the packages are made available to the package manager just by updating the repos.

In addition we can give it a very low pin priority to ensure it doesn't pull system updates from there.
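
Rough sketch of the whole flow under those assumptions (the /usr/local/ipfsfuse/debs path is the hypothetical mount point from above, and I'm assuming the Packages index can live next to the debs or be pointed somewhere writable):
dpkg-scanpackages /usr/local/ipfsfuse/debs /dev/null > /usr/local/ipfsfuse/debs/Packages   # regenerate the flat-repo index
apt-get update                                                                             # apt now sees the IPFS-backed packages via the file: source above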

reposting my touhou link:
ipfs.io/ipfs/QmUSgfC3RsXKzKJuUNtpzDK2Wm4ehPxhQ1SxJMcgUqStxg

downloaded torrent & manually added, now should seed properly

freenet hosts arbitrary files. All I know about ipfs is that it hosts files


Not that I care what PL it's written in, but why would you want Go *and* Java instead of just Java...

Just kill me now.

My files aren't showing up on the network. If my go-ipfs version is too old, does it just ignore my pins?

Please die.

I can't wait for pin in place, I get why you'd want to make a copy of it but there's no way I'm keeping 2 copies of everything I want to share. At least its on the issue list
github.com/ipfs/go-ipfs/issues/875

As soon as this is done I'm sharing my entire media drive. It will be like DC all over again but so much better.


What do you mean not showing up? The only issue I can think of is that 3.x peers can't communicate with 4.x peers, if both of your endpoints are on the same version they should work fine. The public gateway should currently handle either though.

Make sure your daemon is running I guess.

~ $ eix ipfs
No matches found
~ $ eix -R ipfs
No matches found
Sweet, I don't even have to throw it in the trash myself.

I really don't get why you'd install something just to uninstall it, if you think it's trash why even bother in the first place?

Hello Holla Forums, /a/ here. How difficult would it be to build a tracker on IPFS, and would it be possible at all?

Doesn't seem that hard at the most basic level. You submit a hash to the tracker with comments about it, and the tracker displays whatever metadata it can find with it and indexes it for searches. Pretty much anyone can do that with a standard torrent tracker website template. As for actually tracking, I don't think you can keep track of how many people are seeding/leeching on IPFS. Could be wrong, though.

If it was really ballsy, it could have an option to download a local copy (below a certain size) and spit out some video screencaps if it detects a video. It would have to delete the file immediately afterwards but it's a way of verifying that the file is legitimate without having to trust the uploader. But that's just a pipedream for a widely-adopted IPFS future.

Check this
ipfs.io/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn

You don't need a peer tracker with IPFS, IPFS tracks all that itself. The only thing needed is an index of IPFS hashes; what most people call a torrent tracker is in fact an index that also runs a tracker on the same domain. With IPFS you only need the index part.

tl;dr you just need some kind of text file that says hash X = series Y in format Z

Ideally you'd have rich searching too. You could more than likely modify whatever existing torrent tracker site frontend you want to just point to ipfs hashes instead of torrent ones after stripping out all the ratio stuff, etc.
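
Something as dumb as a plain text file pinned on IPFS itself would already work as the index; hashes and names below are placeholders:
# index.txt
<ipfs-hash-1>   Series Y S01E01   720p softsub mkv
<ipfs-hash-2>   Series Y S01E02   720p softsub mkv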


Long term I really hope people use IPNS with mount points, imagine you want to watch Series X, some release group can go "here's this ipns (not ipfs) hash", you mount that hash to your media drive like ~/Anime/Series X/ then when the release group puts out an episode it would automatically be pushed to that directory as episodes are released. With pubsub this should be possible.

Is their DMCA infrastructure a problem for the network, or just a problem when fetching from the web?
If it is a problem, would it be possible to have a fork of the network? Like a private ipfs network bootstrap? Sort of like a private bittorrent tracker?

nobody can guess a hash of your file. so you would just share privately. as for security I would just have something similar to a seedbox to prevent people from getting your ip and contacting an ISP

But unlike a private tracker there's nothing stopping nodes from outside connecting to your swarm and then fucking it all up.

To be honest there's not much that can be done to take anything on IPFS down, the only current DMCA solution is to have an opt in blacklist for gateways, gateways are only useful for people not running ipfs themselves. If you're running ipfs yourself and you want a hash that is reachable then you're going to be able to get the content, it can't really be prevented. Kind of like torrents and their hashes, you'd have to take down all the peers hosting it.

Yes, you can do that now, you can choose your own bootstrap nodes and as such run private IPFS networks but there's no real reason to that I can think of outside of bandwidth considerations but IPFS will support resource limits natively eventually so this should be a non-issue later.
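
If you want to try it, it's just swapping the bootstrap list (the multiaddr and peer ID below are placeholders); note this only changes who you dial first, it doesn't cryptographically fence the swarm off:
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/192.0.2.10/tcp/4001/ipfs/<peer-id-of-your-node>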

As for private sharing like said there's no harm in being exposed to the network with private content since nobody can retrieve it unless they know about it anyway.


I forget but I think there's some way of preventing this, I remember PerfectDark having a similar "issue" but they treated it like a benefit. I agree with that too, everyone should be connected to everyone else; it keeps the network resilient AND fast when everyone shares with everyone as fast as possible, a race to idle kind of thing.

Is it my encoding? Is there any way to stream chinese cartoons to other people without hardcoding the subs?

Did you test the file in your browser locally? If the subtitles don't show there then you broke it yourself on encoding. Files work fine for me even through the gateway, with that said though the gateway is a backup solution, ipfs obviously works best client to client.

github.com/ipfs/go-ipfs/issues/1386
They plan on adding support for private blocks


Why not just use the http gateway URL in sources.list?

Also running apt-get clean regularly would free some space, since packages are already cached by IPFS.

Creating a mirror from downloaded packeges would be pretty cool.

I've since decided to just bake the subtitles in. My purpose is for normalfags to be able to watch it with their shitty browsers, so the compatibility's gotta be high.

Since that post I've been able to serve it up better by shrinking the file size. Still get the corrupt error in Icecat but I figure anyone using Icecat is also competent enough to launch it through some other means like mpv.

Of course, what else should we use to write a filesystem? :^)

Normalfags don't care about whether their animu is in high quality or not, they just care about It Just Works, Click Play and Enjoy and quality not being total and complete shit. They basically just want a clandestine Jewtube/Netflix.

One could try extracting the srt/vtt files and embedding them within <track> tags, plus some javascript/redirect to switch from the default.

On ipfs add, try using the -t flag, see if it makes it any better, it uses the trickle-dag format which should be better for media files.
See:
github.com/ipfs/go-ipfs/pull/713
and the older:
github.com/ipfs/go-ipfs/pull/687
For more info on the format.
ipfs add -t <files or directory>

I'm not sure but I think it's less efficient (uses more data) but more resilient. I could be wrong though.

As for the subtitle track, that shouldn't be a problem with IPFS, that sounds like a browser or encoding issue since an HTML5 compliant browser should support subtitle tracks in video files and video tags. I haven't experimented with those too much in browser myself yet.

The ultimate botnet?

i want to join it tbh.


absolutely haram. why would you pander to normies on a animu on an experimental tech? wtf?


you realize that normies use Chrome based browsers, right. They don't give a shit about Firecuck, which is good because FF sucks.
did you put just 1 keyframe for muh filesize and expect it to seek? yeah sounds like shit encoding.


No, you don't want apt to cache by default at all in that case.

t-this is a joke...r-right?

you realize no one cares about normies here, right?

Wasn't this one of the Plan9 goals? That's the world I want to live in: global distributed file systems for public files and private clusters for private files. You can't beat this scale of redundancy and ease of replication, even over a network. They should add some form of verification/integrity checking, that would be great. Given that you can dump a list of all the content you have, you could do it now in a crude way that would repair any corruption by just downloading everything you already have to /dev/null; it will fetch any part that's broken. There should still be some built-in official solution for this though if they want to be a filesystem imo, since they rely on the underlying fs to do it for them now. Maybe it's a long term goal, who knows.
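
Something like this is the crude version I mean, assuming a running daemon; whether it actually catches silent local corruption depends on whether the datastore re-verifies hashes on read, so treat it as a rough approximation:
ipfs refs local | while read h; do
    ipfs block get "$h" > /dev/null || echo "bad block: $h"
done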


chop.edu/health-resources/where-get-speech-therapy

I want an ipfs-to-9p bridge that I can just run as a daemon and `mount -t 9p` from

Wake me in 2030 when it's done

I only know a little about 9p, I meant to look into it more. Since IPFS can be mounted via fuse and apparently even via Dokan(y) on Windows, what would bridging to 9p allow you to do that you can't accomplish already?

I wonder if you could just rewrite the existing mounting portions and use some 9p Go lib for what you want.

Run the bridge on one computer, now your whole LAN can access IPFS without installing anything and you don't even have to touch NFS's bullshit with a 10 mile pole.
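
Purely hypothetical since the bridge doesn't exist yet, but the LAN side would just be a normal kernel 9p mount, something like:
mount -t 9p -o trans=tcp,port=5640,version=9p2000.L ipfs-bridge-host /mnt/ipfs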

It isn't and there is no way to do dynamic linking nor create shared libraries out of go libraries. Neither can Nim. Rust can, but you have to jump through multiple hoops due to cargo not supporting it correctly.
Why all these languages aren't just GCC/LLVM frontends is something I still fail to comprehend.

I'm not positive but I'm pretty sure you can do dynamic linking in Go via gccgo, and the standard Go compiler eventually added a way to do this in 1.5 as well.
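
If anyone wants to check, it's roughly this on Go >= 1.5 (linux/amd64 at least; the package paths are placeholders):
go install -buildmode=shared std                    # build the standard library as shared objects
go build -linkshared ./cmd/whatever                 # link a program against them
go build -buildmode=c-shared -o libfoo.so ./foo     # or produce a C-callable shared library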

Here, add this to your list too: /ipfs/QmVYLYeFLvxEEV6qFP8SHH4kP8VJb4Py4vpRkgqH8Hjyfx
It doesn't want to play in-browser for some reason. Probably because it's 10-bit :^)


I've never got WebM subs to work in any browser, even WebVTT subs. I don't even think that Chromium's built-in player supports subs.

They work fine if you have separate files and a HTML video>track wrapper, dunno about embedding them though.

Ahhhhh okay, I've just been embedding them in the WebM.
It should still work with embedding though.

The solution I chose is re-encoding for browser streaming. You get huge filesize gains (especially with libvpx/VP9) and the subs show up just fine, but it comes at the price of (a little) quality loss and hardsubbing.

I'm imagining a niche for three-episode tests, where the convenience factor lets you try it out and you can download the top-tier (read: HorribleSubs 720p because nobody has good encodes these days) quality episodes from a torrent if you stick with it.

Did you try this?

I'm curious about it.

Oh no, I didn't. I'll have to re-add it and see if it makes a difference.

It's the same as a torrent. As long as someone keeps seeding it, it will exist forever.

Compare:
[trickle-dag]
gateway.ipfs.io/ipfs/Qmbh27bjj6dsExXWvn8UNpuyn5K7hRHzQPCj8JrCKkxcKt

[default]
gateway.ipfs.io/ipfs/QmfC4vMDLWb8DqmJavE9wYFzMrrcrpLNuiRofupTGKrknJ

Okay, the new trickle dag'd hash is /ipfs/QmVb23Ad9Q3nyyLdhzRpqxVuqUJkwecPGwFViKTmSF6dEp

Really? That'd be glorious. I'll check it out, thanks user.

At this stage, how well does go-ipfs scale when run as a server-level implementation? Would a site hosting all their content on IPFS, like Neocities for example, experience any significant delay/latency delivering pieces of a website? Do they just use it for archival (unless they specialize in IPFS storage, like Glop or ipfs.pics)?

bad choice mate

This is the future >>519171

muh update 127.0.0.1:8080/ipfs/QmYt8G153xE6jaPxPKxa6mcWQBgVWGdz1BRVbxymdALCTF


fug JIDF HQ says switch tactics. :--DDDDD
tho, it seems to require JS so IPFS will be simpler for pure file sharing (also mounting and all that).
also impressive for a meme even memer than ipfs (because "bitcoin crypto", JS), zeronet supports tor including tor-only.

Even people in that thread don't want to use it, bad choice I guess. That seems to have the same problem people have with freenet, sharing content you don't explicitly want (possibly illegal).

...

Found something for you lot to do.

8ch.net/pol/res/4958276.html

The best we can do now is to either convince them to host their files with IPFS, or just put up any book/paper we download from Libgen onto IPFS for at least partial redundancy if it ever somehow kicks the bucket. I don't think all of us can mirror such a large amount of content.

Bumping to save from spam.

Is it just me or does the ipfs daemon randomly stop working after many hours? I also put my computer to sleep every night, could it be a bug when it wakes back up?

this isn't /r/jquery

I've been running mine for days to weeks without issues. Does it give you some kind of error or can you just not connect to things after you wake up?

On my desktop it will randomly stop connecting to anything, even stuff on your local network. If you try 'ipfs stats bw' you'll normally see at least some speed in the sub-kilobyte/s range; when it's "dead" I see 0 kbps up and down. Then I kill it and restart it and it works just fine.

Just werks on my SBC, and that's the one I do all my hosting on, so it's really not a huge issue.
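
Until someone figures out the actual bug, a dumb cron watchdog is probably enough. Sketch below assumes 'ipfs stats bw' prints RateIn/RateOut lines in that shape, so double-check the exact output format on your version:
STATS=$(ipfs stats bw)
if echo "$STATS" | grep -q 'RateIn: 0 ' && echo "$STATS" | grep -q 'RateOut: 0 '; then
    pkill -x ipfs
    sleep 5
    nohup ipfs daemon >/dev/null 2>&1 &
fi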

memefs

Where is the C++ implementation?

The specification has been out for ages. Go make it.

github.com/ipfs/specs/blob/master/overviews/implement-ipfs.md
github.com/ipfs/specs

pretty easy, but there is a built in faggotry filter that will likely delete your animu

>ipfs.io/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn
Yeah, this seems exactly what IPNS would be made for, considering that otherwise you won't be able to follow the page.

Like points out, it's only a problem when fetching from the web.
I ended up (stupidly) posting a link to some Light Novel PDFs on a public site, using ipfs.io's gateway, and when I checked back later, it was blocked. But you can still access all the files from any other gateway, including a local one.
Basically there doesn't seem to be any kind of risk of being shut down by DMCA, unless a copyright holder is really aggressive and plans to go after all the peers.

There's a newer version on
ipfs.io/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU

That page has always had an IPNS link at the bottom of it, you can see it on both of those and when clicked it returns the latest page if the host is online or someone else is keeping the IPNS alive. I forget if IPNS keep alive has been implemented yet but it's certainly planned if not.

It's the "load latest tracker" link which points to
/ipns/QmUqBf56JeGUvuf2SiJNJahAqaVhFSHS6r9gYk5FbS4TAn

This practice is common on Freenet too I believe, where people link to the version of the page they saw since it will always contain whatever it is you wanted to link; there's no compromise, though, since you can always reach the latest version by clicking some link. It's a pretty good system imo. An HTTP analogy would probably be a page that hard linked to its domain name somewhere: if you save the html file locally and open it, you may not know the domain name and so have no way to reach the latest version of the page, but if you click the hardlink it will resolve if it can.

...

...

...

Until someone downloads it and sends the hash out to everyone looking for peers.

Just encrypt the file with a symmetric key first

:-DDDDD

Would it be possible to implement versioning and ownership/master options like you see in syncthing in this?

the problem i see happening with this is that it's designed to be static so how would you avoid bloat without something like that

The only option at the moment is to use IPNS, but supposedly they're working on a system like you describe.

found something neat about the http(s) gateways. they are all named after the planets of the solar system. also you can pull ur file from each one without letting wget gamble in the round robin. i used http only but you can do https with --no-check-certificate since wget doesn't like it otherwise.

ghostbin.com/paste/b7gkz
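
e.g. to pin wget to one of them instead of the round robin (earth.ipfs.io is one of the hosts already linked in this thread; the hash is a placeholder):
wget --no-check-certificate "https://earth.ipfs.io/ipfs/<hash>"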

imo this software would be best used on a VPS or in a datacenter where upload isn't an issue, but cucks insist upon using it backwards, trying to use the swarm as a backend to the http gateways thinking it will be free hosting. smdh why bottleneck the data at 8 machines which are already overworked.

any news about browser add-ons that run the daemon? i heard there was a javascript version out somewhere too. the dev talks about browser integration but i haven't seen anything yet. if you could put the client node in the browser it would save work from the gateways.

yes i know there's a localhost redirect add-on and one for detecting hashes on webpages, but this requires you to run ur client/server on ur local machine by hand. I know it would be stumbing the program to have client browser add-on but it would make it more like MEGA and such. surely with WebRTC and other cuckware it shouldn't be hard to setup a botnet as well.

...

hello reddit.

github.com/ipfs/go-ipfs/issues/875

This is like one of two things keeping IPFS from being more than meme software.

Not really, the common user doesn't want to share multi-GB files.

Would be a nice replacement for torrents, since it would be harder for a swarm to die. Also, season packs would be superfluous.

...

It'd be nice for large files, but not for files you are constantly editing. I believe a comment there mentioned it, but the reason it's like this is because people might end up moving or editing the original file, which would break the hash.

A solution might be to still move the file to the datastore and leave behind a link, but that only solves the "moving" issue, not the editing.

Maidsafe is SJW approved! Look at all the diversity on their website, isn't racemixing the most heartwarming thing ever to be forced down your throat? I hope it turns out to be a total scam, but as of now the coin is severely over-valued. As for ethereum, what can it do that counterparty can't? Check m8 altcoins.

This.

IPFS could have downloadable waifu-bots and I still wouldn't use it until there's i2p support.

I think IPFS and ZeroNet threads should be merged into one versus thread. Does anybody agree?

It's already usable for large files as a torrent replacement, but normalfags won't just magically start sharing gb+ size torrents they made themselves, unlike power-users. Even then, if you're seeding that shit, you gotta have lots of space anyway. It's clearly a flaw, but as they noted, not that high a priority compared to ironing out issues with the protocol itself.

How resource intensive is the current implementation of ipfs? My server is a simple arm board with less than 200mb of spare ram.

Not very CPU intensive, but I think 200mb could be cutting it short, although you should try and see eitherway.

It's on the list and IPFS is still in alpha, surely it will be added when possible since they seem to be open to the idea. Honestly though shouldn't you be using an underlying file system that handles this anyway like ZFS?

Either way I'm also looking forward to that issue being closed, that functionality should be a part of it.

I don't know if it would be for everyone but the daemon uses ~113MB of memory on my machine with the latest master and the latest version of Go. Prior to Go 1.6 it was using ~200; I don't know if it was the runtime improvements or just coincidental with changes made in IPFS.

IPFS 0.3.11, compiled with go 1.5.1 here.

My IPFS daemon starts at around 50MB memory, but ends up working up to around 200MB after some time has passed.
Actually, right now I've mirrored glop.me, so the difference likely lies there?

I'm not sure, I have a lot of content on my node and it used to go up to 200MB on 1.5 after a while but now it caps out in the 100's on 1.6. We're talking weeks of uptime too for both with some moderate downstream usage and high upstream usage.

They did improve the GC for Go 1.6 and said they're going to again for 1.7, maybe that's related.

Also I'm using the latest master which is version 0.4.0-dev so that could also be related as it's a big change in the IPFS codebase.

I do not recommend updating to 0.4 yet though, simply because they say not to on github; I guess because of the repo and network differences they're telling people to wait. Right now .3 can talk to .3 and .4 can talk to .4, but there's no cross talk yet, so if you run .4 and try to grab content only on .3 then you won't be able to. In practice though I haven't had any issues myself; worst case scenario you tell the public gateway to grab the .3 content for you, then request again on .4 since the gateway will have it and be hosting for both networks, and then you're mirroring it for .4 after that.

Relevant link github.com/ipfs/go-ipfs/issues/2334#issuecomment-195046511

From what I understand even if you change small parts of the file and not the whole thing, those unedited parts live on. The file hash supposedly is a map to block hashes. Overlapping blocks could just be seeded from the original or other files. If I am wrong, please point it out!

I don't feel like having to backup my backup hard drive to change the file structure just to upload my animu to IPFS, or dropping it on my desktop and eat up my whole home partition.

Found the inbred

It gets worse

1) GITHUB IMPORTS. THEY REALLY USED THAT JOKE THING SERIOUSLY
2) MASSIVE OOP-INDUCED BRAIN DAMAGE IN DAG MANIPULATION CODE. REALLY, INSTEAD OF SIMPLE HASH->PAYLOAD MAP THEY USE SOME JAVA-STYLE MEMORY HOGGING PIECE OF SHIT

IPFS is a good thing, but its only working implementation is quite shitty.

3) (protocol fault) SEGMENTS ARE UNTYPED
4) BLOCKS DIRECTORY. IT'S LIKE THEY INTENTIONALLY DID IT THE MOST WRONG WAY POSSIBLE (protip: replacing a directory with 36218 files by a directory with 36218 directories of 1 file each is not beneficial under any file system)
5) NO FUCKING STATS
6) LEVELDB IS SHIT

7) "IPFS ADD" CLI WORKS WITH STDIN - IT'S GOOD. "IPFS ADD" RPC DOES NOT WORK WITH POST BODY - IT'S RETARDED
8) TIMEOUTS, DO THEY KNOW WHAT THEY ARE?
9) IPNS WORKS IN 10% CASES. OF WHICH, IN 99% IT REQUIRES MANUAL KILL/RESTARTS BECAUSE (8)
10) IT LAGS IN PLACES WHERE IT SHOULD NOT (LIKELY BECAUSE 2, 4 AND 6)

IPFS IMPLEMENTATION IS SO SHITTY, THAT I AM SURE I COULD DO IT BETTER. BUT I AM DEPRESSED AND BURNED OUT BY MY DAY JOB. SO I JUST WHINE ABOUT IT AT FORUM FOR 15 YEAR OLD KIDS

So how's that i2p suppor-
Oh... nevermind...

So, should we give up on ipfs and use maidsafe instead?

Maidsafe has strong smell of overengineered vapourware.
IPFS is shit, but it's simple shit that works right now.

So good idea/blueprint, but shitty execution? They should hire competent programmers; it probably wouldn't take them this long to make the thing work, either.

I'd argue they're complementary. Zeronet is better for dynamic, mutable content while IPFS is better for archival and immutability. At least that's my naive understanding of both.

Outside of 'ipfs stats bw'?


>Why doesn't your alpha software work with my alpha software?


They should throw a Kikestarter together. They have the excitement going and they should ride the wave instead of letting it fizzle out by the time go-ipfs hits beta. Especially with the Winblows crowd. I heard it barely works on that.

Because it was promised and they still haven't implemented it.

Aren't they planning on replacing the block store and leveldb with something else?

How do you mean? There are file statistics, datastore stats, and they plan to add limitations (bandwidth caps, disk limits, etc.) so eventually there should be traffic statistics as well. There may be more I don't know about too, but I don't look into that stuff, I just want an hourly bandwidth cap.


Why timeout instead of trying forever? The whole idea is that things are supposed to be reachable always so why not keep trying until they are reached?

If I go to get a file or resolve an IPNS name I don't want to return after a timeout, I want the command to either succeed or block until it succeeds. Implementing your own timeout around this should be doable if you need one but outside of some kind of failure/abort state when would you even want to timeout? That makes sense for other protocols where high reliability isn't expected but that's not the intent for IPFS. Maybe it's silly of me to think that way though.

Also IPNS isn't even finished yet so I'd have to give it a pass if it's not working well right now. Names don't work 100% of the time now because only the owner can keep them alive; they're going to make it so other peers can keep your name alive without you being online, but that's not in yet. Once it is, it shouldn't ever fail, so there'd be no need to time out on it unless you're not connected, which would probably return an error prior to making the call. I'm not sure though.


image related


Who's working on that anyway? Are i2p people working on the support or are IPFS people doing that? I don't know much about the work being done there.


The good thing is they don't have to, anyone can make an implementation however they want as long as it conforms to the spec. People can hate on the official Go version all they want but they don't have to use it. If someone really thinks they can do better they totally could and people could use their version while interoperating with everyone else.


To be fair a lot of stuff is promised and not implemented yet, that's very typical of pre-release software, time has to pass for people to actually write it.


Works fine on my machine. The only issue with Windows is that it doesn't have fuse so there's no way to mount IPFS as a drive. Everything else should work though. Someone wrote a separate program that mounts IPFS via Dokan but I have never once gotten Dokan to work with anything on any version of Windows. Maybe it works for some people but that doesn't work for me at all: it mounts it, I can go into the directory, and then it crashes (the filesystem client, not IPFS). Probably has something to do with it being written in Delphi.

Well, I went full retard with that

That is far from the case for IPFS's design, which does not actively duplicate and spread data blocks - its design is closer to BitTorrent (swarm downloading from seeds; data duplication is either explicit (ipfs pin) or opportunistic - seeding from cache) than to a distributed data store (where data blocks are treated the same way as DHT keys).
Even if we assume that the data itself is always available, it's still an absurdly strong assumption, as it would require that the client (from the IPFS application down to the physical network) never falls into an unexpected state which could hang forever.

Are you sure? Even if it means blocking for 5 years? There is always some deadline, just sometimes it is implicit "until I get bored with it".

You wouldn't, but, on the other hand, you always want "some kind of failure/abort state" - a stochastic "it might return at any moment" state is a very shitty thing to work with. Especially if you want to make something high-availability.

It is doable, but it's a feature you expect from anything that is not ad-hoc bash script. Also "kill $(ps ax | grep 'ipfs resolve' | sed 's/ *\([0-9]*\).*$/\1/')" is not the most efficient implementation, but you cannot do better unless you do it inside application.

On the other side, making "infinite blocking" is trivial - either by calling in a loop, or by setting timeout to 30 years.
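
fwiw coreutils already gives you a cleaner external timeout than the ps/grep/kill dance (the name being resolved is a placeholder):
timeout 60 ipfs name resolve <ipns-hash> || echo "gave up after 60s"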

Neat for thread archival, tried saving a dying thread with wget (with rewrite for ads -> localhost to remove annoying loading icon):

/ipfs/QmTAef5qs4EsALx59nEbXEd7nifoHzwsTWrpNtJapCW5Ue/tech/res/537009.html

That is a planned optional feature, and there is also a project by the same team, "filecoin", that will allow people to generate and spend a currency used for distributed storage. Like, you could pay me 1 coin to host your file for a day, and then I could spend that to do the same with someone else, either directly or with just a random set of peers. I'm interested to see what they do with it but I'm planning on just turning on the distributed option myself; I have the space and network to work with and don't mind.

The optional "free" system is presumably going to work like freenet or perfectdark where it just distributes data to peers that allow holding random content. I think there are details on github but I don't remember.

I understand that forcing distribution is great for the network but a lot of people dislike that stuff being on by default since it uses their disk and network on something they don't even know, they could be unknowingly redistributing illegal content and they don't want that.

I mean that's the thing with this: even if it's not assured it will always be reachable, that's the intent, so if I'm designing something which utilizes IPFS I have that in mind. If I want timeouts and such I'd probably use another protocol; if I'm using IPFS I intend to maximize the reliability of the content that's expected to be received. In some event where we rely on a critical object we may have no choice no matter what protocol we use; if your program needs a file to act on before it can continue, you'll either block or poll anyway.

Regardless I doubt it will remain that way, there must be plans to incorporate them later once everything is more finalized, get it working first then polish it up. I could be wrong though.


That's fair.

Maybe it's worth filing an issue about it to see what they think and if they'll fix it sooner rather than later.

I hope my English isn't too terrible this early in the morning.


Nice. I wonder if you could make a distributed archive this way by doing hash-only requests on 8ch a lot. So like you maintain the front page and maybe some thread index that points to a hash of its state before it 404s, but you don't host the content yourself, just the hashes to it. Then if anyone else archives a thread or file via ipfs it would be reachable.

Maybe not the best idea but I kind of like the idea of an archive that only has threads that were manually chosen by other people to be saved and not just by the site owner.

At that point though I guess you'd just archive all the textual data and maybe the thumbnails while relying on other people in the network to host full images. Could be cool.

Why save homepage/frontpage and other bloat?

I've used this in the past to save without full-size images
wget -e robots=off -p -k 8ch.net/tech/res/499795.html


You would still need to download full images to generate the "full" hash tree

ima i2p bro and i wanna do ipfs's stuff but am too busy obsessing about nntpchan and other autisms to be useful ;~;

IPFS is and always will be a meme until it supports Tor and I2P.

For sure but you wouldn't have to store it forever. A lot of archives do that today where they store images for some amount of time but they will 404 eventually. You could use the same system but not have to worry about storing it yourself permanently as long as someone else did, but you still get the benefit of it always being reachable even without storing it yourself.


Can't you use it right now with Tor? I thought someone was working on i2p support, I didn't get a response on that earlier in the thread.

nntpchan with ipfs for the images

Mainly laziness, saving original images is easiest with just plain domain restricted depth of 1

Neat.

sounds great, but how would I nuke CP?

IPFS does not spread content automatically like freenet, you only seed what you get yourself. So a blacklist of some sort probably

so to delete an attachment, just stop seeding it and remove the reference in the markup?

rather, "delete"

yes, it's called "unpinning" in IPFS: if no node has the file pinned (or is temporarily hosting it as a result of having downloaded it), the file will not be available
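
in command terms that "delete" is roughly this (hash is a placeholder; the gc is what actually frees the space):
ipfs pin rm <hash>    # drop your pin
ipfs repo gc          # garbage-collect unpinned blocks from the local datastore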

I think that's actually a big flaw. The way maidsafe plans to distribute data sounds like the most sane way: service providers get paid and thus have an incentive to stay online, and encrypted chunks are spread around the network so that nobody has plain pizza on their drive. As a bonus, users don't need to rely on a centralized portal to access data when they don't have the storage or daemon required to get data locally, since again, all resources are provided by people who get paid for it in a distributed, fair manner.

So should some brave soul offer up all his CP so we can hash it and add it to a blacklist?

Go is fine

it could easily be gotten around by changing a single bit in the file... we need a program to automatically look at images and decide if they are similar to known images, which is impossible because you would need a database of the stuff, which is a terrible idea in itself...

Compared to VB pre-.net, maybe.

It's trivial to do. Don't talk when you don't know shit.

has anyone modified the Twister html-frontend to display ipfs content yet? I could really use an alternative to kodi.

Sure it'd be easy to implement something to compare images to a database of known images.

The issue is holding a comprehensive database of CP, and expecting not to have the aforementioned database shut down.

thank you, that was my point

new thought: a database of rough vectors containing the rough shape and colors of the original images. nothing illegal would be hosted, but instead a rough outline could be used to detect the images (of course with a certain color tolerance to remove fine details). maybe with refinement it could be accurate and less error prone.

that's one of joshy boy's seekrit projects sssshhhhh don't tell

i2p

That's the idea behind filecoin, essentially. filecoin.io/
They want to keep IPFS and filecoin completely separate, though, and just have filecoin work on top of IPFS.
I personally prefer this, although I'm not really that familiar with maidsafe.

You can also simply extract feature vectors from CP images.

No. You lose a fuckload of meaningful information and yet your shit's still recognizable and enforceable.

What if an image is processed just enough to be considered an illustration (as cartoon lolis isn't illegal in most parts) but still retains enough information to be compared to a photo?

Not only would it be worthy of a honorary PhD-winning paper, it is still illegal in most parts of the world (it's in few parts that it's not), and would obviously represent an unimaginably higher amount of effort to come up with.

This already exists, and they already have a huge database of most known CP

en.wikipedia.org/wiki/PhotoDNA

Twitter had a huge problem with CP before PhotoDNA was added.

Pre 2012 twitter, If you searched for all recent images like this ( twitter.com/search?f=tweets&vertical=default&q=pic.twitter.com ), you would stumble across CP every few minutes.

That hard, huh?
The database only needs to be hosted somewhere where it is legal though.


Actually this seems like a good idea to use.

This is
(although they call it a hash here for plebs like you to understand better).
Beside ad-hoc methods like these, one can also simply pass an image through an autoencoder and extract the latent representation as a feature vector.

0.4.0 WHEN

IPFS REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Someone should make a youtube mirror using IPFS. It would be a normal site but would use youtube-dl to download the videos and use IPFS to store and serve them. That way the videos are much more resilient to censorship and people can save a local copy really easily while also contributing to keep it alive. There's already a js video player.
github.com/ipfs/examples/tree/master/webapps/play
ipfs.io/ipfs/QmVc6zuAneKJzicnJpfrqCH9gSy6bz54JhcypfJYhGUFQu/play#/ipfs/QmTKZgRNwDNZwHtJSjCp6r5FYefzpULfy37JvMt9DwvXse
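
The core of such a mirror is a short pipeline, something like this sketch (assumes a running daemon; the URL variable and filename are made up, and format selection is discussed further down the thread):
youtube-dl -f bestvideo+bestaudio --merge-output-format mkv -o video.mkv "$URL"
HASH=$(ipfs add -q video.mkv | tail -n1)    # -q prints just the hash(es); take the last line
echo "serve it from any gateway: /ipfs/$HASH"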

An IPFS image hosting site already exists. We could absorb most of the code so integrating it shouldn't be that hard.
github.com/ipfspics/server

Forgot to mention that the video site itself would serve the requested video for a set period of time (unless it's deemed very important) and then get deleted to make room for new videos. The older videos would hopefully still live on through individuals who downloaded and pinned them in IPFS.

>ipfs.io/blog/14-ipfs-0-4-0-released/
Looks like there's been major improvements made in pretty much every area.

Any noteworthy improvements?

that is genius

They're listed in the post.
Summary:
Optimizations on adding files
Lots of bugfixes
More modularity for developers.
Added a simple(r) interface for file operations

I meant to post this yesterday, hit submit and returned just now to see I didn't fill out the captcha. Nice.

...

I'm glad they're making significant changes even if it means breakage, I'd much rather they do it now than when it's too late (post alpha). If this thing is going to take off I want the reference implementation to be good since that's what most people will be using or basing off of. Get those important changes in while they can.

It feels a lot smoother, previously pinning certain things would cause it to randomly stall for forever, it all seems to have been fixed now. This is great.

infoq.com/presentations/data-ipfs-ipld
Can't wait for ipld

Am I understanding this right? It seems like a metalink but with more capabilities, and in json instead of xml.
The implications of such a thing on top of IPFS are pretty interesting: distributed data structures on top of a distributed network, tied together with a simple single standard format. That's pretty awesome.

Distributing a set of data with potential metadata bundled with it in a single link.

Is all that correct?

You're right, except they tend to prefer CBOR over JSON for their canonical format.
You can read more about this here: github.com/ipfs/specs/blob/master/ipld/README.md

RINA > IPFS

Here's my WWII stuff

Propaganda and Maps
Qmag6FEvWn8JHJ8y9zxxijDhkEX4jnK6F1oVy5jUKXtc3o

Books
QmWu9CuiUuNbdvhwpRfQswXkwbGFY89Zv1oVPzHRBciHyQ

My node will be up and down sporadically since I play video games online sometimes.

I've been running the daemon constantly since this post, the constant upload seems to be killing my ONT, I've never seen anything do that before but I'm pretty new to fiber optic networks and have never had this long of a constant upload stream before.

I have to actually reset the ONT not the router. I wonder if it's actually failing or if my ISP is kicking me off for suspected malware. Anyone else experiencing this? I'd like to keep hosting.

I wish I could inspect the logs on that box but I can't find any way to even access it, looks like only the router can talk with it.

Posting the first three episodes for this season's anime in this thread if you wish to peruse. If you have any of them or you wish to add some, please help out.
>>>/a/451144

Problem with that is youtube-dl is nondeterministic with what formats it pulls down, unless you tell it to always grab the shit 360p VP8 encode Youtube does for older browsers.

For me youtube-dl defaults to 720p mp4, however you may explore the available options with the -F parameter and select with -f. Most of the time 720 and even 1080 webms are available, even though 1080 seems to be separated into 2 streams which need to be combined with little effort.

i2p is a meme
prove me wrong

Note that you can also do -f x+y in order to grab different audio and video streams (assuming x is video and y is audio, or the other way around). youtube-dl will automatically mux them using the ffmpeg from your PATH.
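
For example (format IDs differ per video so list them first; the ones below are just hypothetical):
youtube-dl -F "$URL"             # list available formats and their IDs
youtube-dl -f 247+251 "$URL"     # e.g. 720p VP9 video + opus audio, muxed with ffmpeg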

Couldn't you just grab the list of available formats and parse it? The userscript "YouTube Center" offers an option to download videos and in the settings you can pick the quality you want or tick a box for the highest available, I'm sure their API makes it possible to fetch and parse the quality for a given video. If that's possible you could just always mirror the highest, obviously you'd get various sizes since not all videos are 4k or whatever their resolution limit is but that shouldn't be a problem, just scale the video in the player.

not meme

Vintage memes:
QmaPSD9k7FKNUhpQJy8yix6Xw6nSD3uMRLNwrQi16zDqQL

>github.com/ipfs/js-ipfs/blob/master/ROADMAP.md#milestone-1---js-ipfs-on-the-browser
ipfs.js is now extraordinarily close to actually being usable. To be honest I never thought they'd actually make it.

Unlike some other magic dust salesmen, the people behind IPFS seem to actually care about people using their product, this looks promising but i'll wait until it's actually there.

Once it hits, I'll seriously consider dropping WebTorrent for it.

A part of me feels it must be too good to be true, like there's no way we would be allowed to have nice things.

>dslreports.com/forum/r27666267-ONT-issues
This seems to be the same issue I'm having now.

Why are all American ISPs so terrible? This ONT is the shitty desktop model, it doesn't even have a battery backup, so I'm not surprised it can't handle many peers at once. I'm going to see if there's anything they can do about it, but first I have to put up with tech support phone hell and hope I get someone who can actually help me, if at all.

IPFS 0.4.1 came out, mostly just bug fixes.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

They also added a roadmap which looks very promising. Looking forward to the lower memory and bandwidth usage.
What I really want though is an option to use hardlinks or symlinks instead of having to copy files into a static directory. That's a major reason why I'm not sharing my terabytes of content with it right now.
github.com/ipfs/go-ipfs/blob/master/roadmap.md

Holy cowtits. Now all it needs is to work on I2P and I'll cry tears of joy.

They're working on a solution for this
github.com/ipfs/go-ipfs/issues/875
github.com/ipfs/go-ipfs/pull/2600
Same, once it's done I'm adding my entire media store.


That has been the biggest problem for years and years, a lot of people say "this is possible, I know it is!" and that's it, they make a big promise that something can be done and that it will but they never produce anything. IPFS made a promise, a spec, and a working implementation all at once, no promises without a base, a real foundation to start from and use even before a proper release.


IPFS is birthed out of this frustration, there's no reason we can't have these nice things. The developer gave a talk stating how unoriginal this idea is, it's a bunch of old and existing concepts that people have talked about and even used for decades but nobody has sat down and weaved them together. IPFS is just an amalgamation of existing and proven concepts, it's just a matter of making them work together which will take some time and is anything but impossible. The state that the alpha is in right now is more than usable and is only improving, I've been waiting for this for too long and am excited to finally see something like this realized. There's a better slide than pic related but I didn't look too hard for it.


Was this addressed with this?

When will IPFS reach the point where you can say "you can use it now and it is better than HTTP in every way"? I saw the roadmap, but don't understand it enough to know when IPFS can finally go mainstream.

Realistically, it probably won't be 'done' until early 2017 at the earliest.

You can use it already though, and it works pretty well. The real tipping point in my opinion will be when ipfs.js is mostly implemented. Which is to say, at that point it'll be in a domain I can do something with, I'm perfectly happy to go crazy throwing up sites and services on IPFS all over the place (I'm bursting with ideas for that) but so long as it requires Joe Average to install something it is not going mainstream. Really looking forward to being able to actually use the js port.

It's kind of hard to say. HTTP and IPFS can seem pretty similar on the surface (I enter a URI and it gives me files), but the way they go about doing that is so drastically different it is not always directly comparable. So, to simplify things, I like to think of it in terms of a few broad categories:


static content: IPFS already does this pretty well, although it can be slow to find new or unpopular files. Still, it has the huge advantage of allowing anyone to help host any file, which in my opinion is a reasonable tradeoff, and the speed will also increase as they improve how they use bitswap.

dynamic content (single-author): By "single-author" I mean that the content is authored by the same person that publishes it. This is mostly IPNS territory, which in my experience has been flaky at best. The 0.4.0 release was supposed to have fixed a lot of the problems, but I haven't had a chance to personally play with it. If it did then that's a big step up to being on par with HTTP.

dynamic content (multi-author): This is the really interesting stuff, where a website acts as a sort of hub for other people to publish content to. This needs to be figured out more before IPFS ever goes "mainstream." The current idea, as I understand it, is to implement some sort of pub/sub system in IPFS (you can read a lot of the discussion about it here github.com/ipfs/notes/issues?utf8=✓&q=is:issue is:open pubsub ), but really there are a billion and one other possibilities, including using IPFS in tandem with other networks like Ethereum. Obviously you could just use HTTP for this and IPFS for everything else, but that kind of ruins the point.

live content: Basically just dynamic content, but very fast. IPFS is definitely not capable of this yet.


There are more, but once they have all of these things checked off, I think that's when I'll be able to say for certain "you can use it now and it is better than HTTP in every way," but it won't happen all at once. If anything, it'll probably happen in stages, both as it improves and as people find more and more uses for it. In fact, I'd say that it's already a pretty good contender to BitTorrent, especially for small files or collections (since everything is de-duplicated), as well as some smaller websites.

As far as the actual process of it going mainstream, the js implementation will be huge for that, as said. The barrier to entry will be as low as being able to load a webpage.


The dark web has never looked so bright.

Thank you for that image, it was the one I wanted to post earlier.

In theory this should also improve with general popularity/adoption. More peers = more potential hosts, which means better distribution, so that's good for reliability (availability/uptime) as well as shorter paths/hops. I'm not sure the latter will help dramatically for most people, but it certainly will in things like apartment complexes and dorms: fetching content without ever going over the internet, very fast.

IPNS paired with IPRS is what I'm waiting for. Essentially it lets other peers keep an IPNS record alive so the IPNS host doesn't have to be up; if the original IPNS host is down, you ask the network what the latest IPFS hash that IPNS name pointed to was.
github.com/ipfs/specs/tree/master/iprs-interplanetary-record-system
Which imo seems good enough to replace DNS for non-humans or even humans honestly, it's obviously not easy to remember an IPNS hash compared to a domain name but it is possible to link it out or bookmark, if the network keeps it alive that'd be just great. A highly reliable domain+highly reliable static content all without the original host once distributed.

Also excited for this. Ethereum seems like a pretty crazy pair for it: IPFS does distributed static content well, Ethereum does distributed dynamic content well, how perfect. I'm really excited to see if someone does anything big with the two; imagining a hostless dynamic application is ridiculous, and I really want to see some good ones that are not just proofs of concept.

There were some discussions about it on github but I forgot all the links, there were a bunch of ideas in the previous threads.
github.com/ipfs/ipfs/issues/103

I would really like to see them put this on the gateways if possible, I haven't actually looked at any of the js stuff myself yet though.

I'm also looking forward to more network traversal stuff and message passing, message passing in particular is something I'm interested in (relaying traffic for peers who have direct connection issues).

What an exciting thing, I am beyond sick of the lack of reliability of content or the roundabout ways we use now to preserve/share/distribute things.
Torrents are good but they have fragmentation issues with piece sizes, trackers, etc.

People have already pointed out package manager repos, they would benefit a lot from this, setting up mirrors has never been simpler, just clone the repo locally and you're set for the entire global network. That goes for anything really, want a mirror? Just mirror it on a machine with ipfs.

I could continue to gush over this, I need it, and it looks like we're going to actually get it.

Whatever they did in 0.4.1 seems to have fixed this, maybe it was holding connections forever on 0.4.0, I'm not sure.

ebuilds where

Google gave me this
github.com/pr0d1r2/ipfs-gentoo-overlay/blob/master/net-fs/go-ipfs/go-ipfs-9999.ebuild

Will it reuse the existing open tcp port in the daemon?

Shareaza was nice because it was able to run Gnutella/Gnutella2/ed2k/torrent on the same incoming port.

The Interplanetary Filesystem is supposed to be usable over deep space links with multiple light-hours of lag. Setting timeouts optimized for planet-local networking is dumb.

This is already sort of possible with static content but the memory used for an "ipfs pin add -r" can add up to dozens of gigabytes for larger collections as of 0.4.0.

Hell fucking no, maybe I'll look at this again in half a year

That's just the first result after a 2 second Google search. Why not just install Go and build it yourself until a maintainer makes an ebuild?
github.com/ipfs/go-ipfs/#build-from-source
go get -d github.com/ipfs/go-ipfs
cd $GOPATH/src/github.com/ipfs/go-ipfs
make toolkit_upgrade
make install

Optimized version
go get -v -u -d github.com/ipfs/go-ipfs && cd $GOPATH/src/github.com/ipfs/go-ipfs && make toolkit_upgrade install

user please.

github.com/ipfs/go-ipfs/pull/2600
I'm excited.

Get hype.

so what is the anonymity situation with IPFS right now?

how long until RIAA assholes sit in on files to watch who downloads them like they do torrents?

github.com/roperzh/jroff

idea for a textpunk ipfs project: IPFS man page viewer app.

You can't determine the contents of a file based on its hash. You could try to download every request you see on the network and analyze the files that way, but it is much more difficult to keep track of people than with plain BitTorrent.

your IP is fully visible (unless you're using a VPN or obscuring it some other way) just like a plain torrenter. They've said they're not settling on that as ideal, but it's still in alpha so what do you expect?


yeah but you can hash your own files and (IPFS relies on this) it'll be identical to the hash of every other copy of that file on the system. So they could just find the IP addresses of everyone sharing material that matches the hashes of their copyrighted files. In fact it might even be extremely economical for them to do this, rather than scouring public trackers for copies of their media.

I'm writing this portion of my post after I wrote the other ones because it came to mind later. If IPFS can be used as a replacement for something like Dropbox, Syncthing, etc. then can they actually fault people for sharing files with themselves? Does intent matter here? Like if I want to share a movie between all my machines using IPFS I am allowed to do that, if someone else knows the hash then they can also retrieve it from me but that's not my fault, right?
--------------

In the future I can see there being things that disrupt accurate monitoring, the simplest one I can think of is message passing, they can't just ask "who has this file?" because while I may not have it I may be able to reach someone who does and relay it from them, so my client could report that I can serve it to you but it doesn't necessarily mean I am hosting it.
It depends on how they implement that feature though. If they make it distinct whether you're relaying or hosting, then you'd just have to make a modification so your client reports it's hosting nothing and relaying everything. I don't think they're able to punish you for just acting as a relay node; I think the same thing applies legally to tor and freenet, but I could be wrong.

Outside of that you can use traditional things such as tor, a vpn, etc. eventually integrate i2p into it, maybe more things.


The problem here is it has to be an exact match. If I take a CD or bluray, etc. and rip the contents, the resulting file is going to be somewhat unique. The exception to this is when people get premade files from something like itunes or Google Play, but you can't share those unmodified either since the metadata contains some kind of special data (userid, stuff like that), and once you strip that out the file hash will change. They'd have to generate all the permutations a particular file could ever become. It could be fooled as easily as the r9k robot: just append data to the file somewhere.
I see plenty of people do this with music tags already.

I wonder if something like this could be made too
en.wikipedia.org/wiki/Opentracker
Have some rogue IPFS client reporting that it has hash X and IP Y. That way when you poll the network you get a list of valid and invalid peers hosting the hash, and you'd have to initiate the transfer on each to find out which are invalid. I don't think they can legally initiate a download, though I could be wrong, I'm not familiar with copyright laws. I'm sure someone will find a way to spam chaff or disrupt monitoring operations one way or another.

New pull request thread. This is the most edge of my seat I've ever been while watching code develop on Shithub. If this works correctly, that opens up a whole new fucking world for IPFS.
github.com/ipfs/go-ipfs/pull/2634

I am sure the Jews will do whatever inconveniences you the most. Probably, that means it counts as copyright infringement.

They have already announced that long-term they intend to add TOR and I2P support. See also their image

ANSI C implementation when? D:

js is somehow better than go?

won't that just be insecure and crashy?

Bugs come and go, but eventually they all get fixed. Shit languages requiring a huge runtime and preventing us from running it on super low end ARM boards won't.

It's incredibly exciting. A part of me keeps expecting something to collapse, the way things are going we'll have an entirely distributed web model soon, where individuals can go back to self-hosting their sites like the original plan was.

GCC can statically compile Go. Don't bash a language based on one compiler.

Including the runtime into the binary isn't the same as really compiling. Plus, garbage collection.

yes js is

WEB SCALE

I've been wanting to use this for a while, but every time I try, something's off. The biggest problem for me is that it slows all my network traffic to a crawl. As soon as I start the daemon it will predictably slow everything, even DNS resolving, to 30 sec long endeavors at best. I don't know what's causing this, because I'm monitoring my network card and there's not a lot of bandwidth usage (no more than 215 KiB/s).

I've also tried adding things, but for whatever reason it will never upload through the official resolver. (eg. I've put the 'With Open Gates' mp4 video up, but it will only work on the localhost and I can't reach it through ipfs.io/ipfs/QmbyDhoDsefHFTRVUEzneyZHsktwX8q4HeL74ojWv2TMVw) I've been able to access files others have added just fine though.
What am I doing wrong? or is this just the result of alpha software?

Ok

Asm isn't compiled, m8. It's assembled using an assembler.
Also, I indeed used the wrong word, but Go is still garbage collected shit with an enormous runtime.

Ok we agree.

OpenBazaar includes IPFS now. It's fucking happening.

isn't that more the way freenet works? I thought that you were only able to serve the files that you've either "pinned" (mounted) or that you're actively accessing. But I could be wrong idk

I think in the end the whole copyright infringement thing is a moot point, anyway, because those guys are only now figuring out how to track torrents -- how long will it take them to figure out what IPFS is, let alone do anything to combat it? [spoiler]and look how quickly even normalfags got around their torrent-monitoring[/spoiler]

re hashes changing with every file, isn't that something we want to avoid if we want a truly universal file system? eg, one giant "Movies" directory containing the hashes of all movies you could possibly desire -- it makes sense to have separate hashes for dvdrip, 720, 1080, etc. But it would defeat the purpose if there are twenty hashes for the same movie, varying only in the type of encoding or because the ripper has put some shitty subtitle intro at the start. If everyone somehow agreed on the "ideal" rips of each movie/song/whatever and decided to use only those (like bakabt/private trackers) then there might evolve something like netflix (standardized quality, on-demand, huge library) but for any kind of file, with a bigger library, free, decentralized, and completely generated by the users.
that's the kind of shit I dream about

also re live content, there seem to have been some pretty big leaps made with Orbit, the chat client, though that's obviously different to something like a smoothly-updating twitter feed
github.com/haadcode/orbit
youtu.be/UOC_QqtEJtg?t=24m15s

Can I get an explanation about what's good about this? Is it trying to make every file on there completely unique?

It's most likely just your ISP fucking your shit up user. I know it because mine does the same. Use a VPN or host ipfs on a server.

Think of IPFS as torrents (trackerless, DHT based), just usable for hosting websites and other stuff you want to stream or download quickly and sequentially on demand.
It's good because it can dramatically reduce bandwidth usage for peers hosting popular content (much better scalability) and also help minimize the effects of DoS attacks.

You should post about it on the issue tracker; the devs probably know how to get the information they need to fix it. There's only one other issue I see about IPFS killing someone's network, so it would probably be valuable for them to get information on what's causing problems in these rare cases.
github.com/ipfs/go-ipfs/issues/2234


Nice.


That's correct, message passing isn't implemented yet and will likely be off by default when it is implemented.
There are talks about a share system like freenet, but all of that is on top of IPFS and not part of the reference IPFS implementation. There are a couple of third party projects now that allow people to opt in as voluntary mirrors, and the IPFS devs are going to make "filecoin" eventually, which is a separate project for essentially renting IPFS peers who opt into their service; the peers hosting get a reward token for doing it which they can exchange in the same way for the same service. I like the idea, I can essentially loan out my bandwidth when I'm not using it and spend the tokens to host files I care about when I do need my bandwidth.

Sort of. The really nice thing about IPFS is that it chunks content at a block level: if you and I have the same MP3 but different metadata for it, we're both still hosting the audio portions. The same is true for other file types as well; so long as most of the parts are actually the same it shouldn't matter. Obviously though 100% of the hash you want has to be available, so someone would have to be hosting the original metadata, or there'd have to be some convention like zero-padding the end of a file, etc.

I hope people adopt a container-less video standard. Imagine instead of grabbing mkvs you just point your media player to a directory hash which contains a video stream, an audio track, and a subtitle track; you could keep the video track, all the audio tracks, and all the subtitles in 1 directory hash and have the media player only fetch the ones you prefer using filenames like video.h264, en-audio.aac, en-subtitles.ass. Directories are technically a container format I suppose, but I like the idea of splitting the streams like that for the above reasons. Container-less streams seem much better (and more assured) for maximum distribution since everyone has to at least hold the same video stream, so it would have a shitload of peers, unlike with torrents where there's a lot of fragmentation of peers despite most of the data being the same. Like if someone uploads an mkv that has a video stream, an audio stream, and an English subtitle track, that's a whole separate torrent and swarm from a torrent containing the same video and audio stream but a different subtitle set.

The coolest thing imo too is that directory hashes are free, so you can easily have several lists of the metadata in any kind of format you want without having to duplicate the data or symlink everything. On top of that you can mount hashes, even dynamic IPNS hashes; imagine just mounting the de-facto "movies" hash to ~/Movies, a constantly updated directory containing movies that you can just pick from whenever. High tier. You can already do this somewhat well now.
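If anyone wants to try the mounting idea, here's a rough sketch, assuming FUSE is set up and <dir-hash> stands in for whatever directory hash you care about (default mountpoints are /ipfs and /ipns):

ipfs daemon &                     # mounting needs a running daemon
ipfs mount                        # exposes the network read-only under /ipfs and /ipns
ln -s /ipfs/<dir-hash> ~/Movies   # now the directory hash browses like a local folder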

Very impressive and interesting, I'm gonna look at this more later.

I hope all that I said makes sense, I should stop posting this late since I get rambly but I'm too excited to wait until a time I'm not tired, typing helps keep me awake when I need to stay up too.
If anything I said is incorrect please correct me, if I'm not being clear feel free to ask and I'll try my best.


As it is right now if you want to share a file on IPFS you need a copy of it to live in the blockstore, that patch is going to allow it so you don't need to duplicate it.
It resolves this: github.com/ipfs/go-ipfs/issues/875

It's a big deal because people will instantly be able to share massive amounts of data without needing 2x the storage.
If you mean IPFS itself I think this is a good article. blog.neocities.org/its-time-for-the-permanent-web.html

That's good. I hope all these new distributed web technologies band together to become something more than meme software.

My mind is blown and my cock is diamonds.

Just want to throw in that IPFS now uses a rabin chunker, so in theory it doesn't even matter if the metadata causes the same audio track to be offset somewhat.

They'd converge around one gold standard without bullshit, the same way No-Intro for ripped games killed off all those cancerous Chinese ROM sites that demand you use IE6.

It just keeps getting better and better, holy shit.

the filecoin idea will hopefully work as an incentive for both people and larger organisations to act as nodes, by rewarding people who are able to prove that they can serve a file. It seems to be geared towards encouraging people to "seed" more neglected files, too (if it seems likely they will be requested more in the future). I'm looking forward to the days when meme futures are a legitimate commodity.

I was looking further in to how this will work in more general society (since a lot of people will probably instinctively say "like the darknet? no thanks I don't wanna host cp"), and apparently ipfs has already received a bunch of dmca complaints -- because they're using their site as a node, they're responsible for content served. Their response has been to maintain a blacklist of dmca'd hashes, which they will not host and which anyone can opt into. It gets updated whenever they get served a new one. (But obviously you don't have to use it, and you'll still be able to access the content via other people who are ignoring it.) It seems like a pretty neat solution so that the file-sharers can keep doing their thing, while the more nervous/liable can just filter everything through that list and feel safe.

This basic technique would also work for any other kind of content-avoidance, right? So you could choose the degree and type of moderation that you're subject to, and the (now-manual) task of moderating mainstream forums/networks could be automated by making something like PhotoDNA, while those who don't mind wading through gore and shitposts can go bareback.

I suppose then that directory would need some pointer to tell your browser/file manager what it contains, how to interpret it, but that's not a huge ask ... it would be really great if it resulted in meaningful standardization. Pirating wouldn't even have to grow to see huge improvements, if it was able to get its shit together and have everyone agree on the desirable files and seed together.

The obvious example is having a directory for a TV show which updates every time there's a new episode, or a youtube channel, or "dave's monthly tentacle porn roundup" but I'm thinking this would be also great for community-oriented filesharing. You would access content by following a variety of different "directories" which essentially correspond to your online communities. But now I really am getting ahead of myself...

I need to read up more about filecoin, that almost sounds like perfectdark, they have a sekai system that tries to distribute things by popularity, the weakest files usually get distributed to many at first but eventually die out from lack of popularity, if space is limited amongst the peers and the file isn't requested often it can essentially be bumped off to make room for newer less distributed files but it takes a while for it to drop down the list like that. At least I think it works that way, at the time I was reading about it, I couldn't find much English documentation.

I think you could get away with just supporting directory hashes since the hash can be parsed to display its contents, then the player just needs to do things like parse the audio and subtitle names with preferences (so it picks the one you desire). So like, it sees a directory, searches for playable file types and either plays the first or displays a playlist. You could present a list that isn't just a 1:1 file picker if the player does something like assuming that "file.h264" and "file.en.aac" are meant to have a title of "File (English)", or something like that, but that may be stretching it a bit.

An example of what would be expected to be parsed would be the output of
ipfs ls QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c
Format is hash:filename

You can also do
ipfs object get QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c
to get the same contents in json, xml, protobuf, and maybe more in the future, that should be good for media players to parse and deal with.

I think there may be other ways to get information from a directory node but I forget since those 2 are the best looking ones, human and machine output.
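For reference, a couple of hedged examples of pulling a directory listing in machine-readable form (exact output fields may shift between releases):

ipfs ls QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c           # human-readable: one child per line with its hash, size, and name
ipfs object get QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c   # the same node as JSON (Data + Links)
curl "http://127.0.0.1:5001/api/v0/ls?arg=QmVBEScm197eQiqgUpstf9baFAaEnhQCgzHKiXnkCoED2c"   # the local HTTP API, handy for media players to parse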

Your idea may be better though, some kind of standard format that groups them with explicit metadata, maybe some json that's like
{ "title": "Movie Title", "video": "file.h264", "audio": { "English": "file.en.aac" }, "subtitles": { ... } } etc.
I'm not sure.

I'm a fan of that blacklist as well, it's a real nice option to have that satisfies everyone since it's optional.

This reminds me of usenet and gopher. Pretty neat.

Statically linked binaries seem like they're advantageous for web applications where the server is chrooted to its own directory...like what Go is made for.

...

Did your mom drink while pregnant or did you become retarded after birth?

Could I use this to replace NFS in my home? NFS is a pain in the arse for me.

You could use it to send files across a local network with basically no configuration. Just get an ipfs daemon running on both computers. Problem is:
Not the best tool by any means, especially if you send a lot of big files. I prefer samba for serving files on a local network in the long term or scp/rsync if I need to send something over once.

It's mostly just movies that I found lying on the side of the road and made backups of, with some .blend/GIMP and .c/python/php files scattered in for good measure. I've used samba before, it's just that I hate the idea of an smb implementation on my personal network. I feel like anything that started off from MS is going to have intrinsic problems with security. Then again, this is IPFS we're talking about here.

It can be used over a local network though, I think you just disable the default DHT bootstrap nodes. Don't quote me on that though.

This is already implemented & somewhat functional by using IPNS. cf. $(ipfs name publish --help) for details

The copy thing will be fixed whenever this is merged (apparently it's good enough for testing currently)

In my experience, if I have a file locally and am connected to the network, it will retrieve the file instantly. I'm not aware of any issue that prevented this before, so I can't say if anything was ever fixed, but it's been fine for me with media files in my media player as well as other random files in my web browser, even when nobody else has a copy of the file I'm requesting except me.

For the last one you can just not connect to the network for right now
github.com/ipfs/faq/issues/4
github.com/ipfs/faq/issues/56
You essentially create a private swarm and tell your node to try to connect to the other one(s).
It's not a perfect solution, since someone who knew your address could still manually connect to you while your daemon is reachable, but it should work for now until they implement private networks, which is a planned feature:
github.com/ipfs/go-ipfs/issues/1633


You do that and add the other node, unless mDNS finds it automatically which may work, I'm not sure. You can disable mDNS too if you don't want to be reachable by everyone on the local network.
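A rough sketch of that setup, with placeholder addresses and peer IDs (mDNS may make the manual connect unnecessary on a LAN):

ipfs bootstrap rm --all                                         # forget the public bootstrap nodes
ipfs id                                                         # run this on the other box to note its peer ID and addresses
ipfs swarm connect /ip4/192.168.1.50/tcp/4001/ipfs/<peer-id>    # manually dial the other machine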


This is still good news since it's all clientside, master doesn't need to merge it first for you to take advantage of it. Just grab it and compile it.


I'm really looking forward to pub sub, have they made it so you can have multiple IPNS hashes yet? That's an important one for me since I would want to separate various file sharing lists from say a web site.

OMFG the guy got offered 100 bucks to at least start implementing Tor/I2P support and all he did was "LOL"?

What a faggot. I'm with you dude. IPFS Never.

TOR/I2p should just werk

rtfm

if I pin QmTmMhRv2nh889JfYBWXdxSvNS6zWnh4QFo4Q2knV7Ei2B will it immediately start downloading and sharing the entire gentooman library?

Do you really expect them to create a 100% safe Tor implementation before they're even done with basic features? If it isn't ready for the clearnet, why should they take time out to make sure it works for a small amount of use cases?

how is torrenting bad for tor but this won't be?

I think just using pin will allow you to share it but not to have a local copy (outside of that clusterfuck of chunks that IPFS creates to share files). Use ipfs get to download the file and then ipfs add if you want to share it.

and that will automatically allow me to share to the same address?

Yes.


Pinning something makes it so IPFS does not remove it when it does its garbage collection. If you have the content in the datastore already it just flags it to not be deleted, if you don't it will fetch it and flag it.

"get"ing something downloads it but does not flag it, "add" will add it and flag it to not be deleted.

So you just have to pin something if you want to mirror it, you don't have to manually download it then manually add it, pinning does that for you.


Just pin it, or you can "get" it too and that will share it until the next garbage collection.

If you want to know the really wacky thing, I came across this library on Google because I searched for one of the books in it. I can imagine how they'd find out about content like this once that starts to happen.

I should probably clarify, get will download chunks to the datastore locally AND output to a file/directory in the current directory.

Pinning will make sure there is a copy in the datastore locally that won't get deleted until you unpin it, you can access the chunks through your local gateway or even "get" them (even when the daemon isn't running or has 0 peers since you have a copy in your datastore that you can reach).

Add will take local files or directories and copy them to the local datastore then pin them.

As long as a hash is in your datastore you can access it locally even while offline, but you need the daemon running if you want to see them through your local gateway, but with get you don't need the daemon running.

If a file is in the datastore and your daemon is running, you're sharing it.
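In command form, if it helps (<hash> being whatever you want to mirror):

ipfs pin add -r <hash>    # fetch into the datastore if needed and protect it from GC, i.e. mirror it
ipfs get <hash>           # fetch into the datastore and also write a copy to the current directory, not GC-protected
ipfs add -r somedir/      # copy local files into the datastore and pin them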

I hope I explained that well.

The guy laughing is not a developer or project member, he's the one that's requesting the feature.

The project leader said they're going to work on it.

turns out x86 ipfs will not run on pi
this is racism

nvm there is actually an arm build available
was worried because I've literally never been able to build this kind of thing myself and get it to work

What is IPFS using all this bandwidth for while it's not even hosting anything?

If you just started, probably to contact other nodes and the DHT. Not that much tbh, but it might still be enough to saturate a slow residential connection for a while.

Do they know about this issue? Is it covered in their CoC?

what do?
user@anon ~ $ go get -d github.com/ipfs/go-ipfs
package github.com/ipfs/go-ipfs: cannot download, $GOPATH not set. For more details see: go help gopath
anon@user ~ $

You should set $GOPATH.
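Something like this in your shell profile should do it (the path is just a common choice):

export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"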

So, basically, gopher protocol with some fancy bittorrent distribution method?

What, never heard of gopher? Google it. You're welcome.

Try this
go help gopath


It covers more and has different ideals than gopher did. What similarities do you see besides their OSI level?

Because of the premature optimization meme, hopefully the resource usage will go down a lot once they put some effort into it.

How do you list all the people a file is being served by?

I2P

Java. And i2pd doesn't have a real torrent client.

Can IPFS already be used for hosting translations of Light Novels and Web Novels? Considering it can do static content really well right now I don't see that there would be any problem.

Also, I can't find a satisfying answer for this, but how would a site fare against DMCA C&D notices? Would it be really easy for people to ignore it and go to the site and read whatever they went there for?

ipfs dht findprovs <hash>
where <hash> is the file's hash. This will print the peer IDs of all of the providers it can find for that file.
You can also do:
ipfs id <peer-id>
for any particular peer ID to get more info about them

I don't see why not, but be warned that it doesn't have anything built-in to actually help distribute the file, meaning that all "adding" something to ipfs does is add it to your node's cache and tell people that you're seeding it. If you go offline then no one will be able to access it until you come back online.
They do plan on making something that works on top of ipfs to accomplish this called "filecoin" ( filecoin.io/ ), but I'm not aware of any progress on it yet. There's also this github.com/c-base/ipfs-ringpin , but I have no idea how well it works.
Really, though, I imagine you could get away with just adding the translations to ipfs (which also pins it by default), and then telling any friends, readers, etc. to help pin it with "ipfs pin add <hash>" (pinning, by the way, just tells ipfs to not delete the file from its cache after a while, and pinning a file you don't have also downloads it). And even if you don't do that, ipfs could still help if your file is popular enough, as it will spread and get more seeders sharing it.

As for your second question, it depends on what you mean by a "site." There are a few ways that I know of to use ipfs for a website:
- IPFS as a Backend: This is basically the same as a regular website, but the server retrieves the files it needs through ipfs. This could potentially be useful for a big website with lots of content and lots of servers, but would be pointless for a small site. For a DMCA, the server would likely just block the DMCA'd file hash on ipfs (in fact, they already have a dmca blocklist for ipfs that you can apparently use github.com/ipfs/refs-denylists-dmca . Great for small sites that don't want to get hassled for providing things over ipfs, but still completely optional). That said, the file could still exist on ipfs, and other people interested in it could manually pin and share it if they wanted to, but it wouldn't be accessible from the site.
- IPFS All the Way Down: Removing the need for a central server completely and hosting everything on ipfs (a minimal publishing sketch follows this list). This is doable now with ipns, but in my experience it has issues (alpha software). If they ever get ipns to be reliable, though, this would probably be perfect for what you want to do. As for DMCA's, you would probably have to stop seeding any DMCA'd files on your node, but I don't believe you would have to delete the reference/link to it, which means other people could continue to pin/share the file and it would still show up as normal on the site. Main drawback of this method is that people would have to install ipfs to access the site, which the vast majority of people will not be willing to do (whether this is a good or bad thing depends on your point of view).
- HTTP + IPFS: What I mean by this is sort of combining the last two. The way you'd do this is serve the site like normal from a central server but have some javascript that detects if the person viewing the website also happens to have an ipfs daemon running in the background, and start using that to download content instead. This means that normal people can still view the site, but anyone with ipfs will also help distribute the content and reduce the load on your server. In terms of DMCA's, people that don't have an ipfs daemon running couldn't see blocked files, but people that do potentially could (if it's being seeded). They are also apparently nearly finished with the javascript implementation of ipfs ( github.com/ipfs/js-ipfs/ ), which would mean your site could load things through ipfs whether or not they already have it installed and running, which is pretty exciting.
- Any/All of the Above + Distributed Tech of the Week: This means combining any of the above methods with other things like Ethereum or BigchainDB. There's really any number of ways that you could do this, and each has different pros/cons (to be honest, though, this is kind of where I think things are going in the long run). For avoiding DMCA's (or censorship of any kind) my rule of thumb is "the more distributed, the better," but anonymity can also play a factor, which is why a lot of people are interested in ipfs supporting Tor/I2P. You could look into these if you're interested, but it's probably overkill for what you have in mind.
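Here's the minimal publishing sketch mentioned above for the all-ipfs option, assuming your site is just a static directory (the hash is a placeholder for whatever the add prints):

ipfs add -r ./site                      # prints a hash per file; the last one is the site root
ipfs name publish /ipfs/<root-hash>     # point your node's ipns name at that root
ipfs name resolve                       # check what your ipns name currently resolves to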

Sorry for the massive explanation, but there's a lot to talk about. Have some raps youtu.be/6vvgxjdmAug

So basically in layman's terms, you can circumvent DMCA in IPFS, but you have to have IPFS installed; otherwise it's the same as HTTP in that the DMCA works. If I understand correctly there could be some more advanced ways to circumvent it without installing it, but it's more of a hassle and in some cases the tech is not ready yet.

As for the first answer, I mean we can already store the translations in existing tech like epubs and pdfs and seed them with torrents, or just store them on regular storage sites. The fact that the site goes down if you go offline is already true for HTTP, it's just that we leave all that to some company that has servers online all the time. The fact that some generous fellow can keep seeding the site and the translations from your node's content while you are offline is already a bonus.

DMCA is optional. The IPFS team would rather focus on building the software and not getting sued, so they comply with all notices on their nodes/gateway and let other people deal with yar-har-fiddle-dee-dee'd content. (see )

good luck recompiling all applications again.

don't duplicate information. ever

go get -u ...
Fetches the source, recompiles, and installs all installed packages if they're new or changed. One of Go's talking points is its fast compiler so this shouldn't be a real issue.

Nothing prevents you from using dynamic linking if you really want it either so I don't get why people always bring up that the standard compiler uses static linking by default, gcc has had dynamic linking since the beginning and the standard compiler has had it for a while now too.


This is a whole separate issue but I really feel like deduplication should be a concern for the filesystem and memory manager. This is a solved issue, ZFS has done block level deduplication for years and Linux has had same page merging in memory for a while too. Dynamic linking seems like a vestige left over from the age of non-optimal software and limited hardware resources.

I'd much rather a binary be reliable and work 100% of the time than anything else, not have to be worried about dependency conflicts or repository fuck ups, let the OS worry about what it's supposed to like managing resources efficiently, likewise with compilers.

I'm not saying there isn't a place for dynamic linking or that it's not useful but I really don't understand the stigma with static linking, especially not today.

Great P2P system.

No, you are just incompetent. It adds multigig files just fine

I'd rather not get into the whole dynamic vs static linking debate, but what you're saying only makes sense from the point of view of someone who's compiling shit on a powerful desktop machine.

There are many reasons you'd want dynamically linked libs, and space is only one of them; to me security is the big one. Having to recompile all your software when there's a bug in your crypto implementation (ha ha) is a huge waste of time and resources, and if you forget some for whatever reason, you're still vulnerable. Having more potential attack surface is bad even if it's not what true security is about.
Cross compilation is also a thing to take into account; there are some archs still used in production for which compiling is a long tedious affair, and with static linking it would even be impossible to maintain compatibility in certain scenarios.

That's not to say that static linking isn't the best choice in some scenarios, for instance all my forensics tools are statically linked for obvious reasons and I like to have static portable builds of my favorite software around.
But static by default is not a good idea, and while you do make a point about space, it's not the only issue here.

polite sage because off topic

You think RAM is cheap? Deduplication hash tables are huge.

github.com/ipfs/go-ipfs/blob/master/core/corehttp/metrics.go

what is this bullshit?

That's beside the point; file deduplication should be the responsibility of the filesystem. Saying that we don't have a RAM optimal solution doesn't change this, it just means there's room for improvement in filesystems. Besides, just because things like ZFS have RAM expensive dedupe doesn't mean it's the only way to do it. I don't remember if this is right or not, but didn't BTRFS have some kind of passive dedupe method where it would essentially do dedupe passes at some interval? Maybe that was something else.


Sorry, I shouldn't have made a response to begin with on the static vs dynamic topic, I don't want to derail the thread but I was annoyed with people complaining about static linking. I understand the appeal and uses of dynamic linking but some people treat static linking like it's some kind of horrible thing that doesn't have its own uses. Forgive my tantrum.

Reading this file, the only data exported comes from this function
github.com/ipfs/go-ipfs/blob/master/core/corehttp/metrics.go#L54
Which only seems to provide the number of connections.

And that's probably something you need to better balance the network.

Btrfs doesn't have live dedup yet. Dedup always needs huge hash tables because that's how it is.
Deduplication is best handled by compressors like lrzip for specific files.

Alright, wrote a quick tool (very alpha) for ipfs-friendly no-javascript static thread archival, here's the result:

gateway.ipfs.io/ipfs/QmNzU35KGvttKMHsUU2MeG6apjkAGsAyCJbitNN8TXLrC5/

It should support all screen sizes, basic thumbnailing, and going to a reply and back with # tags.

Also ctrl+s on browser saves the page and its resources fine.

I'm thinking about utilizing ipns for a catalog-like page storing all dead threads.

Not bad, but I don't really like the way everything's centered. There's also this:
github.com/VictorBjelkholm/ipfscrape
Basically just uses wget to make a local copy of a webpage, and then puts it on ipfs. Here's an example I just did with it of this thread:
gateway.ipfs.io/ipfs/QmedWxrKQZ9VRL7uahrnuQraP81YZg1NRXA3e8KmaWcZhD/
Of course, yours has the advantage of also saving the full-sized images, but it probably wouldn't be too difficult to modify this to do that, too.

I also really like the idea of someone using ipns to keep an Holla Forums archive. Would make for a great replacement of the indefinitely delayed 8archive.moe.

I would imagine you just pass the recurse flag to wget and it would work fine for at least 8ch.
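Roughly what that would look like, hedged since the exact wget flags needed will depend on the site's markup (<thread-url> and <saved-directory> are placeholders):

wget -r -k -p -np -E "<thread-url>"   # mirror the page plus its images/CSS into a directory named after the host
ipfs add -r <saved-directory>         # add the mirror; the last hash printed is the one to share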

so is this just static content?

That's what was originally done in , however the goal in
was to make the static copy independent of the site's javascript and unnecessary resources, as well as providing good support for thumbnails and original images with a clear file hierarchy.

Fairly much so, the idea is to ensure content immutability, so dynamic content is a bit hard to accomplish. However IPNS is meant to address this by letting a mutable name point to static hashes.

OH GOD NO!

Actually having this be able to run straight from the browser will substantially increase the userbase, as normies will be able to use it with no effort required. Then we will see how well it holds up in the real world.

I'm okay with this if the JS implementation is not the only one. I guess we have to be practical and pragmatic about certain things at this point.

So, after reading this thread and watching the demo on their website, this thing seems great. My only concern is the ability to search for content. Is there a way I can search for something like "make america great again" rather than having to know the specific hash of the specific file that I want?

...

HTTP has no way to search for content either; only recently have Google and others provided that. As with HTTP, you need something other than the protocol itself for that. Perhaps a service in I2P? Local instances of YaCy with IPNS to share lists of relations?

I guess the optimal strategy would be to have a wiki of SHA256 correspondences and couple that with some search engine tech.

Unlike bittorrent you can share a lot and still have little idea of what you're sharing (which is part of the appeal, but no help here).

What's unintentionally funny to me is that a "tracker" for IPFS is essentially just a table of file names and hashes of files; you don't need any more information to make it work. But to obtain that information I'm afraid we'll have to crowdsource the content indexing (at least for now).
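As a crude example of how low-tech such a "tracker" could be (the hash and title below are made up):

echo "QmSomeExampleHash...  Movie Title (1080p, h264)" >> index.txt   # one line per entry: hash + description
ipfs add index.txt                                                    # share the index itself as a hash
ipfs name publish /ipfs/<index-hash>                                  # optionally keep one stable ipns name pointing at the latest list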

but part of the intention is that you want to be able to have private files or file structures which only you can access because only you know the hash -- I think it's ultimately a good thing that you can't (I don't think) crawl the whole network. It's not ideal to have to rely on collaborative lists to find publicly shared files, but it's fine for something in alpha, and it's reasonable to assume that a bunch of different people will try to provide that service in the future via any number of different means. (Off the top of my head, one option could be to do something like soulseekqt where you can create user accounts and see files that others are sharing.)

...

What if there was an option to let users tag their files with certain words to use in searches. The fact that it's optional allows those who want privacy to continue to have it, but it also allows for easier searching of content.

...

Technically that's already possible.

All (well, most) files in ipfs are wrapped in what they're calling a "merkledag" ( github.com/ipfs/specs/tree/master/merkledag ), which usually provides meta-information about the object like what it contains and how to read it. It's more or less just JSON, and lets you embed arbitrary information in it. There's nothing stopping you from putting tags or anything else in that, but the problem is you would have to come to some kind of standard for it to be anything effective. The fact that you're then bloating a file with information not everyone would even want or need is also not a big plus.
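For the curious, a rough sketch of wrapping a file with arbitrary metadata today via "ipfs object put" (the Data/Links shape is the plain merkledag node JSON, but the tag convention itself is completely made up, and <file-hash> is a placeholder):

echo '{ "Data": "tags: anime, 1080p, subbed", "Links": [ { "Name": "episode01.mkv", "Hash": "<file-hash>", "Size": 0 } ] }' > node.json
ipfs object put node.json    # prints a new hash that bundles the file link together with the tag string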

Luckily, I hear they're near finalizing the IPLD spec ( github.com/ipfs/specs/blob/ipld-spec/merkledag/ipld.md ), which would help tremendously in creating interchangeable standards. On top of that, it would solve the bloat problem, too, since you could essentially just replace the metadata with a link to the metadata instead, making it an optional download only for people who need it, while still having it behave as though it was part of the same object.

You might be interested in mediachain.io/ . I haven't quite made up my mind about them yet, but they seem to be doing exactly what you're talking about, just with more of a focus on assigning authorship metadata rather than tag metadata. Still, they say that everything will just be based on the schema.org "CreativeWork" schema ( schema.org/CreativeWork ), which is more than capable of containing both. In fact, I'd probably use that schema for tags regardless, just because it seems to fit so nicely with IPLD.

I'd definitely keep my eye on them, but be warned that they're planning on using Blockchain Technology™ for aggregation, which has very questionable efficiency and is notoriously slow. Of course, if bigchaindb.com/ is anything to go by, this may be a solvable, if not solved, problem.


Also, as pointed out, there's Hydrus.
Pretty nifty little program made by a fellow user. I use it all the time.
As a follow up to the post he linked to, it looks like the Hydrus dev has since finished the basic IPFS integration, which is pretty cool.

localhost:8080/ipfs/QmciZetSaqjeinRv2Ck6Pv2iLWmQy8DPe5RFPchGCkBjYj/browser.html

It's a pub/sub demo the orbit guy made. Click the random color buttons to make it go.

So basically gnunet?
citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.81.6369&rep=rep1&type=pdf

Is the build system really this fucked?

It used to be just go get -u github.com/ipfs/go-ipfs but ever since that dependency debacle with npm that broke a ton of node.js projects they thought it would be a good idea to use their own dependency manager which uses IPFS so you will always be able to build even if the author decides to delete their repo. So now the entire build process they have listed is
go get -d github.com/ipfs/go-ipfs
cd $GOPATH/src/github.com/ipfs/go-ipfs
make install
Which installs gx (their dependency manager), the dependencies, builds, and installs.

I've been doing this since the switch though and it's been fine for me, the makefile should do all this for you anyway but I only rely on it for putting the commit number into the build.

go get -u github.com/ipfs/go-ipfs
go get -u github.com/whyrusleeping/gx-go
cd $GOPATH/src/github.com/ipfs/go-ipfs
gx i --global
make install

I'm not sure why that's shipped as a bin, maybe it's just using their official builds or maybe the maintainer doesn't want to have Go as a dependency since it compiles to a static binary anyway.

I'm a bit annoyed they didn't stick with just "go get" but at the same time the idea of a package manager on top of IPFS is something I really like a lot, no repository fuckery and you can guarantee you'll always get the exact thing you need if you reference a package by its hash. Seems pretty neat
github.com/whyrusleeping/gx

so on a scale of 80-100% how many files hosted on this are CP?

If there's a large percentage of CP on the IPFS network I haven't encountered it yet, you'd have to explicitly go looking for that kind of content, you won't bump into it on accident. It's not like freenet where it might just end up on your hard drive even if you don't want it.

Given that IPFS doesn't have any built in anonymity I'd guess the percentage is low anyway, maybe some of the people routing through tor or people experimenting with i2p are hosting some but I think it would be a bad idea to host something that illegal while this thing is still in alpha and not expect to get caught.

Are you concerned about law enforcement harassing you for using IPFS?

that and I don't want to unknowingly host CP because you know, I'm not a disgusting monster and stuff.

I don't believe you.

IPFS is an HTTP replacement protocol, not the implementation. The protocol itself, like HTTP, does nothing on its own. You may choose to use it however you wish.

> ipfs.pics/
> Report

> ipfs.pics/static/common.js

Address: 45.55.151.20

I would assume you could hack this function for any hash file and even spam the votes. Other than that page and the database on the ipfs.pics apache server, I don't think there is any method for user feedback on content hosted on IPFS, but the site says you can run ur own webserver gallery site as well, so you might as well set one up and see what it does in relation to the p2p or w/e.

if you send a file with add into the swarm and send its hash to another ipfs peer (ur buddy on some encrypted chat platform), there shouldn't be anything stopping ur buddy from receiving the data if he has ipfs running on his/her machine.

ipfs is more problematic when you try to use it as a web server for http(s), there are 7 gateway servers for legacy support to the www (browser client integration when?). This function is counter productive to the purpose of the network since it bottlenecks traffic to these 7 servers (AWS if I recall), these servers have hostnames and static IPs, listed below. you can call content directly with IP address using http or hostname for http/https services (otherwise problem with cert).

Address: 104.236.179.241 => pluto.i.ipfs.io
Address: 178.62.158.247 => earth.i.ipfs.io
Address: 162.243.248.213 => uranus.i.ipfs.io
Address: 178.62.61.185 => mercury.i.ipfs.io
Address: 104.236.151.122 => jupiter.i.ipfs.io
Address: 104.236.76.40 => venus.i.ipfs.io
Address: 104.236.176.52 => neptune.i.ipfs.io

this is the round robin DNS style so when you type in ipfs.io/ipfs/hash ur browser is pointed at one of those servers at random, there is no direct ipfs.io server (or it will redirect to localhost if you have the firefox addon and ur node is online).

this can be annoying when you try to send someone a file hosted on the p2p swarm because you never know what server it's being cached in. the servers are all supposed to cache content but in practice it takes time to propagate. to avoid this problem you can add the content from ur node, then run wget for earth.i.ipfs.io/ipfs/hash (or whatever planet u like) and watch it DL completely (the faster ur upload the better, imo nodes should be run on vps or other symmetrical connections).

once the file is DL'd with wget from that server you can safely send/embed/etc that direct hostname link. the content will stay online from that webserver, ur swarm peers, and bootstrap nodes, but it will still be more efficient to access the hash from a client node than to use the webserver because they are overworked. content should remain online for the short term (a week?) if you disconnect / delete ur node (burner box style), but for the content to stay up for any length you need to have it pinned and shared from ur node.
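in command form, using one of the gateway hostnames listed above (the hash placeholder is whatever your add printed):

ipfs add -r ./stuff                         # add and pin locally, note the root hash
wget "http://earth.i.ipfs.io/ipfs/<hash>"   # pull it through one specific gateway so that box caches it
# now that direct hostname link can be handed out and should load immediately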

because these are hosted on traditional web servers, content that has been cached on those servers can be removed with a copyright request:
> docs.google.com/forms/d/1fXchtQEQ6WDjedSk46ZZiye8vl4UxvyRZiltSOv2huc/viewform
> For all other inquiries, contact: [email protected]

the software itself is under the MIT license, it claims creators are not liable how the software is used.
> github.com/ipfs/ipfs/blob/master/LICENSE

tl;dr don't ever use the webserver / global domain if ur a tinfoil sicko.

if any of you nerds understand this better please correct me. pic related.

oh one last thing, as the other anons have mentioned, about being anoonoomush.

> github.com/ipfs/faq/issues/18

imo much the same as bittorrent: since the network is often described as 'one giant bittorrent', the same precautions should be taken as when running a bittorrent client. if you plan to use it for content you don't consider objectionable there's no reason not to run it directly, assuming ur ISP won't mind the extra traffic. if you want additional machines between you and ur IP endpoint consider using a VPN (assuming you have a fast endpoint) or a VPS/botnet host which supports the OS's listed here:
> ipfs.io/docs/install/
much like a seedbox, though you need to make sure the machine's filesystem is secured to whatever standards you consider necessary. because of this trade-off you may consider a vpn to be the ideal solution if you can support fast up/down speeds (depending on amount of data you are hosting).
> privacytools.io/#vpn

tl;dr anoonoomush is leechin, plz export us.

That'll keep the NSA busy!

I mean, it's always possible to replace the bootstrap nodes with servers you control, for a "private" IPFS network.
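
rough sketch of what swapping the bootstrap list looks like (the multiaddr is a placeholder for your own server's address and peer ID; note this only changes who you bootstrap from, it isn't real isolation):

  # drop the default public bootstrap nodes
  $ ipfs bootstrap rm --all

  # add your own node(s); get the peer ID by running `ipfs id` on that box
  $ ipfs bootstrap add /ip4/203.0.113.10/tcp/4001/ipfs/QmYourServersPeerID

  # repeat on every machine that should be part of the "private" swarm, then verify
  $ ipfs bootstrap list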

If anonymous routing is what you people want then wait for i2p integration instead of making many hops manually.

I hope this gets implemented
github.com/ipfs/go-ipfs/issues/2815
Instead of just being public or private, you could connect to both without them becoming joined.

what is weak typing and smeghead language design

I'm pleased with all this progress, I can't wait to see what happens when they actually make optimization passes, since they're just targeting implementation features right now. The fact that the Go compiler and runtime improvements should continue to happen in parallel by the Go team is also great since that naturally helps too.

In other news, adding files to IPFS without needing to copy them is still progressing.
github.com/ipfs-filestore/go-ipfs/wiki/Landing

Does anyone know what the i2p status is? People seem to want that but I am not following the developments on that front.

Does anyone know if ipfs's data blocks are compressed prior to transport? I haven't been able to find any information on if this is a planned feature and it would be great for reducing network load.

they are not? well, the blocks are raw. if you push a jpg file you can load each block and it will render as slices of the image; the dev did it in a demo once.

js-ipfs is now partially functional, and someone built an IRC clone with it.

orbit.libp2p.io/

Come join #tech

gophers pls go

Hello hipster Web 2.0 developer, scared people will stop running your shitty JavaScript code?

Will IPFS work over a VPN?

Yes, all traffic is forwarded through the VPN

Can Retroshare work with IPFS?

Retroshare connects 2+ people with an encrypted connection and has a robust file sharing system. IPFS shares and pulls files all across the world. I'm not sure what you would accomplish by using them together. If you want to share files with a small group of people, use Retroshare. If you want to share files with anyone on the Internet, use IPFS.

Look Who's Back (Hitler movie): /ipfs/QmScdNDp6y5VbvCbCH9rDEDbjRukYgaeoL9EAC4TUBaBcM

Windows Distributed File System?

No.

I learned that it supposedly works with Tahoe-lafs (which in turn will soon be i2p compatible). Breddy gool.

>github.com/ipfs/go-ipfs/issues/2875
>github.com/ipfs/community/blob/master/code-of-conduct.md
CcukoC'd

I like the Code of Conduct. It's a very handy warning sign.

What were the filenames?

But it won't hide your ip.

Thank you, but I am not interested.

I don't know, but the reaction itself is pretty "fun".

I don't like codes of conduct, but in this instance it's not outlandish to use one.

Haven't read the whole thing so it might be problematic, but still.

The lead dev's objections seem reasonable enough, he doesn't want any mentions of copyright infringement in his project's issues. That's fair, given the type of software being developed.

Impressive level of failure to communicate on a human level there.

you can go write tinfoilFS yourself, fagjew

...

...

The original one was a pirated movie and the replacement was "Github admins can suck it.mp4" or similar.

You use i2p or gnunet or anything similar? So your real IP is never known.

...

Then go use those, warez skid. IPFS is for grown-ups who create content, not leech it.

I'll give you 8/10, it's pretty well made. i2p already has tahoe-lafs, not your garbage collected joke, anyway.

For all the people going on about the anonymity thing: IPFS works just fucking fine over cjdns and Tor.

There's plans for i2p integration as well from what I understand, not sure if it's first or third party though who will implement it, probably third party i2p people.

The project lead commented on it just 9 days ago
github.com/ipfs/notes/issues/124


jej

I would love to see a gateway or an implementation.

/ipfs/QmVxMXwTGLwYYFNcA7w8JEN5UPrGowpVzN4niUrHZfzEoG

You should use the -w argument when adding single files to preserve the name/extension. It's an ODT about Hilary Clinton in case anyone is wondering.

did anyone archive that thread? I remember it having a shitload of ipfs links in it

Do we have any Holla Forums archives? I remember someone saying Holla Forums and Holla Forums were archived somewhere, then there's sheeky and/or leeky forums for text only archives, the only one I can find via Google is the pony fucker archive.

Why not just use archive.is? Also a bit off-topic but why is Holla Forums so depopulated? I remember we used to have around 700 active users here.

Is it written in C/Vala/anything compiled yet?
Can it work inside I2P yet?

Holla Forums is depopulated because people are either going back to 4chan or moving on. This happened to every other splinter-chan before 8ch, and the pattern isn't about to break.

On topic:

The canonical implementation is in Go, which compiles to static binaries.

3D printable AR-15 lower+30rd mag: /ipfs/QmXf58HvtUbM3HpnrFKyqwbrdwhPyQggzPEwgauypB6Hp6

There's nothing wrong with archive.is but you have to trigger an archive of a page manually. If nobody did that then the /a/ thread is gone or on some archive that automatically archives every thread for that board.

I'm pretty sure someone linked one of these before on Holla Forums but I can't even find it through Google.


I think the current implementations are Go (compiled) and JS (interpreted); the spec is there though, so someone could write it in anything. I expect people to port it once the reference implementation (in Go) is finalized, since then they'll have something to compare and test against.

I2P support is still being worked on as far as I know. See
There's been some activity in that thread recently again.

Once the spec and the reference implementation are done I fully expect to see ports to Rust and some JVM language. I might take a whack at a Clojure port myself once I'm not chasing a moving target.

Is there really still no i2p support? Why would I even use IPFS at this stage?

How about posting some content for ipfs you fags?

Any chance you could post the defcad mega pack to ipfs?


To test/experiment with IPFS implementations. Eventually I would like to try to build a video mirror site with the videos hosted on IPFS. See my post

You could always use a VPN with it like people do with torrents if you want extra anonymity. Native i2p support is planned for a future alpha/beta/final release.

Here's some videos about using ipfs someone did
ipfs.io/ipfs/QmfSZ7iK7hzaLKwe5iaYErK8PcxUuDwNbE9rCDBkSbE9V1

People on Holla Forums are using it to share games in their share threads.


It's still useful now and it should get better, there's no harm in getting familiar with concepts and methods before something is finished if you plan to use it in its finished state.

How do I make filenames in IPFS human-readable? I have a massive library of ebooks I would like to share.

For an individual file, you can tell IPFS to wrap it in a directory so that you can still see the filename, i.e.,
ipfs add -w book1.epub
will give you the hash of a directory containing the single file book1.epub, so that you can then reference it as /ipfs/<wrapper hash>/book1.epub.

If you have a whole library, then it works exactly the same. Say you (recursively) add your whole library and it gives you <library hash> as its hash:
# ipfs ls <library hash>           -> author1, author2, ...
# ipfs ls <library hash>/author1   -> book1.epub, book2.epub, ...
so you can reference the book as /ipfs/<library hash>/author1/book1.epub

Does that make sense? tl;dr you only need to provide a hash for the topmost directory, everything after that is structured normally.
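
to make that concrete, a minimal sketch with made-up paths and placeholder hashes (the add output format is approximate):

  $ ipfs add -r ~/Library
  added QmBookPlaceholder    Library/author1/book1.epub
  added QmAuthorPlaceholder  Library/author1
  added QmLibraryPlaceholder Library

  $ ipfs ls QmLibraryPlaceholder            # lists author1, author2, ...
  $ ipfs ls QmLibraryPlaceholder/author1    # lists book1.epub, ...

  # so the book is reachable at /ipfs/QmLibraryPlaceholder/author1/book1.epub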

So only the 'root' of whatever you are adding has its filename hashed when using the -w argument. Perfect. Ok now a follow up question; can I tell IPFS to create a symlink for each file instead of copying them into the .ipfs folder?

See

This has been a widely requested feature and has been in the works for quite a bit.
We'll get it when it's ready. I'm hoping by the end of the year.

just reading the filestore thread, some dude did a stress-test with 12.5TB of video, it took 115 hours to add (that's 1.8GB/min) and it only added 9.7GB to his .ipfs folder. Pretty neat. Fingers crossed it becomes a normal part of the fs some time soon. (I find it weird that the devs treat this so casually, though, marking it as an enhancement rather than a necessary fix. Especially since they're all about trying to get it to just werk in the background.)


To clarify: all the files in the directory get hashed; in fact even individual files get chunked and those chunks get assigned hashes. Try the ipfs object links command on a directory and then on a (large) file to see what I mean. The -w argument just creates a dummy directory that wraps whatever you're adding, giving you the option of seeing everything in human-readable form. So if you had a file ~/Library/author/book.epub you could add Library normally, in which case you'd get /ipfs/<Library hash>/author/book.epub, or if you wrapped it then you could reference the book with /ipfs/<wrapper hash>/Library/author/book.epub.
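
if you want to poke at the chunking yourself, something like this works (placeholder hashes, output omitted):

  # links of a directory object are its named entries
  $ ipfs ls QmSomeDirPlaceholder

  # links of a big file object are its raw chunks (hashes and sizes, no names)
  $ ipfs object links QmSomeBigFilePlaceholder

  # each chunk can be pulled out on its own, which is how the slice-of-a-jpeg demo works
  $ ipfs cat QmSomeChunkPlaceholder > chunk.bin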

I'm not familiar with that.


Well that's fucking fantastic. It was 1:1 in disk and RAM usage in the 0.3.x days. I nearly made my server shit itself uploading the Tenchi Muyo archive.

I'm guessing they're focusing more on stabilizing/fixing/optimizing the current core stuff than adding more functionality at this point. It looks like the work for the patch is being done by a non-official dev, so I'm sure the official dev team is more than happy to get this feature completed without having to spend their own time on it.


It's a pack of 3D printable gun blueprints. Since you posted a mag I thought you might have more.

Do you post your hashes anywhere? I have over 3TB of anime but I'm waiting for the non-duplicate feature to mainline before I add anything bigger than a few gigabytes to IPFS.

Tenchi Muyo: /ipfs/QmYRsSadz9L7VBtfgdJYdxyRzcAuaxrHUih5zZY3UzQFo2

Entire franchise, ~24G of 480p divx-in-ogm video.

Thanks for the link, but I was asking more generally. More people should post content in these threads but they don't.

There's updated h264 rips out there if you want better quality.

>github.com/ipfs/notes/issues/154

They've finally turned their sights on pubsub. Happily, they're going to do it the right way with appropriate multicasting rather than having a massive BW requirement on the uploader.

/ipfs/QmVDdjaqFr9DYi4F4whQMre3WGVePDuVy7zvuNsZJis3RG/Windows

OS isos. Go nuts.

I agree, but this was a pretty decent thread for talking about & understanding the underlying tech, which is also pretty important. Hopefully by the time the next one comes around, more people will be using it and a proper share thread can develop. As pointed out above, there's a huge thread on Holla Forums where they're sharing their l33t undergr0und gaymes over ipfs.

Here's this thread itself, for a bit of recursion:
QmVM6ZRmnAYNN9mNtRxBaGuP568ZDwZpHhUs3942dSy5z6/ipfs-thread.html

Here's a page someone on /g/ made ages ago collecting a bunch of links. It was dead for ages but appears to have been re-seeded, so pin it while it's hot:
QmNgzvC1Y5dh5pQvPfZpZUkoHuB6Z7xwobo5Rv19nZcwA8

The /g/ index is only 21KB so there's no excuse for not pinning it. If you're going to pin this thread, do it for the parent directory not the html file so that pictures and CSS get included (all up it's 1.8MB).

Holy crackers.
Very cool. This is exciting news, I have TB's of content that I'd also like to share but can't afford to duplicate my entire disk array.


Is this more for real-time dynamic content like text and video streams, or is the pub-sub model intended to solve the IPNS limitations, or is it both?

I remember people saying to wait for pub-sub instead of using IPNS right now. IPNS works but the limitation of one published name per node is restricting; if this allows you to just publish some hash that is intended to be dynamic or blockchain-like, that would be great. Especially the latter: it would be cool to have a self-archiving website built on that, want to see an older version of the site? Just move backwards on the chain and grab the hash it used to point at.
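
For reference, the IPNS mechanics being discussed here look like this (placeholder hashes; every new publish from your node overwrites the previous one, which is the one-name-per-node limit):

  # point your node's peer ID at the current version of your content
  $ ipfs name publish QmCurrentVersionPlaceholder
  Published to QmYourPeerID: /ipfs/QmCurrentVersionPlaceholder

  # anyone can follow the mutable pointer
  $ ipfs name resolve QmYourPeerID
  /ipfs/QmCurrentVersionPlaceholder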

I understand a typical pub-sub concept but I'm unsure of how the IPFS implementations are intended to function and how users are to interact with it. I guess they're nailing that down themselves now.


Remember to use the canonical path when posting hashes, if people are using the IPFS browser addons it makes the links clickable which is nice.
/ipfs/QmVM6ZRmnAYNN9mNtRxBaGuP568ZDwZpHhUs3942dSy5z6/ipfs-thread.html
/ipfs/QmNgzvC1Y5dh5pQvPfZpZUkoHuB6Z7xwobo5Rv19nZcwA8

It seems to work better than the greasemonkey script, that one breaks when you point to files past the hash.


I wasn't sure if it would be appropriate or not for this thread since this is more about the tech itself not content distributed by it. I'm not sure what kind of stuff to share anyway.

Besides that though I think a lot of people are waiting on the project to mature a bit, some of the concepts they talk about are really neat and I'm waiting for them to be implemented before I do a lot of stuff.

I'm personally waiting on the no-copy feature, better ways to host dynamic content than IPNS, and maybe some kind of metadata management in the webui or something. Right now I just keep a text document containing names and hashes; most if not all of those came from previous ipfs threads on Holla Forums. It'd be nice to have something like we do with pins but with metadata, `ipfs meta *hash* "some string to associate with this hash locally only"`, then maybe an `ls -meta` which prints out hashes with metadata and what that metadata is. I think IPLD is going to have relations between hashes or something; I know they've talked about this on github and it would be nice to have.

Another thing is possible format changes, I'm interested to see if they develop any more formats like they did with the trickle-dag

The dynamic content thing is big for me because then I could just post a hash in this thread and be like "here's where I will post all my shit" or something like that.

You might want to check out >>>Holla Forums10294300

If I wasn't lazy I'd organize all my image content and start an image hash sharing thread on >>>/hydrus/ but I have too many images to sort in my lifetime.

I hope this is formatted in a readable way, I'm very hot and tired.

Adolf Hitler The Greatest Story Never Told
/ipfs/QmWfbi8YxbcWVBSJ7GZLZ2x1oqD499bGjvJoMoUcrmkM32
torrentproject.se/bc0b911654e2795536370f8cae59d123db4b95b4/The-Greatest-Story-Never-Told-torrent.html

Huma Abedin leaks:

/ipfs/QmW4W6nXpY4ToU1T2yUBw68xjcCFCWinDu6RZDvyUDT3Vd

github.com/Kubuxu/ipns-pub

Archived >>>Holla Forums10294300
archive.is/VzKIH

It's both. It's a real pain trying to use IPNS for real-time communication, since it takes ~10-20s to resolve. For many uses this is unacceptable, hence pub-sub will be used to enable real-time sending of updates. For instance, were you to build an imageboard on top of IPFS, you'd find that when posting, the delay of republishing to the DHT and then waiting for clients to pick up the new post could end up being >45s, which is way too painful.

Bumping with networking related textbooks.
/ipfs/QmPzAKWQw9tezEYMPGBxxNsYkFCP7KEKiPRupQbbqasoj2

Is it realistic to create a video streaming platform using IPFS?
My idea is to make a website that allows the user to watch all the currently airing anime. The website would just pretty-print the directory listing of the node. The node contains the hashes for all episodes of the currently airing anime.

Probably more realistic when js-ipfs gets fully rolled out, though for now there's a proof-of-concept player located at /ipfs/QmVc6zuAneKJzicnJpfrqCH9gSy6bz54JhcypfJYhGUFQu/play#/ipfs/QmTKZgRNwDNZwHtJSjCp6r5FYefzpULfy37JvMt9DwvXse

The directory listing itself would just be an IPNS name, I guess, that could be routinely updated. The problem is that, as it currently stands, you can't have multiple people all publishing to one site (I don't think).
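
The "routinely updated" part could be as dumb as a cron job on the single publishing node, something like this sketch (made-up path, and it leans on the one-publish-per-node IPNS behaviour discussed above):

  #!/bin/sh
  # re-add the episode directory and repoint this node's IPNS name at the new root hash
  NEW_HASH=$(ipfs add -r /srv/airing-anime | tail -n 1 | awk '{print $2}')
  ipfs name publish "$NEW_HASH"
  # viewers keep hitting /ipns/<this node's peer ID>/ and always get the latest listing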

We need a new thread, this one is about to fall off the board.

So make a new thread. Nothing is stopping you.

he knows that whoever starts the next thread will need to provide some content, to make it a sharing thread, and being a leech he doesn't have any


calm down, the next thread will come when it comes