IPFS thread

Updates
The long-anticipated filestore was added in 0.4.7. Now you can add files without a (significant) increase in hard drive usage.
Enable it with the command:
ipfs config --json Experimental.FilestoreEnabled true
And restart your daemon. Now you can add files with:
ipfs add --nocopy
It averages out to about 1MB of new data for 1GB of files added, so it's now feasible to share an entire hard drive - but it will take a while to process. I don't recommend you add too much at once, as I found you must run garbage collection to shrink your filestore.
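Put together, the whole flow looks something like this (a sketch; the function names are made up, and it assumes go-ipfs 0.4.7+ with the daemon restarted after the config change):

```shell
# Enable the experimental filestore (one-time; restart the daemon after).
enable_filestore() {
    ipfs config --json Experimental.FilestoreEnabled true
}

# Add a path by reference: blocks point at the original files on disk
# instead of being copied into the repo, so disk usage barely grows.
add_by_reference() {
    ipfs add --nocopy --recursive "$1"
}
```

Keep in mind that moving or editing a file added with --nocopy presumably breaks its filestore references, which is another reason not to dump a whole drive in at once.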

0.4.8 released.
An experimental implementation of directory sharding is now an option. It greatly improves performance with very large directories.
Enable it with the command:
ipfs config --json Experimental.ShardingEnabled true
And restart your daemon.
WARNING: It will change hashes and is not backwards compatible with earlier versions. Only 0.4.8 users can access these files!
NAT port mapping, the infamous router-killing feature, can now be disabled with a config option.
I'm not really sure how to disable it, though. Here's the PR for reference: github.com/ipfs/go-ipfs/pull/3798
Filestore utilities have been added.
filestore ls lists the files in your filestore along with their hashes and on-disk locations.
filestore verify checks a hash to see whether the backing file is still available to seed.
filestore dups finds files that are in both your filestore and your block storage - files also added without the --nocopy option - so you can remove the redundant copies.
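A sketch of a maintenance pass that uses all three together (the report function is made up; assumes a daemon with the filestore enabled):

```shell
# Summarize filestore health: what's in it, whether the backing files
# on disk are still readable, and which blocks are stored twice.
filestore_report() {
    echo "== contents (hash, size, path) =="
    ipfs filestore ls
    echo "== verifying backing files =="
    ipfs filestore verify
    echo "== also present in block storage =="
    ipfs filestore dups
}
```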

tl;dr for Beginners

How it Works
When you add a file, it is cryptographically hashed and a Merkle tree is created. The IPFS client announces these hashes to nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes, and the nodes set up peer connections automatically. If two users share the same file, both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require that all seeders use the same torrent.
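The core property - identical bytes get an identical address, regardless of filename or owner - can be demonstrated with an ordinary hash. (IPFS actually chunks files and encodes hashes as multihashes, so a real IPFS hash is not a raw sha256, but the principle is the same.)

```shell
# Content addressing in miniature: the address is derived purely from
# the bytes, so two peers holding the same file can both serve a
# request for it, even if their copies came from different sources.
printf 'hello world\n' > /tmp/a.txt
printf 'hello world\n' > /tmp/b.txt
h1=$(sha256sum /tmp/a.txt | cut -d' ' -f1)
h2=$(sha256sum /tmp/b.txt | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "same content, same address"
```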

FAQ
It's about as safe as a torrent right now, ignoring the relative-obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.

Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.

You be the judge.
It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration), both in active development and functional right now. It has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.

Websites of interest
gateway.ipfs.io/ipfs/
Official IPFS HTTP gateway. Append a hash to this and it will fetch the file from the network. Be warned that this gateway is slower than using the client, and it complies with DMCA takedowns.

ipfs-search.com/
Search for IPFS files. Automatically scrapes metadata from the DHT.

glop.me/
Pomf clone that utilizes IPFS. Currently has a 10MB limit.
Also hosts a gateway at gateway.glop.me, which hasn't received any DMCA requests as far as I can tell.

ipfs.pics/ (dead)
Image host that utilizes IPFS.

github.com/ipfs/go-ipfs/milestone/32
Milestones for 0.4.9. Get a sneak peek of the next version's features.

How's the pubsub thing?
Like, how good is it, and does one need to pay attention to any caveats (aside from config changes) to use it?
Can I have something like a blog by just telling people "hey, sub to some-shit-name to get things from me" and then using the command to publish a file/hash?

I haven't tried it because I couldn't find out how it worked. It turns out they snuck the ipfs pubsub feature in without documenting it in the --help menu. I had to grep the ipfs commands list to find it.

USAGE
  ipfs pubsub - An experimental publish-subscribe system on ipfs.

SYNOPSIS
  ipfs pubsub

DESCRIPTION
  ipfs pubsub allows you to publish messages to a given topic, and also to
  subscribe to new messages on a given topic. This is an experimental feature.
  It is not intended in its current state to be used in a production
  environment. To use, the daemon must be run with '--enable-pubsub-experiment'.

SUBCOMMANDS
  ipfs pubsub ls                    - List subscribed topics by name.
  ipfs pubsub peers [<topic>]       - List peers we are currently pubsubbing with.
  ipfs pubsub pub <topic> <data>... - Publish a message to a given pubsub topic.
  ipfs pubsub sub <topic>           - Subscribe to messages on a given topic.

  Use 'ipfs pubsub <subcmd> --help' for more information about each command.

I'll play around with it. It would be very convenient to use this to create a "here's my shit" link that you can share with others.
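The "blog over pubsub" idea from upthread might look something like this (a sketch; both function names and the topic are made up, and the daemon has to be started with --enable-pubsub-experiment):

```shell
# Publisher: add a post, then announce its hash on an agreed topic.
publish_post() {
    topic="$1"; file="$2"
    hash=$(ipfs add -q "$file" | tail -n1)  # -q prints hashes only
    ipfs pubsub pub "$topic" "$hash"
}

# Reader: block on the topic and fetch each hash as it's announced.
follow_topic() {
    ipfs pubsub sub "$1" | while read -r hash; do
        ipfs get "$hash"
    done
}
```

So readers run follow_topic myblog while you run publish_post myblog newpost.txt.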

Can I calculate the hash of a local file and see if it already exists in IPFS?

I2P support when? IPFS inside I2P would be the ideal internet. All the benefits of IPFS, anonymous.

i2p and Tahoe-LAFS already exist

Great, now when can I run ipfs as an unprivileged user and use this on my media without fucking around with bind mounts?

True, but nobody uses it. IPFS looks much friendlier to new adopters.

To be honest I'm more interested in the sub than the pub.
I had an idea for a daemon subscribed to a public topic where people publish hashes of files they have uploaded, so that when the sub daemon sees a new pub it pins the hash.
It's essentially a cheap pure-IPFS file storage service.
Unfortunately I don't have the resources to try it out myself (I lack a dedicated machine to act as the sub daemon), so I can't really check how pubsub works for this kind of stuff.

I don't even know how you managed to fuck up the installation this hard.


I tried it out. I published to a topic easily enough (the command went through instantly), but when I tried to subscribe to it, the command hung for maybe 5 minutes before I killed it. I highly doubt anyone published to the topic, so I don't know why it took so long. I'll try it again, but I don't have a lot of hope for it working.
Hopefully in the future pubsub will be tied to IPNS identities so you can follow a specific publisher.

Same as with i2p: waiting for a C/C++ version before even thinking about it. I'm on Gentoo, so using special snowflake languages isn't cool.

Reposting the videos from the dead thread.
/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

`ipfs add -n *file(s)*` will get the hash(es) only. `ipfs dht findprovs *hash*` will show if anyone online has it.
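Those two combined into a single check (a sketch; the helper name is made up and a running daemon is assumed):

```shell
# Hash a file locally without adding it (-n), then ask the DHT who can
# provide that hash. Any output means someone is already seeding it.
already_on_ipfs() {
    hash=$(ipfs add -n -q "$1" | tail -n1)
    ipfs dht findprovs "$hash"
}
```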

I'm waiting for the java version

Is IPFS developed by Holla Forums users or something?

I don't think so but I think most of us can use it to share things easily. Like source code, or anime.

# chpst -u ipfs /usr/bin/ipfs daemon
$ ipfs add --raw-leaves --recursive --nocopy test/
 181.15 KB / ? [------------------------------------------------------------------=-----------------------------]
01:53:54.173 ERROR commands/h: cannot add filestore references outside ipfs root client.go:247
Error: cannot add filestore references outside ipfs root
I don't download random binaries off github and run them as my normal user because I'm not a complete fucktard. Stupid fucking Go users.

bring it up to the devs, user

I've never used chpst, but don't you think it's because you're running the daemon as the ipfs user while running the add command as your "normal user"?

Previous thread archive.fo/ymVRe

hey, testing out a bot I made for IPFS threads:

IPFSbot:

Make a post with this syntax. The bot will attempt to download the torrent, add the files to IPFS, then post the link ITT.

please don't abuse it

Forgot to mention, the limit is 3 gigs per torrent

Yeah, that's not a good idea. Someone will queue up CP.

Or worse.

IPFSbot:09b156fe1fd6aee386013d94bba1a94c17565187

I couldn't find an easy way of printing a directory recursively so I made a small script. If you can make it print the actual subdirectory structure it would be a big help.
#!/bin/bash
# Outputs the entire directory tree to a file
# Does not keep track of the exact subdirectory structure
echo "Retrieving simple directory tree. May take a while..."
hash="$1"
(
tree_all() {
    ipfs ls "$hash" | while read -r line; do
        case "$line" in
            */)
                echo "$line"
                hash=$(echo "$line" | awk '{print $1}')
                tree_all
                ;;
            *)
                echo "$line"
                ;;
        esac
    done
}
tree_all
) > /tmp/"$1"_all
# Splits up the 3 columns if you need to process them separately
cat /tmp/"$1"_all | awk '{print $1}' > /tmp/"$1"_hash
cat /tmp/"$1"_all | awk '{print $2}' > /tmp/"$1"_size
cat /tmp/"$1"_all | awk '{for(i=3; i<=NF; i++) printf "%s ", $i; print ""}' > /tmp/"$1"_name

It's telling you it can't add references outside its root. If your datastore is in ~/.ipfs, it can --nocopy anything under ~/. If it's in /var/mything/stuff/.ipfs, it can --nocopy anything under /var/mything/stuff/. I don't know why that limitation is there.
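If that limitation bites, one workaround is to put the repo above the media before adding anything (a sketch; the paths are made up, and IPFS_PATH is the environment variable go-ipfs uses to locate its repo):

```shell
# Initialize the repo inside the directory holding your media, so the
# media falls under the filestore's allowed root.
setup_repo_above_media() {
    export IPFS_PATH=/var/media/.ipfs   # root becomes /var/media
    ipfs init
    ipfs config --json Experimental.FilestoreEnabled true
}
```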

Interesting. Hope to see it combined with their authentication structure so you can subscribe to a particular node rather than just a topic string.

...

It's probably not documented in the main --help because it's experimental, all that stuff is here.
github.com/ipfs/go-ipfs/issues/3397

Layman user here; it's been months since I've visited an IPFS thread. How usable is it right now? By that I mean: can normal websites be hosted on it yet, and is sharing files with it better than the alternatives?

How much Japanese porn is on IPFS?

The idle spam traffic issue got slightly better (but don't get your hopes up if you have DSL). Overall, though, IPFS is still a meme: inefficient and slow compared to BitTorrent.

IPFS is a filesystem. There aren't going to be websites with backends on IPFS; the server logic has to be hosted on another service.

Neocities already hosts its sites on IPFS.

So Gentoo is so shit you can't even run Go?

With --nocopy, it's functional enough to support share threads. You can host static websites with it just fine, but development on js-ipfs is still underway. I'd wait until that hits some serious milestones before integrating it into your website.


Just wait until I finally organize my JAV collection.

This right here.
Combine the two and technically you've made half of the stuff the entire internet runs on obsolete and deprecated.

No, it means I have to compile the whole shit for only one program. No, thanks.

As opposed to everything else on Gentoo? Oh no, it will take me a whole extra 2 seconds to compile Go, a language that boasts fast compilation times and easy building.
If you're running Gentoo or any source-based distribution, you'd better have your build system set up to be fast if you're using it over binaries. If you can't show me proof of some kind of build cache, distributed compiling, and an in-memory filesystem for builds on your machine right now, I'm going to revoke your Gentoo rights and you'll be forced to install Ubuntu. I'm sorry, but it's how it has to be; you knew what you were signing up for when you chose to use ports.

There's a C++ i2p client, btw.

How would normies go about benefitting from this?

Once upon a time I could watch some movies and TV shows for free with IPFS, but the thread I used for finding IPFS links died (amazingly, it was on halfchan Holla Forums, I swear). I still have no clue how to find anything on the IPFS network, but it'd be so cool if I could; no more torrenting for popular movies...

>ipfs.pics/ (dead)
It's not dead.


As with almost all technical improvements, the benefits will go unnoticed by normies. Honestly, that's true for a lot of improvements: you only notice shit when it annoys you; it's impossible to see when something didn't fuck up.

The ones that come to mind are things like local network sharing ("wow, that thing you sent me loaded hella fast") and eventually message passing: even if they can't access a page, maybe someone they're connected to can, and will serve it to them. Fewer timeouts and errors like that; even if their connections are spotty, they'll probably have some redundant way of communicating that IPFS would utilize, like short-range ad-hoc shit.

For now there's this
ipfs-search.com/
maybe someone will make something better later or something will be integrated.

People sometimes use IPFS to share things on Holla Forums in the share threads. It's been in the OP template forever now, second only to BitTorrent, though most people use BT magnets.

Once filestore is fully implemented I'll be adding my entire drive, the only thing I'm waiting on is filestore rm support.

My bad, it was down when I wrote the OP and when I searched for mentions of it, most people were acknowledging it had been down for a while. Good to know it's still alive.

If getting caught torrenting is your concern, IPFS can't really help you. It works in a similar manner, and your IP can be exposed to copyright trolls. That being said, its relative obscurity means you are probably safe. Outside of gateways getting DMCA takedowns - e.g. the Gentoomen Library collection, which you can find at QmTmMhRv2nh889JfYBWXdxSvNS6zWnh4QFo4Q2knV7Ei2B - no actions have been taken to attack the IPFS network.

Why would you do that to yourself?

I used bash expansion in a loop before I changed it to be recursive. Forgot to change it.

I tried running pubsub locally, and it seems that when subbing, the client waits for someone to publish something before echoing the received data; subbing after someone has published does not retrieve the already-published data.
That's probably why you waited 5 minutes without anything happening.

Not official, but I remember seeing this a while ago: bitsharestalk.org/index.php/topic,22576.msg301280.html#msg301280 . Here's the actual repo: github.com/kenCode-de/c-ipfs . It looks like it's still being actively developed, but I haven't bothered to try it yet.

/ipfs/QmcCbCgQibX6a6UwXhw352FBktSeFKwbJJ5LRHSNKkv2MW
ipfs hash for those NSA leaks that happened today

...

Add a "dnslink=/ipns/$(ipfs id -f="<id>")" TXT record for your domain name, and you can then do `ipfs name publish $directory_hash` to make the contents visible at /ipns/[domain]/. If you want a normal webserver to serve those, just proxy the requests to the IPFS gateway port and pass the Host header; it works automatically.

With that setup you can even have the DNS point to a non-local ipns ID and use that to update remotely without ever touching the webserver.
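The publishing half of that setup as a command (a sketch; the function name is made up, and the DNS record and proxying are configured separately):

```shell
# Publish a site directory under your IPNS name. With the dnslink TXT
# record in place, it becomes reachable at /ipns/yourdomain/.
publish_site() {
    dir_hash=$(ipfs add -r -q "$1" | tail -n1)  # last hash is the root dir
    ipfs name publish "$dir_hash"
}
```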

what a fucking meme. Are there any HTTP gateways that don't comply? Is it possible to build your own gateway? Otherwise this software is basically useless.

gateway.glop.me is the only other one I know of. It hasn't served any takedowns as far as I can tell.

Maybe that is because the gateway owner likes to stay out of jail.

heh

wtf i hate ipfs now

The JS IPFS library is completed yeah? IPFSTube when?

All you have to do is change the interface the gateway listens on to something other than localhost in the config file; then people can access it while your daemon is up. Specifically, it's the "Gateway" option in the Addresses block:
github.com/ipfs/go-ipfs/blob/master/docs/config.md#addresses

Most people are using nginx as a reverse proxy in front of it. You can see how ipfs.io is set up here:
github.com/ipfs/infrastructure/
But it really is as simple as exposing the port and linking people to it (mydomain.com:8080/ipfs/somehash), or telling nginx about it and forwarding traffic on /ipfs to it, so the gateway is only exposed locally but proxied by the public nginx server.
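The expose-the-port option as a command (a sketch; note that 0.0.0.0 listens on all interfaces, so anyone who can reach port 8080 can fetch through your node):

```shell
# Make the gateway listen on all interfaces instead of localhost,
# then restart the daemon. Others can then fetch
# http://yourhost:8080/ipfs/<hash> directly.
expose_gateway() {
    ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
}
```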

There's no blacklist by default; you have to set one up if you want one. I think you can find out how here, but I don't care to look into it:
github.com/ipfs/refs

There are also plans to make the gateway read-only, so it will only serve things you already have instead of fetching whatever is requested. Then you can add a bunch of shit and link it publicly without worrying about people downloading things with your repo.
github.com/ipfs/go-ipfs/issues/3596

There are hardly any WebRTC-aware nodes, AFAIK.

Are there any countries that don't have regulations like that?

If you guys want to turn off NAT port mapping (it's part of what makes your internet connection and router work with IPFS, but turning it off will connect you to fewer nodes), do
ipfs config --bool DisableNatPortMap true

Thanks, that's good to see.

Nobody seems to have noticed, but the madmen are trying to implement actual .torrent file support in js-ipfs: github.com/ipfs/js-ipfs/issues/779

For now it looks like they're just incorporating navigation of .torrent files and magnets into js-ipld, but when the whole project is completed it's going to be nuts. I have a mental image of what it might look like, if my understanding is correct:
Straight black magic shit. And I haven't even gotten into js-ipfs specific features that utilize Node.js web services or browsers.

* - I'm not sure if this is in the spec, but it's trivial to add on so it would be pretty likely.

I've watched a bunch of videos on this, and that does seem to be the goal they're going for. The tutorial videos use the term "reachable," since IPFS spans multiple networks and protocols to fetch content: if you can reach a peer, you should be able to get the content they're sharing, no matter how they're sharing it or what network they use. In the future, with peer-to-peer message passing/relaying, it will be even crazier, because then it's not a matter of "can I reach them?" but "do I know anyone I can reach who can reach them for me?" None of these concepts are new; it's just that nobody ever implemented them all under a single project before. I'm loving this. It's really how we should all be connected: let the daemon handle all the networking bullshit while you worry about the content itself, whatever and wherever it is. That even sounds like what HTTP aimed to be, and HTTP did an alright job, but this seems much better; they learned from all the headaches people had traversing the network just to access data on another machine.

Some mature programs (relatively "old" programs that still have a userbase because of a concept they implement) that come to mind are Freenet, I2P, DC/ADC, ed2k, BitTorrent, Perfect Dark, anything that bridged two or more of these somehow, and a bunch more. If you're familiar with P2P file-sharing applications, you recognize that these concepts can be implemented, and implemented well, so the IPFS team's goal doesn't actually seem unrealistic. It's just a matter of someone rounding these concepts up and giving them a standard implementation that interacts with their other implementations; IPFS isn't an application unto itself so much as a giant wad of network glue code, connecting everything to everything. If you think about it, that's not impossible at all; it's all technically possible, and most of it has been tried before. All our computers are in contact with each other somehow, usually by a route shorter than traversing a typical ISP path, but nobody takes advantage of all that connectivity.

My only concern with IPFS is how hard it might be for an attacker to take it down.

Surely, I could just hash a shitload of junk files and then slam so much into the DHT that it becomes a bloated mess, no?

Are there protections against this?

Anyway, looking forward to it stabilizing a bit more and then getting native integration in browsers. At that point, we can look at fucking up most service monoliths (YouTube, Twitter, Facebook, Netflix, etc).

I get it now

It's like turning the entire internet into your personal hard drive.

That's neat.

Right now trying to legitimately add more than ~1k files at a time causes so much DHT traffic it'll crash most home routers. Trying to DoS IPFS is like pissing into a sun made of piss.

IIRC, they're not really concerned with security yet, since the base implementations are subject to rapid change. I know there are plans to have it audited multiple times once it's feature-complete, but obviously they're not there yet.
github.com/ipfs/go-ipfs/blob/master/assets/init-doc/security-notes

As for spamming the DHT specifically: I think I remember reading about DHT spam in other systems, maybe BitTorrent. The takeaway was that it takes a SHITLOAD of bandwidth from multiple peers to sustain an attack that actually has impact; it seemed difficult even with a large botnet. If IPFS were to grow large, it would become increasingly difficult to inconvenience peers on purpose without exploiting some security flaw, as opposed to just brute-force attacks. I think what you described is the worst-case scenario: some people could puff up the DHT a little bit, for a little while, and make you process more records or whatever.
Wikipedia doesn't have much to say, but maybe there are good security write-ups on the topic somewhere else.
en.wikipedia.org/wiki/Distributed_hash_table#Security


For some reason, thinking of it that way reminds me of Plan 9. It seemed like their goal was to turn every computer on the network into your personal computer; that's an oversimplification, but what I mean is being able to utilize the hardware of one machine from another.
I wish I understood Plan 9 better than I do; stuff like that seems amazing.
plan9.bell-labs.com/wiki/plan9/Configuring_a_Standalone_CPU_Server/index.html
I know the IPFS team mentions it sometimes, so they're aware of the project's interesting concepts and may be inspired by parts of it.
There's an image that usually gets used against Rob Pike, but I feel like people take the network quote out of context. He doesn't mean "cloud" services like people think; it's stuff like this, where there's no reason it shouldn't be that way, but we just don't take advantage of the systems and technologies we already have. We really do waste them by not combining them. Obviously this is an ideal that would be hard to realize, and very communistic: why should I lend my hardware to the global botnet if I'm not going to take full advantage of it myself? For storage only it seems a little more practical, though, so maybe we can start there and expand out slowly. Who knows.


jej
The state of consumer routers is pretty damn terrible, imo, but I still hope the IPFS team works around their awkward and arbitrary limitations. I've got the best equipment my ISP has to offer, and I can still take it offline if I try hard enough. What a racket they run; I may as well just build my own. My tinfoil hat says they do this on purpose as some kind of piracy prevention. ISPs seem to hate every P2P system; do they hate them enough to gimp their own equipment? Probably, since nobody will complain about it.

i have judged

github.com/ipfs/js-ipfs/issues/800#issuecomment-290988388
They're getting damn close to having a fully operational js-ipfs in the browser. The two remaining components are js-libp2p-circuit and js-libp2p-kad-dht, and from what I can see, they're both pretty much done. js-ipfs is at 0.23 now, and from what the dev is saying, the DHT and relay should drop in either 0.24 or 0.25.

I'm fucking excited, if it actually works this will be a genuine game changer.

[GNUnet-developers] IPFS similar project competing rather than helping
lists.gnu.org/archive/html/gnunet-developers/2017-04/msg00008.html
lists.gnu.org/archive/html/gnunet-developers/2017-04/msg00009.html

That OP sounds like a baby.

The fuck? IPFS doesn't have a block chain. I don't think this guy understands what the project is about.

I'm too retarded to figure out how to install and use this.

See
ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

Pic related on:
ipfs.io/

wrong pic sorry

gnunet was designed for the taller system while IPFS wasn't and had it added afterward

I think he knows a lot more about these P2P mesh projects than any of us:
lists.gnu.org/archive/html/help-gnunet/2017-04/msg00001.html

Yeah, but that doesn't make IPFS itself a blockchain. You can quite logically build a blockchain on top of IPFS, but that isn't part of IPFS; that would be whatever program you've built on top of it.

Same thing with gnunet; it's a framework.
The thing is that gnunet was made to be censorship-resistant and completely anonymous (if wanted).
GNUnet can be the web 3.0.
It's pretty versatile.
It resolves every problem we have with the current web, which transmits everything in clear text by default and keeps logs of everything.
It's basically resetting the web and correcting its flaws.
I don't get why people are so enthusiastic about IPFS or I2P.
Because they're simple to use right now?

Just found this
gnunet.org/bugs/view.php?id=4610

When I enter the hash of a file from ipfs-search into the "enter hash or path" bar, nothing happens.

What do I do?

Let Holla Forumsnical people worry about it for now.

I don't understand this criticism. They complain that it's too similar and reinvents the wheel, but then point to the use cases where the two differ as criticism.

I'll refer you to the description of IPFS as a "Merkle forest," because it's the best illustration of what it is. Like a torrent tracker, you have individual hash trees, based on the hashes of files peers seed to the network, which the network uses to match peers. There isn't one universal tree that all content is connected to. It just uses hash trees to manage peer discovery and file handling/immutability.

You can also use IPLD, a library and sub-project of IPFS, as a universal API for navigating Merkle-linked data structures. It's useful not only for navigating IPFS-native Merkle trees but also the hash trees used by other systems (Git, cryptocoins, torrents, etc.), which is why they spun it out into a full-fledged project in itself.


Why are you searching a hash on IPFS Search? It's for discovering hashes.
If you want to download the hash, put it in a gateway link (like gateway.ipfs.io/ipfs/[hash]) or use `ipfs get [hash]`.

No. I got the hash from IPFS search then I tried to download it using IPFS but it didn't work.

Nobody's seeding. There's not a great incentive to seed right now, so you'll see a lot of files hosted by a single person who rarely runs the daemon. Combine this with it being beta software, where versions quickly become obsolete and incompatible, and you have low adoption.

That could change if Filecoin or an IPFS-based Ethereum contract lets you pay people to host files, or js-ipfs progresses, but that's a discussion for another day.

To add to this: it's the primary innovation of IPFS. The project is primarily a bunch of glue that can stick otherwise incompatible content-addressed systems together behind a single common API. This approach of making IPFS compatible with everything is, in my opinion, what's going to make it catch on.

What about CP? Is it possible to spam the network with it and have random people download it by mistake? Is it possible to quickly report/delete a file?

Fuck GNU. They're fucking useless.

It's not really possible to spam the network with anything. The only thing you can do repeatedly is inform the DHT that you can provide a specific hash and whatever other metadata comes with it (name, etc).

No. It's not like Freenet or GNU Social or something of that nature, and you won't incidentally help host CP in any manner.

You can't delete anything from the network; it's designed to be immutable. Those so motivated could report a peer's IP address or a hash to law enforcement (together with a lengthy explanation of IPFS). Peer discovery functions like a torrent tracker, so cops would likely use the same methods to shut down CP as they would with a torrent: find peers sharing a known illegal file, get their IPs, prosecute.

Again, you're not likely to stumble across a file you aren't looking for, so I don't really see it as a problem for the average user. This is only an issue if you're a server owner (think ipfs.pics or an imageboard hosting files through IPFS), someone helping host their content, or law enforcement. I don't think torrents have made CP any easier to find or download, nor do I think IPFS will.

Oh. So files aren't distributed, only the hashes? You only have the file if you download it?

So it's really a 1-up from DHT torrenting.

Sufficient adoption of this won't happen until the anonymity is ironed out and integrated.

Already, 90% of the files on there are dead links from people who started it up for a laugh and then immediately uninstalled it because they thought twice and got spooked.

Is this possible? Are there lists available that people could use on a hosting service they run?

IPFSChan sounds like a fucking solid idea.

A chan needs server infrastructure; it's not static content, and IPFS is not well suited for content that's constantly changing.

Try this instead:
github.com/majestrate/nntpchan

I considered that, but on the other hand, one man's blacklist is another man's whitelist. Even if you got every node on the clearnet to agree to instantly blacklist any CP hash, you'd still have people running rogue nodes within Tor and I2P.

I think the best you could do in that regard with current IPFS is to handle media only. I'm still not sure how js-ipfs works, other than integrating a node directly into the Node.js framework. (For example, a user uploads an image to your server, the server automatically adds it to IPFS, and it's all handled with JavaScript.)

In the future, I can imagine people making use of it. Browser addons could embed a js-ipfs node and offer to automatically add files you save to the filestore (or prompt you first). When IPFSchan needs to serve someone a file and the client has the addon, the server asks the browser to use WebRTC to bypass the server and link browsers directly to serve files. If needed, this could grant you special privileges on the website. (Think Exhentai and Hentai@Home.)

It's more than an idea. There's actually a working ipfs-based imageboard implementation in development. It works pretty well, but it's still being blocked until js-ipfs is properly ready.

GNUnet looks like this. No, it can't.

At any rate, GNUnet makes more fundamental mistakes. They're trying to redo the entire stack at once, and they're trying to do it by pushing a monolithic program that requires user installation. Nobody is going to use that. Nobody would use ipfs either if it were only ever going to be go-ipfs; its brilliance is (assuming it works) porting it to something normalfags can automatically use through their browsers with no modification. Beyond that, ipfs deliberately delegates as much as it can to other projects, leaving the anonymity layer to Tor/I2P and trying to push addressing to cjdns. They are trying to make ipfs glue that can be used as a common interface to websites/torrents/git/bitcoin/whatever. ipfs is following the Unix way, while GNUnet is not.

Can you people actually read the thread instead of asking questions that have already been answered?

SOMEONE PLEASE GIVE ME A HASH OF A FILE WITH SOME SEEDS

That's not the only GUI.

It shouldn't have a GUI, it should do what ipfs does and act as a transparent distribution layer for other applications.

what internet drama ?
Learn to read nigger


Read the concepts instead of saying bullshit:
gnunet.org/concepts

What the fuck does that even mean?

Because you think that's brilliant?
In security, that means suicide.

Sorry, but the technical reality doesn't accord with your needs and what you think people need.
Your sentence is fallacious.
All the time I hear people saying that.
I call bullshit on that, and the reason is that when you look at Android, it doesn't stop normies from downloading X app to view a website that can normally be seen via a web browser.

Because you think it's obviously a good solution?
Tor has flaws because it's designed to be compatible with the actual web.
As for I2P:
gnunet.org/gnunet-vs-i2p

GNUnet already uses multiple software/libraries that they didn't make themselves.
You don't understand the Unix way.


Pic related


What part of "GNUnet is a framework" don't you understand?

What questions are you talking about?

user...

Gentoomen Library. Knock yourself out.
QmTmMhRv2nh889JfYBWXdxSvNS6zWnh4QFo4Q2knV7Ei2B

Thanks.

Currently the only Chrome or Firefox extensions related to IPFS just redirect you to a local node, but this requires the user to have a running daemon. I remember early in IPFS they were trying to push an "ipfs:" URL protocol (before dropping it), so I want to make an extension that picks up any URLs with "ipfs:" and has js-ipfs grab the files.

Though, it looks like Chrome's extension API (which Firefox is adopting) only really allows you to register protocols beginning with "web+", and it looks like that API only lets you redirect people to other webpages.

Ultimately, I have no clue how to implement this extension within the limits of the extension system in a way that would make it user-friendly (i.e., seamless).

that doesn't change the fact that it's a meme now, does it?

this is bait.
every time you open a web page in firefug it saves shit to your disk, so your disk is always potentially full of CP
bitcoin's blockchain can have CP in it and there's no way to remove it
popular torrents or any type of publicly hosted file can have CP in them until they get reported, but there is still evidence of CP all over the computers of the 50,000 users who got tricked until then
a popular publisher of content, such as netflix, or a warez group, could steganographically hide CP in every file they release for a period of 10 years and then release the key, and everyone in the world would be forced to delete these files or be guilty of CP crimes
of course, politically speaking, anything is possible - ipfs could suddenly be marked contraband if we pretend your reasoning has any ground

GNU, not even once.


I'm pretty sure things like this are why people are mad about the old APIs being dropped in Firefox.

If you're restricted to opening web pages, I'd say make an HTML page with some JavaScript in it and add it to IPFS, then register the ipfs.io gateway with that hash as the handler. People will still have to go to the gateway to load the JavaScript, but the script at that hash could take it from there, either using js-ipfs or handing some data back over to your extension to do whatever you want with. It seems hacky and terrible, but maybe it will work for what you want; video and hash related.
ipfs.io/ipfs/QmZwiWDEsmHxouwQ8w883U1NaZZPV6rnTCStAanEp8HoWX
/ipfs/QmZwiWDEsmHxouwQ8w883U1NaZZPV6rnTCStAanEp8HoWX
web+ipfs://QmZwiWDEsmHxouwQ8w883U1NaZZPV6rnTCStAanEp8HoWX

Disclaimer: I don't normally write JS so don't assume anything I'm doing is the right way, I just threw it together to test.

Really though, if you're planning on just loading things from anchors or text, you could probably just scan every link and/or p tag and load it in the window. There's no need to actually handle the protocol by interacting with the browser protocol handler and passing back to your handler; instead, just parse anchors, and if you find an ipfs href, replace it with something and/or handle it in whatever this browser context is.

Does that make sense for what you want to do? I'm not sure what specifically you mean by "grab the files" but I'm assuming either of these methods should work.
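For what it's worth, the core URL transform either approach needs can be sketched in one line of shell; the gateway address assumes a default local daemon config, and the hash is just the example from above:

```shell
# Sketch: rewrite "ipfs:" / "web+ipfs:" style URLs to a local gateway URL.
# localhost:8080 assumes the default daemon gateway; the hash is an example.
url="web+ipfs://QmZwiWDEsmHxouwQ8w883U1NaZZPV6rnTCStAanEp8HoWX"
echo "$url" | sed -E 's|^(web\+)?ipfs:/*|http://localhost:8080/ipfs/|'
# -> http://localhost:8080/ipfs/QmZwiWDEsmHxouwQ8w883U1NaZZPV6rnTCStAanEp8HoWX
```

An extension or page script would do the same rewrite on each anchor's href before handing it to the browser.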

whyrusleeping is asking for things you want to see in the next ipfs version. Reply with what you want here.
I've already asked for a daemon shutdown command and faster download speeds.

i2p support

an actual fucking manual

Where is he asking this? Tell him I just want js-ipfs to integrate the DHT already.

What front-end is that?

Wtf does this mean? I haven't used IPFS in a month, then I see a new thread and now this shit happens:

$ ipfs get QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw/Part%201%20-%20Desktop%2008.03.2016%20-%2021.16.04.17.mp4
Error: merkledag: not found
$ ipfs cat QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw/Part%201%20-%20Desktop%2008.03.2016%20-%2021.16.04.17.mp4 > file
Error: merkledag: not found
$

What the fuck is `merkledag`?

What? How am I baiting. I was just asking if it was possible for LEA or someone shady to incriminate the network by planting evidence.

Jesus, relax.
You see conspiracies everywhere.

he asked on riot.im

Arguments shouldn't be percent-encoded outside of the browser.
ipfs get "/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw/Part 0 - Desktop 08.03.2016 - 20.37.05.13.mp4"

However your command should have returned

If you're getting "merkledag: not found", it means ipfs can't find the actual data structure (a merkledag) you requested via the hash you gave it. Most of the time this means you're not running the daemon. If you don't have that hash's data locally, then you'll need to get it from someone else; to get it from someone else, you need to connect to the network, otherwise ipfs won't be able to find it.

Just make sure the daemon is running with `ipfs daemon` and then try to get it again, using quotes if the filename has spaces in it, like above. Prepending "/ipfs/" is optional, but those are the canonical paths.
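If it still hangs, a couple of quick sanity checks (assuming the go-ipfs CLI) can tell you whether the daemon is up and connected before you blame the hash:

```shell
# Fails with an error if no daemon/repo is reachable:
ipfs id

# Count connected peers; 0 means only locally-stored content is reachable:
ipfs swarm peers | wc -l

# Then retry, quoting paths that contain spaces:
ipfs get "/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw/Part 0 - Desktop 08.03.2016 - 20.37.05.13.mp4"
```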

github.com/ipfs/specs/tree/master/merkledag

What user means to say is, #ipfs on Freenode.

I wasn't sure if messages from Matrix go to IRC or not, plus his name didn't have (IRC) next to it.

Is there a rust implementation available?

REEEEEEEEEEEEEEEEEEEEEEEEEEE I WANT js-ipfs NOOOOOOOOOOOOW

This whole shit about Nyaa being dead, and the anime torrenting community collectively losing its shit, has me wondering.
Has anyone actually made a user-friendly front end for ipfs? Something like a pseudo-filebrowser would be great, actually.

...

I remember someone posting a hash for an auto-updating anime streaming site on ipfs. I'll try to find it if I can.

That was me. It was mostly just a proof of concept, but it's fucked at any rate since I had it scraping releases from Nyaa. Which is now ded.

/ipfs/QmaxcHGXGFq1tXKDnQv9vuXMnF8vBKaoNjUs9GiXnABKcn
Doesn't look like it scrapes from Nyaa; the episodes are all in ipfs.

We talking down or dead as a doornail? I checked /a/ but I didn't see any threads about it.

dead as a doornail
We think...
The owner just nuked all his shit and disappeared, and the last thing he talked about was new legislation passed in the EU that made even hyperlinking a file a 10-year prison sentence.
Also, the domains for the site mention that they are disabled or "quarantined" in their whois data. The domain registrar says that quarantined domains are ones set for deletion by the owner.
Also >>>/a/672279

This is the latest version I saw.
/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU

The site uses a pattern you usually see on Freenet, where each static page has a link to a dynamic one which redirects to the latest version of the same or a newer static page; in this case, via IPNS.

Okay, played around with IPNS a little bit to figure out how it works. Really neat stuff.

GUIDE TO IPNS FOR NEW PEOPLE
When you add something to IPFS, it creates a hash link to a static file. That's fine, but what if you are trying to add a folder that keeps updating? Let's say it's your anime folder. You add new anime all the time, but you want one link that always points to it - even when the contents change. IPNS lets you do that. It lets you make an IPNS link that links to an IPFS hash you can change at any time, but the IPNS link never changes.

Make an RSA key for what you want to share. Example: Your anime folder.
Now anything you publish under this key will link to the same IPNS link, which points to whatever IPFS hash you choose.

So you add your anime folder.
Take the one that it spits out last. Should look like
with nothing after.

Now it's time to publish that hash.
Now you can link to ipfs.io/ipns/QmNewHashForIPNS and it points to your anime folder.

If at any point you wish to submit a new hash for your anime folder, run "ipfs add" on it again and then "ipfs name publish" with the new hash. If you want to get fancy, you could probably hack together a cron job that does this.

Another thing you could do if you want to share something with other people, is set up a domain. If you have the ability to change the DNS on a website, you can add a TXT record, with:
dnslink=/ipfs/$SITE_HASH

This way, people can access your folder from a MUCH shorter URL.
For example: /ipns/anonmatch.com/

You can (I think) also link your DNS to an IPNS hash. This is convenient if, for example, you lose access to your host machine; in that case you just have to change the hash and wait for the DNS to propagate.
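Putting the steps above together, the whole flow looks roughly like this (the key name, folder path, and hashes are placeholders; the commands match go-ipfs 0.4.x):

```shell
# 1. Make a key for the thing you want to share (one-time):
ipfs key gen --type=rsa --size=2048 anime

# 2. Add the folder; the last hash printed is the folder's root:
ipfs add -r ~/anime        # ... "added QmFolderHash anime" (placeholder)

# 3. Publish that root hash under the key:
ipfs name publish --key=anime QmFolderHash

# 4. Re-run steps 2-3 whenever the folder changes;
#    the /ipns/<key-hash> link stays the same.

# Optional: point a domain at it with a DNS TXT record of the form
#   dnslink=/ipns/QmYourIpnsHash
# so people can use /ipns/yourdomain.com/ instead.
```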

Does anyone have any idea how pubsub works? I'm interested in experimenting with dynamic content now instead of waiting for it to come out of the experimental phase.

Thanks for writing this, I only knew how to work with the old IPNS which was limited to a single key, having multiple IPNS keys under a single node is super useful.

If I wanted to have a database exist on IPNS and regularly update it, what would be the best way to query this database without downloading the entire thing every time it's updated?
One idea I had was splitting the database up into chunks, as several json files that hold data entered between certain dates. This way any time there's a database update people who already have it pinned only need to re-download a small piece.


I toyed with it a bit. Basically you listen onto a certain "topic", and people send messages to this topic.

For example, if I ran a daemon (with pubsub enabled) I could run this in bash:
curl "localhost:5001/api/v0/pubsub/sub?arg=animu"
(I could also use `ipfs pubsub sub animu`, but better to connect through the API)

Now, any time there's a message sent to the "animu" topic, I get a JSON object that's something like this:
{"from":"VGhpcyBpcyBhIHJlYWxseSBmYWtlIGhhc2gK","data":"WW91ciB3YWlmdSBhIHNoaXQ=","seqno":"FLudLD7Ky5M=","topicIDs":["animu"]}

The "from" value is a base64 encoding of the sender's peer ID (the string you normally use for IPNS is a base58 encoding of your peer ID).
"data" is a base64 encoding of the data sent. In this case I sent "Your waifu a shit".

It's a neat feature, but it doesn't allow you to read older messages; it's like a chat room without a logged history.
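Sending and decoding from a shell is straightforward; e.g. the example message above (the publish command needs a daemon with pubsub enabled, so it's commented out here):

```shell
# Send a message to the topic (requires a running daemon with pubsub enabled):
#   ipfs pubsub pub animu "Your waifu a shit"

# Decode the base64 "data" field from the received JSON object:
echo "WW91ciB3YWlmdSBhIHNoaXQ=" | base64 -d
# -> Your waifu a shit
```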

It really needs an option to store way more metadata for local `ipfs add` commands, because I have a few thousand hashes pinned by now and no idea what the fuck they are.

Recent versions of IPFS have made it so that it skips over things already pinned. Or at least gives them a quick check; my SBC takes maybe a tenth of a second for each one. If you wanted to set up a cron job to recheck a folder, I'd set it to once a day during a time of low usage just to be safe.


Agreed. For now, you can redirect "ipfs filestore ls" into a file (or pipe it into grep, but it can take a while to generate) and search for filenames there. No idea if you can do something similar for things pinned the old way.

I just store my hashes in a txt file with the name followed by the hash/hashes.
Jelly game /ipfs/Qmc2fSA6chjZMbXRi7JGQZ7tXEmDbtG2XF9FXRLCtGLBQV

It would be nice if there was a config flag that automatically stored the file/folder name with hashes on add, when enabled. So you could do something like ipfs pin ls -m to list metadata, or maybe -n for name, or something like that. Maybe -h for human mode.

NixOS is looking toward using IPFS for its binary distribution.

Neat. Why are some of the nodes on the IPFS side still using http?

A guess would be that they are not IPFS clients, but rather people accessing IPFS through an HTTP/S proxy.

Why not use the -w argument? I always use it because it retains the file names and directory names of everything.

IPFS will most likely be opt-in; NixOS will probably use IPFS in a pseudo-federated style.

Check out the files interface (ie, "ipfs files --help"). It's a quick-and-dirty method for organizing your IPFS files as if they were unix files. You can't add comments or anything but you can name and organize things in a sensible way so that you can find hashes more easily.

One thing that I like about it is that it can help you easily find hashes of arbitrary combinations of files. E.g. if I have Holla Forumsseries1, Holla Forumsseries2, ..., Holla ForumsseriesN and I want to send someone a link to a folder that contains just the second and fourth, then I can go
* ipfs files mkdir /tmp
* ipfs files cp Holla Forumsseries1 /tmp/
* ipfs files cp Holla Forumsseries4 /tmp/
* ipfs files stat /tmp
This will give me the hash of a folder that contains only series 1 and 4, which I can share. And because of content-addressing, I can now do
* ipfs files rm -r /tmp
but the link will still point to a valid folder in IPFS, since I didn't delete the files themselves.

Of course this is kinda laborious, but you can see how a nice user-friendly interface would let you just ctrl-click to select some files before right-clicking and hitting "get link" or something.
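For reference, the same flow with cleaner names (everything here except the `ipfs files` subcommands themselves is made up; the hashes are placeholders):

```shell
ipfs files mkdir /tmp-share
ipfs files cp /ipfs/QmSeries2Hash /tmp-share/series2   # placeholder hash
ipfs files cp /ipfs/QmSeries4Hash /tmp-share/series4   # placeholder hash
ipfs files stat /tmp-share     # prints the hash of the combined folder
ipfs files rm -r /tmp-share    # the folder hash keeps resolving; the content is untouched
```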

Turkish Wikipedia on IPFS:
ipfs.io/ipns/tr.wikipedia-on-ipfs.org/wiki/Anasayfa.html
English version is coming Soon©.

Linking relevant article: ipfs.io/blog/24-uncensorable-wikipedia/

Basically Erdogan's a dick; IPFS plans to make Wikipedia accessible everywhere.

...

Don't be. They can't edit over IPFS.

Does this mean I can upload files to ipfs and access them via a link without the host being online (like Dropbox etc.)?

No.
Before, when you added shit to IPFS, it made a copy in the form of several broken up blocks. Now it doesn't make an entire copy, and instead references the original.
At least, I think that's what it means.

Another question: if I have a file, let's say abc.rar, and someone downloads it from an ipfs URL, if someone else has that file but my original host is offline, would it be served?

Yes, as long as someone has the file, you should definitely be able to retrieve it.

What's the design goal of Go? Judging by the interface flexibility I guess it's to mainstream streaming programs from the internet.

That doesn't help when you're trying to figure out what the hell you're looking at in `ipfs pin ls` output.

markov spambot pls go

If they're running an ipfs daemon and have pinned the file, yes. Otherwise no.

IPFS v0.4.9 is out!

Major changes are the implementation of CIDs (Content IDs) and better IPNS performance. The hashes we've been using (QmThisIsAHash) are now treated as v0 and it will now use v1 (zbTheNewHash) format when adding files. All old v0 hashes will be forward compatible while v1 hashes are supported in v0.4.5 and newer. Those on v0.4.4 or older need to update to be able to read the new format.

Also, IPNS is going to be re-written and replaced by IPRS (soon?), which will dramatically increase performance and incorporate git-like commits, so if I publish a website I can look at and go back to any past revision of the website.
github.com/ipfs/go-ipfs/issues/3861

This is neat.

I wondered why they bothered, but then I saw them talking about adding your choice of hashing function, with blake2 being named specifically. Pretty neat. The IPFS node on my SBC desperately needs faster hashing if I'm ever going to add large collections of files and that's a step in the right direction.

IPFS hashing performance blows goats even on a dual socket Xeon server. I can't imagine how bad it is on an ARM potato.

Part of the reason its performance is so fucking awful is that for every file you add, it broadcasts the fact you have each chunk to the network, unless you started the daemon in offline mode.
This is also why trying to add too many files at once causes an error (because it tries to open thousands of sockets)

Requesting infographics guide to use IPFS and IPNS

How about you fucking read instead you nigger? Making it normalfag friendly in its current state will only drive people away from it.

Do it yourself. It's not hard to figure out or make a quick infographic.


You need to fuck off too. It's perfectly usable as it is now, even if you have to deal with a command line or the shitty webUI (which I don't even think allows adding to the filestore so don't even bother).

Since the failure of Nyaa and the subsequent shitshow, I've been thinking about how to actually solve this problem so it doesn't happen again. I've a very strong intention to go ahead and try implementing something (the current replacements are all as vulnerable as Nyaa was), but before that I think more eyes on the design first would be smart.

github.com/DistributedMemetics/DM/issues/2

It's a sketch of a bittorrent index system with decentralized control and distributed content. As far as I can see, there shouldn't be any reason it wouldn't work. If it's actually shit, please post why in the issue or here.

ipfs port of missgenii's streams
/ipns/QmSvL1st4UyHTyZxw4VAnz8SfSJmH1Dcn7kBtJCD2SHhwH
i'll add the rest another day

If you don't keep your node open, I can't download it, buddy.

I'd give serious consideration to using BigchainDB. I think it resolves differences elegantly and is designed to scale well. I suppose it's not truly decentralized (as in each peer is capable of hosting) but it can make a federated network. With some massaging, not unlike the project you have planned, you could make it easily deployable.

Here's an example that could easily be modified to work as a tracker service:
github.com/bigchaindb/bigchaindb-examples#example-on-the-record
I might play around with it a bit and see if it can feasibly be worked into IPFS distribution.

From what I can tell, you could import IPNS public keys and make accounts from them, and then make immutable assets (hashes/magnets) to manage between databases. All consensus is handled automatically and filesharing is handled via IPFS, as well as snapshots of the blockchain.

Is it possible to route ipfs traffic through Tor or I2P? Using a piracy platform that shares your IP without at least that much protection sounds unwise.

Then maybe you SHOULDN'T BE USING IT AS ONE

You're retarded if you think ipfs will have a future without (illegal) media sharing. The other uses are just gimmicks.

Here you go.
ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

The solution is just to stop pirating normie shit you fucking pleb. I've pirated literally terabytes of software, music, and anime from a colocated server with a public IP address and only ever got a DMCA for the one time I torrented an episode of Game of Thrones.

I hope you get swatted for giving the jewish media free mindshare

that pic, that pic is too funny

mailto:[email protected]

Requesting that dick font tbh.

It's cocksure/kocksure.
/ipfs/zdj7WeXo7YnCwneooxgr9MDqr3xV6XgjW8V4TJzBwxMNAa7oX

js-ipfs hit 0.24.0.
Highlights from the Github:


>Always wondered how many pieces IPFS is built from? Check the updated Packages table at github.com/ipfs/js-ipfs#packages.

could something like this be used to make a decentralized tracker?
github.com/cakenggt/ipfs-foundation-frontend
ipfs.io/ipfs/QmXny7UjYEiFXskWr5Un6p5DMZPU87yzdmC3VEQcCx9xBC/

See

Couldn't it use an embedded js-ipfs node instead of having to configure my daemon to allow websites to manipulate it? For searching files, at least.

I'm also interested in how this preserves the database so you can find files that have already been added, not just catching whatever floats through the pubsub while you're looking at it. That's the one barrier between a dream and a functioning website.

The developer has a repo for the db system it uses here: github.com/cakenggt/set-db

An interesting solution that uses only IPFS and some clever JS. Good if you want the system to be a central or federated authority sharing immutable information.

I'm a total noob who only started playing with this yesterday. Could somebody see if they can pull the text from this hash so I can know I'm using IPFS properly?

QmcnNJMLtzJpu1ekmN8Vj7caWPEcC8NewWZr6JGSrERLo4

I don't know how you could mess up pinning/adding, but here you go.

can i search the normal internet with this? like wikipedia?

Me neither, that's why I wanted to be absolutely sure. Thank you very much.

IPFS doesn't have a gateway to the web but there are websites on it. You specifically said Wikipedia, so this is your lucky day because one of those sites is a wikipedia mirror.
ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/

I want a service that can still work without ISP internet. If for some reason my internet is down, I still want to browse shit and be connected.

I've been searching for a few days about this, and found IPFS, so I'm posting this here to see if IPFS works like this. I also found Freenet, GNUnet, and Netsukuku.
So which one is better for this goal?

IPFS is general purpose, so it can act as anything from torrents to git.
GNUnet is focused on filesharing, like BitTorrent, but more anonymous.
Freenet is a great example of a distributed internet, but it's programmed in Java, and don't expect a YouTube-like site on it.
Never heard of Netsukuku, is it like ZeroNet?

That doesn't exist sadly. You can't network any computers if there is no network. Have you noticed that when you see telephone poles, there's like four or five different cables going between them, if not more? It's not just power and phone lines, it's the internet. You're not going anywhere without using your ISP's wires.

That's the other one I found, but forgot to add it.
Man, I know 0 about this stuff, so I have no idea if Netsukuku is like it.
So GNUnet is like The Pirate Bay? You just search torrents? Can those torrents be websites?
Let's keep the example of Wikipedia: can I browse it with GNUnet or Freenet or ZeroNet or something else?


Ok thanks. What if I still keep my ISP, but for some reason the internet goes down or the power goes out; could I still search things and communicate with others?
You know, like radios that use the network kind of independently.
The thing is, I'm not in the US and my government is not nice.

Theoretically, IPFS does work on local networks as well as the Internet. Government censorship is part of the reason this project exists. tl;dr: It's like BitTorrent but you don't have to worry about trackers or .torrent files or magnets because all its content is served on a giant DHT.

Relevant excerpts from their article about mirroring Wikipedia:


ipfs.io/blog/24-uncensorable-wikipedia/

By this, do they mean an ISP network or an IPFS network?
What about if I'm connected to an ISP network, but the internet is down?

I'm going to check IPFS out. I already downloaded it for Windows since most people use it, but according to the site I need to move the .exe file to %PATH%.
Where is that? ipfs.io/docs/install/
And how do I use the commands at 'test it out'?

A sneakernet is a play on words; it's putting on your sneakers and delivering files on foot with physical media. Since your country likely does not have some advanced mesh intranet, you might have to share files by delivering them.

IPFS is not magic. It cannot connect to the Internet if you have no internet. It cannot access files which are not a part of the IPFS network. What it can do is serve files over a local network and online seamlessly, so if a bunch of people chained together local networks and introduced one Internet connection, everyone could access IPFS files. (I think.) Or it can share files from one computer to every one on the local network.

See part 0 in here: ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw/

You can add a file to the network. (ipfs add -rw --nocopy [file])
You can download a hash through the command line. (ipfs get [hash])
You can also download IPFS Gateway Redirect for your browser if you want the daemon to handle links you find here, like the one above, instead of a gateway that acts like a proxy for people who don't have the software.

Why do you guys support a protocol that:
- Has the owners of said protocol determine what you can and can not see through the use of a blacklist?
- Does not give the user full privacy.
- Does not give the website owner full privacy.

Why would i even remotely support shit like this that places more control in the hands of the few?
Why aren't you mad that HTTP/2 was sabotaged by PRISM partners at the IETF to not include encryption by default?

Only if you access the content on their own gateway. Nothing stops you from using other gateways that don't have a blacklist, or running a daemon locally and circumventing the problem entirely.

That just wasn't on the developers' minds at the time, but they have talked about supporting integration with other services like Tor. I expect more development on that front, especially now that the devs have started encouraging IPFS as an anti-censorship tool.

What control do they have exactly? The only control they seem to have (other than being the code maintainers) is that they have the most used web gateway for it, but it's not like you couldn't just run the daemon locally.
Fuck, they even let you set it up to run on a private network, if you really want to.

Thanks for the info. The whole gateway thing seems like a pretty powerful feature. How heavy would it be on equipment to run your own public gateway with/without your own blacklist? I haven't run into any problems using the local daemon but a public gateway seems like an interesting project.

YO, upload it again

What am I doing wrong?


hey it worked
someone get my txt file
QmNVGskxMUag7j8qqhpTzaK2wzbXfyvYBMJJ5SMzBgiDEG

Ok, let me see if I get it:
I can't connect to IPFS if I have no internet?
I can't add or get files from IPFS if I have no internet?
If I connect to IPFS, but then the internet goes down, is IPFS down too, so I can't download anything?

Could you explain the local network part? Do I set them up with IPFS before or after networking them?
If I'm chained with 5 PCs, do I still need internet to be able to download from them or add files?

Thanks

So if I download a pic or a .txt, the browser opens it? Would I still need to use the command prompt?

You can use IPFS if there's no internet, but you could only download files from computers that are connected to yours. If your roommate has a PC running IPFS, you could download things from him and he could download things from you. But because the internet is down, you wouldn't be able to download from my PC in my house, because there's no connection between us.
IPFS won't go down. It just won't be able to access files on the internet like I described previously.
You know LAN parties? When people do those, they often aren't connected to the internet at all but they can still play multiplayer games with each other because they have a "local area network." To put it simply, the internet is just a gigantic LAN party.
IPFS will work as long as it's installed. It doesn't matter when.
Nope no internet is necessary in that case. I think you're starting to understand now. Even if I cut the wire coming out of your house you could still use IPFS with those 5 computers.
If you use the browser you wouldn't need to use the command line at all. It would be like any other link.

btw just something to add in case you were wondering about how to hook up your computers. If you have them hooked up to your router via a cable or wifi, then you've already made a local network.
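If the nodes don't find each other on the LAN automatically (go-ipfs has mDNS local discovery on by default, but it can fail), you can connect them by hand; the LAN address and peer ID below are placeholders:

```shell
# On machine A, find its peer ID and listen addresses:
ipfs id

# On machine B, dial machine A directly
# (placeholder LAN address and peer ID; 4001 is the default swarm port):
ipfs swarm connect /ip4/192.168.1.5/tcp/4001/ipfs/QmPeerIdOfMachineA
```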

Put an nginx/apache reverse proxy in front of [::1]:8080 and you're done. Not heavy at all, except for the part where IPFS rapes most home routers by opening thousands of connections, but that's nothing to do with running a gateway.

/ipns/QmSovWVEK7iYVYvWnkgJkvRc5tcNCVuJb1M58MgivC6CTm/sighax.html
/ipfs/QmbiH9fRg4ZUbojbSpUnyvz9vMzk6PhiuQ46JUmxjKBLgR/sighax.html
3ds sighax on IPFS

Previously connected? With what?
Ok, but if I have a LAN with 5 PCs why would I need IPFS?
IIUC a LAN can share files within the other PCs in the LAN.

Well yeah in that situation you could just use a traditional local sharing approach I guess, but that's no fun.

A good argument might be that you don't need to trust any of the other computers on the network, that only the site's author can modify it so it doesn't matter where you get the actual files from.

Is it possible to have a local network with someone a mile away?

Without requiring a mile-long cable.

If you have direct line of sight, you could use a directional antenna. That should work for at least a couple miles.

If you try to cat or ls a hash and it just hangs there, I think that means nobody is seeding it. The guy who posted the dick font probably ran garbage collection without it being pinned and nobody else wanted it.

For the other part, thanks for testing my text file again. It's interesting because I garbage collected that a while before your post so someone out there is still seeding it.
I tried catting your text file a couple times today and yesterday but I guess you were either not online or got rid of it already.

Since this stuff works by downloading from others, if I use this for browsing sites and videos, does that mean I'll end up with GBs of files on my computer?


thats pretty cool

try getting this .jpg QmbFf7VVdYrRH8o4JahDSFkBYdxkuuFQvYMYFrXhYGBeFg

And this .mp4; tell me how good the playback is: QmdUidbY1CWQPPhQb6yTcyFH7hBfrYuxqcya1ifr6vztZ3. But don't download it with 'get'; try it out live, like in the tutorial, and reply to me with a video to test it out.
I used 'get' to get your file and 'add' to upload it; how's 'cat' different?

Your filestore only holds 10 GB by default, so after it reaches that limit it'll start discarding files that aren't pinned.
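That cap is the `Datastore.StorageMax` config setting; a sketch of checking and raising it (the 50GB value is just an example):

```shell
ipfs config Datastore.StorageMax        # prints the current cap, "10GB" by default
ipfs config Datastore.StorageMax 50GB   # raise it, then restart the daemon
```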

I suddenly got this error while I had daemons running:
10:37:46.881 ERROR core/serve: ipfs resolve -r /ipfs/QmfXjUxumtCJt9KxanaTTimBZaqVPA5ZYytGbTANrRGBBr: Failed to get block for QmfXjUxumtCJt9KxanaTTimBZaqVPA5ZYytGbTANrRGBBr: context canceled gateway_handler.go:548
10:37:54.926 ERROR commands/h: err: context canceled handler.go:288

Now when I try to start the daemon again it says:
Error: cannot acquire lock: file "C:\\Users\\Jose\\.ipfs\\repo.lock" already locked

I closed the prompt and opened it again, and the same error appears.

I got the image, but the .mp4 wouldn't work. Wrong hash, or maybe you garbage collected? Give me more hashes and I'd be happy to test them all.
Cat starts a file stream which you can then pipe into other programs. Check this out (if you have mpv installed)

ipfs cat QmSxB9bNiAhPorUV5QLTDRNwRSAGudrEHZyBgf8kLD9bdm | mpv -

what is mpv? i tried changing mpv for 'vlc', 'vlc.ex', 'VLC', 'VLC.exe' and none worked
i was able to open it by using my browser with localhost:8080/ipfs/hash and also with ipfs.io/ipfs/hash
the second one was faster tho

try these videos
QmdUidbY1CWQPPhQb6yTcyFH7hBfrYuxqcya1ifr6vztZ3
its a .mp4
QmNh2se1qe4W8Am6NG7C2iBz6LBEKJRaDETfoMsbCReXjY
and QmbhqCiuBziYBuanADsqdcSXaAMkfYskZUvVsPygnf3pVb

This is really unstable. I don't know what causes the daemon crash on publish. VPN (it found peers tho) or symlink to folder?

This piece of shit keeps crashing. It was due to symlink, but then I did add in real folder and it worked to something like 50% and semAcquire crash.

Error: cannot add filestore references outside ipfs root. And this happened at 65%. What does it even mean?

I just assumed that you were on Linux. The intended result is that you stream the audio. I don't know the method of doing that on Windows.
Also I couldn't download your video files. What OS are you using, and what implementation of IPFS are you using? I used go-ipfs on Windows 7 ages ago and I don't remember having much trouble with it but I also wasn't using a VPN or symlinks.

i guess the ipns name is wrong, so have a direct hash
/ipfs/QmQWKEb3ZHZsgm8nMvefRYbsttCU3f2oH6BfGUs5BYGEE4/missgenii

i left the daemon on, and when i returned this appeared
12:05:15.347 ERROR bitswap: couldnt open sender again after SendMsg() failed: connection write timeout wantmanager.go:233

so im guessing thats why you couldnt get my videos.
im also using go-ipfs
try it again

ipfs get QmQWKEb3ZHZsgm8nMvefRYbsttCU3f2oH6BfGUs5BYGEE4
Saving file(s) to QmQWKEb3ZHZsgm8nMvefRYbsttCU3f2oH6BfGUs5BYGEE4
5.94 GB / 5.94 GB [============================================================] 100.00% 0s Error: unrecognized node type

----
λ ipfs get QmQWKEb3ZHZsgm8nMvefRYbsttCU3f2oH6BfGUs5BYGEE4/missgenii
Error: no link named "missgenii" under QmQWKEb3ZHZsgm8nMvefRYbsttCU3f2oH6BfGUs5BYGEE4

You're using NSA/Windows, so best bet is to just restart your computer.

Question; what version of IPFS did you use to make this?
I mean, it's clearly a valid file, and I can traverse the dag tree using ipfs dag get, but it doesn't appear to be recognized as a proper folder/file.

IPFS can only add things to the filestore that sit at the same level as the .ipfs repo or below it in the directory tree.

If it's at ~/.ipfs, then you can add ~/myfile and ~/Documents/myfile, but not /tmp/myfile.
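The rule above is just a path-prefix check. Here's a toy sketch of it, not actual go-ipfs code; the repo path and function name are made up for illustration:

```python
import os.path

def filestore_allows(path, repo="/home/user/.ipfs"):
    # Toy sketch of the filestore rule: --nocopy adds must live at or below
    # the directory that contains the repo (e.g. ~/ when the repo is ~/.ipfs).
    root = os.path.dirname(os.path.abspath(repo))   # -> /home/user
    return os.path.abspath(path).startswith(root + os.sep)

# filestore_allows("/home/user/Documents/myfile")  -> True
# filestore_allows("/tmp/myfile")                  -> False
```

Appending os.sep matters so that /home/username2 doesn't accidentally pass the check for a repo under /home/user.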

k. I think i got it working either way. Can you access this?

ipfs/QmNb3mGvbCHDSa7PwrXVieRzRvEcD3i5iHAmkXAaxcGiZc

well windows is the norm so i want to be able to install this to other people


i restarted my pc and the daemon is on again, so someone try to get these videos and let me know if they could

not him but i opened it in the browser and there are a bunch of 7z files for a bunch of consoles. gamecube, nes, snes, dreamcast, and so on.

ok works.

i give up. once again:

panic: runtime error: invalid memory address or nil pointer dereference
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x21 pc=0x4984a6]

Someone get these files so I can close daemon for fucks sake

i used sharding, so you might have to enable that

Is it normal for it to take this long

How stable is js-ipfs? Does it crash often?

seems like some files work while others don't, and i have no idea why
have a new hash
/ipfs/QmSzXkeWXauf9WMDw5Q3gbqiSAmtVHJwLgC8ydwWcfDRd6

It's normal for it to not work at all.

Sorry, I was changing ISPs. Dick font is now hosted on a gigabit connection. If anyone wants me to temporarily (until I do a manual gc) rehost dumb garbage on a node that has high uptime, reply to me with it and I'll add it to my get queue later if it's cool.


I wonder if it has anything to do with some of the experimental features you have enabled. I don't know much about the status of sharding right now, seems like it could generate "fun" problems like this. I'd stick with posting stable hashes publicly for now, either the old standard ones or the new cid ones.


The ETA seems to go crazy when less than 100% of the content is on the network, what hash is it? If I have it then it should be much faster now. I can not wait for bitswap sessions to be fleshed out and enabled for `get`, that should really improve transfer speeds and there's still a lot more they can do to optimize network transfers and repo access.

The Coming of the Dial (1933).mp4
QmQE62kGEcuU3yyzAJLgGqAvZvCEEq6nSHJ6goWismq3PG
Old documentary from when they were first setting up the phone system in the UK.

Remember to use the -w flag when adding to preserve filenames if you want.
/ipfs/zdj7WfRQuBNeZ5GrghmcsX4aN9eNh4s7VbKx6YQJsvfdNwe7H

Thanks for the tip. I've been trying to find out how that works.
Why does your hash not start with 'Qm' like every other hash I've seen and used?

New version of IPFS is ditching the "Qm" requirement at the start. It's still somewhat backwards compatible with slightly older versions, but new hashes will come out with that format instead.

Are you going to actually start working on it or is that just a plan for others to adopt? The next version of js-IPFS should have everything you need for it to work.

--cid-version 1
I think it uses sha256 or something instead of whatever it was before.

Per it appears they've almost got the DHT done, but it's still not done yet. I'm sorta waiting for development to progress a bit further to nail down what is actually possible. Regarding others adopting it, I've accepted that I'm the only one anywhere who actually cares about seeing this sort of thing done to the point of putting in actual work, so I'll push something out eventually.

It still uses SHA256 like before but it uses a new multihash format. It doesn't change anything because you still can't easily change hashing algorithms. The only way to use SHA3 or Blake2b or something else right now is to replace the default algorithm and recompile it yourself (which isn't that hard actually). Just make sure whoever is downloading it has an up-to-date version of IPFS that supports the algorithm (Blake2b was added in v0.4.6).
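For what it's worth, the multihash format itself is tiny: just a self-describing prefix in front of the digest. A rough sketch (the 0x12 code for sha2-256 comes from the multihash table):

```python
import hashlib

def multihash_sha256(data: bytes) -> bytes:
    # A multihash is <algorithm code><digest length><digest>.
    # 0x12 is the registered code for sha2-256; its digest is 32 (0x20) bytes.
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, len(digest)]) + digest
```

The familiar "Qm..." strings are just this 34-byte value base58-encoded, which is why swapping in another algorithm changes the prefix.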


Nice to hear. I'm not a web programmer but wish you luck.

Might as well contribute.

Also is my IP publicly visible in any way? I know it's not meant to be an anonymizing network but I'd still like to know.

QmQE62kGEcuU3yyzAJLgGqAvZvCEEq6nSHJ6goWismq3PG

Sorry, copy/paste wrong

QmQhqMPuxsisuNqoS7eGPaTUnvw6Ht9mTNQmGUQcB6n3h2

Here's two nice cringe videos for you all.

QmfKP2AQY3pes23UBNYwacW8f6fsYrrBSMh1LEWQnRhkmD

QmVdF8nczdxmf6F8HabsyDb4D45VDdhopegCi2CQumQ7Mh

And here's a Holla Forums film:

QmUcav5qGSuZ2iVcwBvjPK8sf7eX29mr53ERr4ukqFuGnw

I'll try to keep seeding for a while but I don't own a server/always-on connection so please someone else seed for me

This is what the Federal Bureau of Investigation Actually Believes

Qmewupx161vWibdz9gnHB9x6xxJ44qa9HF9eq5sFLuUaWR

This one wasn't going through for me, but I decided to just let it sit instead of canceling it. An hour or so later I come back to find that it connected and downloaded the file.
So it seems like if you 'ipfs get hash,' it will run until the content becomes available.
That's good to know.

Yeah... I am on a coffee shop internet since I currently don't have my own ISP so expect slow seeding if any.

Damn dude, wrap your files with -w when you add/pin so we can see the filename. Or at least be specific about what you're sharing.

so `ipfs add -w file`? Will do in the future

How's this? Did I do it right?

Qmewupx161vWibdz9gnHB9x6xxJ44qa9HF9eq5sFLuUaWR

Damn it, I copy/pasted wrong again!

It gave me two different hashes. Which one is the correct one?

QmbDoUBVXnUwTLt7y4bzkryHrPGEBo7PmUwshhXtm1DdoR

QmcmgJMDCgshJdPeESVqWSgUDqNLJrP9AqbvkeM5FMNAso

Ok, I think I understand this now.

QmPLT2ghLCxziD3kcbL9e1UjoENFt5KX3SJZ94D4rWu1tJ

Use the one it spits out last. I think that's the one with no filename attached, the highest in the hierarchy.

Is there currently a way to see if other people have downloaded the file so I know if I can stop seeding for the moment? Also no one answered my IP question

Do I need to add the -w flag when uploading entire directories?

You really want me to wrap it every time?

QmP4ADv6F6k7zbryjynM9o2LBqC8TZiu2yWSDLVZSqDF17

Not that I know of.

Yes, you can see someone's IP, and I'm pretty sure you can link it to a hash. I'm not really sure how to. I think it involves one of the dht commands somehow. Considering how the system works, I can't imagine it's hidden in any manner.

I added that Revolution OS webm. See how it looks?
gateway.ipfs.io/ipfs/Qmd94XnQfkSeQE9tgjj4qUzdnPQDqdkaxxviYoN8SRMpTZ

You can also link directly to the file with a human readable name instead of its hash.
gateway.ipfs.io/ipfs/Qmd94XnQfkSeQE9tgjj4qUzdnPQDqdkaxxviYoN8SRMpTZ/RevolutionOS.webm

You don't need someone to manually check for you every time you want to test something. Throw your hash into gateway.ipfs.io/ipfs/[hash] to see if it's publicly accessible.

Alacrity died, reviving replies.

there's serval and ssb. both can share only small files though, because they use social replication

You can find who's currently seeding a file by using "ipfs dht findprovs [hash]". If you see a bunch of peers hosting it that means it's probably safe.
Another trick is to load something through a public gateway like gateway.ipfs.io/ so that it's (temporarily) cached there. Be sure to load the actual files, though, not just the directory they're contained in. The same goes for finding providers from above, too. A bunch of people might be hosting the directory a file is in, but not the actual file, so you have to check for that too.

To expand on this, ipfs dht findprovs [hash] will get you their user ID. This is a Qm hash that I think is created when the .ipfs folder is set up. From this I'm sure you could get a rough seeder count.

From there, ipfs id [user ID] will get you some more details: Their public key, any addresses (local and global ip4 and ip6 plus their ports) they use to connect, client version, and protocol version.
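If you wanted that rough seeder count programmatically, you could parse the findprovs output and count unique peer IDs. A sketch, assuming the one-peer-ID-per-line output described above; the function names are my own:

```python
import subprocess

def parse_providers(findprovs_output: str) -> set:
    # `ipfs dht findprovs [hash]` prints one provider peer ID per line;
    # deduplicate and drop blank lines.
    return {line.strip() for line in findprovs_output.splitlines() if line.strip()}

def seeder_count(hash_: str) -> int:
    out = subprocess.run(["ipfs", "dht", "findprovs", hash_],
                         capture_output=True, text=True).stdout
    return len(parse_providers(out))
```

Remember the caveat above: providers of a directory aren't necessarily providers of the files inside it, so check the file hashes themselves.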

-Q will make it only output the final hash.


Only if you want the directory name to be included, usually you want that.

You should really prepend /ipfs/ to your hashes so that they're clickable for people with the browser extension. Like this
/ipfs/Qm...
Really the full path should be used everywhere anyway.

Qmd3MyXvbA4Nq7wEyu1sCrBz3kTBzAKvA5SYyEPNEGTWDM

TRIANGLES

/ipfs/QmUk2hg96w811f8CU22QVvUrYnQEnzohvMWcDN5BHRrWQi Computer Privacy in the 1980's

/ipfs/QmWCLPFqxEBBiuk7R2RRhjGKAjHHhrsyk15g8Fypqm1tbY Classic Tucker

/ipfs/QmaNXupAnC283CPazzPatyAajp4dqvpayhsdKS9PRdtMEP Not sure why I saved this.

/ipfs/QmbXRs6ChsqGcqhEJ43mECYVgkJDLqqbmGR2eA7HdJ85ed 200 Years Together (a book)

/ipfs/QmYbBpwWHpKsfjzGCcCkrjzS2xftaxaX6AjJerqnPS9wDk The US Citizen's Guide Book

/ipfs/QmV8qQxFrb5LEeYhBUD1kkyKDY67aZe9NuHNPDLWkFA7HK The C Programming Language First Edition (1978)

/ipfs/QmRyVddDaAvT8EmTUbEwLtBKb2rZg72ajKG1CEE7PvduJj UNIX Administration Course (1997)

/ipfs/QmUFvUxbQAfi1PtsUvSrsds3gZAfPwpabEpNm2TpRmgoFw Read this First (old humor)

/ipfs/QmQDtRE931XpkpXxXur9bsXmXY89XGnnBmUCTAMzVd7fe9 warnings (also old humor)

Is it possible to torify ipfs or would that just be like torifying bittorrent?

VUKNLVXPDvqzVwPu3PpR8TpYHpYMYbKpjzGLj37CVq Unix for Poets.

I don't have a lot to upload here that's relevant to the board :/

Is it possible to torify ipfs or would that just be like torifying bittorrent? Also is it possible to update an item after upload. For example, if I had a plain-text file I wanted to share over IPFS but then noticed some spelling mistakes I forgot to correct, could I re-upload and overwrite the pre-existing upload? Furthermore, is it possible to add to directories (with the `-w` flag) I already added?

/ipfs/VUKNLVXPDvqzVwPu3PpR8TpYHpYMYbKpjzGLj37CVq Unix for Poets.

/ipfs/QmcHXEGQkVxqwLbSUYJ8jZqqEtJrJPYDVZW2ZSDweyXQhJ Various Tech videos (with that -w option so you can see what's what)

I am making an init script for IPFS but I don't know how to make it run as me and not root. Maybe I should create a dedicated user? Advice?

#! /bin/sh
### BEGIN INIT INFO
# Provides:          ipfs
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: IPFS
# Description:       Interplanetary File System Daemon
### END INIT INFO

PATH=/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bin
DAEMON=/usr/local/bin/ipfs
NAME=ipfs-daemon
PIDFILE="/var/run/ipfs-daemon.pid"

# Activate special INIT functions not found in shell
#. /lib/lsb/init-functions

case "$1" in
  start)
    echo "Starting $NAME"
    $DAEMON daemon
    ;;
  stop)
    echo "Stopping $NAME"
    pkill -f $DAEMON
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: /etc/init.d/ipfs {start|stop|restart}"
    exit 1
    ;;
esac

exit 0

N/m. This is stupid. I'll just use cron.

/ipfs/QmNmiuY26P1AeiZu91XXaKF6qdLpPnbo7ekzGB93VT2SUH Bernie_Sanders_Sex_Tape-H-Km35kBuLQ.mkv

I need to find more things RELEVANT to this board

su USER PROGRAM ARGS

chpst -u ipfs -g ipfs $cancerous_unreproducable_binary_blob_of_go_code

/ipfs/QmVtBGvYemq1S8h1Lim1bjsdGGVVg7jUWXoHmmKXxMe49k

The Einstein Hoax, a compendium of various authors' documents making the case that Einstein was a fraud.

"Also, it should be noted that the IPFS protocol itself does not prevent anonymity at all, and a version of IPFS with careful choice of transports (e.g. tor, and not point-to-point tcp/udp links) + routing system (e.g. not the global dht) can achieve anonymity guarantees. But this is an endeavor unto itself worth doing very carefully. We will be guiding design for this in time, but this is not our focus at the moment.

What this might look like is to use things like TOR as only transports, use ephemeral node keypairs (ids generated every run, or per connection).

There will be much more to say about anonymity with ipfs -- and different ways of achieving it -- but for now please do not count on the go-ipfs to provide any guarantees!"

Why not Freenet or another large alternative?

/ipfs/QmX2rLCT78x8WNToaejtfaKLi7MHGqkJSAZYqiEg8XUFjM/
a collection of lana rain's videos

They should make it so only complete files show in the network. This is frustrating.

What IPFS really needs now is set-it-and-forget-it downloading like torrent clients do. That and a daemon that doesn't crash every fucking day or two.

This thing is useless right now.

sorry about that, something was eating up my connection

Stop watching porn you little shit, you're making it so that other people can't.

i can't help it
i think I'm addicted, user
also nice trips

How much simpler can `ipfs get` be? You give it a hash and optionally a download location and it will try to download the file forever.

What are you running on? I don't have this problem.


NO


There is work being done on native tor integration but the developers are not rushing it; they want to do security passes before enabling it. You could probably build it yourself though, or just torify it however you were originally going to. It is all P2P though.
github.com/ipfs/notes/issues/37
github.com/libp2p/go-libp2p/pull/79


tl;dr
it's more flexible and extensible

IPFS is intended to be modular; bridges to networks like Freenet are possible and inevitable. There's work going on now related to getting content from tor onion links (linked above), torrent manifests (github.com/ipfs/js-ipfs/issues/779), and blockchains like ethereum, zcash, et al.

In theory you'll eventually be able to add or enable a module that will allow you to directly get content via freenet if you want and possibly share data through it too.

IPFS seems like a nice frontend to place on top of things to act as a one stop shop or final solution to the retrieval problem. You shouldn't care where the data is, what network it's even on, or how it gets to you; you should be able to just get the content and share the content, easily.

This can still be done securely as well: they let you segregate yourself off into a private network, use your own preferred encryption, choose your own routing scheme(s), have peerlist control mechanisms, and more. The benefit is that you end up configuring IPFS to act how you want and letting it handle the traversal/negotiations regardless of where the content lives. Even if people come up with better networks than, say, Freenet, it's just a matter of writing a module that will allow you to use Freenet2 in addition to retaining compatibility with Freenet1, without losing the tools around IPFS itself for managing various systems.

The buzzword they're throwing around is "merkle forests" and IPFS is the glue code that ties them all together, so it's not so much a matter of "why IPFS over X" it's more a matter of "you can use IPFS on top of X, Y, and Z, all with the same interfaces and without worrying about the underlying technologies yourself".

...

I'd like the ability to ipfs get something and it automatically resumes if I close the daemon or it crashes. It's also mildly annoying that I have to use screen so I don't have to leave the window/ssh open while it downloads. It would be nice to have an option to hide it away once you tell it to download and auto-resume upon starting the daemon.

ARM version on Banana Pi. It crashes every 1-5 days and I don't know what's causing it, since I don't think it prints out crash logs.

That wouldn't be a bad feature to have; you should suggest it in the IRC or on github, adding a flag to get like "--background" or "--persist" or something. You could also implement it as some kind of daemonized script that keeps retrying `get` until the exit status is a successful/non-error value.

I'm sure people are going to make torrent-client-like frontends for this eventually too.

Interesting, are you running other services on it? I wonder if it's getting killed by the kernel for something like oom, go-ipfs has its own error logging and the Go runtime has its own routines for catching critical errors and logging those as well, so if it's dying it should always log something.

Error: merkledag: not found
What does this mean and why am I getting this error? I was able to download the readme after installation but a day later it was no longer possible, giving me this error.

Have you started your daemon?

Search the thread first please.

That's a good idea, and I think I'll work on a script for that tomorrow. At the very least I can make a cronjob to relaunch the daemon if it crashes. I also have that happen to rtorrent but less frequently, so maybe it's related.

Based on this bug report the next version will finally give users a choice of which hashing algorithm to use. I've been waiting for this so I can use Blake2b instead of SHA256 to hash my terabytes of chinese cartoons and mongolian music faster. We're at a point where there's no longer any reason to not add all your hard drives to IPFS.

github.com/ipfs/go-ipfs/issues/3978

Have they finally fixed the bullshit where running the daemon as a limited user and using add --nocopy on readable files outside that user's homedir never works?

Good. It takes almost a whole day to add a couple hundred GBs on the little SBC I have running my media server.

Haven't tried it on the git version.


I have a feeling a lot more people are going to start actually using IPFS very soon. I hope Filecoin doesn't ruin our fun though.

IPFS INSIDE I2P WHEN?


Nice.

Filecoin doesn't exist, and ipfs is still in infancy, even the worst torrent client performs much better if we're talking about casual piracy.

Nah, normies want a finished product, doubt they'd be interested in experimenting.

This still has the same problem as bittorrent, right? We need a site to post the hash if we want to share the thing.
Also, it's almost ready for Tor (and maybe i2p), right?

I'm ready to switch from bittorrent to it, but I'll wait for an infrastructure (imagine if animebytes.tv just added IPFS support).

OOM?

Both bittorrent and ipfs have decentralized feeds (pubsub in ipfs and dht-rss in bt).

Websites are necessary for discovery because people are lazy fucks, not because the technology would be lacking in that regard.

It works like git, except the repo's parent directory marks the boundary. The default repo is in ~/.ipfs, so anything in ~/ or under it can be added by default. You can set the location of the repo with the $IPFS_PATH variable if you want to move it higher up the hierarchy. The saner thing to do is to just put symlinks in the .ipfs directory, pointing to what you want.


IPFS itself can be indexed and searched
github.com/ipfs-search/ipfs-search
You can also host an index (static and/or dynamic) on IPFS that people could get and post to.


Neat.

That works now, don't know why it didn't last time I tried, whatever. Going to wait for 0.5.0 or something because 14 hours to add 60GB is bullshit.

I thought excessive speed was actually a security flaw of hashing algorithms? Blake2b being 5x as fast as SHA-256 also means that a malicious actor can brute-force 5x as many hashes in the same time.

If you're blindly bruteforcing it, you're doing it wrong. It doesn't matter that it's faster by a factor of 5 when you're cranking out 10^(gorilion) hashes. I'd be more worried about attacks on the algorithm itself, and SHA-2 has collision attacks up to like 40- or 50-something rounds.

Plus the nice thing about IPFS using multihash is that you can always use a different algorithm if you're worried about security.

These hashes are for content addresses, not passwords. If you want the content the hash represents, it's easier to just request it than to try to reverse an arbitrarily large binary from it.
blake2.net/#Q3_speed_is_bad

Point taken.


I guess that leads on to another question: if I could find two different files with the same Blake2 hash, could I use this to get an IPFS user to download something he didn't mean to? I know it's peer-to-peer, so let's say I flood the network with peers armed with the colliding file. Is there any value in such a collision?

As with most crypto things like this, it's extremely unlikely you'll ever encounter a hash collision even when looking for one, and it's even more unlikely you'd find a collision with a useful target. Even then I doubt anything useful would come of it, since content is chunked into blocks and you'd likely be the minority sending blocks that appear bad to the client, which will probably get you auto-blacklisted by that peer.
That said, I'm not really familiar with this specific type of thing because it's not usually something that happens. I've never heard of hash collisions for any format making headlines outside of people forging PDF hashes, and iirc they used some oddity in the format: they modified the data without changing the hash, they didn't generate something new that matched the old one.

Here's what the devs have to say about sha256
github.com/ipfs/faq/issues/24

The absolute worst thing that could probably come of this is that you waste a peer's bandwidth by causing them to download blocks from someone else. Hash collisions are usually only used to forge something like a paper document or some kind of license-related thing; I don't know of any value that could come from one used for content addresses. Reminder: I'm pretty ignorant on this topic (attack vectors).

Just Use Skein hash already. Faster than BLAKE2.

absolute trash performance. keccak is the shit.
csrc.nist.gov/groups/ST/hash/sha-3/Round3/March2012/documents/papers/GURKAYNAK_paper.pdf

that is the absolute best thing that could happen. the absolute worst thing would be arbitrary code execution

How is the Rust rewrite coming along?

Rust would make IPFS code merely unreadable.

But the cornerstone of IPFS design is to make it as slow and inefficient as possible, so they opted for JS rewrite instead ("safe", unreadable AND slow)

It's so great even its authors warn people not to use SHA3.

Maybe if you own a keccak ASIC.

Blake2 is great since it's faster than SHA2 in regular software. SHA3 will be more usable once processors include ASICs for it.
It reminds me of ChaCha20-Poly1305 vs AES. AES is the current encryption standard with lots of desktop hardware acceleration support but little mobile hw support, so mobile devices and older computers perform faster when using ChaCha20-Poly1305. It's great that we now have both a high-speed ASIC-accelerated choice and a high-speed general-purpose choice for both hashing and encryption.
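You can eyeball the software-speed gap yourself with Python's hashlib, which ships both algorithms. A crude single-run measurement, so the exact ratio will vary by CPU (and flips on chips with SHA extensions):

```python
import hashlib
import time

def mb_per_sec(algo, size=32 * 1024 * 1024):
    # Hash `size` bytes of zeros once and report rough throughput in MB/s.
    data = b"\x00" * size
    start = time.perf_counter()
    algo(data).digest()
    return size / (time.perf_counter() - start) / 1e6

# Compare e.g. mb_per_sec(hashlib.blake2b) against mb_per_sec(hashlib.sha256)
```

For anything beyond a sanity check you'd want repeated runs and a proper benchmark harness.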

It's not a rewrite.
The main implementation is still in Go, and the js version is just so it can be embedded in browsers, which was their solution to getting it out there while bypassing the part about getting normalfags to care about shit (which is where most projects fail if they need adoption in said spheres to propagate the system).
This is all in the original design document.

Ipfs-search is down lads.

ipfs-search is dead, the owner can't host anymore.
He dumped the latest database at /ipfs/QmXA1Wiy3Ko29Q54Sq7pkyu8yTa7JmPokvgx4CXtQBWirt

Rest of the code is opensource on github so anybody can restart the project.

There is literally 0 documentation on how to use ipfs with Tor....

You won't find many books on how to screw in nails either, sunshine.

IPFS 0.4.10 is coming soon.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

Big new features
Check the integrity of your pins. Great for checking to see if your filestore pins are still intact.

Rather than removing and re-adding a pin, you can update it and skip lots of needless hashing. Great for maintaining big folders you want to share.

You can now select which hashing method you want. Blake2b-256 has been added and will eventually be the default.

You can now leverage IPFS and libp2p to open any sort of stream between peers. Intended for developers looking to build software on top of IPFS.

Full changelog

never happened. source faggot?

What did they mean by this?

So Fucking Cool.

IPFS v0.4.10 is out
I've been waiting for the features in this update for years. Finally I'm going to hash all my weeb shit.


When keccak was under near-final review by NIST (the organization that turned it into the SHA3 standard) they proposed decreasing its security from the team's proposed default of 576 bit to 128 bit. After massive community outrage and people threatening to not use the new pozzed standard NIST settled for 512 bit security. Would you expect anything less from a USA government controlled organization?


Say I hash my anime folder. Now I download a new show. To update it in IPFS I would remove my old anime folder pin and re-add it to include the new show. It would then have to recursively re-hash the entire folder over again. The update command skips the already pinned data and only hashes the new data. This saves loads of time when updating multi TB folders.

Technically speaking, I don't think re-adding takes exactly as long as the first ipfs add did. I'm sure ipfs pin update is much faster still, but plain add is definitely faster the second time around.

Anecdotal evidence: I bit the bullet like last week and added most of my games folder. I stupidly ran it in the background so I didn't get the final hash. I'm not even sure it finished. (My daemon crashes often.) I have to add it again, so I tried running the same command I did last time and it estimates ~4 hours, and when I use --hash blake2b-256, which would start the process from scratch, it estimates 16 hours. Now, I haven't done any side-by-side performance tests but I believe it's not because blake2 is underperforming.

Your system could've used the file metadata cache it built up for the second ipfs add, but that cache could've been flushed before you ran it with --hash blake2b-256. Also, because SHA256 is slower it might not have finished looking in every directory yet, which could've affected the estimated-time calculation. Code-wise, re-adding everything with ipfs add should take the same amount of time no matter how many times you've previously pinned and removed it.

Holy shit, you have any idea how big this is?
You can "dial in" to a live server now by having only its IPFS pubkey. We're no longer dependent on dyndns bullshit to run permanent stuff from home servers any more. I've wanted something like this for like fifteen fucking years.

github.com/ipfs/go-ipfs/issues/3994
This is the feedback thread on ipfs p2p. They even provided a helpful example:

ipfs config --json Experimental.Libp2pStreamMounting true


p2p listener open p2p-test /ip4/127.0.0.1/tcp/10101

ipfsi 1 p2p stream dial $NODE_A_PEERID p2p-test /ip4/127.0.0.1/tcp/10102

so where is the part where the keccak authors warned people not to use sha3?
protip: it never happened

These next few years are gonna be great.


I wasn't the guy who said that, I was just explaining what happened to you. Go sort through the mailing list archives yourself if you really want solid evidence of keccak authors disapproving of NIST's modifications.

I'm a bit slow, I have my two daemons and supposedly it looks like they're talking to each other, but how do I actually send a message through that open channel?

never happened
usually it is like this: you make a claim, you provide the evidence

let me greentext you the history of sha-3:
just to clarify: they made no changes to the keccak function itself, only to the capacity of the sponge construction, something the keccak authors themselves had proposed before.

tl;dr: sha-3 is perfectly secure, nists modifications were sensible, keccak authors approved, use shake-{128, 256} instead of sha3-{224, 256, 384, 512}
further reading: keccak.noekeon.org/yes_this_is_keccak.html

So do we have an index of well curated IPFS content yet, or are you hoarding all this weeb shit for yourself using non-free hashes?

...

Can I add a directory tree of 10000 files without it filling up the file descriptor table with outbound UDP connections and crashing yet?

I found out that OmniOS is kill so I'm going to hash everything once I migrate to Proxmox over the weekend.

What are some things that you're planning on doing with this?

Yes, ipfs add --local mydir/

The whole Illumos community seems to be dying now that Linux containers are more flexible than Zones and ZFS is in Debian and Alpine. Joyent mismanaging the fuck out of SmartOS isn't helping.

Does there exist something like the freenet forums for IPFS?

Quick summary:

https:[email protected]/* */RSAmP6jNLE,~BG-edFtdCC1cSH4O3BWdeIYa8Sw5DfyrSV-TKdO5ec,AQACAAE/fms/147/operation.htm

Can't you use sha-1? If so, you should be able to write a program that converts torrents into ipfs links.

Is that really a good idea? Isn't it better to hash each episode by itself?

sha1 is deprecated

It's used in torrent files, so you could import a torrent file and use ipfs to download it. But they're switching over to sha256 now.

literally no torrent client is switching to sha256

The spec isn't finished yet, but libtorrent will implement support for it when it is.

kys

SHA-1 is still preimage resistant, so they don't really care yet. But they're working on it.

How does the hashing in IPFS work? Is it just hashfunction(yourfile) where hashfunction = sha256 or does it split the file into chunks?

lol.
the proper time to start "working" on it would have been 10 years ago. it is too late now. even if the spec is changed today, how quickly do you think the clients will implement the changes? what happens to all the existing torrents?

Very fast, in general, judging from how fast earlier changes have been implemented.
They become legacy torrents, all new torrents created will use sha256.

fucking LOL. i bet 500 eurocucks that nothing will change. seriously, make a ethereum contract.

Look at how long it took for DHT to be implemented. arvidn is usually part of writing out these specifications, so they're in libtorrent from day 0.

How does the "trust chains" in IPFS work? Do they do the same stuff as web of trust in Freenet?

If I upload a folder containing file1, file2, file3, and someone else uploads a folder containing file2, file3, file4, but with different file names, will they be merged?

metadata is hashed separately from file contents

So you'd have 4 files and then separate pointers to them? If I have a file which I know has the sha256 hash X, can I search for that or are they hashed in pieces like BitTorrent?

Yeah now that Linux distros have unprivileged LXC containers and ZFS bundled in, there's pretty much nothing special anymore that Illumos brings to the table. I wanted to switch over at some point anyways but I've been waiting the last few years for ZFS on Linux to get some more performance and bug fixes.


There's no need for a public trusted pubkey lookup system. Each IPFS daemon (like your local one) can optionally implement their own blacklist or be part of a private whitelisted IPFS network.


Why would I hash each episode individually? I'm going to hash my anime, music, VN, and manga folders separately. The -w argument wraps each of my subdirectories into individually addressable IPFS directories with all the file names intact.


Source? The last time I checked they were still debating whether to use SHA256 or Blake2. IPFS is transitioning its default from SHA256 to Blake2b soon. I hope they increase the default RSA key length to 3072 too. Even the (((NSA))) is transitioning away from SHA256 and RSA 2048 for current long term hashes/keys.
cryptome.org/2016/01/CNSA-Suite-and-Quantum-Computing-FAQ.pdf

Yes IPFS uses sharding similar to bittorrent except all shards in the network are accessible to anyone. It's basically one huge DTH of every file added. Read the thread or watch the webm I encoded that explains it

But then you have to do all spam filtering yourself, you're not anonymous, and so on. FMS and the Web of Trust had some really interesting ideas technically. How does ipfs-board work?
So if someone uploads their anime folder and it overlaps with your folder, you get two different hashes for the same file?
Might be blake2. But they're switching anyway, it's a recent BEP to be implemented simultaneously with Merkle hashing.
So if you have the sha256 of a file you can't find it?

Jesus christ, that video could be cut down to 10 minutes.

How does IPFS differ from Freenet? Is it as anonymous?

IPFS is not anonymous and easy to monitor. DMCA is possible in IPFS.
IPFS does not "gossip", i.e. only popular data is distributed.
Essentially, IPFS is a low-latency transport protocol designed as a replacement for HTTP.
IPFS is (((Google))).

Freenet is anonymous and censorship-resistant but de-anonymization is possible by packet collision detection.
Darknet built upon Freenet is completely anonymous.
Data is always being distributed throughout network.

But they're quite similar, no? You could add a plugin which adds the caching stuff to IPFS too.

Freenet works by that each node is a proxy, and when you request something it's cached on all nodes between you and the data. Doesn't IPFS do something similar?

Why would I need to filter spam? I only share files I own or try to get files from others that I specifically search for. I'm not forced to host other people's files like Freenet.

Metadata files might differ but the blocks they point to should be consistent, provided you use the same hashing algorithm.

Probably not.


Read the OP. It's not meant to replace Freenet or GNUnet or anything like that.


Content is uncensorable. The IPFS gateway honors DMCA only because they don't want to get involved in legal battles while they're trying to build the software. You can still use the network or other gateways to find the DMCA'd hash. This is why you should be looking at this from the lens of Bittorrent, not Freenet. Torrents are similar in this regard, as it's like taking down a frontend that serves .torrent files or magnets but you can't take down the trackers or knock the file off the network (without going after individuals hosting the files).

>IPFS is (((Google))).
This is laughably inaccurate.

Someone should make an FAQ so that we don't have to answer the same questions every single thread.


Why does self-censoring mean you're not anonymous? There's nothing wrong with subscribing to a public blacklist. Use gateway.ipfs.io/ipfs/HASH to use their blacklist (mostly DMCA'd hashes). There's no Web of Trust needed, just like there's no need for one when using Bittorrent.

No if it's the same file then the hashes will be the same. The file names and other metadata are separate from the contents of the files. If person X shares folder 1 and its contents are A and B, and person Y shares folder 2 with contents of B and C, person Z who wants to download B will get it from person X and Y even though no one knows about each other. Just like bittorrent each file is sharded (split into a bunch of tiny files and individually hashed) so that the entire network is deduplicated. Think of it like everyone is contributing to a single huge torrent.
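The "same bytes means same hash regardless of the file name" point can be demonstrated with plain coreutils. This uses sha256sum as a stand-in; IPFS actually chunks the file and hashes a merkle DAG, so the real check would be comparing `ipfs add -Q --only-hash <file>` output instead.

```shell
# Two files, different names, identical contents -> identical hashes.
# IPFS content addressing rests on this same principle (it just hashes
# a chunked DAG instead of the raw bytes).
tmp=$(mktemp -d)
printf 'same contents\n' > "$tmp/Episode_01.mkv"
printf 'same contents\n' > "$tmp/ep1-different-name.mkv"
h1=$(sha256sum "$tmp/Episode_01.mkv" | cut -d' ' -f1)
h2=$(sha256sum "$tmp/ep1-different-name.mkv" | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "same hash despite different names"
rm -rf "$tmp"
```

This is why person Z can fetch B from both X and Y: both announce the same content hash without ever coordinating.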

Correct, just like you can't find a file on bittorrent if you have the SHA1 of it. Magnet links and IPFS hashes incorporate hashing algorithms into their own hashing structure. It's not a direct lookup service.


DMCA is only an issue on public gateways. If you run a local daemon there's no default blacklist.

Right now IPFS is very gossipy with regard to the peer exchange and DTH. They're working on getting it less chatty. Content is distributed on a completely voluntary basis unlike in Freenet where you're hosting unknown content.

>IPFS is (((Google)))
Explain further.

If you want to make something like Freenet Messaging System you need a Web of Trust to filter spam. I'm not talking about filtering files.

So the files are split into blocks, not concatenated and then split like BitTorrent? Good.
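The per-file chunking can be sketched with `split`: cut the file into fixed-size blocks, hash each block, then hash the list of block hashes to get one root. go-ipfs's default chunker uses 256 KiB blocks and builds a proper merkle DAG; the 1 KiB blocks and flat two-level "tree" here are just to keep the demo small.

```shell
# Toy chunking demo: one 3.5 KiB file -> four 1 KiB blocks -> one root hash.
tmp=$(mktemp -d)
head -c 3500 /dev/urandom > "$tmp/file.bin"
split -b 1024 "$tmp/file.bin" "$tmp/chunk_"
nchunks=$(ls "$tmp"/chunk_* | wc -l)
# hash each chunk, then hash the concatenated list of chunk hashes
root=$(sha256sum "$tmp"/chunk_* | cut -d' ' -f1 | sha256sum | cut -d' ' -f1)
echo "chunks=$nchunks root=$root"
rm -rf "$tmp"
```

Because each block is addressed by its own hash, two files that share a block (or two copies of the same file split identically) deduplicate at the block level across the whole network.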

You can if you know the SHA1, it's smaller than the block size, you know the file name and size, and it's a single file torrent. Ridiculously contrived example, but you can do it in theory.

Is there anywhere you can download the entire blacklist so you can pin all the DMCA'd content?
Also, that's bad from a legal perspective IIRC, since maintaining a blacklist makes the developers responsible for the content. At least according to eff.org/wp/iaal-what-peer-peer-developers-need-know-about-copyright-law

But you could in theory make an addon adding a distributed data storage feature like Perfect Dark/Share/Winny/Freenet/GNUnet without much extra complexity?

Can't they just use BT PEX and DHT?

Okay, a last question. Are all IPFS apps web apps, or are there any desktop apps? If you want to make an application running on IPFS, is there any built in plugin support like Freenet or is it just a matter of interfacing with the API port and running it as a desktop app/web server?

If you're going to play it like that then yes, that's possible in IPFS too.

I'm not sure. If you ask them and say that you want to use it too I'm sure they'll give you a copy.

Granted I only skimmed the whitepaper, but because the website gateway is separate from IPFS itself it shouldn't matter. Websites are not indiscriminately responsible for the user content posted there. If someone hosts a torrent sharing website and they block some DMCA'd magnet links it has nothing to do with Bittorrent itself or its developers.

I suppose you could make something that periodically downloads random IPFS/IPNS hashes and rates them by how popular they are (how many hits they get) and remove unpopular hashes from your cache. What's the point of trying to turn IPFS into Freenet when Freenet already exists?

I meant DHT, not DTH. And yes they are already using them but there's still a lot of work to be done.

Anything you can do on a traditional webserver or filesystem you can theoretically do in IPFS. There's a FUSE and Windows addon that lets you mount IPFS as a regular hard drive in your system. In the newest update they introduced raw TCP bytestreams which opens the door to many more things.

There's no official messaging service or forum or anything like that for IPFS; it's only a protocol. There are applications built on top of IPFS that exist now (check out awesome-ipfs) but they're all community contributions. I'm sure trust systems will come for those that need them.

That's where Filecoin comes in. It's still just a whitepaper for now but they intend on a cryptocoin with which you can spend coins to get people to host your files. It's similar to services in existence right now like MaidSafe or LBRY.

That's not the point of Freenet. The point is that it's anonymous and censorship resistant.
IPFS would be a good base layer to make something like Freenet but not trash as an overlay network, since IPFS is already a very good P2P foundation. Freenet is "good idea - terrible execution".

But then you need a server replying to the requests, or am I mistaken?


My bad, FMS isn't official either. But there doesn't exist any? All I can see is ipfs-boards, and it doesn't seem very robust to say the least.

How long does it take to put a raw block into IPFS? No slower than generating a torrent?

This, when Arvid has decided he thinks something's cool it gets implemented very quickly.

...

You do realize what is downstream from libtorrent, right?

not utorrent

...

fucking retard. i dont use utorrent. i was just pointing out that utorrent wont support all the fancy new stuff arvid is adding to libtorrent.

Summerfags leave.

ZFS-on-Linux gets really good at 0.6.3 or so. We use it in production.

...

not an argument

Summerfags leave.

uTorrent is also updated very rapidly since it's made by BitTorrent Inc. And everything else (except rTorrent and Transmission) is built on libtorrent.

Is there any model for consensus in IPFS to implement voting, besides "1 key = 1 vote"?

"ERROR flatfs: too many open files, retrying in 300ms flatfs.go:180"
Should I be worried? Inserting 1000 files, around 10mb each, seems to be going smoothly.
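That flatfs error is the daemon hitting the per-process file descriptor limit; the retry means it's coping rather than failing. A common workaround (not an official fix) is raising the soft limit in the shell that launches the daemon; I believe newer go-ipfs versions also read an IPFS_FD_MAX environment variable, but check your version.

```shell
# Inspect the current soft file-descriptor limit; raise it before
# starting the daemon if you plan to add thousands of files at once.
soft=$(ulimit -Sn)
echo "current soft fd limit: $soft"
# to raise it for this shell and its children:
#   ulimit -n 4096
#   ipfs daemon
```
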

Sounds good to me.

Anyone doing this deserves what they get

They've been dealing with issues like that for at least a couple versions. If you keep having trouble with it, try the --local flag or maybe enable sharding. I'm not really sure if either will help with this issue in specific but I've heard they're great for adding very large directories.

Yes, it essentially becomes a very, very, poorly done POW. But there's nothing else, like "1 IP = 1 vote?"

...

no thanks fam. maybe later.

There are alternative implementations.

Any packaged for Debian?

Or Flatpak?

No packages in Debian, but the go-ipfs download includes a binary and an install script

localhost:8080/ipfs/QmecV5hVLRpzn1yxe7y1Vx9NhATUfczSk4Kt8qHzL9SZqa - Neon Genesis Evangelion 1-26 DC+EoE [MULTI][BD 1080p 8bits v2.22][Sephirotic]

IPFS index when? Could just do one straight up with folders, won't even need search feature.
Insertions would only need to modify a few nodes.

There used to be an IPFS index, but it was shut down because of server costs. I think there is an IPFS hash of the site still around, but I cannot find it.

No, I mean an entirely static one. Not based on tags or keywords, just a straight up hierarchy with the built-in folder browsing support. You could delegate responsibility using a bot that republishes from IPNS namespace B-Z to A, that's the only "moving part".

Just download the binary and run install.sh to drop it in /usr/local/bin/.

all I want to know is how do I find warez, bookz, and tunez with IPFS? make a 30s tutorial and IPFS will rule the universe.

install gnunet

I'm not sure what you're trying to communicate here. So it's a static index that is automatically built from IPNS announcements? Like IPFS Search but organized into one big hierarchy?

Thought I would try this out for the first time since like v0.1, anyone have an up to date list of content?

It's just ipfs get [address] to download, right?

It's like an old ftp/file server. You build it with the folder support in IPFS, so you can browse it via the command line or the browser. Whenever you add something new, you only need to recalculate some of the hashes.

so just an ipns hash?

I should note you don't need to do any of that manually.
ipfs add New_episode.mkv | ipfs object patch add-link my-series/new-episode.mkv

That will stitch it into the old structure, updating all hashes as needed.

No, you want to have the leaf node in multiple places. So if you update the series "Neon Genesis Evangelion", you need to update N, 1995, 新, and 30, if you're indexing based on english title, year, japanese title, and MAL ID.

What's the command to replace OLD_HASH with NEW_HASH in all subdirectories of ROOT_HASH and return the new hash? ipfs object patch add-link ROOT_HASH OLD_HASH NEW_HASH?

Nobody knows what you're trying to do, and that's not how you use object patch add-link.

Do something like
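A sketch of what the correct `ipfs object patch` invocation looks like in go-ipfs 0.4.x: it operates on one directory node at a time (returning the new directory hash), so "replace OLD_HASH with NEW_HASH everywhere" means re-linking at each level yourself. ROOT, NAME, and NEW_HASH below are placeholder values, and the ipfs calls are guarded so the sketch runs even without a daemon.

```shell
# Replace one child of a directory node: remove the old link, add the new
# one. Each call prints the hash of the updated directory, which you feed
# into the next call (and into the parent directory, and so on up the tree).
ROOT=QmRootDirExample      # placeholder directory hash
NAME=episode01.mkv         # placeholder link name
NEW_HASH=QmNewFileExample  # placeholder file hash
if command -v ipfs >/dev/null 2>&1; then
  ROOT=$(ipfs object patch "$ROOT" rm-link "$NAME")
  ROOT=$(ipfs object patch "$ROOT" add-link "$NAME" "$NEW_HASH")
  echo "updated root: $ROOT"
else
  echo "ipfs not installed; the commands above are the sketch"
fi
```
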

Why though? Why would you have multiple child nodes with the same hash? That's just doubling things up for no reason.

To create something like an index, where you can browse by first letter, year of release, ID, and so on.

You're still not explaining it very well at all. Why go through all that work when you can use a script like to parse the hash's file name? What you describe in is just the "ipfs pin update" command attached to an ipns hash. Your indexing service wouldn't be changing the structure of ipfs hashes, it would be collecting ipfs hashes and correlating tags (metadata) with them. That way you sort or search by the metadata and get the resulting ipfs hash.

There isn't enough content around to make it worth such a detailed index right now. An updated /g/ index (ie, list of hashes) is really all we need at the moment. Also, downloading 59GB of animu from a single peer doesn't really make any sense.

If anyone's serious about actually getting a small library going, my suggestion is that we should go through our media folders doing --hash-only. Then upload our lists and see what shows & movies most people have in common. If we start by adding them, then at least there'll be something of a seed pool at the start.
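The list-comparing step is plain coreutils once everyone has a list. Each anon would generate theirs with something like `ipfs add -rnq ~/anime > mylist.txt` (`-n` is `--only-hash`, so nothing actually gets added or copied); sample lists stand in for two submissions here.

```shell
# Find hashes that appear in both submitted lists: sort each, then
# comm -12 keeps only the lines common to both.
tmp=$(mktemp -d)
printf 'QmAAA\nQmBBB\nQmCCC\n' > "$tmp/anon1.txt"
printf 'QmBBB\nQmCCC\nQmDDD\n' > "$tmp/anon2.txt"
sort "$tmp/anon1.txt" > "$tmp/1.sorted"
sort "$tmp/anon2.txt" > "$tmp/2.sorted"
common=$(comm -12 "$tmp/1.sorted" "$tmp/2.sorted")
echo "$common"
rm -rf "$tmp"
```

Extending this to N lists (intersect pairwise, or count occurrences with `sort | uniq -c`) would give the "most people have in common" ranking.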

This could probably be automated in 2 ways.

Or you could just merge all the files and run them through `sort | uniq -d`, but that wouldn't be an accurate count of file providers, only duplicate shares from submitters. It also doesn't prune itself based on the network.

Well I was suggesting the list-comparing thing just because I had assumed no one's added their full libraries to the network yet (so findprovs wouldn't tell you anything). But yeah in terms of maintaining an index, you'd definitely want to keep track of the health of the hashes and report on that.

see

Because you want to make it browsable via browser.
Does that provide "search-and-replace" functionality?

You could set up some RSS feed and auto download anime, for example HorribleSubs or tokyotosho.

Is there some way to figure out if a hash is online?
There doesn't seem to be a timeout, so the request will hang forever.

# ipfs dht findprovs <hash> | wc -l
This will tell you how many people are currently declaring that they can provide <hash>. It might give you a false positive, but it won't give you any false negatives (ie, it can definitely tell you if a file is offline, it might not accurately tell you that a file is definitely online).
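For the hanging-forever problem, coreutils `timeout` can bound the lookup, since as far as I can tell the 0.4.x findprovs command has no timeout flag of its own. `is_online` is a hypothetical helper name.

```shell
# Wrap findprovs in a deadline: any provider line within 30s counts as
# "online"; otherwise timeout kills it and the function returns nonzero.
is_online() {
  timeout 30 ipfs dht findprovs "$1" 2>/dev/null | grep -q .
}
# usage: is_online QmSomeHash && echo "someone claims to provide it"

# timeout kills the child and exits with status 124 when the deadline
# passes, demonstrated here with sleep instead of ipfs:
timeout 1 sleep 2
status=$?
echo "timeout exit status: $status"
```
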


I think you're looking at this index idea in the wrong way. It shouldn't just be a humongous file hierarchy. The ipns hash of the website should point to the front-facing code, which in turn is populated by an independent index of hashes associated to metadata, as others have pointed out. Why emulate ftp when you can emulate kodi and make it an order of magnitude easier both to code and to use?

But if you're really desperate to have everything organized as a single hierarchy, then just use the ipfs files interface. It allows you to create a UNIX-like virtual file system, so it would go like this:
Let's say that your ipns currently points at <root-hash>, which is the directory /share in your virtual file system, that you're using to build your mega-index.
# ipfs files ls /anime/series/evangelion
oh the latest episode isn't in there!
# ipfs add -q /path/to/new/episode.mkv
this gives you <file-hash>, the hash of the file to be added
# ipfs files cp /ipfs/<file-hash> /anime/series/evangelion/name_of_new_episode.mkv
this adds that hash to the given virtual directory
# ipfs files stat /share | head -n 1
this gives you <new-root-hash>, which is the hash of the updated share directory. You can now update your ipns to point at this hash and it'll just automatically have updated all the relevant child directories, of course. This is actually really simple and implements exactly what you want (from what I can tell). Try it out with some text files or something to prove to yourself that it works as advertised.

that should have been
# ipfs files cp /ipfs/<file-hash> /share/anime/series/evangelion/name_of_episode.mkv
You can think of it like a symlink between the absolute path of the file, and the place where you want to keep it in your virtual file system.

># ipfs dht findprovs <hash> | wc -l
Awesome, thanks

just slapped together a website for a directory.
needs a lot of work
littlenode.net/directory.html

So far I just grabbed some links from this page.
I intend to add a script that will determine which links are reachable.
I'll probably make a thread when/if this website enters a usable state

last thread
Updates
full changelog: github.com/ipfs/go-ipfs/blob/7ea34c6c6ed18e886f869a1fbe725a848d13695c/CHANGELOG.md
""0.4.10""
ipfs pin verify checks the integrity of pinned object graphs in the repo
ipfs pin update efficiently updates the object graph
ipfs shutdown shuts down the ipfs daemon
ipfs add --hash allows you to specify a hashing algorithm, including blake2b-256
ipfs p2p (experimental) allows you to open arbitrary streams to ipfs peers using libp2p
""0.4.9""
ipfs add can now use CIDs (content identifiers) instead of multihashes, which allows the type of content to be specified in the hash. This could allow ipfs to natively address any content from a merkle-tree-based system, such as git, bitcoin, zcash, and ethereum

tl;dr for Beginners
How it Works
When you add a file, the files are cryptographically hashed and a merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file then both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets which require both seeders use the same file.
FAQ
It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.
Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.
You be the judge.
It has implementations in Go (meant for desktop integration) and Javascript (meant for browser/server integration) in active development that are functional right now, it has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc) into separate projects that allow for drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.
Websites of interest
gateway.ipfs.io/ipfs/
Official IPFS HTTP gateway. Slap this in front of a hash and it will download a file from the network. Be warned that this gateway is slower than using the client and accepts DMCAs.
ipfs-search.com/
Search IPFS files. Automatically scrapes metadata from DHT.
glop.me/
Pomf clone that utilizes IPFS. Currently 10MB limit.
Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.
ipfs.pics/ (dead)
Image host that utilizes IPFS.

Nice one. I had been storing some links, most of them from previous threads and the /g/ index, on this page if you want to use them
/ipfs/QmYm91cERLaSqetKNhPjPGB4fGYW9zyztvXsnuMoehBMLV/hb/hashbase.html

When will pubsub be "production ready"?

Nice, I tried this out and it seems to work well.
I've added the entire discography of Carbon Based Lifeforms (space-like ambient music) to the public key:

QmPKvd1heEnAGUemGJcRss5H8zsC6MMJxEWYZKatpVtbiF
It's about 496 MiB.
My machine is not always up (sometimes for technical reasons), so I wouldn't rely on it, it was mostly just proof of concept.

To use it, are we supposed to do an "ipfs name resolve <key>", then an "ipfs get"?
Apparently it seems to have a limited lifetime, so I may have to keep updating it.

lol and another question, is there a way to view progress on an "ipfs get"? Sometimes it just hangs forever, and I can't tell if it's searching or actually downloading.

ipfs get is enough

You can also do ipfs name publish --key=anime $(ipfs add -rwQ --nocopy /path/to/chinese/cartoons).

Requesting: script that checks timestamps and only hashes the files that have changed and aren't unfinished
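A rough take on that request: keep a stamp file, hash only files modified since the last run, and skip obvious unfinished downloads (`*.part` here; adjust the pattern for your torrent client). The ipfs invocation is commented out so the sketch runs standalone, and the touched files simulate a media folder.

```shell
# Incremental add sketch: find files newer than the last-run stamp,
# excluding partial downloads, then (commented) hand them to ipfs.
tmp=$(mktemp -d)
touch "$tmp/.last_run"                   # pretend this stamp is from last run
sleep 1                                  # ensure later files are strictly newer
printf 'new' > "$tmp/new_episode.mkv"    # modified after the stamp -> included
printf 'dl'  > "$tmp/partial.mkv.part"   # unfinished download -> skipped
changed=$(find "$tmp" -type f -newer "$tmp/.last_run" ! -name '*.part' ! -name '.last_run')
echo "would hash: $changed"
# for f in $changed; do ipfs add -q --nocopy "$f"; done
# touch "$tmp/.last_run"                 # update the stamp after a real run
rm -rf "$tmp"
```
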

Unfortunately I have to agree. Tried to play around with it some but I just don't want to learn more about Twisted