IPFS thread

ipfs.io/
ipfs.io/blog/19-ipfs-0-4-3-released/
Last thread: archive.is/hPYvi


We haven't had a new thread for over a month now. Share your new IPFS hashes.

Other urls found in this thread:

blog.neocities.org/its-time-for-the-permanent-web.html
ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw
addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/
chrome.google.com/webstore/detail/ipfs-gateway-redirect/gifgeigleclkondjnmijdajabbhmoepo
archive.is/hPYvi
github.com/victorbjelkholm/ipfscrape
archive.is/iYnUW
github.com/ipfs/go-ipfs/pull/2634
github.com/ipfs/examples/tree/master/examples/ipns
github.com/ipfs/notes/issues/64
github.com/ipfs/go-ipfs/issues/1633
github.com/ipfs/notes/issues/146#issuecomment-232953462
localhost:8080/ipns/ipfs.io/blog/
docs.google.com/a/andyet.com/forms/d/e/1FAIpQLSekmfikDJ5mIS5YugnAinTfiuyJ4BgkqkkX18DldpbirC3dUQ/viewform
github.com/libp2p/specs
github.com/haadcode/orbit
github.com/ipfs/go-ipfs/issues/3177
github.com/ipfs/webui/issues/95
github.com/ipfs/pm/issues/217
github.com/ipld/cid
github.com/ipfs/go-ipfs/issues/3299
github.com/ipfs/archives/issues
github.com/ipfs/examples/tree/master/examples/websites
github.com/ipfs/notes/issues/124
youtube.com/channel/UCdjsUXJ3QawK4O5L1kqqsew/videos
public.etherpad-mozilla.org/p/ipfs-oct-10-go-ipfs
public.etherpad-mozilla.org/p/ipfs-oct-10-libp2p
github.com/ipfs/faq/issues/19
github.com/ipfs-filestore/go-ipfs
github.com/BrendanBenshoof/cachewarmer
filecoin.io/
github.com/ipfs/archives
github.com/ipfs/notes/issues/58
github.com/ipfs/go-ipfs/issues/3313
github.com/ipfs/go-ipfs/issues/3316
github.com/ipfs/go-ipfs/milestone/5.
github.com/ipfs/go-ipfs/milestone/26.
github.com/ipfs/go-ipfs/blob/master/docs/config.md
dist.ipfs.io/#go-ipfs
ipfs.io/ipfs/QmcYo2u6Sk9gfQfPizEiLrRRos5GmBx5gi5FNSxGy2tNYt
github.com/ipfs/go-ipfs/issues/2060
github.com/ipfs/notes/issues/131
ipfs.pics
github.com/ipfs/awesome-ipfs/blob/master/README.md
zeronet.io/
github.com/ipfs/js-ipfs
docs.ipfs.apiary.io/
github.com/ipfs/notes/issues/37
github.com/ipfs/go-ipfs/milestone/26
github.com/ipfs/go-ipfs/issues/3397#issuecomment-264764028
github.com/ipfs/go-ipfs/issues/2509
github.com/oduwsdl/ipwb
gateway.ipfs.io/ipfs/QmR9aMT7QzRLcpQrBf3fCYnnwtJ8paRSQU9mGoJgEpP4Mv/Books
github.com/whyrusleeping/ipfs-see-all
blog.p2pfoundation.net/solid-can-web-re-decentralised/2016/04/07
github.com/multiformats/multihash
github.com/ipfs/go-ipfs/pull/3629
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md
ipfs-search.com/
google.com/#q=inurl:ipfs.io/ipfs/
glop.me/
8ch.net/tech/res/715551.html
github.com/ipfs/awesome-ipfs
emule-project.net/home/perl/help.cgi?l=1&topic_id=134&rm=show_topic
libtorrent.org/dht_rss.html
bittorrent.org/beps/bep_0003.html
github.com/ipfs/specs
github.com/ipfs/specs/tree/master/bitswap
github.com/ipfs/go-ipfs/milestone/30?closed=1
dist.ipfs.io/go-ipfs/v0.4.7-rc1
github.com/ipfs/go-ipfs#build-from-source
github.com/ipfs/go-ipfs/issues/3786
ipfs.io/docs/install/
github.com/ipfs/go-ipfs/milestone/23
en.wikipedia.org/wiki/NeWS
youtube.com/watch?v=hpCxtb2E1as
swarm-gateways.net/bzz:/theswarm.eth/
github.com/ethersphere/go-ethereum/wiki/IPFS-&-SWARM
sourcediver.org/blog/2017/01/18/distributing-nixos-with-ipfs-part-1/
github.com/NixIPFS
github.com/ipfs/go-ipfs/issues/3763
github.com/ipfs/notes/issues/76

Could anyone tell me why I should use it over Bittorrent?

That's like asking why you should use HTTP over BitTorrent; the protocols have different intentions.

I don't know all the comparison points but I'll list some. This is all stuff I've gathered from previous threads and from looking at GitHub, so some of it may be outdated or wrong.

Gateways provide interoperability with other protocols. Currently I only know of HTTP, but more will come as libp2p matures. This is nice because you don't have to wait for people to implement support for IPFS: if someone writes a gateway, then everything that supports that protocol supports IPFS indirectly. You just tunnel through your local gateway, or someone else's gateway if you don't want to run ipfs yourself on your local machine. The obvious example is visiting an HTML page through an HTTP-to-IPFS gateway in a browser that doesn't understand IPFS, or streaming a video file in your media player.

Cross-content compatibility: with BitTorrent, if you make 2 different torrents with 2 different sets of files that happen to contain some of the same files, they may not share peers, and if the chunk size is different they definitely won't share peers. With IPFS, since every file is hashed on its own instead of as part of a bulk group, if anyone on the network has a file you can fetch it from them, even if it's part of a different set than the one you're downloading.

IPFS is a file system that can be mounted: I could post a hash and you could browse it in your OS and use your OS tools on the files. There is also a mock Unix file system, which is neat; it allows you to take hashes and compose them into a directory structure, so you can quickly and easily compose different sets of data, e.g. "I want this folder but without these files, and take these 2 hashes and put them in this directory but renamed to this". Since all that stuff is just metadata it doesn't affect provider count like mentioned previously: even if you take a hash, give the content a name, bundle it with other files, and distribute the resulting hash, that hash is still made up of the same components and will have the same providers, including yourself (you'll now be providing the same content with different sets of metadata; it only takes a few bytes of disk to provide both, not 2x the content just to have a different structure/names). With a torrent you'd have to make all that stuff on your real filesystem, remake the torrent, then distribute it starting with 1 peer (yourself) even though other people have those files, and if you want to keep providing the original torrent you need to keep both copies on disk even though you're only changing metadata.
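For example, composing with the files API looks something like this (hashes and names here are made up):

ipfs files mkdir /mycollection
ipfs files cp /ipfs/QmSomeVideoHash /mycollection/renamed-video.mkv
ipfs files cp /ipfs/QmSomeDirHash/subdir /mycollection/subdir
ipfs files stat /mycollection    # prints the hash of the composed directory

No content is copied or rehashed; you're just building new metadata that points at blocks everyone already provides.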

Dynamic content: IPNS (and later: pubsub, IPRS, IPLD) allows for dynamic content on top of IPFS. IPNS gives you a hash like IPFS does, but this hash is a pointer to an IPFS hash that can be changed, so you can add data, publish it to the IPNS hash, and people who visit/get the IPNS hash will now receive the new content(s). You can optionally use it with DNS too by adding a TXT record; this way you can have human-friendly domain names that automatically resolve to HTTP or an HTTP gateway if the user isn't using IPFS, or resolve over IPFS directly if they are.
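Roughly, the publish flow is (hashes and peer ID made up):

$ HASH=$(ipfs add -r -q mysite/ | tail -n1)   # root hash of the site
$ ipfs name publish "$HASH"
Published to QmYourPeerID: /ipfs/QmYourSiteHash
$ ipfs name resolve QmYourPeerID
/ipfs/QmYourSiteHash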

cont...

Better dynamic content systems are in the works, which I think will be very interesting when coupled with mounting. You can already use and mount IPNS, which means you can create and read from a distributed FS very easily. Think of how annoying it is to set up redundancy for network file systems and the limitations imposed on them; with IPFS you just publish a hash and mount it while the underlying program takes care of discovery, routing, traversal, etc. All in all it's 2 commands for the provider and 1 for the consumer: add, publish, mount (see the sketch below). To create another point of redundancy (mirror/fallback/etc.) on another machine you just fetch the contents periodically, that's all. When pubsub is done you'll just subscribe, and I think that will happen automatically: literal drop-in nodes with no config needed.

There are crazy things people are doing with read-only virtual machines and software package repos that are real neat; imagine being able to just mount your entire package repo and fetch it from the closest source on demand, which could be a remote peer or someone on your LAN. A series is neat too: imagine I have a series called "Whatever", you could mount the IPNS hash at "~/Videos/Whatever" and it will just populate as I publish episodes. BTSync can do some of this but not all of it, and on top of that it's proprietary and thus not as portable; IPFS is an open spec and go-ipfs is/will be an open reference implementation.

IPFS also allows for resource constraints such as disk usage, bandwidth, etc. If you configure it with a disk watermark it will automatically reclaim disk space too, so you could watch ep1, then 2, then 3, and maybe by that time it hits the watermark, removing the blocks taken up by the old episode; the pointer is still there, though, so if you want to watch ep1 again you just double click it and it will refetch. That's optional: if you want, you can keep all data forever or manage it manually.
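The provider/consumer flow, as I understand it (paths and hashes made up):

# provider
ipfs add -r -q ~/Videos/Whatever | tail -n1    # note the root hash it prints
ipfs name publish QmThatRootHash               # point your IPNS name at it

# consumer
ipfs mount                                     # mounts /ipfs and /ipns via FUSE
ls /ipns/QmProviderPeerID                      # episodes appear as plain files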

libp2p is intended to be modular; things like Tor, I2P, cjdns, etc. will interoperate with IPFS and thus with anything that uses IPFS. If I remember correctly, custom BitTorrent clients had to be written for I2P integration, and you can't use those clients outside of I2P or in conjunction with the standard clearnet; in the IPFS case you could use the clearnet, I2P, or both at once if you wanted. This is still being worked on though, and I don't think there's any integration beyond HTTP so far, but there seems to be a lot of interest in I2P integration. I think cjdns already works without any changes, since it exposes network interfaces to use.

I don't know if this is an advantage, but since all content is addressed by its hash and nothing else, it shares the benefits of magnet links: all you need to get the content(s) is its hash. I guess the benefit ties in with the cross-content thing; since a piece of content will only ever produce the same hash by default, there should be no accidental peer fragmentation, whereas a BT magnet essentially just downloads a torrent file to be used, which could be different from another torrent file that has the same content.

I haven't looked into it much, but the API is important; I'm surprised to see IPFS being integrated into other things already. The gateways make it real easy to drop in support immediately and refactor later. I don't see many things taking advantage of BitTorrent as a library; migrating from a traditional protocol to BT seems more complicated. I can't really speak to any of that yet though, since it's not all solidified and I'm not that experienced with BT as a library myself.

I just remembered the browsing aspect of it too. It's real easy to browse IPFS content, and with the ease of composability it's easy to create sets of content no matter the origin and without wasted disk space; there's no need to worry about fixing metadata either, since all the hosts will still be there. It reminds me of FTP and Direct Connect, where you could just share your entire media drive and see what other people are sharing, browse aimlessly for fun instead of needing to know what you're looking for, and without penalty. Someone posted their chan folder in one of the previous threads and I just clicked the link and browsed some webms and images; it was neat and aimless. Not too many people would want to do that with a torrent.

IPFS is really designed as an HTTP replacement more than a BitTorrent replacement, but when you think about it, HTTP is just a way to distribute data and so is BT. IPFS is born out of the frustration of all these distribution concepts being talked about forever but never implemented; it's taking old ideas and implementing them, as well as adding some new things that I think are interesting and much more flexible than both HTTP and BitTorrent. The current implementation works pretty well but there is a long way to go; the fact that people are already considering the alpha implementation a BT replacement should say something about where it's going. I've seen people over at Holla Forums use and recommend it for self-hosting because it's convenient for the provider and the consumer. In the future when they add no-copy-add (which seems almost done) I think it will take off big in the file sharing scene; it's just too easy and convenient to use, and if someone makes a good user-friendly front end I think that will have an even bigger impact.

I'm also going to shill these articles because they probably do a much better job talking about IPFS than I can
blog.neocities.org/its-time-for-the-permanent-web.html
medium.com/@ConsenSys/an-introduction-to-ipfs-9bba4860abd0

As well as these videos from the last Holla Forums thread I saw on IPFS
ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

and this extension that turns ipfs paths into clickable links (see image) and handles gateway redirection automatically (uses the gateway if you're not running the daemon, uses your daemon if you are)
addons.mozilla.org/en-US/firefox/addon/ipfs-gateway-redirect/
chrome.google.com/webstore/detail/ipfs-gateway-redirect/gifgeigleclkondjnmijdajabbhmoepo

Korean documentary with English subtitles of the Xingu and another tribe in Brazil.
/ipfs/QmUSoSsiSWB9vN3hfnrVV1r5vk3S9vPhsgbrXNNpqYeEiD

Shows how they live and towards the end how they're becoming more integrated with modern society and tools.

When do we get a C or C++ client? Can't run the Go one on my shitty ARM board.

Of course you can. You just have to compile it yourself.

I mean with reasonable performance. The same reason i2pd was created to "replace" i2p.

Have you tried it recently? The Go team recently made performance improvements in 1.7 and ipfs itself has had some improvement with their caching etc.

The Go compiler improvements may be x86 only until 1.8 though, I forget.

I've seen people say they were running on low spec ARM boards before when it was all worse.

Previous thread archive.is/hPYvi

Dude, it's literally the third link in the OP.

...

Does anyone have any performance/memory usage metrics for go-ipfs?

Is there a c-ipfs yet?

They have some way of getting metrics from the daemon but I don't remember where that stuff is and I never used it myself.

Maybe one of the GitHub pages has some. They said somewhere that performance isn't really a top priority until the reference implementation is complete; when it's finalized there will be optimization and security passes. Until then I don't think they'll group all the metrics in one place; they usually only reference specific improvements in performance-related pull requests.

On my x86-64 machine it uses about 200MB of memory. I allocated some for caching and have a huge datastore, and I usually keep it alive for days at a time, only killing it when I need my bandwidth. I don't know who is to thank for the reduction (ipfs team or Go team) but it used to idle at around 400MB, I think, back when Go 1.5 was the latest version.

I forget the numbers but I also noticed the CPU usage going down as Go and ipfs were developed further.

If you're concerned about memory you can set the environment variable IPFS_LOW_MEM to true (since 0.4.0) and see if that does anything; I never tried it myself.
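If you want to poke at it yourself, something like this (I haven't verified what IPFS_LOW_MEM actually changes):

IPFS_LOW_MEM=true ipfs daemon   # start the daemon in low-memory mode
ipfs stats bw                   # rough bandwidth numbers from your own daemon
ipfs repo stat                  # block count and datastore size on disk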


Only Go and JS currently that I know of.

I got excited when I saw the new thread and jumped the gun. My bad.


Feel free to download the zip and host it or use this.
github.com/victorbjelkholm/ipfscrape

Now that Filecoin is based on Ethereum, things could get interesting. I really hope the devs finish IPFS before they release Filecoin publicly. I'm predicting they'll somehow be used when people abandon private torrents and switch to private IPFS nodes. I hope file sharing stays free like it is today.
archive.is/iYnUW

This is the future I've wanted for the longest time. There's nothing preventing us from clustering our devices and actually utilizing this global network. Finally people are taking the initiative to make use of it: programs and data on the network, a reliable and distributed network, not a weak centralized one which may disappear or fail at any point.

This is the closest I've felt to a new age, which is kind of shitty since it should have been like this from the beginning, but at least it's happening one way or another. Even if it wasn't the world wide web, and even if it's not IPFS+Ethereum, I'm glad to see these concepts being pushed more and more as time goes on.

BitTorrent is still better than IPFS

In what ways?

Original user here. BT is better because it has good clients (performant languages without GC, good curses interfaces or daemon/client architecture). The end.
The problem with IPFS is that you're unlikely to find a file sharing only oriented software.

IPFS is complete shit.

/thread

How does any of that factor in? There are libraries for BT written in Java, Python, JS, etc., but those are not the protocol or spec, they're just implementations of it. Nothing prevents you from writing one in another language; the same is true for IPFS.

What problems have you run into performance-wise with the Go implementation? In my experience it runs pretty well for not having had any optimization effort yet, and it will only improve from here. The only performance issues I ever notice are I/O-bound operations.

The reference implementation is a daemon with 2 interfaces, a cli and a webui. You can wrap these with their api or create your own interface from their libraries.

The entire point of the protocol is to distribute data so I really don't know what you mean by this.

What does BT have over IPFS in relation to the protocol? No matter what, BT implementations are bound to that. I can't see any actual advantages.

I really don't get where they come up with this stuff. I mean, why are they even comparing BT to IPFS as if the protocols were competing? Their aims are not the same: one is for file sharing and the other is trying to replace HTTP.

Also I'm OP, I don't get what the guy you replied to meant by "Original user".

I'm adding my entire media drive when this is merged
github.com/ipfs/go-ipfs/pull/2634

This, but IPFS is still better than BT at file sharing, since it deduplicates data across the whole system. This solves the problem of different people sharing the same files but being unable to download from each other because they're in separate swarms.

Looks like a newfag who thought he had to say that he was
He meant he's the original guy who asked the question that the reply responded to, not some random person.


Same here. I hope IPNS gets some love soon too.

What's even the point of ipns? Asking because this shit literally has no documentation:
~ $ ipfs mount
IPFS mounted at: /srv/ipfs/ipfs
IPNS mounted at: /srv/ipfs/ipns
~ $ ipfs name publish QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn
Error: You cannot manually publish while IPNS is mounted.

In short, it's for distributing dynamic content.

When do we get trusted private networks?
It might not even be worth running syncthing at that point, when mirroring doesn't require a complete copy of all the files and we have trusted private networks.

That's by far my biggest peeve in any file-sharing system.


IPNS hashes point to IPFS hashes (or maybe other data in the future, like IPLD). The point of them is to have dynamic content like a website: you make a site, hash it, publish it, then link out your IPNS hash. If you change the content and republish, people going to that hash will now see the new version. Think of it like DNS, but instead of name->IP it's hash->hash.

It works, but the stuff around it isn't done yet. As it is right now I think you need to republish every 24 hours or it will stop resolving; even if the content has not changed, you need to keep the record alive. Eventually other people will be able to keep it alive, so that at worst it will resolve to the last known thing it pointed to even if the publisher stops publishing.

The other limitation that will be removed in the future is that 1 daemon can currently only control 1 IPNS hash.

github.com/ipfs/examples/tree/master/examples/ipns

I've seen a lot of people using it like the equivalent on Freenet, where they put a link to their IPNS at the bottom of the page saying "click here for latest version"; that way, if someone is linked to an IPFS version they can potentially find a newer version by clicking the IPNS link, which will point to a new IPFS hash.

The other dynamic thing is a pub-sub system
github.com/ipfs/notes/issues/64
I think right now publishing to IPNS takes about 5 seconds for the records to propagate, so it's not useful for real-time stuff; pub-sub will be for that.


Depends on what you mean. You can have a private IPFS network right now by just removing the default bootstrap peers from the config and replacing them with trusted peers. The only issue right now is that you can't prevent someone else from connecting to you; that may have been resolved though, maybe check GitHub for blacklist or whitelist, I think I remember seeing some way to do that.
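The bootstrap swap is just something like (multiaddr made up):

ipfs bootstrap rm --all                                            # drop the default peers
ipfs bootstrap add /ip4/192.0.2.10/tcp/4001/ipfs/QmTrustedPeerID   # add your own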

There's also this for a key system
github.com/ipfs/go-ipfs/issues/1633

The other thing would be integration with other protocols like I2P which would be configured in a way you'd prefer and then you'd tell IPFS just to use that interface or however it would be implemented.

github.com/ipfs/notes/issues/146#issuecomment-232953462

>go to localhost:8080/ipns/ipfs.io/blog/

/ipfs/QmVu1EcF8Gap2RLiPj61RHWUCr5nn16N6ffmBHZwnTaJAj

The short answer is that they're both peer-to-peer systems, and there are tens of thousands of peers on BitTorrent with a few hundred on IPFS, so obviously BT is far and away the more useful system right now. The argument from fans (myself included) is that if the level of adoption were comparable, the utility for users of IPFS would be much higher.

One example of something you could do with ipfs much more conveniently than with bittorrent or http: I have a picture/book/song on my hdd that I want to share with a friend. I right-click the file, copy the hash to the clipboard, and send him the hash. When he clicks the link, he downloads it from me and from everyone else in the world who also has it on their computer and has chosen to share it.
If it were HTTP, I would need to go find a site that had a similar file and send him the link. If it were BitTorrent, I could create a magnet link, but it'll only have one peer. You might upload it to the chat client directly, but then you're subject to their rules & limitations on file sharing.

e.g. here's 250MB of assorted books
/ipfs/QmcL9T6sZrKc8qEXPaQtqcd6tMEVjmFCVGLxdXDWPU69La

Looks like a second Zeronet. What has it got to offer over Freenet? Because it sounds like exactly the same thing if you ignore their apparent eagerness to respect the DMCA (docs.google.com/a/andyet.com/forms/d/e/1FAIpQLSekmfikDJ5mIS5YugnAinTfiuyJ4BgkqkkX18DldpbirC3dUQ/viewform) and the lack of anonymity.

good point, but anything other than this?
What am I missing and why does everyone seem so hyped about it when similar more secure technologies already exist?

Can someone help me understand? I looked up Ethereum and it seems to be a distributed system to build your applications on top of.

I thought IPFS already works in a similar way. When people say IPFS+Ethereum, do they mean that IPFS rebuilds its technology on top of Ethereum, so in a way it helps Ethereum become a standard instead of IPFS using its own implementation?

Is this it?

another meme.

Does Zeronet still have a 10MB size limit? When I tested it last, that was a big deal for me. I'm not too familiar with it, so I can't make other comparisons.

Freenet forces distribution, so you may be harboring content you do not want to redistribute. This strengthens the reliability of the network, but it does so forcibly, which isn't great.

With IPFS there's no limit on size and you have control over what you store. There are already peers and services that will store your content for you, like ipfs.pics and some of the cache projects; the IPFS team is also going to provide something for mirroring your data that people can opt into, instead of it being implicit.
The intent is for libp2p to be pluggable so if you want anonymity you could integrate it with something else you already trust that provides it like i2p or something else.

That's only on their own gateway; it's not possible to censor something on IPFS itself. The best you can do is blacklist it yourself like they're doing with their gateway; anyone else can do the same with their own private gateway and their own blacklist, and peers themselves can too, but nothing can remove something from the IPFS network as long as someone has it. Every peer that has the content would have to delete it, and you'd have to hope nobody adds it again later.


I don't think user count inherently makes it better or worse, it just shows popularity; more peers won't fix the shortcomings of a standard.

IPFS itself is cool, but the network protocol it uses is more interesting imo. It's conceptually mostly old ideas refined or actually implemented instead of just theorized, so IPFS should seem very similar to a lot of things; you see people in this thread comparing it with BT, Freenet, Zeronet, HTTP, etc., since all these things boil down to getting data from one place to another. The differences are only in how they do it: what transport(s) does it use, how is content addressed, how are peers discovered, how is traffic routed and relayed, what obstacles can it handle (NAT), how easy is it to replace or update any of these, and will changes break compatibility?

In IPFS's case it's libp2p, which combines ideas from older projects into a cohesive and modular system that should work well with other things instead of being just another service exclusive to itself. The developers keep describing the spec as a "thin waist", like IP is to networking: they intend to have something that can act as a basis for building other things on top of, all of which interoperate with each other, and where things are not incorporated they are bridged.

Essentially it's taking the good bits from everything and making sure they work well together and have interfaces so that underlying systems can be replaced as/if needed. If you don't like the content addressing system, or something better comes along, it can be replaced easily without breaking the thing built on top of it; the same is true for every part of libp2p: transports, routing, encryption, etc. All in the same way we use IP today: if something on top of it needs to change then we change it, but IP stays the same. IPFS is just something built on top of this new network layer, and that allows it to not only be good now but stay good in the future without compromise; it's a stable standard that won't suffer from stagnation, because its foundation allows it to change without breaking anything. The only difference seems to be in who is considered: with IP it's mostly thought of as another network point, whereas p2p is many points rather than 1. The modularity of the protocol allows for easy experimentation and improvements as time goes on, where other systems are heavily rooted in the foundations they've built on top of IP.

I think the underlying project is worth checking out github.com/libp2p/specs

The over simplification is this,
IPFS is a hard drive, Ethereum is a processor.

IPFS stores and distributes data, Ethereum processes instructions.

With both of them you can make a program that doesn't rely on a single host machine. Ethereum "contracts" are run by peers on the network in a VM; each instruction has a cost associated with it that goes to the network, so as a user you pay a small tax (called gas) to run an operation. If the program requires data, it can be pulled from IPFS, which also doesn't rely on a single machine. So you can have autonomous programs that live on the network.


For places like apartments and universities it's true; it's like a distributed local cache instead of a central one.

Yes good goy, continue mocking basic mathematics and critical thinking as if it's a funny joke. India will be along to strip your future career prospects shortly.

Now that I think about it, wouldn't it be possible to make a real cyberspace (like shown in fiction) with IPFS + Ethereum? The potential is really huge.

Well, when it's done, we could do a Holla Forums sharing network. Can't be worse than the Retroshare shit. Even with Go.

I'm not sure what you mean; it's depicted in a lot of ways. It could be shown as some ethereal realm where things just exist independent of people in the real world, but at the end of the day it's just programs and a real network that depends on peers to distribute and run them.

The fact that the creator doesn't always have to pay for hosting is what really resonates with me. I really like the idea of programs having a runtime cost as a currency and existing forever: if you want to run a program/service you just fuel it with gas. You don't have to worry if the original creator isn't capable of hosting it anymore or is dead; it will always work as long as you have fuel, which you can easily obtain either by putting in work on the network yourself or trading with others.

Reminds me of a time share but with tokens, or maybe arcade games, you put in an arcade token and Pacman will execute.


Even just in residential local networks: I always hated when me and my roommate would have to download the same game twice over a DSL connection despite one of us already having it locally. It'd be much better if it just automatically pulled from them instead of me having to FTP files around manually.


Since it's all just text hashes and metadata, you could set it up on anything, like IRC, or even just a thread here. Orbit exists now, which is a chat program built on IPFS.
github.com/haadcode/orbit
we could make a room and hang out in there or post hashes here.

Once IPNS keep alive is done things like this will be better too
/ipfs/QmXta6nNEcJtRhKEFC4vTB3r72CFAgnJwtcUBU5GJ1TbfU

The owner of that site doesn't keep their record alive unfortunately so that IPNS link only works sometimes. In the future it should always resolve as long as someone holds the latest reference.

Does anyone know if this is possible with Zeronet? This seems like a big deal.

Oh yeah. And the reason is to plant non-obvious backdoors, exploiting UB in C++ code.
Do some research before posting.

github.com/ipfs/go-ipfs/issues/3177
you can access some stuff through the API, it seems

They were shilling that on /g/ too?

I saw them on 8/pol/ trying to enlist the dumbass tinfoilers.

No one wants to run the www off their own internet connection anyway.

Most people don't even want to run BitTorrent for free movies; everyone wants Netflix, but they don't want to run Netflix off their home/mobile connection.

0chan is ok but still in a shit stage

Yeah, sure. Do you have any proof?

Still patiently waiting for the ability to securely serve the IPFS web console through a web server, like I can do for ruTorrent. I want to be able to add a file in the interface and have it automatically download to my SBC home server and be shared over IPFS.

Looks like they don't give a shit:
github.com/ipfs/webui/issues/95

The ssh thing at the bottom is what I did, seems to work fine.

Now that's a neat trick. That's good enough for me.

I tried setting up an Apache reverse proxy just now to see if I could get it to run over HTTPS, but it turns out all the XMLHttpRequest stuff is hardcoded to http. Looks like that SSH hack is the only option right now.

I use that SSH trick all the time for all sorts of things. It's great for bolting SSH's authentication and encryption onto any protocol you like, and I think it should be used more often rather than having every service implement SSL itself.
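For the webui it's something like this (user/host made up; 5001 is the default API port):

ssh -N -L 5001:127.0.0.1:5001 user@homeserver
# then open localhost:5001/webui in a local browser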

It looks like parts of this have already been merged, the rest of the work seems to just be waiting on review and merge conflicts.

I've got time to organize my porn before linking it.

Why do post flags stick, it's annoying.

Poor kevina keeps getting ignored. I don't know why this is such a low-priority issue. This will single-handedly speed up the adoption rate exponentially.
github.com/ipfs/pm/issues/217

This thing isn't very good at handling 80GB directories with half a million files. Some commands barely work, and some others seem to try to read the entire directory's contents into RAM.

They'll have to figure it out soon because you just know people are going to start dumping the linux kernel git tree into this for the sake of memes and bitching when it eats their computer alive.

My guess is that they're satisfied with where they've gotten IPFS itself for now, and they're focusing on other parts of the whole (IPLD, IPNS, js-ipfs). It also seems like they're overhauling multihash into something more complex:
github.com/ipld/cid
From what I can tell, they've decided they want to represent an object not just by its hash, but with some other metadata describing what it is and how it should be read.

I agree, though, that they seem awfully dismissive of kevina given how obviously important this "feature" is.


I think datastructures like git, bittorrent, blockchains, etc, are all meant to fall under IPLD, which they're actively working on.

The team seems helpful with review and advice from what I've seen; a lot of the discussion goes on in the pull requests, not the main one but the related ones. I wouldn't be surprised if there was more talk privately on IRC too.

I think the fact that it's a small team is why the merge is going slowly; it's a pretty big change, and it was developed while other changes were going on, so there's a lot to update on top of the review, all in addition to existing work.

I think the pace has been fine, but I too wish it were in already. Given that it's a non-critical enhancement, I don't think the core team would have implemented it for a long time, so I'm glad to see it being worked on at all right now. I do think it's very important for migration. I wonder if the team is worried about it; more adoption means potentially more bandwidth on their gateways and maybe legal hassle in relation to the DMCA. Who knows.

Unrelated this issue made me laugh
github.com/ipfs/go-ipfs/issues/3299

I've got a server with 2TB of disk space and nothing purposeful to fill it with. Is there anything interesting on IPFS that could use more mirrors?

These are from past threads

WW2 documents (propaganda posters, war maps):
/ipfs/Qmag6FEvWn8JHJ8y9zxxijDhkEX4jnK6F1oVy5jUKXtc3o

Two PC98 collections:
/ipfs/QmYJB5kVFwSdZhcJ29wLtoWhqFEubZCqgq4gcuFQuqzpGM
/ipfs/QmbJgaAntArpY9F2HpNZd7i72mGKkLRWUivAG9o5SKMzLk

Most of the Saturn US library:
/ipfs/Qmb8xjpvojqDEBohvJHBun68iSjnxnesypcYMv7RSpe5KW

Puyo Puyo SUN for the PC:
/ipfs/QmQVQwapUPeHr5zssan4GUXdTz31neSqmnrJMV1MiPDqpE

Puyo Puyo for the Macintosh:
/ipfs/QmTNsMqMA7BSUuGuUJSgopP8TiqCovg5EiQDdZ2cfYvuXB

Fuck, I can't even ipfs ls --resolve-type=false on these things

You could check out github.com/ipfs/archives/issues for some of the things that are officially being pursued. There's a guy doing full-site mirrors of xkcd in that issue list; the current one is at
/ipfs/QmNogExCdnMJwWE1bpEweMUQyo3X2LP6tuWVvmLYJxUc6o

There's also an old page here, I think it's pretty outdated
/ipfs/QmZBuTfLH1LLi4JqgutzBdwSYS5ybrkztnyWAfRBP729WB/archives/

I myself have put a small collection of books up on
/ipfs/QmcL9T6sZrKc8qEXPaQtqcd6tMEVjmFCVGLxdXDWPU69La/assorted_fiction
if you want to have a browse through them


Try "ipfs object stat" to see if it's available at all, then I've found "ipfs object links" often works faster than "ipfs ls"

I wonder if the CID changes to bitswap in master fucked up my ability to connect to people. I can't seem to get stuff that's gettable via the public gateway, so maybe you can't connect to me either.

I wonder what's wrong, maybe I'll revert to a stable release.

Nevermind, it's not that.

It's not you, findprovs returns like 20 peers for most of these

Sorry, I should have clarified: I'm one of the providers. My reply was meant to explain why they might not be reachable, but I was wrong. The gateway can reach me, so I'm not sure what the problem is.

My ports are forwarded and my connection should be fast.

I get really annoyed when they release an update but don't update the changelog for days at a time.
Ipfs release v0.4.4 - pinset bug hotfix


I'm interested in your book collection but it looks like only some of the metadata is cached on the network. Could you re-host it for a bit?

Here's a bunch of manga/anime light novels I downloaded a while ago and will probably never read.
/ipfs/QmTXLfnPLzhvi4KqkD7L8qALBBY2phzfvyAo4Lkiu6z1S4

Here's the Spice and Wolf novels 1-14 in epub and mobi. The guy converting them is working on the others too.
/ipfs/QmXsAC4bdggq2QErUxn45Wkc4UFM9syTezvDWPDue3Xd8p

Does ipfs actually work now? Like, if you try to visit a page, will it get that page from the multiple sources closest to you?

I started using it at least halfway through 2015, it's probably worked since before then. It's not optimal yet but it works.


No wonder I couldn't get it earlier. Disregard this post, I should have checked too.

It has a web frontend with a metric shitload of web 2.0 bloat to display peers on a 3D spinning globe. There are no cross-origin requests, so I'd say it works.

What about DNS?

hmm, I'm hosting it now, and I can get everything in the manga hashes you posted just fine, so I don't know what's going wrong ...

obligatory post of the /g/index
/ipfs/QmNgzvC1Y5dh5pQvPfZpZUkoHuB6Z7xwobo5Rv19nZcwA8

Just finished watching the "go-ipfs Call - Q4 Planning"

Somewhat obvious but still exciting.

IPNS and their pub-sub ideas.
github.com/ipfs/examples/tree/master/examples/ipns
I don't know where the pub-sub stuff is; I haven't been following it too closely.


TXT records pointing to IPNS or IPFS hashes.
github.com/ipfs/examples/tree/master/examples/websites
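The record itself looks something like this (domain and peer ID made up; the value can also be /ipfs/<hash>):

; dnslink TXT record on the domain
example.com.   IN   TXT   "dnslink=/ipns/QmYourPeerID"

After that, /ipns/example.com should resolve through the TXT record on a public gateway or a local node.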

I tried pinning your books but I'm getting very slow download speeds. Is your upload really slow?
It would be nice to have a --progress option when pinning.
[user@host ~]$ ipfs stats bw
Bandwidth
TotalIn: 761 MB
TotalOut: 5.5 GB
RateIn: 32 kB/s
RateOut: 698 kB/s


I can see this succeeding private trackers in the future. The network would host the content as well as the entire website.
Did they mention i2p at all? I'd rather they work on optimizing the core than add more adoption incentives at this point. I'm a bit afraid that when they merge the filestore code it'll catch on quickly while still being underdeveloped.

I tried ipfs pin add QmcL9T6sZrKc8qEXPaQtqcd6tMEVjmFCVGLxdXDWPU69La but it doesn't do anything. Weird because I can look at and download the books through my localhost web ui.

Are ipns dnslinks not supposed to work with the dns command? I can get "ipfs dns fs.ipfs-dns-test.tk" to resolve but "ipfs dns ipfs-dns-test.tk" does not, yet when I access "gateway.ipfs.io/ipns/ipfs-dns-test.tk/" or "localhost:8080/ipns/ipfs-dns-test.tk/" it still works.

I didn't hear any mention of it; I think that's tied more to libp2p than go-ipfs itself, and I only watched some of the p2p meeting.
According to this
github.com/ipfs/notes/issues/124
Maybe it will come after Tor.

In the talk they mentioned XTP, something about communicating with other protocols via IPC; maybe that will tie into this: have ipfs launch a process for i2p and control it until it's properly integrated. I'm not sure though. Proper integration seems inevitable, but I'm not sure when it would happen.

The tone of the meeting seemed focused on grinding down issues and reducing resource usage. I take that to mean improving the core by virtue of bugfixes and rewriting portions of the code to be more efficient, and hopefully less error-prone as a result of reviewing/reimplementing those portions.

The big thing I didn't even mention is documentation. If they fix up all their documentation and add better examples, it can only be better all around: potentially more users, which leads to more testing/issue reporting, and better entry points for developers may mean more third-party contributions too, since people will be able to actually understand how the project works and fix things themselves without relying on talking to the original developers to get a foothold. This could help with i2p or any other integration as well, so long as it's useful enough to third-party devs who want to add support.

It seems like the filestore code is not getting any special treatment, so it should be as good as any other contribution; they're certainly not rushing it in. It had to conform to the new IPLD and CID stuff they added, and it's being reviewed by the 2 top guys on the project.
As for IPFS itself, if the filestore acts as it should, I think it will be fine given how well the rest of ipfs works with datastore files; filestore should just be an alternative to that, a big change for the user experience but not such a big change for the rest of ipfs, since content is still distributed the same way.

Another note: a user, jefft0, has been testing TBs worth of video content on kevina's filestore fork, which is pretty great. I don't doubt more people will stress it once it hits release. The only thing I worry about is the bandwidth on the public gateways.

For reference the talks I'm mentioning are here
youtube.com/channel/UCdjsUXJ3QawK4O5L1kqqsew/videos

and the notes are here
ipfs
public.etherpad-mozilla.org/p/ipfs-oct-10-go-ipfs
libp2p
public.etherpad-mozilla.org/p/ipfs-oct-10-libp2p
taken from this github.com/ipfs/pm/issues/217

github.com/ipfs/faq/issues/19
I'm assuming only the destination port on INPUT is necessary?

I just found this in a 'helper' script:
# update firewall: open ports 4001 and 5001
declare -a ports=("4001" "5001" "8800" "9876")
for port in "${ports[@]}"; do
    UDP="INPUT -p udp --dport ${port} -j ACCEPT"
    TCP="INPUT -p tcp --dport ${port} -j ACCEPT"
    set +e
    sudo iptables -D $UDP >> /dev/null 2>&1
    sudo iptables -D $TCP >> /dev/null 2>&1
    set -e
    sudo iptables -A $UDP
    sudo iptables -A $TCP
    echo -e "Opened port ${BLUE}${port}${NC}"
done

Which translates to:
iptables -A INPUT -p tcp -m multiport --dports 4001,5001 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 4001,5001 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -D INPUT -p tcp -m multiport --dports 8800,9876 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -D INPUT -p udp -m multiport --dports 8800,9876 -m state --state NEW,ESTABLISHED -j ACCEPT
If I'm interpreting that ugly/unnecessary loop and comment correctly (I have no idea what happens with "set +e" and that shit). I have no idea why it would delete rules either, unless it's some uninstall thing.

But according to that issue page I linked, all I need is TCP on 4001?

Sorry, but I'm very thorough when it comes to my firewall. Very. Thorough.

That script is horseshit; the set +e/-D dance is just deleting any existing copy of each rule before re-adding it, so re-running the script doesn't create duplicates. You only need tcp/4001 exposed to the world (there's no UDP involved anywhere). 5001 is the API/webui and should only be exposed to your LAN if anything, and 8080 is the http gateway, which you do *not* want fucking randoms being able to access because it's equivalent to running a tor exit node.
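If that's right, the whole script reduces to:

sudo iptables -A INPUT -p tcp --dport 4001 -j ACCEPT   # swarm port, the only one that needs to be public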

I've restarted my machine and turned off any other bandwidth-intensive programs, but ipfs stats bw gives me way lower RateIn/RateOut numbers than show up on my system monitor. (Like, RateOut is consistently in the single-digit kB/s range, whereas nethogs shows it idling in the 30s most of the time.) It's especially strange that I can download the files over the webui (so finding them isn't an issue) but not pin the directory directly.

update from kevina via #ipfs:
>I consider to filestore code stable now. If you want to try it out at github.com/ipfs-filestore/go-ipfs I could always use more feedback.

It took a while, but after a few tries it successfully pinned. I realized that I'm still running v0.4.3, so updating to v0.4.4 might've fixed it. No point in reporting the error messages, I guess.

Can I run this on my server as a node which maybe stores and shares a bit more than most or something?

Yes. It won't do anything on its own without manually adding stuff though, other than forwarding hash search requests around the network.

Eh

When you run your daemon it will only download stuff you tell it to, and the stuff you download is also shared as long as the daemon is running. There's no implicit sharing like Freenet by default; there are some services you can opt into if you want to just donate storage and bandwidth, or you can do it selectively yourself by pinning content you like/want to share.

What is it you'd like to do? I'm not quite sure what you mean.

This sounds better.

What I've learned as a hobbyist server admin is that I don't want to do shit manually. I'll work my ass off to not work at all.

I don't remember the names of any of them; there was some markdown file listing them, but searching "ipfs cache" returns a lot of unrelated things so I can't find it. They should all be running this though:
github.com/BrendanBenshoof/cachewarmer
You could set up a gateway instance and make a pull request on that to be added to the master list if you wanted.

I'd wait for filecoin, the official implementation of such a thing.
filecoin.io/
Essentially you get tokens for hosting other people's data, which you can redeem for the same if you wish.

You could also look around here and manually pin something
github.com/ipfs/archives
like Wikipedia or something crazy like that.

Thanks for the options, user.

What about ipfs cluster? github.com/ipfs/notes/issues/58
I've only just stumbled across it, but it seems to be a tool for "automatically" sharing a pool of data between a group of nodes, to achieve redundancy and massively shared storage. If a member of the group pins a file to the cluster, and there's a consensus among the other nodes that it should be kept, everyone automatically pins the file (or shards of it).

I'm not sure how that consensus is actually reached, but it sounds like a reasonable half-way between "if no one pins your shit you lose it" and "everyone automatically holds shards of everyone else's shit by default".

Neat, I haven't seen this yet.

Perfect Dark has a good method of implicit distribution: when you choose to upload a file to the network, the file gets sent out to some peers and they hold on to it without knowing what it is. Users set a limit on what they will hold, say 40GB or so, and it's like a stack that can change its order: when your upload gets sent to them it's on top and the lowest thing gets bumped off (culled), and if someone requests a file it gets put back on top, so popular files hang around. There are other rules around it, but the popularity aspect is what I like: the more a file is accessed, the more likely it is to remain on the network even if the original host goes away. Similar to IPFS, but implicit instead of explicit.

All that said, persistence via popularity is already a thing in IPFS, and IPFS does have watermarks similar to Perfect Dark. I wonder if you could make a similar system that doesn't need so much consensus but is a little more ephemeral/fragile; I think Freenet works in a similar way to maintain persistence via donated storage+bandwidth, but I don't know the details.

I'm curious what the IPFS team will do both for filecoin and for this cluster thing, and also what third parties will come up with. I'm really glad implicit random sharing is not required or on by default like it is in other systems, but I'm also glad it's going to be something you can do easily if you want to. That seems like the best approach for everyone.

So, basically, i2p when? Because as it is right now, I don't see any advantage over BT just for file sharing.

Read this thread

"Soon" isn't an answer to when.

This isn't a skiddie warez tool. If you were hoping to leech chinese cartoons over your edgy sekrit encrypted network, fuck off.

It's totally suitable for that as well as other things.

Yes it is, but anyone asking how to do it obviously hasn't RTFM and should fuck off

Nobody's asking how to do it. I just asked about i2p support. What I've gotten is that libp2p will get it "soon".
Looks like you're quite mad especially since most ipfs threads are about sharing stuff through it.

You already found your answer. Soon means whenever someone gets around to writing the code.
Read the thread for reasons why it's better than BT for file sharing. I've already explained it once.

...

Looks like we can expect private networks by December.
github.com/ipfs/go-ipfs/issues/3313

All indications point to ipfs 0.4 being utterly useless at handling large directories. When's that getting fixed?

This was opened recently
github.com/ipfs/go-ipfs/issues/3316
which should be resolved by the end of the quarter.

What do you consider large, and how does it become useless? I've added content of my own and retrieved other people's content, some with lots of small files and some with large files; I think the biggest I've done was ~300GB of video. Afterwards I did a gc, which took a while but did complete; it would probably be much faster now that you can specify hashes to delete instead of cleaning the entire repo. Even back on 0.3 people were adding things like the gentoomen library, and people now are doing things like mirroring the Arch package repo; the archive project does full sites. I'm curious at what size things start getting unmanageable, and in what ways.

I'm going to go wild once filestore is added, multiple TB's of arbitrary shit.

Not that user, but people have reported running into problems with both memory and bandwidth usage when trying to add huge datasets. There are a few detailed discussions in github.com/ipfs/go-ipfs/milestone/5.

Their Q4 milestones are all pretty ambitious to be honest, kevina reckons he'll have filestore ready by December: github.com/ipfs/go-ipfs/milestone/26. Once it hits (and if it proves stable) we should do a poll to find out what files we all own, so that they can start off with a robust number of seeds.


A workaround I've seen somewhere, if you're having difficulty adding a huge directory with lots of small files, is to write a little script that adds each file individually and then does "ipfs object patch add-link" to attach them. The advantage of this is that if you crash halfway through, the files that did manage to get added will still be pinned, and the directory containing them will be the most recent output of the patch command (as long as you kept that).
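A sketch of that (untested; path made up, assumes a flat directory):

#!/bin/sh
# build the directory incrementally: every iteration prints the newest root hash,
# so if the script dies you can pick up from the last printed hash
DIR=$(ipfs object new unixfs-dir)
for f in /path/to/files/*; do
    FILE=$(ipfs add -q "$f" | tail -n1)
    DIR=$(ipfs object patch "$DIR" add-link "$(basename "$f")" "$FILE")
    echo "$DIR"
done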

I've been trying to `ipfs pin add` some of the hashes in this thread for over a week now. They have at least 20 providers.

Can you point to one specifically? It might be a connectivity issue on my end or yours; someone else said the same thing earlier in this thread. Everything I have seems to work on my own gateway, a friend's gateway, and the public gateway, so it should be retrievable, but I use the master branch, so something may have broken on my end. It still seems to be working fine in those same cases, though.

If it's one of my hashes, I'm going to see if I can rope another friend into trying to download from me to test it on a bad connection; they should still be able to pin from me.

I attempted ipfs pin add Qmb8xjpvojqDEBohvJHBun68iSjnxnesypcYMv7RSpe5KW and left it going in a tmux session, along with ipfs ls in another window. Neither of them finished overnight, so I ctrl+c'ed the latter.
Doing ipfs repo stat every few seconds shows it filling up at roughly dialup speeds.

That was my fault; it should work now. I shared that hash with Holla Forums and later gc'd it, but I've added it again.

Yeah, working now. Once I have a full set it should stick around for a while since I gave ipfs a whole 2TB disk instead of the stupid 10GB default cap. Assuming it doesn't OOM again.

fuuuuuuuuuuuck

That 10GB cap only comes into effect if you start the daemon with --enable-gc
github.com/ipfs/go-ipfs/blob/master/docs/config.md

It's set to 10GB in the config by default but doesn't do anything unless you want it to gc periodically.

Once filestore is up these should hang around forever from me too, I'm determined to have a good collection of old console games from every region eventually.

can it be run on windows?

dist.ipfs.io/#go-ipfs

thanks fam

I'm in paranoid mode now with this recent outage and have to stop putting this off

This also may be handy for you
>ipfs.io/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

danke

FRESH 2HUs, GET 'EM WHILE THEY'RE HOT

Touhou 1-14 (1-13 English patched) in one 7z:
/ipfs/QmQ523KApwGB2GiR24hyAzX17Mz3Kc46CzBAd98ieDWAAU

These work fucking flawlessly in WINE if you're on OS X, Linux, or something more niche.

Honestly I think client/server was the optimum model for the early internet since it allowed plebes to just buy a PC and a modem and use their existing phone line when they wanted to connect to the network. Their desire for moar gigabytes provided the economies of scale to roll out broadband to most of the world and ever cheaper and more powerful PCs, which got us to where we are now. Distributed systems only work if the end nodes are powerful (compared to, say, a 386) and the network fast and reasonably reliable.

Protip: you can add directories with -r. Since ipfs can be mounted and it's more likely other people may add the same files, it's not necessary to glob or compress collections (unless the compression ratio is high); it may be better to just add files raw. If you are going to add single files, -w is useful since it preserves the name and extension; you can use -r and -w together to preserve a root directory name too if you want.

The real big advantage is being able to selectively download individual files/directories instead of a whole collection too.
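e.g. (paths and hash made up):

ipfs add -r -w ~/share/touhou       # add raw; -w keeps the root directory name
ipfs get /ipfs/QmRootHash/12.8      # others grab just the game they want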

I've been meaning to play 12.8 again, thanks user.


That's true, I guess there was no real option for the time. It's good to finally be here now either way.

I'm stuck at 73MB. Are you still hosting it?


That's exactly what I was thinking when he mentioned it's a 7z.

I just watched the first one and recognized it from just now catching up in the Holla Forums ddos thread

thanks again fam

no wait got confused

Yeah I forgot to -w it and didn't want to ipfs add it twice.


I'm still hosting it. I'm grabbing it with "pin add -r" on another machine with better upload so hopefully it will start going faster for you guys once it finishes pinning.

Extra tip: if you add it again with -w it will take a lot less time since it's already copied in, and it doesn't take up any extra space in your datastore (outside of a few bytes for the metadata); plus everyone already hosting it will still be hosting it.

You could also use the files API to avoid rehashing and you don't need the original file either.
ipfs files mkdir /touhou
ipfs files cp /ipfs/QmQ523KApwGB2GiR24hyAzX17Mz3Kc46CzBAd98ieDWAAU "/touhou/Touhou 1-14 English.7z"
ipfs files stat /touhou
Then paste that hash.

Readded:

/ipfs/QmV2yFuKNwAmZW3sX42J92aeJKvMoyUsetKrZPc1ScqYmi/Touhou.7z

I got them from a Holla Forums share thread. Didn't know Holla Forums was interested in IPFS too.


Thanks user, will mirror and play.

I think he's implying that it came up while the Dyn DDoS was happening, since it's scenarios like that which remind us all why decentralizing is going to be so important in the coming years. (Now someone needs to tell them about cjdns...)

I have a better idea. As some here might know, IPFS can mount itself as an actual file system with FUSE.
sudo mkdir /ipfs /ipns
sudo chown `whoami` /ipfs /ipns
ipfs daemon &
ipfs mount
You can then literally access the network as though it were a local file system. This means you can do silly things like
wine /ipfs/QmdLPr2QBxfNpgF8PRr9N4KfTiMCPuwAAzh4JnmWCvx2Ji/TH14/th14.exe
and it'll load and run the game directly off the network. It's smart enough to prioritize grabbing pieces as the program tries to load them too.

My connection is slow so loading might be slow though... give it a few minutes.

What if you DDoS all 'mirrors' of a site as well? Isn't it relatively easy to find all instances of a site or something? I suppose you'd have to ignore images and the like though.

It's the same model as what it would take to DDoS a BitTorrent swarm. It just depends on how many peers are sharing a particular file: if your site has 100k visitors, your poor adversary is going to have to take down 100k connections.

Consider how hard it is to DDoS some central servers right now. IPFS allows you to have many redundant points which are routed in various ways, and what's worse for the attacker, visitors also become redundant points. I can see it being almost impossible to stop specific hashes from being reached.

For each peer, you need to find out every method they are using to deliver data and stop them from doing so on every network/protocol they are attached to. That seems like a hard task, especially if you consider networks that are connected physically through something like cjdns or other mesh networks; coupled with message passing, you'd have to take down a lot of exit nodes and visitors, which is a lot of collateral/work.

I can't think of a simple vector to attack everyone that would actually be feasible, I'm not up and up on how much bandwidth you can rent from someones botnet or how much it would cost for how long but I have doubts an individual would be able to afford it and even then it may not even be enough to deny anyone.

It just occurred to me that you'd have to keep updating the list of victims too, since it would be dynamically changing as peers go on and offline. Someone could come online after the attack has started and instantly propagate to any visitors who are currently blocked, unblocking them in the process and creating even more targets you'd have to hit. It's not like a central system where you have 1 thing to attack; even if it does balancing etc. it's still the same target, and then it handles the routing of the attack. Dealing with all the unknowns and redundant peers seems unrealistic or at the very least unsustainable, especially as IPFS grows both in adoption of users but also in interfacing with other protocols.


I extracted that archive and added it, I didn't clear the config files, lnks, thumbs.db, etc. it's just that 7z extracted.

/ipfs/QmZoh2cKzBgFRX7uXeswui4ffPdXH6voN22gFDq8bZPByM

Reminder that `ipfs get` takes ipfs paths, so if you only want one game you can just supply the path to it, like ipfs get "/ipfs/QmZoh2cKzBgFRX7uXeswui4ffPdXH6voN22gFDq8bZPByM/12.8/Great Fairy Wars"

some on Holla Forums are switching to linux from windows for various reasons mostly rooted in the mistrust of government and certain (((parties))), that's why I posted that guide on /poltech/ when that board got started


it's come up before on Holla Forums but I think I heard about it here first, I've just been putting it up

Not a DDoS per se but still a denial of service: DHTs are vulnerable to sybil attacks, and IPFS uses a DHT to give out provider info. So it's not a matter of finding every provider and silencing them, just of drowning them out. If an attacker was able to own a large enough percentage of nodes in the network, they could give out bad info/drop requests for certain hashes and effectively censor it, since it would become that much more difficult to find genuine providers. I think Kademlia (which IPFS uses) is in the process of solving this problem, but it's by no means solved yet.
This paper gives a good explanation, though its proposed solutions are a bit above my head:
/ipfs/QmTRdDWu1CzS86qs5V38q94mFJcCpmdzZMLxzg3xzStGDP/sybil-resistant-dht-socialnets.pdf

(As a side-note, it's nice to be able to just grab files off my hard drive and share them immediately without uploading them via a third-party.)

I'm obviously not saying that DNS is more dos-proof, but it's worth keeping in mind that everything is vulnerable somehow.

Anyone want to stresstest the network?

/ipfs/QmY6G7aYbBYYpJ7LdoerGWQDKd8RA9RNF89mGFXF79L4di
Contains all the modarchive.org uploads from 1995-2015. Allegedly.
It was on top of the ipfs/archives list, problem is it can't get critical mass of peers and the ones that have it keep going offline. But it's smaller than the full Saturn set so it should be possible.

Shame they don't offer the metadata as a torrent, I'm pretty sure at the rate the site's decaying it won't survive its 20th year...

How big is that modarchive haul? I can't get the directory to display on gateway.ipfs.io.

100K files. I can't ls the thing either, but someone thought ahead and uploaded that too:
curl ipfs.io/ipfs/QmcYo2u6Sk9gfQfPizEiLrRRos5GmBx5gi5FNSxGy2tNYt | zcat | less
Maybe it'd be easier to grab one file at a time from that list, I dunno

Goddamn that's a lot of files.
user@host:~ > curl ipfs.io/ipfs/QmcYo2u6Sk9gfQfPizEiLrRRos5GmBx5gi5FNSxGy2tNYt | zcat > modarchive-ipfs.lst
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6378k  100 6378k    0     0  1786k      0  0:00:03  0:00:03 --:--:-- 1785k
user@host:~ > less modarchive-ipfs.lst
user@host:~ > wc -l modarchive-ipfs.lst
142465 modarchive-ipfs.lst

I've got the torrents here, maybe I should try putting them into some kind of sane folder structure. Might get it going.

This has happened like 3 times now and it won't resume the file from where it left off.

I avoided that because most 2hu games require you to have write access to the game directory itself - whereas TH14 doesn't. Anything from /ipfs is necessarily going to be read-only.

Based on the source website it's about 41GB in total.


After trying like 5 times yesterday I left my computer on over night downloading it and it finished right before I ate dinner today. It will take a while.


That's really annoying. I wonder if you can use OverlayFS over the IPFS mount. If I understand it correctly it should theoretically be able to make any read-only directory writable with the modifications done in a separate read-write directory.
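Something like this might work, completely untested over the FUSE mount and with made-up upper/work dirs:
mkdir -p ~/th-upper ~/th-work ~/th-merged
sudo mount -t overlay overlay -o lowerdir=/ipfs/QmdLPr2QBxfNpgF8PRr9N4KfTiMCPuwAAzh4JnmWCvx2Ji/TH14,upperdir=$HOME/th-upper,workdir=$HOME/th-work ~/th-merged
wine ~/th-merged/th14.exe    # score files etc. would land in ~/th-upper instead of the read-only /ipfs
No idea whether overlayfs will accept a FUSE filesystem as lowerdir, though.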

I did it so you could download individual games instead of the whole set. There's also overlaying it like mentioned below.


That would work, some kind of unionfs. You could even do it all in IPFS too, using the mutable file system for writes. You'd make a mock location, copy the contents from /ipfs/*hash* into it like shown earlier, mount it, and then any writes would exist in that location with names and everything; you could then stat that directory if you wanted the current hash to share with people.

Like if I played and submitted a score, I could send that hash to people and they can compete against it, then they share their hash. This could be automated with pubsub somehow too. That approach obviously would have conflicts though: only one person could play at a time or they'd just overwrite score data with their own at the end. Some kind of lock would have to be used, or someone would have to write a score file parser that compares and merges them to resolve conflicts. If Orbit can get real time chat working, a syncing system like that should be doable. Probably best to wait for them to expose that mount point rather than write something custom right now.

This is exciting though, I didn't think of streaming and syncing game data with people like that; it removes the need to implement networking and servers yourself without giving up an online leaderboard. You could probably go all the way and make something on Ethereum to handle the merging. That might be going too far just for distribution, but it might be fun to try.

github.com/ipfs/go-ipfs/issues/2060
github.com/ipfs/notes/issues/131

alright fuckers
QmV3axcpv6vUFefP5g1SWvQP8Ex2Gq2zpigCCKWdxD9jED

Just found out about ipfs.pics

/ipfs/QmW3FgNGeD46kHEryFUw1ftEUqRw254WkKxYeKaouz7DJA

Holy shit. How fucking long did that take?

From torrents to folder structure on disk: about 2 hours.
ipfs add on that: about 8.


Stupid question for anyone who knows how ipns works - I've got a BIND server on my LAN with a correct dnslink entry for a box running ipfs. I can "dig -t TXT foo.lan" fine, but if I run "ipfs name resolve -r foo.lan" I just get a timeout. Now that I mention it most ipns stuff seems horribly slow/broken. Am I holding it wrong or is it just fucked right now?
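For reference, I believe the record it expects looks like this (zone and hash are placeholders):
foo.lan.    IN    TXT    "dnslink=/ipfs/QmSomeHashGoesHere"
It can point at an /ipns/ path too. If yours already matches that, it's probably just ipns resolution being slow/broken like you said.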

We should compile a proper list of everything using IPFS that we know of.

you can use my ass if you want

github.com/ipfs/awesome-ipfs/blob/master/README.md

So this is basically zeronet
zeronet.io/

So I had an idea last night.

You can use raw IPFS to build a "friends list" protocol:
To signal your visibility to someone specific, pin their public key so you show up in a `findprovs`.
To add them to your list, take the multihash of encrypting their ID with your public key, then put the contents of encrypting your ID with their public key in /ipns/$you/.well-known/friends/$multihash.
Once both sides have each other added, you can remove the visible ipns entry (put it in the private mfs tree or something) but keep it pinned.

There's obviously a bunch of shit missing (a nospam mechanism etc) but it demonstrates it's possible to build a completely serverless (IPFS and WebRTC) web chat system.

Similar idea but different application. Think of Zeronet as a web page-oriented service, whereas IPFS is for file storage. Both are capable of doing the other's job, just not as well. (Yet.) Theoretically they could be a part of a unified stack.

bump

We knew that already.
Stop that.

Orbit already exists in terms of IPFS solutions, and there's also Matrix which is more "federated" than "completely serverless". The IPFS, Matrix and CJDNS communities have a lot of cross-over, I think. The fact that you can't easily have a persistent group channel in tox points to federation being a preferable course of action to completely distributed. Don't know how Orbit deals with this (ie, the scenario of a newcomer wanting to join a channel while everyone's offline).

PSA that halfchan is aware of IPFS. Pulled from etherpad:
/ipfs/QmWrzsvNxb5Yp9HfVJQX6JT9MrFoyypgKKUdGvuujHQENJ

It points to you needing to read the docs instead of standing outside a project shitposting. New groups are persistent and totally distributed. They've been testable for a while now and they'll merge soon. Contributions to the only 100% distributed FLOSS VOIP messenger welcome.

self sage for butthurt shilling

Pro tip: Fuck off with this retarded hipster trend of using generic English words to name projects.

No don't give me the link, I definitely don't care.

I'll stop when ipfs becomes practical without a browser in front :^)

Is Zeronet capable of being live like 8ch is?

IPFS is a raw filesystem. ZeroNet is a gay webapp. IPFS is also a lot more protocol agnostic and works over clearnet IPv4, clearnet IPv6, Tor, and CJDNS.

It's literally designed to replace HTTP so a lot of the content will always be HTML files. It's perfectly usable as a FUSE module right now.

I've been hearing this one since mid-2015

I'm glad these idiots are choosing to avoid becoming Holla Forums rapefugees but I foresee this being a crash-and-burn endeavor. Just write some IPFS code into vichan or lynx or whatever the fuck and host it. Their obsession with turning an archive into a fully-functioning imageboard is silly.

Never mind, they're faggots.


There is literally nothing wrong with that name. If it was something like Orbitr then I'd tell them to take a long walk off a short pier.

Could you host a private WOW server on something like IPFS or Zero Net?

Could you actually read what the fuck this is before grovelling for people to spoon feed you your shitty warez? Fuck off back to 4chan.

So in theory someone could write an ipfs extension for firefox and access ipfs hosted html without installing any dependencies?

I was thinking something like tor browser bundle.

You can just use the gateway to get IPFS content without having IPFS at all, in any tool that can speak HTTP. There's already a browser extension for Firefox that anchors ipfs paths and redirects you to the gateway if IPFS is not running, or to your local daemon if it is

There is also the Javascript implementation
github.com/ipfs/js-ipfs

You could bundle all that together like the tor one and it should be fine.


You could host the static assets on it, eventually the dynamic ones too.


Does Zeronet still have file size limitations? Last time I checked their presentation said something about pages having to be in a few megabytes.

Yep, ~10MB or so.

I would like an extension that adds a "Download and Pin to IPFS" option at the very least. Then again, I'm sure they'll add a feature that tells the daemon to monitor a folder and pin anything new that comes into it.

Like this?
When you pin something IPFS will fetch the blocks for you if you don't already have them. Downloading in the typical sense is already handled by the browser, you just download it, no extension needed for that.

I haven't read the api docs yet but anything regarding downloading and pinning should be possible right now, you'd just have to add commands for them to a context menu, or button, or whatever like the addon devs did for pinning in their addon.
docs.ipfs.apiary.io/

I'm not sure what you mean by this. Like watching a directory and automatically adding files to your datastore? You could probably create a script to handle that and add it to cron, or write something that gets events from the OS/FS. Depending on what you want there's many ways to approach it.
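A minimal sketch with inotify-tools, assuming a made-up watch directory:
inotifywait -m -r -e close_write,moved_to --format '%w%f' ~/ipfs-watch |
while read -r f; do
    ipfs add -q "$f"    # prints the hash of each new file as it lands
done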

That's only for IPNS content. IPFS content is immutable.

Some hero has come along and hacked together a very basic crawler/indexer for IPFS! ipfs-search.com is the address. It's odd seeing what kinds of things people are adding -- I've already found plenty of memes, an album of some dude's VR set-up, some linux isos, five seasons of Seinfeld, and of course I stumbled across a whole folder of furry webms and gifs.

It's pretty bare bones, the crawler just polls the DHT periodically and finds out what's being passed around. Since nodes are constantly broadcasting their provider records, I think it mostly just reads those then queries the hashes. One interesting thing the dev mentioned is that apparently he was seeing 200GB/day moving around the network.


Given the context of the quote (downloading a file and adding it to ipfs) I think it's fair to assume he was talking about a watch-directory for normal files, which is easily implemented by a cron job. But yes, a watch-directory in ipfs itself would essentially be an instruction to the daemon to automatically pin whatever an IPNS name publishes. From what I can tell, this doesn't yet exist -- IPNS is still pretty rocky, and I think that feature depends on pubsub (and something called IPRS? they keep adding more).

Awesome. Looks like they threw a front end on it now which is even better.

IPFS devs confirmed for time traveling aliums

No wonder they're so concerned about latency of transfers between planets and satellite relays.

Is there a browser implementation in development yet? As in one that functions like webtorrent instead of relying on ipfs.io/ipfs/* ?

The JavaScript version was mentioned a few times, one of which was repeated just a few posts above you and another near the beginning of the thread.

I can't wait till this works in I2P.

Doesn't it already? I thought you could tunnel it through I2P.

You can tunnel through TOR right now but they're also working on having a more native integration, someone in here managed to get IPFS to communicate with TOR nodes without having to tunnel all their traffic.
github.com/ipfs/notes/issues/37

Once this is complete and the dev working on it voices concerns with how pluggable their system is, i2p integration should follow. I could be wrong but I expect TOR integration to be a milestone once the current ones are finished.

github.com/ipfs/notes/issues/124

Improving developer documentation is also a current milestone which when finished may help too.

I'm finding 90's-tier websites. Is this from Neocities or something?

To add to that, in terms of workarounds rather than development, someone on IRC has been working on making tor/ipfs proxies with onioncat
/ipfs/QmYKQvBsbYrRhdaGvQXcEoSam7s5gKVYULfRgNPzN5JM8N/IPFS-via-OnionCat.html


Probably. I've been seeing a lot of BBS and Usenet material on there, which is cool.

Is there any way to determine the filesize of a hash before downloading it from the CLI? I can't find anything in the manual, and it's not convenient to download data to find out how large it is.

Also, has the ability to pin data without duplication been added? I remember this was a point of contention, but I haven't seen any updates.

ipfs object stat
Also just look at "ipfs object --help" for some other stuff.
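For example:
ipfs object stat /ipfs/QmZoh2cKzBgFRX7uXeswui4ffPdXH6voN22gFDq8bZPByM
The CumulativeSize field in the output is the total size in bytes of the object plus everything it links to, which is the number you want before deciding to download.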

Not yet, but it should be coming soon. github.com/ipfs/go-ipfs/milestone/26

sweet, thanks

Anyone got No-Intro? I know it's on archive.org, but they're all one or more of torrentless/incomplete/login-required

It would seem that you are severely lost and have posted in the wrong thread, let alone the wrong board. I assume you were looking for >>>Holla Forums 's share thread?

Fuck off, cunt.

If that is how you want it to be, so be it.

I'm waiting to see how the filestore feature works out before adding big things like that. It should have preliminary support by the end of December at which point I'm just going to add my whole game directory.

However, I'm really curious if the de-dup will have any significant reduction in size on things like this without compression. I don't even know how big the entire no-intro set is when fully extracted, but I'm assuming it's very large. Maybe there's some way to pipeline the huge single archive through tar to stdout then to another instance of tar for the smaller sub-archives, finally out to ipfs stdin. Worst case I could add a system at a time and then use the files api to mirror the directory structure later.
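Something like this might work for pulling a single inner archive out and adding it without extracting to disk first (archive and file names are hypothetical):
7z e -so no-intro.7z 'NES.zip' | ipfs add -q
Since ipfs hashes only the content, the result should match adding the extracted file directly.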

What's the current de-facto set to grab from archive.org? It would be nice to have it raw on IPFS for mounting, if the de-dup works well enough I might just ditch the copies on my filesystem.

Related here's an Atari set hooked up to a JS emulator.
ipfs/QmacAqRVhJX9eS7YJX1vY3ifFKF9CduDqPEgaCUSa4x5xb

What should this thread be? We're talking about the tech, running small experiments, and sharing some files.

Shit, I just noticed the de-dupe is "due" December 5th. Holy fuck I hope they can deliver.

iirc Neocities has as a goal to make every page on their domain available from IPFS.

Could you explain?

Every time I use ipfs it always disconnects me from my internet. Am I doing something wrong? I've searched everywhere and found nothing. I am running Ubuntu GNOME, if that helps.

You most likely have a shitty NAT router that chokes on dicks when something opens too many connections.

The way the filestore works now is that in order to pin a file, it stores a copy of the file in the data store. This makes it infeasible to share any sufficiently large filestores.
A user on Github (kevina) is trying to change the filestore such that if you add a file, it doesn't duplicate it and take up a bunch of space. (I assume there will only be some small amount of metadata.) This will cause a huge boom in sharing and finally make IPFS a competitor to torrents.
github.com/ipfs/go-ipfs/pull/2634
The targeted completion date is December 5th.
github.com/ipfs/go-ipfs/milestone/26


That's what it was for me. I can only run it on my SBC that I have directly plugged into my router. My router explodes if I try to run it on my desktop that connects wirelessly. As of now there's no way to limit connections like you can with a torrent client, and I think it's pretty low on their list of priorities. Best thing is to just buy a new router, as much as that sucks.

try `ipfs daemon --routing=dhtclient`

Turn on IPv6 if you've got it and give your IPFS box a static v6 address. Simple packet routing is a LOT less CPU intensive than NAT. My router rarely goes over 10% CPU use and I've got over a hundred peers on a PC, most of which are IPv6.

thanks guys, but turns out the reason why my internet was cutting off was because of the "gateway redirect" option in the Firefox add-on. everything else I tried didn't seem to help.

The "gateway redirect" option only makes it so that you're actually using your ipfs daemon instead of the public gateway, if you disable that and visit links on the public gateway you won't be redirected locally which means you're not actually using ipfs for retrieval on your machine, which is why it's not killing your network.

My network used to die on 0.3 if I requested a few hashes that had 0 providers (nobody online), it would spam the network with want-list requests and my router couldn't handle it. I updated the firmware on my router and it hasn't happened since. Things should get better as they refine libp2p and fix any issues related to wasting connections, right now nothing is really optimized.

I really really hope IPFS becomes stable soon. Would love to see it integrated in Web Browsers by default.

what router did you use? i currently use a netgear c3000

They still don't add even minor shit like alternative image formats. God forbid you try to get Google or Mozilla to implement something major like IPFS. We'd sooner see a browser built from the ground up to support it.

ISP supplied, people on forums usually shit on it but it's been alright for me. The firmware updates are automatic, it may have been coincidental but I was getting disconnects with IPFS consistently by requesting dead hashes and then after my ISP pushed an update it no longer happened, even on the same IPFS builds. I can request a ton of dead hashes and they all time out eventually while my connection stays intact, I also run the daemon for days to weeks at a time with a lot of content/traffic.

I wonder if the new muxer they pushed would help you in any way.
github.com/ipfs/go-ipfs/issues/3397#issuecomment-264764028


I'm assuming the JS implementation is going to be the main method for using IPFS in the browser. Running the go daemon and redirecting to it has been fine for me, it's simple and automatic enough when using the Firefox addon.

alright, actually figured something out here! There is an unclosed bug for this type of issue I've been noticing here: github.com/ipfs/go-ipfs/issues/2509
adding "/ip4/0.0.0.0/udp/4002/utp" to my config file stopped the ipfs daemon from displaying the "too many open files" error, which was dropping my connection.

testing my server.
contains random webms of mixed quality
/ipfs/QmXixdUwZM7jaTNdnuy5yZVyUnMz1j6rUVFJSQL1HCXxNn

That's weird, the pull request related to that was merged saying it should fix it.
If it still works though that's good to know, I'll remember this if people have that problem again. Glad it's working now.


Maybe you're offline now, I can't seem to connect. Leaving my get running so it should mirror when you come online.


RIP
Seems like there's just a lot of review left, but the pull requests have been getting merged very quickly lately. I think it went from the 50's down to the 20's in the last few days. Hopefully this gets merged soon.

/ipfs/QmRQGtp2L2xxLnfbjWbpcQuABbyUwWHytZCKBVzE6S7ik3

ipfs.io can now reach the files, hopefully it was a temporary issue (but I'm not sure of the cause)

I've got these files mirrored across two systems, one of which is on 24/7

It was my fault, at some point I accidentally set my datastore directory to read only, the daemon had a lot of access errors when trying to write that I didn't notice.

has anyone actually tried this?
github.com/oduwsdl/ipwb

Conceptually, this seems great, but the request->response could easily be spoofed, no?

As in, I WARC a page from my PC, modify the content to suit my intentions and then share it on IPFS while claiming that this is the original page fed back to me?

Or is the idea that an authoritative source (i.e. archive.org) archives the content, we download the WARCs and act as another node of distribution?

However, even so, if archive.org is forced to delete the WARC from their site, there's still no way to prove that the copies available on IPWB are legitimate.

Someone please correct me if I've missed something here.

I suspect that you could compare the archived webpage to the real webpage, or if it's not available anymore, check with someone else who archived the webpage.

So are there any sites like torrentz2.eu to search for files, or do I have to rely on some autist providing me with a hash? Are there even any files I want, or just obscure shit no one but you cares about like


this is shit and just gives me JSON with embedded HTML in the description tags. Also a search for 2160p yields no relevant results, as the biggest file is 2MB. I'd at least expect some shitty jewtube quality 4k video to have been shared

Fix your browser

I guess this website is so poorly coded it sends you to the json file if noscript is running. Yet it is so basic and featureless that you can't even sort by file size.

My question still stands: is there any search engine for ipfs which isn't shit? It has more than 10 pages worth of results (who knows, I stopped after 10) for fags but no 4k video files.

That search engine is 1 month old today, it's still being worked on, filtering by filetype, size, etc. seem to be on the github issue list.

I don't think most people are sharing 4k gay porn which is why you're not getting many results, if it's not being shared it won't show up in any search engine, even a more full featured one.

Some people were talking about crawling IPFS with Yacy but I don't know how that's going. I'd be interested to hear myself since Yacy isn't that bad.

As most people have said in this thread you should wait for no-copy-add to be merged, once that happens people will be able to share their content without the cost of additional disk space. I don't think people would be willing to have 2 copies of all their 4k content right now.

github.com/ipfs/go-ipfs/milestone/26

you can't share a file without duplicating it? even if the developers demand their own directory tree, have they not heard of symlinks?

That's not how this works, you should read the spec and check out the filestore implementation to see how it is going to be implemented. IPFS doesn't copy a file and reference it, it actually stores the data and makes it available by its content address, it is its own filesystem hence the name ipFS.

It's a pile of shit that uses a nosql DB on top of a real filesystem. Don't fucking lie.

What's your point and what are you expecting?

Lie about what? What do you think filesystems are? You're being real naive with all these remarks.

you're talking to two different people. and real file systems don't store their shit on other file systems or use nosql.
t. user who wants a search engine

still hoping and praying for the youtube killer

From the webm thread.

this tard has never heard of CDNs or local caching proxies.

jesus christ this guy is bullshitting so hard hoping someone even stupider will give him venture capital. or are CS grads this retarded to actually believe this

Nice try MPAA/RIAA.

I'm the fag who couldn't find any 4k content. IPFS isn't being used for piracy of anything anyone cares about if there isn't any 4k content

...

saved

Look at what you've done. Not only have you destroyed their server with your giant jpgs, you've also enabled bad grammar.

You could give this a read while waiting

/ipfs/QmQn6euAwZXGYTq6i6TXfCZwhjK3SKDNAX86sxsUzjnSHS

I see your Mein Kampf and raise you a library.
gateway.ipfs.io/ipfs/QmR9aMT7QzRLcpQrBf3fCYnnwtJ8paRSQU9mGoJgEpP4Mv/Books

He actually mentions them, did you even watch the video?

Is there some way to find the IP address of a peer from the content hash they're serving? I'm trying to see if I can find mine from the network after pinning something. I know "ipfs swarm peers" shows all the peers I'm connected to, but mine isn't in the list (probably makes sense for that though)


Nice.
Is it up? "ipfs object stat" isn't responding after 10 minutes.

Apparently my daemon crashed. That seems to be happening often for me. I hope 0.4.4 fixes that because I just upgraded. The joke was that it's a library of /liberty/ books and you probably wouldn't approve.

I'm still waiting for ipfs-see-all to finish. I have a lot of pins that need to be checked. If you upgraded to 0.4.4 and you also have a lot of pins, run this shit to make sure they're still there. (The binary is on their website.)
github.com/whyrusleeping/ipfs-see-all
It's been about an hour since I started so my daemon's still down. Is it supposed to take that long? My poor SBC is probably dying under the load.

When will torrent websites use technology like this so they can't be shut down?

SMALL TOWN GHUUUUURRRRLLLL

NO
I SAID CHICKEN-CHAN AND COCK-KUN

i stared at this for an hour but then i realized he has cancer :(

That's when you buy a cheap fixer upper in the neighborhood, fix it up, and start renting it to the people with funny beards. After they've been paying your mortgage for a few years you can sell the apartment at a massive profit since the prices have gone up massively.

IDGAF about that, but why are they always so aggressive then defensive for no reason?

Probably. It depends how long SS took to rebuild, I imagine that the reconstruction of their home was the main focus after the war.

Ketchup + Louisana Hot Sauce is GOAT

how much are you willing to pay?

Right now I think it would be pretty easy to mirror all the static content from a torrent tracker on IPFS but there's no real point.

If people need to use ipfs anyway you may as well distribute files with it instead of with bittorrent. The only issue with that is that IPFS isn't finished yet and neither are the tools using it. Things like search engines should improve over time, which might be more useful than a centralized index unless you want specific curation. Message passing combined with pluggable networks in libp2p + network gateways should satisfy the private tracker crowd as well: hosts/users who are worried about getting busted can broadcast only to private networks or through i2p (or both, or another network/combination) and have someone less paranoid join the private network and bridge the connections from private and/or darknet to public, relaying messages to users on the clearnet who make queries. This would keep them safe while still allowing the content to be made available to the public if they choose to make it so. Essentially a gateway in itself, but instead of for protocols it's for giving a private network public endpoints. I think Freenet or something else has a system similar to this.

Of all the stuff mentioned
* 1 search engine exists right now and it's young, more should come in time.
* Private networks are being implemented and requests for feedback are going on now github.com/ipfs/go-ipfs/issues/3313 , the level of control and flexibility should be ironed out later.
* As far as I know message passing/relaying isn't being discussed yet; I know the main dev talks about it during presentations but there's no development effort towards it yet that I've seen.
* Integration with other networks like tor, i2p, etc. are currently in progress.
github.com/ipfs/notes/issues/37
github.com/ipfs/notes/issues/124

Eventually you should be able to make something like a tracker/index with users adding hashes instead of torrents, the ability to comment on hashes, search them, etc. just like you would on a tracker; it's just a matter of waiting for these things to be implemented and tested. A lot of the mutable systems/structures are being worked on currently too, which would be important for distributing things like comments and registered users. Ethereum may need to be involved for some of that for a truly distributed system; I think it's possible with IPFS alone, but any of the mutable stuff would have to contact specific nodes in control of specific keys, which is still better than a purely centralized system like we have now.
I have no doubt someone will figure all this out in time, it's just a matter of waiting for someone to have the time and motivation to actually glue it all together and make pretty frontends. Probably after IPFS is stable and proven secure/audited.

I drank coffee when I shouldn't have so I can't summarize this post any better than that, I'm probably wrong somewhere too so look into it yourself.

blog.p2pfoundation.net/solid-can-web-re-decentralised/2016/04/07

Say I pin-add a main directory with subdirectories inside. Can I then add additional subdirectories and re-pin-add the main directory without the main directory hash changing?
I want to be able to give out a link to a folder and then add additional content later. Is that only available with IPNS?

1. no
2. yes
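Roughly, with placeholder names/hashes:
ipfs add -r mydir                    # the root hash changes whenever the contents change
ipfs name publish /ipfs/<newroot>    # /ipns/<your peer ID> now points at the new root
So you hand out the /ipns/ path once and just republish after adding a subdirectory.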

I have never used Go before and doing this caused my system to crash upon logging in. Am I just retarded or am I not meant to experience the future of distributed filesystems?

You only need to do that if you're compiling ipfs yourself, you don't need to do that for ipfs itself. You should probably download a prebuilt from their website or from your OS's package repo.

/ipfs/QmTCuoQTGFB6EJFEgBGYTjiJ6E3fnFMnF8TL4nk669eaob/Alex%20Jones%20next%20level%20demon%20burning.mp4

...

PLACE BETS NOW
What's going to happen first - IPFS stops using SHA2 hashes of protobufs for literally everything, or Tox gets persistent rooms and history?

God damn Christmas

I need to get some redhead porn just for this guy.

He sounds like a diagnosed autist

alright, it's me who has the broken router again. I fixed the issue by setting Discovery.MDNS.Enabled = false in the config. They say that they are working on it, but it's past due by 25 days.
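For anyone else hitting this, you can flip it from the CLI instead of editing the file by hand:
ipfs config --json Discovery.MDNS.Enabled false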

Thread is tl;dr so for some user new to this shit, is IPFS actually secure/anonymous or does everyone know what buckets I'm reaching into/sharing/etc?

No more secure than a torrent because it works by the same principle - the swarm can see your IP. The protocol is designed to mesh with TOR and I2P but this hasn't been completed yet. IPFS is still in early alpha.

Thanks. Avoiding like the plague, then.

Filesharing should be kept off Tor, it creates way too much strain.

It's not just useful for filesharing. Together with Filecoin, it's meant to create redundancy across the internet and avoid losing important sites forever.

What creates strain with torrents on tor is the client firing up 200 circuits at once, but file sharing in general is fine. IPFS will be fine too as long as it doesn't spam circuits.

As was said, it's Early alpha, come back and take a look again when it can run on TOR and I2P.

...

If it's important it won't be lost forever, the author will put it on a new site. And no, your DeviantArt page is not important.

The current generation of 'successors' to classic implementations like tor and i2p should be skipped altogether because they don't offer a solution to the real problems we will face once kikes tighten the grip on the internet.

Our only hope is a protocol stacked directly on top of backbone protocols. Only then are we able to implement solid anonymity and censorship resistant solutions

I'm hoping for Tor and i2p support by next year. In the mean time VPNs are still useful.
What I really want is the datastore code getting merged soon.

that shit has been compromised for years

Then you should have no trouble posting proof, unless you're just a shill of course.

/ipfs/QmdBSdmZKuFS2wsBGs4ExKt89QuXmGUf1uXmfp8gY8d32f
~10gb worth of random music UUUU plebs

ipfs is so shit at handling large directories.

is the folder I uploaded ded then?
Webm completely unrelated, no idea what it is.

Should work if you split it up into smaller subdirs, 3000 files or less seems to work.

I wonder if it will get better with the new sharding system that got pushed. I'm running the non-final v5 right now and it seems to be handling some io stuff better.


All that means is that it took too long. Each request has a Go context which has a deadline; if the request takes longer than the deadline it cancels, like timeouts in other systems. I think you can configure that somewhere. I know `ipfs get` tries forever by default, and the gateway seems to have a long timeout by default, but it does have one.

Oh god is it actually 10GB's worth of music unsorted in 1 directory. user please organize a little.

You mean sorting by albums isn't enough?

My fault, I thought the other user was implying it was unsorted. It's downloading on my end now, slowly though.

The entire comic book run of Kaliman, El Hombre Increible (a mexican superhero).
/ipfs/QmPcaHtkPtZ5ZdmasekWt2TWKuP9SNXvz9eC9UggvqTD9k

I'm a nub to this shit so help me understand it. It's a bittorrent for every file ever? And uses itself as a web hosting service? Its file indexing system is a hash table in a distributed block chain? Once a file is uploaded its metadata enters the block chain and there is no mechanism to purge it ever? The system assumes there will never be a hash collision?

This thing sounds like it would grow uncontrollably and fall under its own weight. A malicious actor could create block chain entries for trillions of files that may not exist or may only consist of random data, stop seeding the files, and the entries would remain forever cluttering up the block chain, with no apparent mechanism to purge them.

And if you're trying to hold all the files ever, the eventuality of hash collisions needs to be addressed.

Who in the swarm is going to donate their hdd space and bandwidth for hosting everyone else's shit?

What about privacy concerns? The person sending you the file knows what you're viewing. I would suspect that interested agencies would start seeding all the content they are interested in tracking so they could see who accesses it.

I may be full of misconceptions as I just looked it up today so feel free to correct the things I have wrong.

no blockchain, just a hashtable and a datastore, and it will be purged after a certain amount of time unless someone has it pinned. A malicious actor would just be flooding his own computer. (I think, I'm not necessarily the most informed person here)

Hash collisions are addressed by upgrading to a better hash function. All hashes are using multihash ( github.com/multiformats/multihash ), which means it's easy to use any hash function you want without any problems. Right now I believe it uses sha256 as default.

Right now, just anyone with a kind heart. In the future, though, they're going to be working on filecoin ( filecoin.io/ ) as an actual incentive for hosting stuff. Ethereum also has their "Swarm" in the works so it will be interesting to see how they're different.

That's up to the user. Ipfs doesn't enforce anything, but you can use it over a vpn, tor, or i2p if you want anonymity. (Actually, I think proper tor and i2p support is still in the works, but they definitely said they are working on it)

Happening time, only 1 month late.
github.com/ipfs/go-ipfs/pull/3629


I can't reach this, are you online?


You host your own shit and shit you like/want to keep. Think of it like a distributed cache or a distributed filesystem. It's less a donation of hdd space since you're going to have to store the content anyway, like how we're both storing this thread in our local cache, and I'm storing the image you posted anyway, the only thing I'd be donating is my bandwidth which costs me nothing (no data caps). Outside of cached content there's long term storage which functions more or less the same way, if I save your image I'll have it and can access it as I want to, but it's also fetchable to anyone else who has the hash for it.

Check this too if you're interested

So they just spent a month working out exactly how the solution would work? Geez.

How do I clean my repo out? I've tried outputting ipfs ls to a text file and removing pins from there but I've had it run for hours without any output. My SSH tunnel would close before it finishes anyway. I looked at the IPFS_PATH directory but it doesn't look like it's nearly as large as the shit I pinned.

ipfs repo gc?

No point in garbage collecting unless I unpinned shit I want to sweep out.

Wait, never mind. Didn't dig deep enough to find the .data files. I'll just delete those with rm and pin a couple things back up.

If you want to list only the stuff you pinned you do ipfs pin ls --type=recursive, or the shorter form ipfs pin ls -t recursive. On my machine doing `pin ls` without specifying the type also takes an insane amount of time, because it's listing every hash you've pinned plus every hash any other pinned hash depends on, which in my case is a ton of hashes since I have multiple directories containing thousands of individual files. `pin ls -t recursive` will just list the tops of trees and not the entire tree, as in just list out the actual hash you pinned. `repo gc` will clean out anything not listed under the recursive ls, and nothing it depends on either; just everything else which isn't pinned, the "garbage".

I wish `pin ls` acted that way by default where you'd have to specify "all" for everything, but whatever. Maybe they'll add a config option for it.
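So the whole cleanup dance is roughly:
ipfs pin ls -t recursive    # list just the top-level pins
ipfs pin rm <hash>          # for each one you don't want anymore
ipfs repo gc                # sweep out everything unpinned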

IPFS 0.4.5-rc1 got released and it looks like it includes the datastore code! I'm betting it'll be released in May.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

I don't see #3629 or #3646 in there.

B-but at least we're getting pubsub, right?

So how would /baphomet/ use this service in conjunction with others to their advantage?
I'm no programmer but am interested in seeing the meta before the gritty learning I'll have to do to become one.

It's invisible to search engines and web crawlers as long as nobody posts clickable ipfs urls that go via a public gateway. Anyone running an IPFS node can access them via localhost:8080/ipfs/, it's safe to post links to that (and safe to click them, p2p traffic is encrypted).
If enough people on the board are running IPFS and clicking each other's links, it becomes impossible to take down content.

What about ipfs-search.com/ ?

Unless the NSA is paying some greasy intern to type XKEYSCORE terms into that search box all day, it's not a problem.

False. Whenever anyone pins something on IPFS, their daemon sends out a provider message to the swarm notifying everyone that they have the data. That's how it's able to content-address effectively. You can mock ipfs-search but some guy was able to throw it together in his spare time because he realised how easy it was to just listen to all the provider records being passed around and record all of them.


It's like a big torrent swarm collectively sharing an arbitrary file system rather than specific files like movies or whatever. Like a torrent, all the IPs sharing the file are viewable by the swarm-members. The devs are working on, but I don't believe they've yet implemented, private networks. This is roughly analogous to a private tracker -- only those who have the crypto key can access the shared data pool. This is probably more relevant to your interests, but right now the software is _not_ anonymous and hasn't had a thorough, independent security audit, so don't go thinking this is le nouveau darknet or anything.

Would it be possible to make it so that the origin (ip) of a file cannot be determined?
Maybe with something tor-like, or "hiding" all traffic in an encrypted layer.

There's no point in rewriting such a system, they plan on interoperating with tor, i2p, and other systems which are already proven and trusted. This was talked about already in the thread around here

In theory when everything is done you should be able to have any configuration of private swarms, anonymous connectivity, message passing, and protocol bridges.

Okay so you're saying Google, Bing and Yahoo run IPFS wiretaps and add the URLs to their index for normies to find?

Not the same guy, but there's no "wiretapping" needed. I know for a fact google is already finding links on ipfs just by scraping public gateways. Just do a search for site:ipfs.io and you'll see what I mean.

The main point is anything you put on ipfs in its current state should be considered public, unless you're manually encrypting before adding.

So is Google brute-forcing sha256 hashes, or are you saying it crawls webpages for plaintext multihashes like "/ipfs/Qmb4fyfTa2XQoMjd22WVFJq5AVdracTUTmqorJ8Sx3nhLy"? Or do you just not understand the basics of how a web crawler works?

.. when you feel like there is no hope.

Where do you people keep coming from?

I'm saying google doesn't have to brute force hashes or start looking for text starting with /ipfs/. If sites like ipfs-search already exist, then whatever crawlers google employs are probably not far off from finding out about it too. I do think, however, that it's not entirely unlikely that google or whatever search engine will eventually become interested in ipfs and start indexing plaintext multihashes, or even what they find off the DHT directly, which isn't exactly a difficult thing to pull off (especially if you have the metadata-gathering resources of google).

More importantly, though, I'm trying to make the broader point that thinking about ipfs in its current state as private is only going to lead you to trouble. There are efforts to make it more private and anonymous, but they generally either don't fully exist yet or can't be trusted to work right.

Does it work inside I2P yet?

Quality software.

It's alpha quality software. Did `ipfs repo fsck` not work, what was wrong and how did you end up fixing it? I've ungracefully killed the daemon a handful of times and never had any issues so I'd like to know just in case.

Now that Go 1.8 is released it would be interesting to see benchmark comparisons between it and gccgo. Supposedly gccgo -O3 has been faster in the past but with the recent improvements it might've almost caught up.

Also reminder that IPFS 0.4.5 was released so everyone should update.


You clearly have no idea how search engine crawlers work. See the earlier posts on how ipfs-search works. Google has absolutely no interest in the foreseeable future in directly indexing IPFS objects. They would make no money from it. The only reason they index websites and make software like Chrome is to serve you as many ads as fast as possible.

As for the anonymous concern, it's currently on par with torrents from an IP address concern, which is already easily solved with a VPN. For i2p integration I hope they'll wait until the Monero devs get i2pd in a stable and reliable state before they add official support. It would be a dream come true if i2pd got integrated into the Go version but unfortunately I think it's just going to be run on top of it, which is fine too.


Nobody knows. Go try it and report back.

JUST

The only requirement for showing up on Google is being posted on a site that is already indexed, same with any other crawler. (Reminder Google doesn't list results for/from 8ch)
google.com/#q=inurl:ipfs.io/ipfs/

...

Did literally nothing
Corrupt leveldb.
rm -r .ipfs/datastore/

Ouch. Taking bets on whether it's finished and tested by the end of the year. I'm expecting an October release.


I know, I said they won't directly index IPFS websites.


Hello lainchan poster.

Let's keep this shit going, dumping some Lain:
/ipfs/QmS6BXMbbFUrfgC32rYjgmYh57DJ5DMxYYxrmRqQMHqrZX

glop.me/

>8ch.net/tech/res/715551.html
bittorrenting is now officially deprecated.

>>>/reddit/

this guy gets it

...

what do

Don't use IPFS over TOR. It's nowhere near finished and hasn't been audited, especially when you try to merge them with two unrelated browser addons.

James Mays Builds A Car:
/ipfs/Qmb1EkbZtvXFrXddid2rEddD5n2xxe3gemgBVf3C6XQjB4
/ipfs/QmZJWrX7T4GmNbQwGaZvofTJpd8gPVEmci7G5SwTVZ3JR2
/ipfs/QmZWyVTau8gR4i1MixXm45mDwvU8Qfjkazb5gH2hS7CyKE
3 parts

...

Nice. Using streamable formats and containers is recommended so you don't have to download the whole thing (mkv works, but not directly from the browser)

Contributing some Steins; Gate

QmX98azJ448bnPYpZBzUBBGE1gsw9w5be9vJJfcbaWWiar

Last time I checked IPFS insisted on downloading an entire file locally before it'd send the first byte to the browser. How do you get it to work?

Ah, my bad. I haven't used big enough files to determine their streamability (and the nodes I've tested already seem to have full copies), I just assumed they would stream since ipfs stores everything in even blocks, so maybe it would request the right ones in order when streaming.

Here's some O'Reilly books

QmSiKPd7SNHhweXnj2zXtuqjEfoyK4SKMkrP5QZfd8n9Hx

Streaming videos works great in IPFS. I watched all the Touhou OVAs someone posted in the last thread with mpv. mkvs work just fine.

trickle-dag helps streamability, I think. Use the -t option when you add video files. Loading the files into mpv also helps.

could you make a chan that stores all pics in ipfs?

Anyone use any awesome-ipfs stuff? I'm curious to know if any of them are worth looking at.
github.com/ipfs/awesome-ipfs


Back in the day Hotwheels was open to the idea if anyone wrote it into vichan, but that ship has sailed now. I guess the technology is out in the open, seeing as ipfs.pics exists, but I would have no idea how to write it into the imageboard. Plus there's no guarantee that it would behave nicely. Some optimizations may be necessary for it to not eat CPU/memory.

I'm real excited for Orbit to be finished. The concept of sharing music libraries via beets is something neat that I'd want to integrate into another media player. I've used hydrus since before IPFS integration, and since then I've been considering moving much more (all) of my content to it.

WE HAVE --NO-COPY MERGE
I REPEAT, WE HAVE --NO-COPY MERGE
THIS IS NOT A DRILL
It's still experimental (and confirmed for 0.4.7) but you can test it right now.
github.com/ipfs/go-ipfs/pull/3629

Ohh shit, it really is happening. Just how long have we been waiting for this now?

I'm complete ipfs newfag here, but I'm curious:

From what I gather, ipfs is a lot like bittorrent (people must opt-in for chunks, and there are no other incentives to disperse chunks), but apparently unnecessarily complex compared to BT. I'm consistently told "no, ipfs is not bt, read the spec" - and that's the problem.

Apparently there does not exist a concise protocol spec for ipfs. It's all long-winded descriptions of banal shit like kademlia, USKs [2], and git-like dag chunking - surprise, bt can do all of these too, but as neatly separate, yet often popular, extensions of the core protocol. A concise description of IPFS, and of what it takes to implement a base client, seems to be simply missing, or I just can't find any.

It gets worse. Main IPFS innovation should be advanced bitswap strategies, such as chunk speculation. Basically, peers speculatively downloading random "cheap chunks" from the network, hoping those could be traded for better value later. This strategy is very complex game-theory wise, but with excellent content retention properties.

IPFS paper promises that you "could" do this, but does not elaborate how. Fuck, any P2P filesharing protocol with a local strategy (including BT and eMule) "could" do this. Yet the IPFS strategy modules don't even try to hint at anything of the sort. It merely keeps a local peer score the same way eMule does [1], while the actual implementation of a global chunk auction strategy is pure vaporware. One is curious why IPFS devs consistently ignore the most important topic, and instead keep blabbing on about nebulous bullshit such as "http replacement", when it does not do any better job than bittorrent in that regard.

[1] emule-project.net/home/perl/help.cgi?l=1&topic_id=134&rm=show_topic
[2] libtorrent.org/dht_rss.html

Replace the part of the file uploader code that does `cp $imageupload /mnt/nfs/media/$imagehash` with `ipfs add --pin=false $imageupload`. You get the hashing, deduplication and write-avoidance for free.
Then you just run an ipfs http gateway on port 8080 and forward the frontend webserver's /ipfs/* URLs to it. You could go crazy and have the threads themselves stored in IPFS but that's probably more load than it's worth for any site.
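For the gateway half, a sketch with nginx (assuming the gateway is on its default port 8080, and this file path is just an example):
cat > /etc/nginx/conf.d/ipfs-gateway.conf <<'EOF'
server {
    listen 80;
    location /ipfs/ {
        proxy_pass http://127.0.0.1:8080;    # local ipfs HTTP gateway
    }
}
EOF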

Where's the concise description of Bittorrent?

bittorrent.org/beps/bep_0003.html

So 7 years after BitTorrent was created

I get your point, but I'd give it some time. Ipfs is still in alpha, and things like even bitswap are still being actively worked on, so it makes sense that they don't have a finished spec on it. Maybe they're not treating it as "innovatively" as they should, but it's important to keep in mind that ipfs is meant to be modular, so you should be able to, in the future, replace bitswap strategies with something of your own, or replace bitswap altogether. I think things like that are the real advantages it has over bittorrent, anyway.

Also, I saw you mention the IPFS whitepaper, but not the actual spec, which is here github.com/ipfs/specs . In fact, here's the actual bitswap spec github.com/ipfs/specs/tree/master/bitswap (which you'll notice says is a "work in progress"). You might also be interested in the libp2p spec, which ipfs is built on top of github.com/libp2p/specs

WE'RE AT 0.4.7-RC1
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

ipfs config Experimental.FilestoreEnabled true

And to further build hype,
github.com/ipfs/go-ipfs/milestone/30?closed=1
You have no idea how excited I am right now.

...

So IPFS is finally going to become useful instead of just being a neat idea. Can't wait for the update.

I was able to use it to add my music library while only adding about 3MB to my repo's size, pretty cool stuff.

ipfs add -r --nocopy frustration

IPFS based private tracker where?

ipfs-search.com/ is the closest we have. There's no real need for a tracker when you don't need the same manifest to spread the same file, especially when most people would just use it to leech through http. (It's almost a relief .mkv doesn't work in browsers.)

Can someone please explain what this means?

ipfs 0.4.7 can do `ln -s` as an alternative to `cp`

Instead of copying the files into .ipfs as chunked blocks (which leaves you with both the original file and the chunked copy on your disk), ipfs now has the option to just reference the file on your disk and read it from there directly.

And the practical importance is that, like hosting a torrent, you only need one copy of the file on your hard drive. If I want to pin my entire animu folder, it only needs the time it takes to hash every file as well as a small allotment for storing the hashes - one guy said he added 60GB of files and only added ~60MB to his .ipfs folder.

It's now feasible to share every file on your hard drive if you wanted, so IPFS can graduate from meme status into practical media sharing software.

Unlike a torrent, you don't need a specific .torrent/magnet link and tracker to communicate. If both of us had the same file and we wanted to share it, we could independently pin it to IPFS and can both seed it to a third person. Also you can serve the files through http gateways so you don't even need the software. You can download or stream a file right in the browser.
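e.g. if we both independently run
ipfs add "Some Movie.mkv"
on identical copies (with the same daemon defaults, so the same chunking), we get the same hash out and are automatically serving the same object. The filename is a placeholder, obviously.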

You do realize that networks such as Overnet or DC++ DHT work more or less the same (file-level opportunistic sharing).

Of course very common files are already on the network. That said, it is still far faster to stream animeshit directly from torrents, especially if it is the top most popular 0.001% of all animeshit - simply because there are far more seeds.

The trouble is generally with unpopular data. Power law's a bitch.

But I don't need to download, install, or configure software to access things from IPFS. If the technology really takes off, i.e. JS-IPFS gets finished, it can be embedded in a website or a browser.

I think they mean to fix that problem with Filecoin, but that's still a white paper with no proof of concept planned soon.

Should probably mention you can get the rc1 without having to build it here:
dist.ipfs.io/go-ipfs/v0.4.7-rc1

How does one even build ipfs? Last time I looked you had to have it installed and running just to download it.

Some pointers for using the filestore:
Also consider deleting the .ipfs folder before you start with 0.4.7 if you want some storage back and don't mind re-pinning files.

Why raw leaves?


I think it's done through a go command, which I could never get configured correctly.

Any actual communities around IPFS? I'm not seeing what I can do with it besides download some random 320 MP3's from some random hashes. Any real implementations of it yet?

I hope they fix building with Go 1.8 soon to take advantage of the new compiler on all architectures and shorter garbage collection pauses.


Nice doubles and suggestion. --raw-leaves will definitely improve storage and performance on large multi-terabyte filestores. When it hits stable I'll be adding my ~3TB anime folder and ~1.5TB weeb music folder.


There's a hand full of programs integrating it already. Now that the filestore code is merged there's going to be a big uptick in Ethereum smart contracts using it.
github.com/ipfs/awesome-ipfs

In 0.4.6 they updated go-multihash to 1.0.2 which added Blake2 support. I hope they're going to give the user the option to switch hashing algorithms soon instead of the current hardcoded SHA-256. I also noticed it still has SHA1 support which could pose an issue in the future.

who is "they"?

You just run make install in the go-ipfs repo. It will download the dependencies from ipfs if you're running it, but if not it fetches them via an HTTP gateway. You can also build manually by running gx to fetch deps, then running go install in the cmd/ipfs directory. Building it with make has always been easy and fast on my machine.
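i.e. roughly:
git clone https://github.com/ipfs/go-ipfs
cd go-ipfs
make install    # fetches deps (via ipfs or an HTTP gateway) and installs the ipfs binary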


Building with 1.8 has been working for me, just built the latest master with it.

Also this
github.com/ipfs/go-ipfs#build-from-source

Can you feel it now, Mr. Krabs?

Oh shit I didn't expect it so soon. I need more time to organize everything first.

Also reminder to use `ipfs add -r -w --nocopy --raw-leaves ` to use the new filestore.
-w or --wrap-with-directory wraps the file (or files, if using the recursive option) in a directory. This directory contains only the files that were added, which means each file retains its filename. This is very useful for sharing groups of files and provides an easily parsable directory structure.
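For example (hashes made up):

$ ipfs add -w kitten.jpg
added QmFileHashXXXX kitten.jpg
added QmDirHashYYYY

The second hash is the wrapping directory, so the file stays reachable by name at /ipfs/QmDirHashYYYY/kitten.jpg.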
I'm thinking of making a site that indexes user contributed hashes and assigns user contributed tags (along with some global tags) so people can search files based on their tags or filename. Is it worth it over just making threads on >>>/ipfs/?

i decided to test ipfs and how well it can handle things by putting a website on it
test it here
/ipfs/QmTzE3Ao7s5qLJNawFW51DUN8PVtz4mx3LHvRZUG3GheZN/wii/

make sure to pin it too because i don't want my computer on forever

Pinned :^)
How did you scrape the whole website? I'm guessing wget but what command did you use?

All ded lads, will reupload soon™.
In the meantime, have some thicc JAV to make up for it.
ipfs/QmPJJA9wRxYdNCPsHBDqk997MXUmpTmdK3qeqJrdPWbmvh/NITR-073.mkv
Excuse the mkv format, I'll convert it into .webm soon™, but my computer is of potato, and it'll take till tomorrow to convert. If anyone gives enough of a fuck to download, convert, and upload faster than I can, feel free.

actually, i used httrack to download the entire website, then I edited it to fix some problems that would be caused by linking outside ipfs, like the trailer. i can post the directory hash later if you want

Tried it through the HTTP gateway and it stopped at about 100MB and crapped out. Can't resume it, either. That really worries me. Is that just a Firefox thing?
Eventually I could connect to it through the daemon but it took a good 10 minutes.

In the meantime I think I'll reorganize my hard drive in preparation. Got lots of JAV on that thing but I think I'll share them as-is in their .mkv containers. It discourages leeching and increases the potential for pins from people who already have the video. On the other hand, I know of some videos that are in dire need of a re-encode before I share them. (26.5GB is unacceptable for a 4 hour video.)

Stopped for me at 174 MB

Huh, I'm just interested in how others are doing it. I pinned your link last night and it still hadn't finished by this morning, but I'll try again.


Download/upload speeds are extremely slow and sometimes drop when only a single person is sharing. They really need to work on optimizing speed.
Now that the filestore is merged there's no reason not to share both the original version and a smaller version. Many people will be willing to download the original file (just as you have) for archival purposes. If you want to make a smaller webm version, I suggest you take a look at my webm encoding guide.

I think it's on their radar
github.com/ipfs/go-ipfs/issues/3786

I find it odd that people are complaining about the speeds; they're usually pretty quick for me as long as the peers themselves are fast, even with the lack of optimizations. I notice a little hiccup at the beginning of a transfer, and then the main bottleneck for me seems to be disk I/O. I look forward to cutting any slop though, every bit helps, especially on already slow sources.

--nocopy automatically enables that.

i have my computer turned off so it won't download the files. the whole website is 1.25 gb

ipfs/QmXVh99MtDRNn24L6VDGqH34GXGZtQEot6EsEg3wDXpiHb
Dogshit quality, but only 270MB, good enough to post on ∞chan. Any better encoding would take days for me, so unless someone else wants to take the .mkv and encode it themselves, it's either this or the original.


Browsers don't support .mkv, so yeah, pinning it is prolly the best option, and it keeps the file accessible too.
Be aware that I can't upload 24/7, so sometimes the files will go down. Check during burgerland nighttime, I always keep it on after ~11:30.

Is there a way to recursively `ipfs ls ` a directory? It looks like it can only do one level at a time. If I want to know what's in a directory I have to manually type in each hash for each subdirectory which is a pain.

Reminder: the gateway just translates from ipfs to http, so you can use it with anything that understands http, like mpv (see webm here )
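e.g. (hash made up):

mpv http://localhost:8080/ipfs/QmSomeVideoHash    # local gateway, default port 8080
mpv https://ipfs.io/ipfs/QmSomeVideoHash          # or any public gateway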

Does --nocopy work when pinning?

Didn't know that, I just looked it up.


Use ipfs get -o=$PWD/filename
The -o= sets where the file gets downloaded to. If it's a large directory I usually also pass -a for an archive (.tar) so it's quicker to move around. Then you move the files to the proper spot and add the directory with --nocopy -r -w.
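A sketch of that whole round trip (hash and paths made up):

ipfs get -a -o=$PWD/collection.tar QmSomeDirHash    # fetch as a tar for easy moving
tar -xf collection.tar -C /mnt/media/               # unpack where it should live
ipfs add -r -w --nocopy /mnt/media/collection       # re-add in place through the filestore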

0.4.7 NOW OFFICIALLY RELEASED
ipfs.io/docs/install/

IPFS status:
[ ] A meme
[X] Not a meme

GET --NOCOPY ADDIN BOYS, THIS SHIT TAKES A CENTURY

nice try CIA niggers

You weren't going to compile it anyway, arch ricer

IPFS status:
[ ] A meme
[ ] Not a meme
[X] Slow as fuck
[X] Worthless

What are you adding? I added around 200GB of music in about 10 minutes with nocopy, it was on a different disk than my ipfs repo.

Should probably warn people that adding the same file twice in --nocopy mode will still grow the repo. It doesn't appear to check whether it already has the hash before adding it again. Worth considering if you're adding a ton of things recursively.

Adding is fast, but getting is slow as shit.
Blame the ISP jews who give minimal up speeds.

I'm not that guy but I serve files on IPFS from an external hard drive connected to my Banana Pi, so adding usually takes a while. I added the Gentoomen Library again yesterday with nocopy and it took a good 6 hours at least. It's just over 30GB and it's a maze of directories and smallish files, and I suspect those suffer performance issues compared to fewer but larger files.


By the admission of the IPFS team, the network efficiency is garbage. That will probably be high priority going forward.
github.com/ipfs/go-ipfs/milestone/23
It takes ages to connect sometimes, but once it gets going it's pretty fast. As for streaming videos directly, I've had better success with VLC than mpv. The latter tends to take longer to start and will sometimes stall in the middle of a video and stop loading.

IPFS locally caches blocks you access through the gateway, up to the max limit (10GB by default) I'm pretty sure. But yeah, I was confused the first time I ran a garbage collection too.
Can you post the Gentoomen Library hash? I have a script I'd like to test, and a large collection of many files and subdirectories would be perfect.

I wonder if this has to do with the cache. VLC is intended for streaming content, so it probably has saner defaults for caching content during playback and keeping the connection alive. I give mpv a huge cache even for files on disk, which is probably why I never had any issues streaming with it; it's probably still hitching like that, but while it's caching, not during playback. Good to know they're working on it though, hopefully there will be seamless playback everywhere.
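Something like this is what I mean by a huge cache (assuming a reasonably recent mpv; hash made up):

mpv --cache=yes --demuxer-max-bytes=500MiB http://localhost:8080/ipfs/QmSomeVideoHash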


That 10GB limit is the default, but it only applies when automatic gc is enabled with `ipfs daemon --enable-gc`; otherwise ipfs will not collect garbage automatically and you have to trigger it manually with `ipfs repo gc`. I almost never run gc lel

You can also configure how often it runs, but I never messed with any of that. The whole point for me is caching everything and serving it from my node. I don't want to pin everything though, just my own content, while keeping a lot of garbage that someone else might want around for a few months.
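The relevant knobs, for reference (values here are just examples; see go-ipfs's config.md):

ipfs config Datastore.StorageMax 10GB    # the limit gc tries to stay under
ipfs config Datastore.GCPeriod 1h        # how often the daemon's gc fires
ipfs daemon --enable-gc                  # neither matters unless gc is enabled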

Yep, that's all we need: a way to make our shit even more persistently stored on the Internet. Brilliant. At that point you can assume that everything you do on the Internet will be written down forever.

QmQsqfXrGy2YhfrYpiD8yRmW8wWd7wGyC9P41WXHTdAfuh
Beware that the default IPFS gateway got hit with a DMCA so you'll have to grab this one through ipfs or another gateway. Is gateway.glop.me still running?


Good idea, I'll try fidgeting with the cache.

That has always been the case and should remain so. When you post something on the internet it's there forever, especially if you don't want it to be. People used to be responsible; now they're just apathetic.

The more important thing to consider, though, is how much useful stuff gets lost for one reason or another: companies shut down, people die, hosting bills get too expensive. Whatever the reason and whoever the owner, it no longer matters - someone can harbor content forever if they choose to. Preservation in the hands of the people, not some arbitrary central entity.

Maybe he's lying, but the project leader Juan mentioned in a talk that he fetched source code off of archive.org that wasn't anywhere else. I remember having to do something similar to figure out anything about this: en.wikipedia.org/wiki/NeWS
Interesting and useful stuff gets lost forever all the time, usually because people expect it to be available remotely and never make their own local mirror. It sucks to see that happen.

I only got the first 480MB downloaded, still up?

I have that file. I'll add it right now. My host should be up 24/7.

If you have 2 hours to kill, here's some mildly interesting footage discussing the intersection of IPFS and Ethereum (and all merkle tree-based tech, really). Explains the function and importance of IPLD.

youtube.com/watch?v=hpCxtb2E1as

You should use /ipfs/QmZoh2cKzBgFRX7uXeswui4ffPdXH6voN22gFDq8bZPByM as mentioned earlier, so the individual files can be accessed if needed. Given the popularity of touhou, there might've been someone else not from this thread who also added them. In that case you'd be downloading it from them too.
People should share directories rather than archives, because once you download an archive you extract it, and most people then delete the archive since you'd have two copies of the same thing. This severely cuts down the potential number of people willing to share the files and contribute back to the system.

Could IPFS + Ethereum be used for a distributed storage system similar to storj.io where 'farmers' are paid for renting out their drive space?

That's what IPFS's "Filecoin" will be.
filecoin.io/

You just described Filecoin, which they said they want to work on in the future, but your guess is as good as mine on when we'll actually see something come out of it. They have a website for it (filecoin.io/), but I don't think it's fully up to date (for example, they recently confirmed that it will be built on top of Ethereum rather than being another altcoin).

Ethereum is also working on their "Swarm" (swarm-gateways.net/bzz:/theswarm.eth/), which I haven't looked into much but seems to accomplish basically the same thing as ipfs, just differently. Pretty weird that they didn't just decide to join efforts with ipfs, but you can read their reasoning here: github.com/ethersphere/go-ethereum/wiki/IPFS-&-SWARM

Still stuck at dat 8.49%

Stalls at 1021 kb instead.

NixOS has a side project trying to integrate ipfs with their build cache.

sourcediver.org/blog/2017/01/18/distributing-nixos-with-ipfs-part-1/
github.com/NixIPFS

We might get a declarative, functional, atomic AND distributed Linux distro in our lifetimes.
What a time to be alive.

So, another reskinned redhat linux?


Don't touch the poop.

change your open files limit using ulimit
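e.g., before starting the daemon:

ulimit -n 4096    # raise the open-file limit for this shell
ipfs daemon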

I'm not defending it or saying I like it, but who cares about the name? Xcoin has been the name for every digital token that interacts with a blockchain for a long time now. The technology is the same regardless of what they call it, and I think the name at least fits the function: renting out file hosting or exchanging for other tokens/currencies/services.

where is the cake eggman you said there would be cake

I can't wait for IPFS inside I2P.

...

10,000
Happens for all other IPFS hashes I try, even ones I can view in the browser

Come to think of it, I saw it mentioned in an issue.
github.com/ipfs/go-ipfs/issues/3763
I think it's yet to be fixed, but they've dealt with this in previous versions and fixed it then, so have hope.

...

...

I cannot imagine anything better. This could be the future. I need it so badly.

0.4.8-rc1 is out.

Jesus, this is really updating fast now. Should we expect yuge performance gains from directory sharding, or is it just a convenient tool for managing shit with IPLD? I read github.com/ipfs/notes/issues/76 but I'm really not that knowledgeable about the subject.

0.4.8 is out
Wow, they released nothing for months and now we have four releases in three weeks.