/ipfs/ - IPFS thread

Updates
0.4.10 - 2017-06-27
0.4.9 - 2017-04-30
tl;dr for Beginners
How it Works
When you add a file, it is cryptographically hashed and a merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file then both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require that both seeders use the same torrent.
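A minimal illustration of that add/request cycle (the filename and hash here are placeholders, not real objects):
ipfs add cat.jpg
ipfs cat QmSomeHash > cat.jpg
The first command chunks and hashes the file and announces it; the second, run on any other node, fetches it by hash from whoever has it.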
FAQ
It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.
Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.
You be the judge.
It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration), both in active development and functional right now; it has a bunch of side projects that build on it; and it splits important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.
Websites of interest
ipfs.io/ipfs/
Official IPFS HTTP gateway. Slap this in front of a hash and it will fetch the file from the network for you. Be warned that this gateway is slower than using the client and honors DMCA takedown requests.

glop.me/
Pomf clone that utilizes IPFS. Currently 10MB limit.
Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.

/ipfs/QmP7LM9yHgVivJoUs48oqe2bmMbaYccGUcadhq8ptZFpcD/links/index.html
IPFS index, has some links (add ipfs.io/ before to access without installing IPFS)

Other urls found in this thread:

github.com/ipfs/go-ipfs/issues/4029
github.com/ipfs/js-ipfs/issues/962
blog.ipfs.io/30-js-ipfs-0-26/
github.com/ipfs/js-ipfs/issues/973
github.com/ipfs/js-ipfs/pull/975
github.com/ipfs/js-ipfs/issues/952
github.com/crypto-browserify/browserify-aes/pull/48
github.com/ipfs/js-ipfs/issues/952
github.com/ipfs/js-ipfs/issues/981.
github.com/ipfs/js-ipfs/tree/master/examples/browser-video-streaming,
blog.ipfs.io/29-js-ipfs-pubsub
github.com/Agorise/c-ipfs
github.com/DistributedMemetics/DM/issues/1
github.com/HelloZeroNet/ZeroNet
dist.ipfs.io/go-ipfs
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md
decentralized.blog/ten-terrible-attempts-to-make-ipfs-human-friendly.html
github.com/ipfs/go-ipfs/issues/3092
ipfs.io/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6
podricing.pw/posts/1191416
github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipns
security.googleblog.com/2017/02/announcing-first-sha1-collision.html
bittorrent.org/beps/bep_0052.html
blog.dgraph.io/post/badger/
github.com/ipfs/go-ipfs/issues/3908
github.com/ipfs/js-ipfs/issues/1095
blog.neocities.org/blog/2015/09/08/its-time-for-the-distributed-web.html
github.com/ipfs/js-ipfs/pull/856
github.com/ipfs/go-ipfs/issues/3994
github.com/libp2p/js-libp2p/tree/master/examples
github.com/libp2p/go-libp2p/tree/master/examples
github.com/ipfs-shipyard/ipfs-desktop
ipfs.git.sexy/sketches/run_a_gateway.html
ipfs.github.io/public-gateway-checker/
hardbin.com/ipfs/QmVktW6uo1mcqSiufH7fmExsmyC7dFx2GCYiEDmJLSatnD
localhost:8080/ipfs/QmVktW6uo1mcqSiufH7fmExsmyC7dFx2GCYiEDmJLSatnD
ipfs-search.com
github.com/ipfs/go-ipfs/blob/master/docs/config.md#addresses
github.com/ipfs/notes/issues/37#issuecomment-247717781
inclibuql666c5c4.onion.link/
github.com/ipfs/archives/issues/137
github.com/ipfs/notes/issues/1
github.com/ipfs/archives/issues/87
github.com/ipfs/archives/issues/136
github.com/ipfs/archives/issues/142
stellite.cash
moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/
github.com/ipfs/js-ipfs/issues/1228

no it's not. BitTorrent uses SHA-1, which was SHAttered and has been deprecated for years.

where's the release that doesn't crash routers by holding 1500 connections open

lol

I think they've fixed it.

...

If gateways make IPFS accessible to everyone, why isn't it more widely used?

because nobody uses it.

The client is badly optimized right now; it takes loads of RAM and CPU for what BitTorrent clients can do without breaking a sweat. They haven't worked on optimization because the protocol is constantly changing.

this is also why nobody uses it
stop breaking fucking links every 2 weeks

Why did you make a new thread? The old one was fine


You have no clue what you're talking about. The only non-backwards-compatible change they made was in April 2016 when they released 0.4.0. Since then, every release has been backwards compatible.

It had no image.

Launch it with
ipfs daemon --routing=dhtclient
to reduce the number of connections it uses.

Additionally, some progress has been made on limiting the extent of the problem, though the issue of closing connections doesn't appear to have been touched yet.

Issues to watch on this subject:
github.com/ipfs/go-ipfs/issues/4029
github.com/ipfs/js-ipfs/issues/962

If you're going to make a new thread, at least post some updates.

js-ipfs 0.26 Released
blog.ipfs.io/30-js-ipfs-0-26/


New InterPlanetary Infrastructure
>You might have noticed some hiccups a couple of weeks ago. That was due to a revamp and improvement in our infrastructure that separated Bootstraper nodes from Gateway nodes. We’ve now fixed that by ensuring that a js-ipfs node connects to all of them. More nodes on github.com/ipfs/js-ipfs/issues/973 and github.com/ipfs/js-ipfs/pull/975. Thanks @lgierth for improving IPFS infra and for setting up all of those DNS websockets endpoints for js-ipfs to connect to :)

Now js-ipfs packs the IPFS Gateway as well

Huge performance and memory improvement
>With reports such as github.com/ipfs/js-ipfs/issues/952, we started investigating what were the actual culprits for such memory waste that would lead the browser to crash. It turns out that there were two and we got one fixed. The two were:

>>browserify-aes - @dignifiedquire identified that there were a lot of Buffers being allocated in browserify-aes, the AES shim we use in the browser (this was only an issue in the browser) and promptly came up with a fix github.com/crypto-browserify/browserify-aes/pull/48


>That said, situations such as github.com/ipfs/js-ipfs/issues/952 are now fixed. Happy file browser sharing! :)

Now git is also one of the IPLD supported formats by js-ipfs

The libp2p-webrtc-star multiaddrs have been fixed

>You can learn more what this endeavour involved here github.com/ipfs/js-ipfs/issues/981. Essentially, there are no more /libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss, instead we use /dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star which signals the proper encapsulation you expect from a multiaddr.

New example showing how to stream video using hls.js
>@moshisushi developed a video streamer on top of js-ipfs and shared an example with us. You can now find that example as part of the examples set in this repo. Check github.com/ipfs/js-ipfs/tree/master/examples/browser-video-streaming, it is super cool.
>HLS (Apple’s HTTP Live Streaming) is one of the several protocols currently available for adaptive bitrate streaming.

webcrypto-ossl was removed from the dependency tree


PubSub tutorial published
>@pgte published an amazing tutorial on how to use PubSub with js-ipfs and in the browser! Read it on the IPFS Blog blog.ipfs.io/29-js-ipfs-pubsub.

do ipfs still use the ipfs (((package))) app that
throws away the cryptographic integrity of git?


NixOS will have IPFS as well WEW

this is nice but i can do without the pedoshit links

Call us when it can use both i2p/tor. Until then, it's inferior to bittorrent for filesharing. Why? Because the only clients available (github.com/Agorise/c-ipfs isn't ready enough yet) are written in garbage-collected languages.

They're working on it. c-ipfs seems promising as well.

What pedo links?

Turn on IPv6 you dumb nigger. NAT is cancer.

Does IPFS need the equivalent of TOR firefox?

My quick IPFS indexer seems to be working well; I'll try to make the database available over IPFS if it keeps working. Could anyone who knows how ES works download the 200GB of dumps from ipfs-search and run `grep --only-matching -P "[0-9A-Za-z]{46}"` on it? They're compressed/encrypted somehow.

Nah, it's non-anonymous right now anyway, you could just install a new chromium/whatever and limit it to localhost:8080 if you really want to.

Can the user behind this comment on whether js-ipfs 0.26 is enough to get it started?
github.com/DistributedMemetics/DM/issues/1

I'd rather put my hopes into the actually-existing IPFS imageboard made by another user here than into the anime-pic, one-commit repo over there.

This is 16chan-tier software

Now is not the time for optimization, that comes later.
Also, see

"The age of men will return;
and they're not gonna get their computer-grid, self-driving car, nano-tech, panopticon in place fast enough..."

Qmd63MzEjASAAjmKK4Cw4CNMCb8NqSbL6yiVRfYnhMBT1H

QmXr7tE6teZgkzdcy5L4PM421FP3g3MpWUEGYfXJhEqfBb

Why does IPFS idle at like 25% CPU usage and high RAM usage? It's not even seeding often or downloading anything.

How much later? They've known this shit is unusable for two years.

what does it do better than zeronet

Better spec, better design, and better future-proofing.
Problem is, right now it still sends a fuckton of diagnostic information and is poorly optimized.
They're making progress, but it's not fast enough.

It's not a honeypot where every post you make on any site is tracked globally and where anyone can insert arbitrary JavaScript (automatically run by all clients, and JavaScript must be enabled at all times just to view sites in the first place) so long as any type of content is allowed for upload on any page where that content would appear, for one.

bittorrent is NOT secure over TOR

It's just as secure as any other protocol. Some clients might potentially leak your IP, but there is nothing inherently insecure about the protocol.

That sounds bad. Are you sure? It sounds so bad that I don't know if I believe you.

Try it yourself. The steps are as follows:
- go to an arbitrary site
- upload whatever they allow, content is irrelevant
- you will find a local file that was created with the content you uploaded
- edit it to insert arbitrary content
simple as that; it bypasses all sanitization attempts, etc.
There have been proofs of concept on 0chan in the very early days of ZeroNet, and this has never been addressed. The ZeroNet devs seem not to care about any of the components that make this possible.

sasuga redditware

This is all well and good, but the problem with the internet today is that it relies on centralized infrastructure: to gain access one has to pay fees to people who have the power to cut you off completely, if not control the content being served.

Anyway,
QmdqDLpEKd7zJ5fHNv8r3a4vJVcbgT3g3yW2vqvVHHmKXk

i have judged

Try upgrading from a Pentium 3.

Also, centralized web servers were conceived because of a very important flaw with peer-to-peer infrastructure

but you've got that wrong: if a centralized server shuts down, no one can ever access that file.

Did you ponder a bit before typing such a sloppy mess? Those centralized clusters of web servers can still act as peers in a p2p infrastructure. Nothing forbids such a cluster from joining the swarm and sharing files. Compared to a distributed web, centralization offers very few benefits in return and is only still around because it offers more control (to the owners of the files and servers).

This is literally the same problem with web servers. Even your VPS is on an actual server. So if you're saying P2P (and self-hosting) has flaws well

none of you are wrong but you've all missed his point

What point, that you can't get a file if the only peer in the world that has it turns off his PC? Well no shit captain obvious, but that isn't a problem with the p2p infrastructure, that's a problem of people not giving enough of a shit to seed that file. Maybe Filecoin is a better solution to that, but then again who knows what will happen in the future.

jesus christ OP I just wanna download Initial D and DBZ and you're linking to CP.

you talk too much man

can anyone explain how i can use p2p in a website. i have an idea of how i want to use it for a chan and other ways but i dont understand how i would go about it.

are you asking for an entry level explanation of how it works?

no, how would you apply p2p to a traditional website.

also what i dont like about ipfs is that it's almost impossible to have any privacy or remove content.

decentralized hosting, I'm not sure I understand the question?

im talking about the code, how would you apply p2p.

that depends on the type of site user, come on

well, let me make a mockup.

I want to make a chan where users can create their own boards on the host site, and every thread is hosted by the users though p2p, the more users the more relevance and faster the thread loads for everyone in the thread. after a certain number of posts the thread will be removed and flushed out of everyone's computer.
the features would be unlimited file/video sizes and reasonably lengthy text limits.

...

that's almost exactly the same as IPFS chan

sorry for asking to be spoonfed, but do you mind showing me a bug about this? I can't find it.
Based on what I can tell, you can't just arbitrarily change 0chan to post whatever file you want.
This isn't related to ipfs, so I'll sage

user's just havin' a giggle, go ahead and click it.

Unoptimized, probably the DHT.

They're working on other stuff right now, they got a lot of money from filecoin ICO so we should be seeing some progress pretty soon. There's also a C implementation in the works.

IPFS can run on any transport (hyperboria), which can run over any link (ronja)

You cache it automatically when you download it.

Tor integration is in the works for privacy

You're mostly just describing smugboard , it's very similar to what you're proposing.

It's not a bug. It's how it is designed. The poster can arbitrarily change the content of what they have posted. This includes changing the media type, and is simply a matter of editing the content that is stored locally. When someone requests the file, because you are the poster of the file, your doctored copy is distributed because the content you "upload" (e.g. text posts, or actual document attachments, etc.) are handled in this way.

You can even see the instructions on "how to modify a zeronet site" here:
github.com/HelloZeroNet/ZeroNet
as comments you post to a site are not handled in any special way compared to anything else. It's also why you need an ID to post anything and why your ID can be used to track anything you say across all sites by a simple grep: to enable modifying the content (which is not differentiable from a site, up to a point) by signing a more recent copy of the content.

wew, its worse than I thought

Why aren't files hashed? IPFS gets this right, why is there no network-level guarantee that files haven't been altered?

but Tor works easily on a P3, why is IPFS special?

IPFS is still in alpha (not optimized yet) and has the overhead of a complete p2p system (routing). Tor is much simpler to implement since not every peer is a contributing node: a large number of peers connect to a limited number of fast nodes. In IPFS every peer is also a node. This is the difference between Tor's decentralized network approach and IPFS's distributed network approach.

IPFS uses a distributed naming system (IPNS) to point to the latest version, as well as static pointers (ipfs addresses) to point to specific files. This enables tracking the latest version (i.e. the ability to update content) while still guaranteeing there was no tampering by anyone other than the controller. ZeroNet doesn't seem to care about such guarantees at all; all it cares about is the ability to update the content. Similarly, the ZeroNet folks don't give a shit about security (for the longest time, and that might still be the case, they had been running very old versions of various libs, including crypto libs, with unaddressed CVEs, for example). You can say "their threat model is different", but at this point they disregard secops 101.

Release candidates are out for go-ipfs 0.4.11. If you want to try them out, check out the download page: dist.ipfs.io/go-ipfs

If you have trouble with IPFS using way too much bandwidth (especially during add), memory leaks, or running out of file descriptors, you may want to make the jump as soon as possible. This version includes prototypes for a lot of new features designed to improve performance all around.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

So if I use something like IFPS to host a website, does that mean that I don't have do fuck with things like domain name registration?

Technically yes, but in practice you will still need a way to register a user-friendly name because people can't recall localhost:8080/ipns/LEuori324n2klAJFieow. But there's a way to add a friendly name in the ipns system (google around, I don't recall the correct method), which allows people to use localhost:8080/ipns/your.address.name instead, so that's an option. Other than that, all kinds of systems can leverage the likes of namecoin if you're so inclined.

Eh, I actually prefer the hash method. Keeps things a little more comfy.

That's really fucking cool though.

decentralized.blog/ten-terrible-attempts-to-make-ipfs-human-friendly.html
Here's a list of all the DNS alternatives that the IPFS team could use; my guess is that they will use Filecoin, considering that it belongs to them.

The method is to register, through any normal DNS provider, a TXT record with content dnslink="/ipns/" and it will work. So it's actually relying on the external DNS system.
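For reference, a sketch of what such a record looks like (example.com and the peer ID here are placeholders, not a real setup):
example.com.  IN  TXT  "dnslink=/ipns/QmYourPeerID"
Once that resolves, localhost:8080/ipns/example.com (or ipfs.io/ipns/example.com) should serve whatever the record points at.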

I thought filecoin was just an incentive to store other people's files?

Can I limit the amount of space that IPFS uses or if I download and start running it will it just fill up my hard drive indefinitely?

It is. I think they're going to recommend using ethereum domains as IPFS has plans to be deeply integrated with it.


IPFS doesn't download random things to your computer. It caches everything you view but by default it's capped at 10GB.

By default IPFS does not fetch anything on its own; it will only retain the data you fetched by browsing or added manually.

If you want, you can run the daemon like this: `ipfs daemon --enable-gc`. It will read two values from your config: one is a timer and the other is a storage cap. By default I think they're 1 hour and 10GB, meaning a garbage collection routine runs either when you hit 10GB of garbage or when an hour has passed. What it considers garbage is anything that's not "pinned"; if you don't want something to be treated like garbage, you pin it.

Someone made an issue recently that I agree with: there should be an option for a minimum amount of data to keep. Right now garbage collection deletes ALL garbage, but it would be nice if you could set it to keep x GB worth of non-pinned content at any one time.

github.com/ipfs/go-ipfs/issues/3092
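For reference, the two values mentioned above live in the config and can be changed like this (the numbers are illustrative, not recommendations):
ipfs config Datastore.StorageMax 20GB
ipfs config Datastore.GCPeriod 2h
ipfs daemon --enable-gc
You can also trigger a collection manually with `ipfs repo gc`.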

QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5

All 13 of the current "Manga Guides" series, in various formats.

My mixtape.
Good music with a good video to go with it
Holy Nonsense
Also why does
32.00 MB / 54.44 MB 58.79% ERROR commands/h: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files client.go:247
Error: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files
keep happening? Each of these I had to try adding several times.

Have you upgraded to 0.4.11 yet?

I have a question about implementation

Each file is divided into chunks, which are then hashed. These hashed chunks form the leaves of the merkle tree, which have parents that are identified by HASH( HASH( left-child ) + HASH( right-child )). This continues until we reach the root node, the merkle root, whose hash uniquely identifies the file.

To give someone else the file, from computer S to computer T, S gives T the list of leaves and the merkle root. As I understand it, this is basically what a BitTorrent magnet link does as well (along with tracker and other metadata). We know the leaves actually compose the merkle root by simply building the tree from its leaves and verifying the new merkle root is the same as the provided one.

Computer T then asks around if anyone else has the content of the leaves (by querying for the leaf hash), and verifies the content by hashing it upon download completion. Once it has everything (and verifies), it simply compiles the parts into the original file.

Assuming there is nothing wrong with my understanding above, I have a few questions:

How do we know the merkle root actually identifies the file we meant to get? I.e. if someone hits an IPNS endpoint and an attacker intercepts and returns a malicious merkle root + leaves, now what? Is there anything to do about this, or is this just a case of "don't trust sites you don't know"?

When computer T starts requesting leaf content, is it requesting by querying on the hash of a leaf, or on the merkle root? BitTorrent only requests parts from users that have the full file, which follows from the latter. If you request by the leaf hash instead, I'm imagining that the less-unique parts (like say a chunk of the file composed entirely of NULL bytes) could come from ANY file source, regardless of whether that user actually has the file you're looking for.

And extending that, with some infinite number of files stored globally, it would be possible to download files with a leaf-list that NO ONE actually has; each leaf being found in some other file; composed in some particular fashion to create the requested file.

Can you use IPFS in combination with a tor bridge with obfs encrypting files and still transfer files? If this worked would the person receiving the data still see your public IP?

So you mean: how is the data verified to be correct once the client receives it? inb4 it isn't verified

On the leaf. Each 256k block has its own DHT entry (hence why it's known to be so chatty). This also means that if you have a file and change one byte in it then most of the file will be deduped by IPFS if you readd it.
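If you want to see the block structure described above on a real file, a quick sketch (the filename is a placeholder and the hashes stand in for whatever your daemon prints):
ipfs add somefile.bin
ipfs object links <root hash>
ipfs block stat <one of the child hashes>
`object links` lists the child chunks under the root, and `block stat` shows that each ~256k chunk is its own block with its own hash; re-add the file after flipping one byte and all the unchanged chunks keep their hashes, which is the dedup mentioned above.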


My understanding is that the IPNS entries are signed by your public key, so that's not an issue. There is a problem where a malicious node could return an old entry, but that's the reason each entry is stored in a fuck-ton of DHT nodes. Which is also the reason it takes so long to resolve names, it doesn't just take the first resolution it can.

So what I suggested then, that a file no one has could be generated by the network given a list of leaves, by retrieving them from other files, would hold then? I suppose though that's not anything special, except that the granularity of chunks is bigger than say, 1 bit. But to confirm my understanding, is this true?

With bitswap, it DOES download random things, kinda. You swap fragments among peers from random content.

It's hash-based addressing: if the chunks that make up the file exist in other files, they are exactly as valid in the requested file as they are in the other file. That is, you request the hash and provenance is a meaningless concept: you can think of it as two completely different kinds of data (the actual chunks, and the file descriptors, which are merkle graphs).

IPNS is still handled via DNS, so short of someone pwning the authoritative nameserver for a domain, you're looking at a hijacked local resolver, which you can defeat via VPN.

I am requesting that people please prefix their hashes with "/ipfs/" when posting, so that the browser addon detects them and anchors them; this way people who have it can just click on them.

Like this
QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH
->
/ipfs/QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH

updated my porn folder again
/ipns/QmVm4jMdZnewAAU3QPoUBJ6jpjjicRWsfcjfD7c47rf1KC/latest.html

direct link since ipns is buggy
/ipfs/zDMZof1m2wGAGywacnVpmTXZ76tW4EWixSdVz1rkkNGLj3d5vAuh/

alright my first try at this, it's Lovecrafts "beyond the wall of sleep"
/ipfs/QmShe7riU5RVJ7iGkr7ebMqsgUjMk5SfiqeRMeB1Hnu6gX

Holla Forums shoop incoming

Nobody is interested in the contents of your spank folder, you degenerate.

but someone might be

And here the necronomicon
/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6
could someone tell me if it works alright?

You can check yourself by accessing a file through the gateway, i.e. ipfs.io/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6
If you can see it there, everyone can find it.

How do you mean?

ipns forgets what it's linked to after 12 hours and is extremely slow

podricing.pw/posts/1191416

Right near the top of the page linked by OP is the following:

gathered links. mostly from IPFS generals on 8ch/tech/.

* user's qt ipfs page (this site)
/ipns/Qmeg1Hqu2Dxf35TxDg18b7StQTMwjCqhWigm8ANgm8wA3p

* the best cp archive i've ever seen
/ipfs/QmY7KEmJKpx7bNDQ2WfDJp2zdsvX1ATZKWd4AXAhDLCaBM


I ain't clicking it, regardless of what it may hold. Probably cartoon ponies teaching how to circuit-probe.

No one can *tell* you that magick works, user. You have to say the incantation yourself. If you see results that seem magical to *you* then the magick has worked for *your* belief. Magick is not science.

Disclaimer: I think this is all right but I'm not sure since things are changing rapidly, correct me if I'm wrong

Nodes don't forget; the record expires. By default your daemon is supposed to re-publish anything you published every 12 hours, and records expire every 24.
github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipns

Just make sure you actually publish after starting the daemon and again anytime the daemon restarts. I think this will be automated later and other peers will keep the record alive but it's not like that yet.
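To spell out the knobs mentioned above (the publish command is the important part; the two config keys are documented in the linked config doc, and the values shown here just restate the defaults):
ipfs name publish /ipfs/<hash>
ipfs config Ipns.RepublishPeriod 12h
ipfs config Ipns.RecordLifetime 24h
Re-run the publish after every daemon restart until re-publishing is handled automatically.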


It's Rick Astley's Never Gonna Give You Up


Are you still hosting? I managed to get only 25MB's.

If magic only exists based on personal perception then magic cannot affect other people in any way, since their perception differs from yours.

yeah I forgot to turn it on, try again.

IPFS v0.4.11 is out

This looks like a large update. Everybody should upgrade.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

Hmm. As long as you are compiling it yourself still......

2 weeks old my man.

Data on the attack. security.googleblog.com/2017/02/announcing-first-sha1-collision.html
For the time being, it's still not considered a realistic scenario because of the amount of effort involved, even if it's a possible one. Either way, they've added SHA256 to the BitTorrent v2 specifications.
bittorrent.org/beps/bep_0052.html

Fucking finally.

It was released the 27th. You're thinking of the release candidate.


That was either a retarded decision or a (((perfectly planned))) decision. Why would they slow down the entire network to use an algorithm that isn't even recommended anymore due to prevalent security threats? And don't give me any muh hardware acceleration crap. The only x86 CPUs that have hardware acceleration are newer Intel Atoms and AMD Zen, which account for a minuscule percentage of torrent users.

It was (((perfectly planned))) considering the (((endurance international group))) owns bittorrent now.

That's exactly the excuses I was reading in the Issue thread on the subject. It sounded stupid to me too.

BitTorrent is an open standard. It's up to the community to implement. If the community got together and decided to implement a modified protocol, the recommended specs would necessarily have to change to reflect this if enough people were about it. That or we could fork and call it ButtTorrent, since we're being stubborn butts about it and everyone just calls it torrenting anyway.

Would IPFS be a great choice for those who're looking for a way to have an AI back itself up to prevent another Tay?

Those titles were a joke, nigger

Do you think it will work in the outer space?

No.

how would you prevent another tay?

I don't think there's a solution to that other than do it yourself. If there's source code, you can rebuild it; if there isn't, tough luck. I don't see what IPFS has to do with this, other than maybe helping to host it or its databases.

To whoever posted QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5 and QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6: these stop part-way. Please continue to seed them.

Not the original poster, but I have a copy of those. I'm currently migrating from flatfs to badger, though; it's going to take a few hours because I have hundreds of GBs of shit in here and slow-ass hard drives. My node should come online in 20 hours or so since I just started.


What data should we send to/from space? I'm willing to send puzzle games and cute girls if they send me pictures of the moon and videos of them doing things in low gravity.

if err != nil { return nil, err}

if err != nil { t.Fatal(err)}

if err != nil { return nil, err}

if err != nil { t.Fatal(err)}

if err != nil { return nil, nil, err }

if err != nil { return nil, nil, err }

if err != nil { return nil, nil, err }

if err != nil { return nil, nil, err}

if err != nil { return nil, nil, err}

if err != nil { return nil, err }

if err != nil { return nil, err }

if err != nil { return nil, err }

if err != nil { return nil, err }

if err != nil { return nil, err}

Okay I'm up now, badger seems fast as heck compared to flatfs.

I made a mistake earlier, I only have 25% of /ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6

I knew I had 100% of /ipfs/QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5 though

OP should come online so I can finish the mirror.

Are there any benchmarks on datastore filesystems yet?

The benchmarks for the underlying systems should be representative enough. There are plenty of comparisons online of LevelDB against other things; Badger itself competes with an improved version of LevelDB (RocksDB), and the devs posted a lot of information about both of them in their post, including comparison results.
blog.dgraph.io/post/badger/

There's probably no benchmark for the flatfs scheme the IPFS team made. I don't think it would make sense to bench it either, since the underlying filesystem and OS have a major impact on it; it's just flat files in directories.

Reminder that the interface for datastores is in place and currently being worked on, so you can implement the interface and use whichever datastore you want. Most of (if not all of) the filestore(no-copy adds) work was done by a third party.

I suppose badger is a no brainer considering they're making it the default next patch, as far as I can tell.

It seems objectively better than LevelDB in every way. Personally it's been much better than LevelDB and flatfs on my machine, noticeably so in terms of speed: hashes went from taking about 2 seconds to be fetched on the public gateway to instant, and this is with hundreds of gigabytes of things in my datastore.

Surprising how much overhead there is in flatfs just from storing everything as 256 KiB files. Makes me wonder how much faster git would be with an sqlite backend.

bump for interest

What are you interested in?

they meant usury

...

Many people do use this without even knowing, dumb fuck. Get out, candy ass piss ant; Inform yourself before speaking. This isn't some inconvenient technology. On the contrary, very fast and improves speed over regular internet usage. This isn't even addressing the other benefits. BTFO.

Whoa, calm down there sis.


Outlawed or not it would be hard to prevent people from using it. The development team is coming up with strategies to use it even in restrictive environments.
github.com/ipfs/go-ipfs/issues/3908

At best you'd be able to prevent some part from working easily but since the plan is to be able to swap out components, you'd still be able to utilize it by getting around blocks. Things like cjdns and message passing will make it very difficult to prevent data access.

Version 0.4.12 has a release candidate out now. Here's part of the changelog:

The final version of 0.4.12 was released yesterday. Now that it won't spam my network as much I might host some content. But the real question is: is my bandwidth better spent seeding the torrents, or seeding them on IPFS?

Do you think it's likely you'd max out with both running? I tend to have my torrent and ipfs daemons running simultaneously, if I get bittorrent traffic it's usually just a burst for an hour or a very slow trickle to someone on the other side of the world. Same with IPFS, most of the time it's idle, and I have a lot of content shared on both + other networks.

Right now I have both running + soulseek.

...

A global network is something I'm okay with my computer contributing to. It's robot communism at best. The costs and benefits of a decentralized and/or distributed system are more appealing than the faults associated with a centralized system, in my opinion.

The details of IPFS itself (while not new) seem especially nice, things like immutability, content addressing, and the inherent lack of trust associated with P2P systems which encourages better validation and security practices at a network level.

Things like pinning services (essentially CDNs), private networks, and real-time dynamic content via pubsub, allow for some opportunity for capitalists as well.

I think things like Freenet which force you to share the load are more communistic, with IPFS you're only ever sharing what you yourself want to share.

Wew, page 9. Anyway...

Charlie Manson Superstar (ogv)

/ipfs/QmdSCNSSdpS3j6vudHR87FChHHus4jsgMKGMKLP4g86tRM

WHY IS js-ipfs TAKING SO LONG?

What a quality argument. It's good to know that the best and brightest are still here.

go-ipfs 0.4.13 is out. Judging by how quick it was, you should probably download it immediately if you're on 0.4.12 already.

Because you arent contributing.

Cool intro graphic.

Friendly reminder to the thread:
ipfs add -w "Charlie Manson Superstar.ogv"
Alternatively
ipfs files mkdir /tmp
ipfs files cp /ipfs/QmdSCNSSdpS3j6vudHR87FChHHus4jsgMKGMKLP4g86tRM "/tmp/Charlie Manson Superstar.ogv"
ipfs files stat /tmp
ipfs files rm -r /tmp
/ipfs/QmcaQifUM8ixuERUXe4fXX89hCAhgKK8MGFPomSEcZgn2C/

You don't need to fetch the file to craft that hash in the latter either, and you can always get the base hash back from it if you need it bare.
ipfs ls /ipfs/QmcaQifUM8ixuERUXe4fXX89hCAhgKK8MGFPomSEcZgn2C

I wonder if this linkifies with the extension
/ipfs/QmcaQifUM8ixuERUXe4fXX89hCAhgKK8MGFPomSEcZgn2C/Charlie%20Manson%20Superstar.ogv


wew

oh hi i hrd thr ws cp?

github.com/ipfs/js-ipfs/issues/1095

Nothing very interesting there.

Animegataris. Episodes 1 to 10.
QmbNZZpQRThNurXNPhcazA2Gw52FCC3SxChiC3T6GiMB3e

Is your daemon running? I can't access this.
also prefixed for the browser addon /ipfs/QmbNZZpQRThNurXNPhcazA2Gw52FCC3SxChiC3T6GiMB3e

Saving the thread while we wait for a new version, so I might as well ask: What is the difference between pin and add? I'm still not really clear on that.

no offense guys but.. why care about this shit? I can't use it through tor or i2p so it's just the same as torrents: SEND ME A DMCA PLZ! I want 3 strikes and my net cut off!

Pin tells the GC not to reap the blocks until you unpin them.

Reminder that by default, adding something pins it. If you don't want that, use ipfs add --pin=false and whatever you just added will be reaped on the next GC; if you don't want it to be reaped, pin it with ipfs pin add *hash*.

So when are we expecting for DHT to work in js-ipfs, so we can properly run it on websites?
At the moment, running a node on a webpage only allows you to retrieve content that the default gateways seem to have stored.

Would it be possible to just continuously reannounce to keep the files cached in the gateways? It's a shitty solution but it might work as a stopgap until js-ipfs is more complete.

bump

You are the ultimate brainlet. This isn't just for pirating shit, it's also for censorship circumvention and freedum.

Where is my in-page IPFS without gateways? Once we have that it will be a new age of Internet prosperity. Smugboard can finally replace this crumbling shell of an imageboard.

Eventually. It seems js-ipfs doesn't yet have proper routing or something, so it can't connect to the DHT and/or connect to nodes it hasn't already connected to. This limits js-ipfs to the nodes on the bootstrap list.
I was thinking it'd be fine to just use go-ipfs and have people download a client (which runs the daemon in the background), but it seems like everyone these days wants to do everything from within the browser. Thus, we wait for js-ipfs to reach a usable state.

Yes, don't wanna blow our load before making it perfectly transparent for normies. They should use those filecoin shekels to hire like 20 more devs though, is this not the most important development in Internet history since social media? Where are the buzzwords, the hype? Oh right, this will kill CDNs and file hosting sites (jewtube etc) among other things.

I agree. Maybe I'd see it if I really analyzed how many people are making pull requests, but it seems like things are moving the same speed they always did. Granted the big things to tackle moving forward are executive decisions about how to implement complicated shit like bitswap, but you can pay people to help with bug fixing while you approach conceptual solutions. Filecoin can't live without IPFS and vice versa. It's very much a yin-yang thing. I'd like to see some serious muscle put into shipping out a 1.0 before Filecoin even hits open beta. God forbid they come out of the gate with critical bugs or scaling issues.

Another nice thing to see would be hiring someone to help develop community projects. Imagine a guy who knows the software back to front helping out all the little guys in that IPFS Shipyard, especially things that could work as building blocks for bigger projects.

Bumping a thread on a slow board, with 15 pages of threads, after only 4 hours between the last bump, is bad form. Please don't do that.


I didn't want to just complain about wasting posts while wasting a post myself, so I'll give my input on this.

I agree that this is extremely important, but important things aren't always exciting. IPFS enables us to reliably host data on our own, with all kinds of redundancy, distribution, censorship resistance, and a nice promise of practical permanence (as long as someone has the data, anyone else can access it using the original hash, forever; no dead links). All of these things are indeed important, but it's not exactly exciting; for some of us it's almost frustrating, since we sit here and say "it should have always been this way...". With that in mind I can understand why there isn't much noticeable "hype"; it's mostly silent experimentation and adoption, which seems fine to me. I don't think IPFS needs any kind of evangelizing, especially not right now. I think anyone that comes across it can easily come to their own conclusion on whether it's worth using or not; it will grow naturally regardless of its public image, like most good technologies. What comes to mind is something like BitTorrent: it's huge today, and not because of any kind of marketing BS, and without any promise of people making money off of it. Also, it's a long way from being finished anyway.

That all being said it's not like there aren't people writing about IPFS and getting excited, the project creator does more than enough talks at conventions, schools, etc. and in my opinion is doing a good job explaining what it is, does, and why you should care about that.
The old threads used to link this a lot
blog.neocities.org/blog/2015/09/08/its-time-for-the-distributed-web.html
Since 2015 I've seen more and more publications about it so it's not like it's not happening, it's just not mainstream yet, and I don't think it should be until it's finished and all polished up. I think they have a good balance going on right now, it's too early to hype things up when they're unfinished and changing rapidly, and what's nice is it feels like the community understands this too. Imagine other projects that get popular too early, they get talked up a lot but don't actually meet their promises yet, people hear about it, they try it, and they get disappointed.

When github.com/ipfs/js-ipfs/pull/856 is merged. Which is taking fucking forever.

This is true, it's pretty clear to me everyone seems to know what's up regarding this. Nobody wants to push this mainstream until it's something we can stand behind.

Someone want to explain to me how the ipfs p2p subcommands work?
If I'm understanding correctly, you can basically listen in for TCP/UDP connections, and connect to them by resolving an IPFS Peer-ID, instead of a domain or IP address?
Looks pretty useful; has anyone actually made use of these features?

It's experimental, so documentation is light. This thread has an easy example demonstration: github.com/ipfs/go-ipfs/issues/3994

Seems more fleshed out for js-ipfs, for reasons that are only natural.
github.com/libp2p/js-libp2p/tree/master/examples

libp2p is a set of protocols and utilities, of which IPFS utilizes most of them. "ipfs p2p" is a subcommand for binding a normal application to your PeerID via ports, so it's possible to create a centralized website on top of IPFS, with decentralized addressing.

Yeah, nevermind, it's pretty much the same thing, it looks like.
github.com/libp2p/go-libp2p/tree/master/examples
Or at least, it's a part of it.

Never realized this existed until now. It's almost exactly what I had in mind for shilling IPFS to normalfags, especially ones on Windows.
github.com/ipfs-shipyard/ipfs-desktop
Anyone try it?

I've been using this one
/ipfs/zDMZof1m1fX98cTLyC2VLe9iDQQhWgDLu5foshBSsxSWHQNuiyYV

eww

They have to do that on the main gateway to avoid getting vanned for knowingly hosting illegal content. If you want to link it to other people who don't have IPFS, you can use someone else's gateway or, better yet, use your own with your own blacklist.
ipfs.git.sexy/sketches/run_a_gateway.html
I never even looked into how to setup a blacklist for it though.

What's the content of that hash?

Found it on /ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQWaTa7ZcVLY2PDxNxG/ipfs_links.html
It's labeled as "Programming books".

I picked a random one from here that doesn't use their DMCA list.
ipfs.github.io/public-gateway-checker/

hardbin.com/ipfs/QmVktW6uo1mcqSiufH7fmExsmyC7dFx2GCYiEDmJLSatnD

Remember to use your local gateway regardless.
localhost:8080/ipfs/QmVktW6uo1mcqSiufH7fmExsmyC7dFx2GCYiEDmJLSatnD

Don't read Zed's books either.

Sipser's book isn't that bad. What are you getting at user?

I had high hopes after the first category, which is spot on.

If your school teaches Java, it's basically a "Java school", there is literally no lower circle of hell for a CS school, except teaching C# maybe.

Second category : Papadimitriou is a respected computer scientist, was this book that bad?

Third category : Sipser taught at MIT for 30 years.

Fourth category : comparing apples and oranges, I very much doubt that the books on the left are used at the same level or for the same course as the books on the right.

I will do my own list, cool idea.

...

ipfs-search.com
Seems to be up again.

I jumped ship at 0.4.8 after reading the CoC, looking better now.
How can I 'seed' content? Download and 'add' the directory later? I am not seeing an obvious answer.

Any pointers in how to use it? Couldn't find ditshick translations with it.

Yeah. ipfs get will pin it temporarily to your datastore, but doing an ipfs add or ipfs pin after will hold it indefinitely. I'd recommend moving it to a permanent location on your drive and using add so you can use the --nocopy flag to save disk space.
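If you go the --nocopy route mentioned above, note that it currently sits behind the experimental filestore flag; roughly (the path is a placeholder):
ipfs config --json Experimental.FilestoreEnabled true
ipfs add --nocopy -r /path/to/permanent/location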

I don't know how long IPFS Search has been up, so they might show up after another update or reannounce.

Good schools use the K&R, ANSI version.

...

Any good ways of covering this?

I think if you poke around .ipfs/config you can choose how you connect but I'm not really sure. However, you shouldn't be running IPFS through TOR in the first place.

Stuff like this will probably confuse people, `get` doesn't pin anything, it just caches the content in the datastore. Pinning content just takes cached content and flags it so that the garbage collector does not reap it.

Another option is mounting ipfs and using the mfs if you want to reorganize the layout.


github.com/ipfs/go-ipfs/blob/master/docs/config.md#addresses
Make sure you're just listening on the tor interface.

Also look into this guy's fork so you can connect to Tor directly instead of proxying.
github.com/ipfs/notes/issues/37#issuecomment-247717781
If you go that route you can put just that in your listener section.

nsa honeypot

What sets this project apart from freenet? From what I've read so far, it seems very similar.

Some differences are that IPFS does not distribute content by default like Freenet, with IPFS you are only sharing data that you choose to share, nothing prevents you from joining a pool that syncs things in a similar manner to freenet but it's not implicit by default.

The design is modular and meant to be swapped out to a user's content; for example, swapping out the routing system for an existing one such as cjdns, i2p, et al., or some combination of all of them. Contrast this with Freenet, where what you get is what you get: if you don't think the encryption, routing, hashing system, etc. used by Freenet is good enough, you can't swap it out. With IPFS the end goal is to allow interoperability between all these different systems through a common interface and network bridges.

This is probably the most important aspect, because it means changes in the core project itself are possible, but also that people can utilize other systems and concepts if they like; this prevents stagnation and hard upgrade paths for the network as a whole. The project creator pitches a "thin waist" like IP: you can design a ton of hardware beneath it and a ton of different protocols on top of it. IPFS is meant to act almost like a P2P IP stack, where things like the transports can be modified without breaking things built on top of it or beside it. Their desire to bridge things is also interesting: pulling information from external sources like git over http, torrent data over DHT, and other things external to IPFS and their own bitswap is very interesting, as it allows for easy migration and interop with existing systems. Most of that is experimental right now, but seeing it working at all is interesting. The opposite direction is also useful, e.g. pushing data from IPFS over http.

Summing it all up I'd say IPFS is not trying to compete with things like Freenet as much as they are trying to bind it and others. I've seen people say IPFS intends to be a giant collection of gluecode making all these networks and concepts work together since there's no reason they can't. A very broad reach across multiple networks over multiple systems through 1 common data structure and interface.

I forgot this one.

We're watching you, non-believers~

I believe.

What happens if someone uploads cp? Is it just on everyone's computers forever?

Why can't you people read the thread? It doesn't distribute implicitly. If someone hashes CP it's only on their node unless you download it too; if you download it by accident, just run garbage collection or wait for garbage collection to run automatically if you have that set.

Indeed. And Lord Google shall pin it as the darkness, and the masses shall follow Their word.


As does all of Holla Forums, possibly even all of 8ch!

Oh, okay.
What a useless protocol, then.

You'd rather it be stuck on your node? What's the benefit?

I expected some protocol-layer encryption

There is, that doesn't have anything to do with automatic distribution across nodes. I'm not sure how you mixed those up.

/ipfs/QmYShhhD6j7vwL4SGUp2vNwHerRc7VU4Qm86ZJWvgqZn9G - ReleasetheMemo.pdf

/ipfs/QmbtL1GXPaCwUd1k8iBHynpGZUM6CwrLAuHR9LjZTtcUYB - Damore Complaint against Google, 1-8-2018.pdf

/ipfs/ QmUUA8zudreYSSoGp8aujw1d6QJAjoan9W7MATF7udjrws James_Mason_-_SIEGE_3rd_Edition.pdf

/ipfs/QmVzwWXRfeA8d7C9D3TZ9aM2P3SzxHfk3ALF2dghy5yrcn - full collection of Transmetropolitan

inclibuql666c5c4.onion.link/
Someone should IPFS the books in here.

These people need it the most (especially the Rabin fingerprinting)
>>>/pdfs/ >>>/tdt/ >>>/zundel/
github.com/ipfs/archives/issues/137
github.com/ipfs/notes/issues/1

Good luck with your faggotware.

...

Stop trying to get more people to use it right now, it's not finished yet. It's good for us because we know how to use it. Promoting it to less tech savvy users isn't going to help anyone and looks like spam.

Rabin isn't even out of experimental yet and the only browser that supports IPFS as a valid protocol is Firefox 59. Wait until it's finished before telling other people to use it.


The guy telling him off (flyingzumwalt) doesn't do anything important, he schedules the meetings and is resident bitchboi on the forums. The real project maintainers seem alright.


Why don't you do it? If you post the hash I'll pin it.

...

/ipfs/Qma5PjcXXSf3Y46a5UtkrhC8j8npJmFp6DDzyPWXuzsoEK

github.com/ipfs/archives/issues/87 # doing a "sprint"
github.com/ipfs/archives/issues/136 # test result
github.com/ipfs/archives/issues/137 # more results
github.com/ipfs/archives/issues/142 # wanting people to refute him
So rabin fingerprinting is worse than Gzip, and is comparable to classic chunking.

Some schoolfags are developing a CryptoNote cryptocurrency with an IPFS node list.

stellite.cash

Are there better alternatives than rabin and whatever IPFS does by default? Maybe someone should propose them. For one time operations on big datasets, having a slow but space efficient chunker/de-dupe system seems like it would be valuable.

I noticed a lot of those tests are using the default rabin size too. IPFS allows you to specify the min, average, and max chunk sizes; I don't know how effective setting these would be versus the default.
ipfs add -s rabin-[min]-[avg]-[max]

This is a problem I always see in traditional filesystems, ones that offer de-dupe are usually very taxing on memory and a lot of them just recommend using compression methods like lz4.
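To experiment with the chunker settings mentioned above, the sizes are given in bytes as min-average-max; the values here are purely illustrative, not recommendations:
ipfs add -s rabin-16384-65536-131072 somefile.tar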


lel

When the optimal rabin chunking for txt, html, and epub is different for each format, it will be tough for IPFS to implement.
moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/

Fucking this.

WUBBA LUBBA DUB DUB I'M PICKLE RICK MR. POOPY BUTTHOLE XD PICKLE Rick!!!'n XD WUBBA LUBBA DUB DUB IM PICKLE RICK GET SHWIFTY XD HAHAHAHAHAHA GOTTA PUT THE SEED WAY UP YOUR BUTTHOLE WUBBALUBBADUBDUB XD I'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XD
I'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XDI'M PICKLE RICK!!!!!!!!!!!!!!!!!!!!! XD Sing get schwifty and polish your pickle rick if you know what I mean xD
PICKLE RICK BAZOOPER
Hi I'm 13 and I just started watching Rick and Morty and I can tell you for a fact it's my favorite show!!. Lik the one time Ricky said said there's probably like no good !!!! i was agreeing so much I'am smarter then you're average fidget spinner teen at middle school to even though I have one. I may be young but I'm smarter then every theist on earth basically the show is also really deep when they said like no one was born for a reason I was so blown away as they must have big balls to say that on tv so I told my friends on minecraft and they agree too. LOL once when my mom took me to McDonald's I asked for the Mulan dipping sauce and the dumb bitch didn't even get the reference XD One time in class i evan shouted "I'm PICKLE RIIIICK!" and Mrs.Janice told me to go outside i fucking hate that cunt school is for dumb ppl just like what Rick said, i m too smart for such imbicells. But yeah I love Rick and Morty and I'm actually smart enough to get it to.
LE PICKLE RICK xD WOW Holla Forums LOVES RICK AND MORTY NOW! FINALLY! xD GOD'S NOT REAL MORTY I'M A PICKLE I'M PICKLE RICK WUBBALUBBADUBDUB XD Turned myself into a pickle Mooortyyy BOOM IM PICKLE RICK
GET IT PICKLE BUT ITS RICK LMFAO xD LMAO 420 PASS THE WEED BROH HASHAHAAH DUDE WHY ARENT WE TALKING ABOUT PICKLE RICK XD
WE DID IT REDDIT! WUBALUBADUBDUB XD
WOW LOOKS TOTALLY LIKE RICK FROM RICK&MORTY WUBBLUBBADUBDUB IΒ΄M PICKLE LETO XD
is this a le new epic meme? screen kapped for dat sweet karma xD. FUS ROH DAH!!!!!1 i used to be a christmas but then i took an arrow 2 da knee :^(
BAZINGA BAZINGA ZIMBABWE. top kek, toppest of keks. le rick from pawn stars? hahahaha le mayonaise
fucking epic ass meme

fukken saved

cat > Documents/pasta/pickleshit.txt

What was it?


Come back online you fricken frick. I only got 78% and want to mirror this for you.

An autistic pickle rick post.

...

Fantastic.

controlled opposition
report and ignore

What?

Pink Flamingos (1972) - /ipfs/QmZUr7Si5aLU4ua6tmdHhpdXKWdvwAndccMtKWpd3HGSzn

JS-IPFS v0.28
github.com/ipfs/js-ipfs/issues/1228
Soon

What are some cool projects based on it, or something I can use it for?

just downloaded it and looked at their docs and tried their demo stuff, I'm up and running.

/ipfs/QmdNRFVKhieT8FpkxkppXMct7y9ZpLjZJpDKeZFjoeWnoz - Rules_of_the_Internet_2.0.png

SPREAD THIS LINK
OFFICIAL Holla Forums RULES

/ipfs/QmQgfZp9wWSq1QdyxBeKF2YuH8cGpQ5vy6B6mib23dLQ37 bsd_coc_discussion.mbox

I think I'm the only one contributing to this thread now. Since I'm doing hacky shit atm because internet got shut off, I am unsure if it's uploaded or not. Enjoy it tho.

/ipfs/QmRCbRdih2hr44uzrE6aViVEAncAQ9hD61wiUSwoKhWspU

Fucking hell, how do i do this?
I want to change the IPFS directory from C drive to D since my c drive only has 5gb.

this shit will never be mainstream.

Attached: 6453.png (608x532, 16.81K)

Obligatory

Buncha Touhou OVA's from an old IPFS thread. I should be able to seed for quite a while.
/ipfs/QmSsepMbw1ASbcysMjFHwvSND5PMhThf9UEbYsUoBneAAn

sorry but i dont use meme os's.

What's the problem? Make a new environment variable called 'IPFS_PATH' and set the value to 'D:\whatever\you\want'

because ipfs claims the environment variable is not defined.

Then don't use meme software, you faggot. Go back to /g/.

git gud
You don't even have it set in your screenshot so I don't know what you're trying to show off there.

Click New... for either user or system and add it.
Name:IPFS_PATH
Value:D:\mygayshit

Make sure to launch a new shell and check with echo %IPFS_PATH%, or just set it from the shell you're in with SET (temporary) or SETX (permanent). Windows is picky as shit when it comes to sourcing environment variables.

Attached: works for me.png (859x868 440.53 KB, 13.48K)
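Putting the steps above together in cmd (the path is just the example from the post above):
setx IPFS_PATH "D:\mygayshit"
REM open a new console so the new value is picked up, then:
echo %IPFS_PATH%
ipfs init
If you already have a repo on C:, move that folder to the new location first instead of running init.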

How do I know if my uploads are being peered? No one is replying to my posts so I don't know.

>ipfs dht findprovs ... - Find peers in the DHT that can provide a specific value, given a key.
This will spit out peerid's that are online and have/provide the full hash.

Is there a way to list all peering files in this manner?

I'd probably loop over the output of
ipfs pin ls --type=recursive

I'm not sure what your end goal is, though.

My end goal would be to make a tree view of all the files I've uploaded, with the peers seeding each file.

Dresden - The Call of the Blood (1996) (FLAC format, metadata double-checked)

/ipfs/QmQqF6FJDp9F57aL57w8ArTNuvKNzCm43d9MDBWWo7iG3X

I think the thread is more for talking about the protocol and implementations rather than sharing files like Holla Forums's "share threads". Otherwise I'd be posting content. For now I'm waiting to see the go-ipfs 0.4.14 release. It should have a lot of performance improvements from the developers but also with the help of Go 1.10 having its own performance improvements.

Holla Forums should have a share thread of their own, see how Holla Forums does it >>>Holla Forums14459689
Just post tech related content through any means, BT, IPFS, DDL, etc.
It would be good for trading data but also talking about protocols and shit. The retroshare threads could be merged into it.


More lurkers than posters on Holla Forums. I try to mirror hashes that are posted in these threads, sometimes people post neat things. Sometimes people post a hash and go offline forever though which is annoying.

this fucking this

It would be great to see how many peers are sharing the same files, like with torrents.

That's what the findprovs command basically is. You could recurse into a hash's content and run it for each child hash to get a finer grain too: how many peers have this individual file vs. how many have the whole directory/node root.


I would start with those API endpoints, use pin to find out what you have and findprovs to find out who else has them, count the results and display them in some tree structure.

ls and `object links` would help traversing directory hashes.


I'd like to see bandwidth statistics in the same way, current bandwidth usage on hash X, total bandwidth used on hash X. It might be possible to write an interface for that now but I haven't looked into it. If I wait long enough someone else will do it for me.

Attached: Untitled.png (1442x812, 59.21K)
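A rough shell sketch of that loop (assumes a running daemon; findprovs can take a while per hash, and the count may include your own node):
for h in $(ipfs pin ls --type=recursive -q); do
  echo "$h: $(ipfs dht findprovs "$h" | wc -l) providers"
done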

My waifu songs...enjoy
/ipfs/QmadHXY6TXeSasoc6DZyBruEtV6gfF4F1oSRbi6VTVdfdw

/ipfs/QmcrnR5d8UXLxYE7uYxkT4pNwwChZbYietbxG5bvkvr1xB
/ipfs/QmTWXWo3B3TxvVAt2M5N67JqMhr94Qa7kQEQV5ZDvmBMvz

has anyone tried making a booru on IPFS? i know about hydrus network, but i want something you can access from the browser

I meant how many peers right now are downloading from me

kek

A true "hacker" movie
/ipfs/QmfCurXyGFq2nyVDPdMa9qYEgTcRxCDiWv5wotM7b8vftB

Wario World NTSC-U
/ipfs/Qmbki3KYmZ31rvQ8fPnNLXD5KnKS67ckgEi6wgPyhp8eoU

hey reddit, did your girlfriend peg you in the ass again?

Also curious about this.

>>>/hydrus/7858

IPFS 0.4.14 is out

Main improvements are lower resource usage and pubsub IPNS
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

Man, I seriously can't wait for this to take off. This is possibly the best we'll get out of the current decentralization/blockchain memes.

never ever

ipfs will win

Attached: 817.jpg (800x606, 53.28K)

"never ever" as in we are never going to escaoe the blockchain meme

/ipns/QmPZmiFdf7PTHqrHYXQPSHY5PdgQC62L7JPw4GcrmGYJ8e
(currently /ipfs/QmfALsokxXrTCgBAixMXDuxgVexy6ySP823TnrJ4kPmDeY)
Comprehensive Libbie fanart archive.

Oh thank fuck. Now if only it would stop killing my shitty new router.

That IPNS stuff is big, taking it from 2-10 seconds down to instant is huge for dynamic content, and it's pushed not polled client side, perfect.
I guess this is more just an application of pubsub which is also good to see that it actually works as it's supposed to.

Static content seems more or less dealt with, there's only room for enhancements.
Connectivity is always being improved.
And now dynamic content seems to have the focus.
Having all these components working gives people a lot of flexibility to make whatever they want. I think the only thing holding people back at the moment is API documentation, and they seem to have some people focusing on that.
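If you want to try the pubsub-backed IPNS described above, the opt-in flags (as of 0.4.14; both the publishing and resolving nodes need to run with them) look like this:
ipfs daemon --enable-pubsub-experiment --enable-namesys-pubsub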

Oh I'm excited.


Make sure to check around the issue tracker; I remember reading that disabling mDNS can help with bugged routers. You might lose LAN discovery that way, though, which shouldn't be a big deal; if it were, you'd probably have a better router.


user NO

Attached: pervert - Copy.jpg (827x720, 122.15K)

Anyone else have trouble using ipfs pin add? I want to pin some rather large folders without having to download them first, and pin them from my drive.