IPFS thread

last thread is four months old
Updates
Complete changelog: github.com/ipfs/go-ipfs/blob/7ea34c6c6ed18e886f869a1fbe725a848d13695c/CHANGELOG.md
""0.4.10""
ipfs pin verify checks the integrity of pinned object graphs in the repo
ipfs pin update efficiently updates the object graph
ipfs shutdown shuts down the ipfs daemon
ipfs add --hash allows you to specify a hashing algorithm, including blake2b-256
ipfs p2p (experimental) allows you to open arbitrary streams to ipfs peers using libp2p
""0.4.9""
ipfs add can now use CIDs (content identifiers) instead of plain multihashes, which allows the type of content to be specified in the hash. This could allow ipfs to natively address any content from a merkle-tree based system, such as git, bitcoin, zcash, and ethereum

tl;dr for Beginners
How it Works
When you add a file, the files are cryptographically hashed and a merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file then both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require seeders to be in the exact same torrent.
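To make that concrete, a minimal sketch of the flow with the go-ipfs CLI (the file name and hash are placeholders, not real content):

ipfs add cat.webm                      # prints: added QmYourHashHere cat.webm
ipfs cat QmYourHashHere > cat.webm     # any other node can now fetch the same blocks by hash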
FAQ
It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with TOR and I2P. Check out libp2p if you're curious.
Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.
You be the judge.
It has implementations in Go (meant for desktop integration) and Javascript (meant for browser/server integration) in active development that are functional right now, it has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc) into separate projects that allow for drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.
Websites of interest
gateway.ipfs.io/ipfs/
Official IPFS HTTP gateway. Slap this in front of a hash and it will download a file from the network. Be warned that this gateway is slower than using the client and accepts DMCAs.
ipfs-search.com/
Search IPFS files. Automatically scrapes metadata from DHT.
glop.me/
Pomf clone that utilizes IPFS. Currently 10MB limit.
Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.
ipfs.pics/ (dead)
Image host that utilizes IPFS.

Other urls found in this thread:

littlenode.net/directory.html
blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html
github.com/ipfs/go-ipfs/issues?utf8=✓&q=is:issue is:open label:perf
github.com/ipfs/go-ipfs/issues/4048
github.com/ipfs/go-ipfs/issues/3060
docs.google.com/forms/d/e/1FAIpQLSfdFpWhJj8OIGA2iXrT3bnLgVK9bgR_1iLMPdAcXLxr_1d-pw/viewform?c=0&w=1
github.com/ipfs-search/ipfs-search
github.com/DistributedMemetics/DM/issues/2
github.com/ipfs/webui/pull/229
gateway.ipfs.io/ipfs/
glop.me/
github.com/ipfs/go-ipfs/milestone/33
https://pastebin.com/Xxz9qMkv
ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQW
hydrusnetwork.github.io/hydrus/help/ipfs.html
ipfs.io/docs/getting-started/
mgtracker.org:6969/announce&tr=http://opentrackr.org:1337/announce&tr=udp://tracker.coppersurfer.tk:6969/announce&tr=udp://tracker.leechers-paradise.org:6969/announce&tr=udp://tracker.pirateparty.gr:6969/announce
dist.ipfs.io/#go-ipfs
github.com/ipfs/community/blob/master/code-of-conduct.md#copyright-violations
archive.fo/zlIyn#selection-7793.0-7793.9
github.com/ipfs/faq/issues/64#issuecomment-149789308
archive.fo/2HUKR)
ipfs.io/ipns/QmNewHashForIPNS
github.com/cakenggt/ipfs-foundation-frontend
gitlab.com/siderus/Lumpy
github.com/ipfs/examples/tree/master/examples/init#systemd
github.com/ipfs/go-ipfs/blob/master/importer/chunk/parse.go
github.com/ipfs/go-ipfs/blob/master/importer/chunk/splitting.go
ipfs.io/blog/25-pubsub/
github.com/orbitdb/orbit-db
rbt.asia/g/thread/60020717
ipfs.io/blog/30-js-ipfs-crdts.md
github.com/pfrazee/crdt_notes
en.wikipedia.org/wiki/International_Image_Interoperability_Framework
github.com/ipfs/notes/issues/240
ipfs.borg.moe/ipfs/QmVbkjWGnrTXQxtvtfnQLy7CbHVvKPKhu9V2oRWspiFk31
github.com/ipfs/notes/issues/14
github.com/ipfs/ipfs-companion
discuss.ipfs.io/t/roadmap-for-production-readiness-of-various-features/1023
github.com/ipfs/go-ipfs/pull/40473
github.com/victorbjelkholm/ipfscrape

it is a meme

Is there a way to check what hashes I'm currently uploading? It's maxing out my disk IO, is there a way to see what files I have that are so interesting?
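A few commands that might help answer this (all exist in go-ipfs, though none gives a per-file "now uploading" view as far as I know):

ipfs stats bw                   # total bandwidth in/out for your node
ipfs bitswap stat               # block exchange stats and current partners
ipfs log tail                   # live event log while the daemon serves blocks
ipfs pin ls --type=recursive    # everything you're intentionally hosting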

Hello, newfriend.
:^)

You should've removed the ipfs.pics link, as it's not coming back. I do remember seeing a hash for it though.

didn't mean to sage

Reposting from the last thread
I decided to start a directory for IPFS hashes.
currently it just contains hashes reachable from the last thread. I'm looking for submissions and I'm also going to search the internet for content.
Only spent a short time on it so far, but here is the website as it stands now:
littlenode.net/directory.html

who the fuck writes the date like that? it goes from smallest to biggest. out of 12 months, out of 31 days, ____ out of 1000s of years

commies

>month < day < year

Probably just NSA seeing what you got

Do it in ipfs
Is the code for generating the totoro index available?

hope they enjoy my anime collection then

Who doesn't? Both date and time are ordered by their relevance to everyday life. You start with the most relevant unit, then optionally add others.

Acceptable answers to "what's the time?"

Acceptable answers to "what's the date?"

Big-endian number systems are an arab mistake both in literature and computing. It should be written and spoken 81-04-7102 in the same order we write text.

Good luck sorting on that.
You incrementally add more and more precision.
2017 - 365 possible days
2017 07 - 31 possible days
2017 07 25 - 24 possible hours
2017 07 25 17 - 60 possible minutes
2017 07 25 17 09 - 60 possible seconds
Compare this to only knowing "it's the 25th of some month". Completely useless

t. intel shill

Archive of the previous thread:
/ipfs/Qmd6xGf3AugWkQYnPuik5JQM7TxAETbsSv2q7t9trMLmkc/8ch.net/tech/res/732727.html
It's 16MB, pin for posterity.


I'm almost certain totoro was just a static webpage, there was no index on the back-end. So you can just call "ipfs get" on it to see the .html file yourself.

Note that you have to pin the parent directory (Qm...mkc) in order to preserve images, css, js, etc.
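For reference, pinning the parent directory is just this (hash taken from the post above; pin add is recursive by default, so it keeps the whole tree):

ipfs pin add Qmd6xGf3AugWkQYnPuik5JQM7TxAETbsSv2q7t9trMLmkc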

QoS aware netcode when

Any reason why you didn't include an image?
Looks kinda shit to me, so here's one for the next thread.

Also while I'm at it, I also downloaded another svg icon and reexported it to png

...

...

Remove the dithering or at least reduce it/use webm.

It wasn't square either, here's a better one as a webm.
The gifbuilder I use was adding that dithering, and I'm too lazy to manually do it with ffmpeg.

Now we have an OP image to use 6 months from now.

IPFS needs a killer app. What will it be? My guess: pubsub video streaming/rebroadcasting, which would kill off a lot of trash (release groups etc)

I lied, here's one that's better.
Found an awesome tutorial here: blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html

CP

what if we make a new thread now before it's too late?
this one's only 2 days old.
Plus, the first post is a tripfag trolling

Fucking finally I thought I was the only one.

I'm a fan of ISO 8601 when it comes to documentation. It's ordered by largest increment of time to smallest.
2017-07-27

It working

What makes so many ipfs functions slow?
I'm connected to ~420 peers.
Resolving a public key takes about a minute.
Listing the contents of a hash can take several minutes just to start.

What technological improvements do they need to take to fix this?

classic scene from interstellar
QmXaNWSxZVCHtig4S2wwT6yBSLoo6KANXmT3RSeDgA6vFP

It's probably leveldb's fault somehow. They also have done almost no optimization yet since nothing is finished/set in stone. Accessing stuff locally is slow probably because of the database and underlying filesystem; the datastore used to be worse, with misaligned blocks and less than optimal sharding. What's sent over the wire is a lot of debug and meta info that may not be required, and running diagnostic commands like `diag` wastes effort across the whole network. The queue system is dumb right now instead of having some kind of system to not waste effort and pick the optimal route(s). There's plenty that needs to be done for speed improvements and those will come, but I think their main focus right now is making it work reliably, then making it secure, with fast coming last.
They're tagging performance issues too, so it's easy to see over time what's hogging and what's been fixed.
github.com/ipfs/go-ipfs/issues?utf8=✓&q=is:issue is:open label:perf

I see, I wasn't aware that it could be lack of optimization in leveldb.
Is it worth contributing to the repo? I'm willing to do boring shit like testing, but reading through the hundreds of Issues, it seems that only a couple guys have got a handle on everything. It's hard to tell where to start, and I'd rather not be another new guy getting in the way.

That's a highly personal question, user. I don't think I can answer it effectively. If you like the project or just want experience, it might be worth it to you to learn how to contribute, that's up to you though.

The project right now is very young so they don't have a lot of the groundwork to onboard new contributors, but it's one of the things they intend to fix once things stabilize a little bit. It's hard to write documentation for something that's in active flux.

These issues might help a bit, there's more big threads there with good information but I don't remember them right now.
github.com/ipfs/go-ipfs/issues/4048
github.com/ipfs/go-ipfs/issues/3060

For me, I heard about the project years ago on /g/ and it seemed like the exact thing I wanted to use, so I started to experiment with it and follow the development on github. There are good fragments of documentation, it's just a matter of them unifying it. You can also lurk the irc or ask questions there to learn more about it. I only know as much as I need to know to do what I need with it.

I guess, decide if you have the time or motivation to contribute, then try reading around the project and see if you can get an understanding; if not, ask the people who do know about it.

Should try to collect the documentation in one repo and translate it
Well, standardized terminology would still be useful I guess.

Thanks. I'm really interested in contributing.
IRC seems to be mostly inactive, so I may have to contact someone directly. Going through the "easy" or "help wanted" issues, even those are mostly stuff that only the developers would know how to tackle.

Look at discuss.ipfs.io

I feel like that's true with most software channels, they're not a chit-chat room they're a getting stuff done room. If you want it to be active you'll have to ask a question in there.

I feel like (and I think most people agree) that it's better to just ask sometimes, the developers know the answer to your questions and could tell you real quick "read this specific file" instantly instead of you wasting time scouring yourself. It all depends on your social abilities at that point I guess or your typical methodology for figuring out a project. It's not impossible at all to understand it by yourself but if you can get someone to help you then I think you should take advantage of that opportunity, the faster you learn the faster you can start helping on your own so they have some incentive.


While I like Discourse (the software), I don't care much for that goofball project manager changing shit up all the time. I guess he is making improvements and now is the time, but all these changes to where information goes are probably why people can't get a handle on shit now. Questions used to go into a questions issue repo, and that got migrated; there was a "notes" repo for markdown documents and misc notes in the issues, which was weird. It's all different now though, thankfully. Maybe he's alright, hard to judge when he's responsible for making a mess but then cleans it up too.

Ok, I've never got involved in an open source community in this way, so I'll reach out.

It's not always easy but I wish you good luck.

Looks like there's some life in the Filecoin world. They released a new white paper, updated their website, and started accepting signups for beta testing. They are also opening a pre-sale but it's only available to (((accredited investors))) so you can forget about that unless you're filthy rich.

Beta sign ups:
docs.google.com/forms/d/e/1FAIpQLSfdFpWhJj8OIGA2iXrT3bnLgVK9bgR_1iLMPdAcXLxr_1d-pw/viewform?c=0&w=1

Filecoin should be decoupled from IPFS.

found some hentai
/ipfs/QmchsYMr8Xc3y52AbSa5QAqJSWqwKTpoeZ7JpBSTw5xMCR

and a whole fuckload of Presidential Intelligence briefings from the 60s
/ipfs/Qme6epvZDj3vzHcFKdF1nZhbixjw8Bn4imGcKnbUyBJL89

it's pretty neat poking around the wilderness of unindexed shit looking at what people have been uploading ... I found like 100s of photos of some dude's VR basement a while back but lost the hash ...

Sia > Filecoin

Battle Angel Alita
/ipfs/zDMZof1kuwqroHi5myLv4uu36ic3E73VTFzDU93Xg9C69SmZmZRD/


I wanted to sign up but you have to have a net worth of at least 1 million dollars. I might buy some when it hits the markets. I bought a few hard drives to mine it too. I have a feeling they're going to be worth much more than what they're going to be initially valued at, seeing that BigchainDB is advocating it.

oof

So do you just smack the keyboard until it produces a valid hash or is there a proper way to do this?

Well actually a fair bit of stuff gets crawled and indexed by regular search engines, since they see everything going across the gateways. So you can just take a peek at, e.g. "site:{ipfs.io,glop.me}/ipfs/*" and you get a lot of results. The "unindexed" stuff I refer to means finding other peoples' threads and sites and what-not on ipfs and then following all the links I find, seeing what's alive.

Over the longer term I think it'd be a fun project to try to get a local instance of ipfs-search resurrected. But I'd have to understand at least a little bit of go before I attempted that, so it's not happening any time soon.

github.com/ipfs-search/ipfs-search

Russians write D.M.YYYY.

Can't you run DHT nodes and log all hashes like BitTorrent crawlers?

Russians use the word "Me"

Everyone who uses "Me" now is a commie

So, where can I find sites to download files with it? The sites OP posted are either down or not working

As soon as the next version of IPFS-JS is released we're starting work on a decentralized index site.
github.com/DistributedMemetics/DM/issues/2

So how exactly is this different from, and better than, I2P + Tahoe-LAFS?
"Tahoe-LAFS is a Free and Open decentralized cloud storage system. It distributes your data across multiple servers. Even if some of the servers fail or are taken over by an attacker, the entire file store continues to function correctly, preserving your privacy and security."
By default this stuff is insecure.

This needs uploaders though. Why not just collect everything you can and identify it separately, for example via anidb integration?

This seems really similar to FMS in structure.

1 binary zero installation

Is there an IPFS GUI for the normies?

what the fuck will it take to make posts like this unnecessary
what can break normies out of this mindset of fear and incompetence

Nothing will. It would be nice to have one though, drag-and-drop.

Can we make this a hybrid thread for GNUNet too and perhaps Freenet or would that be unwelcome? Besides that, here's a funny webm

/ipfs/QmZqDHczRND9m3yUCW5nbVpMCtw2FTawEorkRPPC1b8EPY

Music Video, Rob Zombie Presents: The Life and Times of a Teenage Rock God

/ipfs/QmQK2MNeyCkmdKwxZYHDL5a3rf5igfbVjjfruV8JkEzVcS

Is that bittorrent bot no one used from the last thread still here?

magnet:?xt=urn:btih:8f9871ed223283e3325d831ddeb617214da094d7

Some Holla Forums related text files

/ipfs/QmXNKk2wDL2sge2gvzJM5L7NjShN66ggkJmkAjxLADGByN

/ipfs/QmYP38DMdHJyeSv9NhWYZpsyDaMN6bRshxz6WeWAmDp8uS

I found a couple more old Holla Forums threads floating around, one of them from almost two years ago
/ipfs/QmWk7kZGeE5wvCgvQn4AcZU89bwrZTHqMdvqWChiCcbPUz
/ipfs/QmNzU35KGvttKMHsUU2MeG6apjkAGsAyCJbitNN8TXLrC5


Once you have it installed you can use the web interface (localhost:5001/webui) to add files by drag-and-drop. That's about it though. If you want to do much more you'll need to use the terminal. There are plans to make it more fully-featured but I think it's pretty low on their priority list. Then there's ipfs-js for the total normies, which I think they're also planning on wrapping up into an electron app (so you can run it standalone, as opposed to in-tab).

1) They're completely different projects with different goals. IPFS is supposed to be a protocol, not a filesharing software.
2) Integration with TOR is technically possible, though not recommended for obvious reasons. Integration with I2P is planned, though it seems to be a low priority.
3) Development of libp2p may mean that the content on I2P or Tahoe-LAFS will be accessible with IPFS.

It's a shame. I understand why you wouldn't want to polish something before it's done, but a little investment into a GUI no more involved than a tray icon and right click menu could go a long way.

/ipfs/QmS1dwbeNTqADMTTtqp3VMMXEjREYqpbE3uwaiQjMsAiwC

/ipfs/QmbUdo75ySyh3STrj2JwbunHMuimn8LNCSGFx4hkwXiaR

I don't need the GUI, but it would be useful for shilling the software for normies.

It would be enough with "drop file here to upload, drop link here to download"

I agree it's important for adoption, and it sucks not to be able to involve non-techie friends. Quite often I've just wanted to send some small file directly to someone, and thought it'd be so convenient to just right-click, get hash, message it to friend and have them download it directly from me. But it really is still alpha software (albeit pretty stable). It needs to get a lot more efficient before it's even worth trying to bring it to the public-at-large. Plus no one would use it for-itself, there need to be services on there which /happen/ to use IPFS which people want to use.

Can you actually use the javascript implementation to fetch files in a browser?

Pubsub will probably be the "killer app". You'll be able to stream whatever your DVB card can pick up, just like bittorrent but faster. Also hosting of politically controversial websites.

I have a folder of .zip and .tar.xz files. Is there any way to upload this folder and unpack the archives?

/ipfs/QmNqQvP1Rbp5wpTX8Eqxb7n2g7pQWe3V2bt53wtrJa3uvN
Ghost in the Shell (1995), BDremux, JP+EN+RU

Okay, so the tar archives can use ipfs tar add, but I have to unpack the zip files manually?
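Roughly, yes. Something like this, assuming the filenames below (ipfs tar add wants a plain .tar, so decompress the .xz first; zips have to be unpacked by hand):

xz -dk stuff.tar.xz                              # keep the original, produce stuff.tar
ipfs tar add stuff.tar                           # imports the tarball's contents as an IPFS object
unzip other.zip -d other && ipfs add -r other    # zips: extract, then add the directory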

What the fuck is wrong with the web ui it comes with?

It's nicer to have a designated uploading tray icon.

It's an afterthought that hasn't been updated to keep parity with the new features. I can't, for example, add a file into the filestore instead of the blockstore. That's a dealbreaker in itself. I could go over all the other features, but that's secondary to the real issue: Normalfags don't like to use the command prompt. Their fear of CLIs is tough enough to overcome, but Windows makes that even worse. It's painful enough just to get them to run "ipfs daemon," much less teaching them how to add files from there.

All I want is a little .exe that launches a daemon and makes a little icon in your tray. From there you can right click and access all the various features, which then launch pop up windows. Like "Add File..." would launch a window where you can browse for files, select filestore/datastore, hashing method, directory wrapping, and all that. It takes a little effort to make but it's by no means an involved project, and I don't see how it could break anything below it. Mac and Ganoo+Loonix users could benefit from such a system as well.

The WebUI will come much later, and have a lot more of the features you're looking for.
Right now, they're working on optimizing the filedb, finalizing IPNS for the next release, and finishing work on the JS client.
Once those steps are done (and maybe a couple more in-between), you'll have all the features you want.
IPFS development is very deliberate and very carefully thought through. If you care, you can check up on the RFCs and the developer roadmap.

Remember, this is a huge project.
Peer to peer, content-addressable, versioned, filesystem protocol. Just doing one of those right is a major undertaking, and they're improving some of these concepts, like the DHT scheme.

Instead of reflexively bitching about how the dev team is prioritising core functionality over beautification, why don't you check out what's actually going on?
github.com/ipfs/webui/pull/229
One of the main developers has been working on a major overhaul of the webui for a while now, in between his way-more-important work on js-ipfs. You can go build the code yourself if you want; if you think it's such an important feature then read the code, get in contact and offer to help. If you're not willing to do that then shut up. It's pointless to worry about accessibility before it's even worth accessing.

(Being worth accessing doesn't just mean being reliable and responsive, which is the domain of the dev team; it also means there being content on ipfs worth going to, which is the responsibility of you and me.)

There's a shell extension for Windows in progress and I'm pretty sure I saw similar for OS X and X11.
/ipns/QmaUgENG66kp6cyYUoiKREJWRaaQZmFt7EfFEnoMN1UvJZ
/ipfs/zDMZof1m163VYWYajZ5nATEv4VLZFdLQYdaPh8MEN8vHxZxBNn65


If you use the browser addon you can just click on them; if you use the local gateway you can just use wget or similar, as shown below. I think people should push for support in other programs like jdownloader so that you can paste in ipfs hashes and it will pull from the gateway or your local gateway. I guess you could probably write a plugin for this now instead of waiting for them to do it.
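e.g. with ordinary tools and the local gateway (port 8080 by default; <hash> is whatever you're after):

wget "http://localhost:8080/ipfs/<hash>"
curl -o file "https://gateway.ipfs.io/ipfs/<hash>"   # or go through the public gateway instead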

Can someone use this in the next IPFS thread?

Edited version

ETA for next thread: 1 year
There's also these
/ipfs/QmZ8Modb7dmRVEcMxunxqEy9R3qQse4E51CCNohbfLSZzw

Added a description

That's too much, the first one is the best one but the English is a little weird.

For Windows, step 3 makes no sense: running ipfs.exe doesn't install anything, it's just the program; step 2 is the whole installation.

The bash command should probably be made generic instead of just for amd64 Linux, use uname.
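Something like this might work as the generic version; untested sketch, same grep trick as the original one-liner, just with uname filling in the OS/arch:

IPFS="https://dist.ipfs.io/"
OS=$(uname -s | tr '[:upper:]' '[:lower:]')    # linux, darwin, ...
ARCH=$(uname -m); case "$ARCH" in x86_64) ARCH=amd64;; i?86) ARCH=386;; arm*) ARCH=arm;; esac
curl "${IPFS}$(curl -s ${IPFS} | grep -oe "go-ipfs.*${OS}-${ARCH}\.tar\.gz" | head -n1)" | tar -xvz
sudo go-ipfs/install.sh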

"In cmd or bash" should probably just disappear, leaving the commands themselves, it's implied. Use grave accents for `maximum autism`.

Now fix "Linux" with GNU/Linux.

Shortened it a bit

Fuck off CIA Nigger

How is IPFS different from BitTorrent?
Since in IPFS all files are hashed separately, you can access any file without downloading the whole folder.
Also, IPFS has a domain-name system called IPNS, allowing you to map IPNS names to IPFS hashes.
How to install IPFS on Windows:
Step 1: Download Windows Binary at dist.ipfs.io/#go-ipfs
Step 2: Unzip the file to somewhere in your %PATH%
IPFS="dist.ipfs.io/"; curl ${IPFS}$(curl ${IPFS} \
  | grep -oe go\-ipfs.*linux\-amd64\.tar\.gz) | tar -xvz
sudo go-ipfs/install.sh # bash command for Linux (x64)
`ipfs init` will initialize IPFS for use in computers
`ipfs daemon` will initialize the daemon in background
(Please only run the daemon on desktops and servers)
`ipfs swarm peers` will connect IPFS to the network
`ipfs-update install` will update IPFS version
Web console is available at localhost:5001/webui

Here's one I wrote for halfchan (2000 char limit, left some room for changelog, edition, etc), based on this OP
IPFS is a decentralized P2P replacement for HTTP and various other protocols.
Quick summary:
>files are addressed by hash, not by location
>like torrents, but you can build websites on it
>files and metadata are separate, so renaming or moving a file won't change its hash
>can add files with drag and drop like pomf
>can access network via HTTP gateways
>can stream video via mpv or VLC
How it works:
When you add a file, the files are split up individually into pieces and hashed. Any user can request one of these hashes and the nodes find seeds automatically.
FAQ
>Is it anonymous?
As anonymous as BitTorrent: others can see your IP unless you use a VPN. You can build anonymity below (i.e. Tor/I2P/VPN) or above (i.e. GNUnet "fs" module) like the OSI model.
>Is it fast?
As fast as your network can offer, finding peers takes a few minutes for less popular files.
>How is it different from BitTorrent?
You can build websites on top of it. Rearranging or renaming files won't change their hash, so torrents where only a small text file is different won't steal seeders from the other ones.
>How is it different from ZeroNet?
IPFS is a file sharing network which you can build websites on, ZeroNet is made for websites, and isn't as stable or mature. IPFS layers with already existing infrastructure.
Websites of interest:
gateway.ipfs.io/ipfs/ - official IPFS gateway, slap this in front of a hash and it will download it from the network. This is slower than using the client and accepts DMCAs.
glop.me/ - IPFS pomf clone. 10MB limit.
Planned features for next version (0.4.10): github.com/ipfs/go-ipfs/milestone/33
https://pastebin.com/Xxz9qMkv - list of IPFS hashes
ipfs.io/ipfs/QmU5XsVwvJfTcCwqkK1SmTqDmXWSQW aTa7ZcVLY2PDxNxG/ipfs_links.html (remove the space, spam filter)

>>>/hydrus/
Has basic ipfs integration as well and is multi platform.
hydrusnetwork.github.io/hydrus/help/ipfs.html

I was thinking along the lines of a TLDR instruction manual... but yours can be used for another purpose.

Screenshot from the Windows shell extension video too.

Keep in mind that IPFS is nowhere near production and usage like that. They still transmit an overwhelming amount of debug information during regular usage.

See

How is IPFS different from BitTorrent?
1. In IPFS all files are hashed separately, you can access files without downloading folders.
2. In IPFS renaming files will not change the hash, people can't steal seeders with minute changes.
3. IPFS has a domain-name system called IPNS, allowing you to map IPNS names to IPFS hashes.
How to install IPFS on Windows:
Step 1: Download Windows Binary at dist.ipfs.io/#go-ipfs
Step 2: Unzip the file to somewhere in your %PATH%
IPFS="dist.ipfs.io/"; curl ${IPFS}$(curl ${IPFS} \
  | grep -oe go\-ipfs.*linux\-amd64\.tar\.gz) | tar -xvz
sudo go-ipfs/install.sh # bash command for Linux (x64)
`ipfs init` will initialize IPFS for use in computers
`ipfs daemon` will initialize the daemon in background
(Please only run the daemon on desktops and servers)
`ipfs swarm peers` will connect IPFS to the network
`ipfs-update install` will update IPFS version
Web console is available at localhost:5001/webui
Related links: glop.me and gateway.ipfs.io/ipfs/


Now we are talking!

thank you for confirming my assumption that it's bullshit

but they literally can: just change a single bit anywhere in the file

How should I rephrase
from ?

IPFS chunks files into blocks and can use rabin fingerprinting; if you change a single bit you still seed the unmodified chunks, especially when using rabin.

ipfs.io/docs/getting-started/

No, it's not a minor problem. Older torrents often die because they're split up into one torrent with just the movie, one torrent with the movie and a srt file, one torrent with the movie and a "downloaded from xxxtorrent.ru" text file, one torrent with the movie and 29 other movies, etc. Also, season packs are easier to create. It also adds several other features, for example the release group's site can also be hosted on IPFS.

This needs active tampering, the most common is for people to just repack it.

1. Every file is separate, so adding a file to a folder won't change the other files' hashes.
2. A file and its metadata (name, location, etc) are kept separate, so you can freely rename and repack files without splitting the seeders
3. IPFS has various other features, such as IPNS (decentralized DNS) and pubsub (decentralized multicast)

you realize release groups don't use websites, right?

I legit have at least 10 different files I'm actively trying to download via bittorrent right now spread across 5 torrents per file at least, each with varying percentages of completion. I hate this problem so fucking much, this fragmentation ruins the best aspect of torrents, their longevity and resilience. It's the same file across several torrents, it should be pulled all the same.

not to mention, release groups publish their releases in multiple pieces, which your shit won't help with, and even unzipped/unrar'd, they still have .nfo and file_id.diz files which need to be properly distributed along with the content

Got it
Any more ways of improving it?

the scene is super important guise

merge that 9x% with the 99.9% and it's almost guaranteed that the 99.9% one will finish

Typo and links are cleaned

Chill out dude, the next thread isn't for ages.

Who gives a shit, the point of release groups is to release; leave distribution up to the distributors. We already have a bunch of people repacking and redistributing releases using modern practices as they've always done: repacking things from rars to 7z and lzarc, moving from usenet to ftp to p2p, ddl, etc. This will be no different, people unpacking and adding to ipfs. The share threads on Holla Forums have been doing this for over a year now.

Some do.

Go to torrentproject.se and search for the file size.

Scene release groups publish in rar files, but they're usually unpacked before reaching P2P.

If someone uploads a scene release without NFO, and someone else uploads one with NFO, the video file/whatever will still have the same hash.

The entire PATH thing is unimportant, you can just run it from CMD. There's an official getting started guide, copypaste from that instead.

I always try this, it works most of the time but not this time, it drops it back down to ~86%, I think the chunk sizes are different or something.

I don't even want to know how it's implemented.
literally everything since the 90's supports this
not a thing
not a thing. this reminds me of the retarded advertising of XMPP
torrent supports this
lol

Also for JAV you can use Japanese file sharing software such as Perfect Dark, Share, WinNY.

I read that and I need to make the getting started guide shorter.

vuze has a feature that does this with the dht, I tried it.


Searched perfect dark but got no results, maybe nobody was online at the time. It's no big deal, I have enough of the file to see the scene I wanted. The point was to demonstrate this isn't a rare problem; I encounter it all the time with older torrents, even ones that used to be quite popular, especially some older pc games. I usually have to end up finding it somewhere else and being the one to initiate a re-seed. It's such a pain in the ass.

magnet:?xt=urn:btih:dfd58642a3848ee531c6c97c506c76e204f66ded&dn=1%2025%20%E7%B2%BE%E9%80%89%E7%BE%8E%E8%85%BF%E4%B8%9D%E8%A2%9C%E8%B6%B3%E4%BA%A421%E8%BF%9E%E7%99%BC&tr=mgtracker.org:6969/announce&tr=http://opentrackr.org:1337/announce&tr=udp://tracker.coppersurfer.tk:6969/announce&tr=udp://tracker.leechers-paradise.org:6969/announce&tr=udp://tracker.pirateparty.gr:6969/announce
This one has it and is seeded

Not an argument.
Not BitTorrent.
The WebUI does it.
The web gateways work fine.
Correct.

scene releases should be distributed with .nfo idiot, not two different files that you have to queue up separately by clicking 2*N buttons. even in torrents people do this.

Vuze DHT search is worse.

Read up on how IPFS works. You can repack files however you want since the seeders for each file are kept separate.

How to install:
Download IPFS from dist.ipfs.io/#go-ipfs
Windows:
After downloading, unzip the archive, and move ipfs.exe somewhere in your %PATH%.
Mac OS X and Linux
After downloading, untar the archive, and move the ipfs binary somewhere in your executables $PATH:
tar xvfz go-ipfs.tar.gz
mv go-ipfs/ipfs /usr/local/bin/ipfs
To use:
* initialize the repo by running ipfs init
* start the IPFS daemon by running ipfs daemon
* You're now using IPFS. Open localhost:8080/ipfs/(your hash) in your browser to download it.
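Putting those steps together (file name is made up; the second hash that `ipfs add -w` prints is the wrapping directory):

ipfs init
ipfs daemon &                  # leave this running
ipfs add -w holiday.png        # prints the file hash, then the directory hash
# then open http://localhost:8080/ipfs/<directory hash>/holiday.png in your browser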

go read about zooko's triangle or something, i'm not spoonfeeding.
I don't think you understood what I meant by "not a thing". "Haz HTTP Gateway" isn't a feature of a protocol. There are gateways for freenet and tor as well, regardless of how those protocols are implemented and whether they inted to haz gateways. Any idiot can make some shitty HTTP gateway for a distributed protocol in 5 minutes to add an item to his resume, and this is why there are HTTP gateways for literally every "document based" p2p protocol in existence.

This thread and every IPFS thread (and the website which is just your standard unintelligible hipster bootstrap landing page filled with marketing bullshit and buzzwords) have done nothing to convince me why I should even acknowledge the existence of IPFS given other offerings which have existed for up to decades - much like the Go language.

...

You sound like a 13 year old snob lmao

I know about zooko's triangle. It's "solved" (see namecoin). IPNS is decentralized and secure, not human-readable.

It's not a feature of the protocol, but it's still worth mentioning.

Because IPFS is technically superior to the relevant competitors (bittorrent, zeronet, freenet, gnunet) and is the only protocol which stands a chance at getting mainstream adoption. Buzzword website is simply a necessary evil.

github.com/ipfs/community/blob/master/code-of-conduct.md#copyright-violations
into the trash it goes

Done

...

still isn't as cool as the animated gif

Seems like it would be a lot easier to use orbit-db
since it's already got lots and lots of development put into it. Then use it to do what suggests, matching hashes to imdb/anidb codes. When a user adds a hash, it should function like kodi: confirm which title corresponds to the file, then pull the metadata and use that (along with dht info) to make a new entry, then associate the hash of the movie with the hash of the entry and publish.

reminder the files command is cool
ipfs files mkdir /memes
ipfs files cp /ipfs/QmT3AJZC13Dfv1PjKBtQr4JMg4dcA8CHMCjZNHoNJUDZYM "/memes/video games.png"
ipfs files cp /ipfs/QmXGsDC34vwtqfh6FqrRAbuJtiuhMYrdwFSkXyGJ5BtgcY /memes/smhtbhfam.jpg
ipfs files stat /memes
QmdWpszK4gWWBKDQ17i7e8vv71S3VHMKYLbcaWZFJyWhYn

see

It's better to do it in two steps. First match ipfs hashes to bittorrent infohashes, then match them to official release group torrents. You then ask nodes to download and check. You could do it the other way too, where trusted nodes download from RSS and add them to IPFS upon torrent completion.

We discussed this in the /a/ Nyaa thread.
archive.fo/zlIyn#selection-7793.0-7793.9

Read the thread, we brainstormed a few different models and settled on the current idea. The goal is to make a site where there is still a central authority but the content is decentralized so it can run locally and very easily be ported to a clone site if necessary.

You can skip the central authority. Look at FMS. You could also make something like tokyotosho if the release groups cooperate.

ISO is superior.
Actually, this convention would be masterrace:
YYYYMMDD:HHMMSS.SSS[SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS]
1 Planck time unit ~= 0.00000000000000000000000000000000000000000001 seconds

Is there some way to mount an IPFS unixfs directory to a folder?

How do you mean? You can mount IPFS itself via fuse and either create a symlink, mount bind, etc. to a directory within it. You can also compose directories from multiple objects with the files command like this if you want to merge a bunch of shit into a single directory/hash and mount that one.
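A rough sketch of the fuse route (needs fuse installed; /ipfs and /ipns are the conventional mountpoints; the hash is the thread archive posted earlier):

sudo mkdir -p /ipfs /ipns && sudo chown "$USER" /ipfs /ipns
ipfs daemon --mount &          # or run `ipfs mount` while a daemon is already up
ls /ipfs/Qmd6xGf3AugWkQYnPuik5JQM7TxAETbsSv2q7t9trMLmkc   # read-only browsing by hash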

But then you can only read from it.
I mean something like this:
ipfs files mkdir /memes
ipfs files mount /memes /mnt/ipfs/memes
ipfs get /ipfs/QmT3AJZC13Dfv1PjKBtQr4JMg4dcA8CHMCjZNHoNJUDZYM -o "/memes/video games.png"
ipfs get /ipfs/QmXGsDC34vwtqfh6FqrRAbuJtiuhMYrdwFSkXyGJ5BtgcY -o "/memes/smhtbhfam.jpg"
ipfs files stat /memes
QmdWpszK4gWWBKDQ17i7e8vv71S3VHMKYLbcaWZFJyWhYn

I remember the devs talking about that a while ago but haven't heard anything since; it's definitely something they want to do but haven't yet.
github.com/ipfs/faq/issues/64#issuecomment-149789308

It might be worth raising on the issue tracker to build interests again.

For now you might be able to make a separate fuse module for this that hashes everything you put in it, discards the data, and then mirrors the structure in the ipfs mfs. What most people probably do is just make the structure on disk and then hash that or hash everything they need and compose it with the files commands.

use autofs

I guess I still don't see how you concluded that a centralized SQL database is preferred to a decentralized orbit-db? The single-commit repo makes me think that most of the decisions taken so far have been on the back of speculation rather than exploration. Which is fine, but there's no need to act like you're locked in or something.

I appreciate that anime torrenters will be the main spring from which users are drawn, but I just wanted to say that there's no need to intertwine it so heavily with torrenting. I have files on my computer that I've shared on IPFS, and I want to tell people what they are and where to find them in the network -- that's the fundamental function of the site, right?

Grabbing some stuff from the old thread (at archive.fo/2HUKR)

GUIDE TO IPNS FOR NEW PEOPLE
When you add something to IPFS, it creates a hash link to a static file. That's fine, but what if you are trying to add a folder that keeps updating? Let's say it's your anime folder. You add new anime all the time, but you want one link that always points to it - even when the contents change. IPNS lets you do that. It lets you make an IPNS link that links to an IPFS hash you can change at any time, but the IPNS link never changes.

Make an RSA key for what you want to share. Example: Your anime folder.
Now anything you publish under this key will link to the same IPNS link, which points to whatever IPFS hash you choose.

So you add your anime folder.
Take the one that it spits out last. Should look like
with nothing after.

Now it's time to publish that hash.
Now you can link to ipfs.io/ipns/QmNewHashForIPNS and it points to your anime folder.

If at any point you wish to submit a new hash for your anime folder, run "ipfs add" on it again and then "ipfs name publish" with the new hash. If you want to get fancy, you could probably hack together a cron job that does this.
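A minimal sketch of that workflow, assuming go-ipfs 0.4.x flags; the key name and folder are made up:

ipfs key gen --type=rsa --size=2048 anime      # one key per thing you publish
ipfs add -r -w ~/anime                          # the last hash printed is the wrapped folder
ipfs name publish --key=anime /ipfs/<that last hash>
# readers use /ipns/<the Qm... id that publish prints>; rerun add + publish after changes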


The problem here is that 99% of Windows users don't know what %PATH% is, much less where theirs is. What's the most concise way to describe how to set that up? I'm trying to mentally run through the process and I don't think I could do it without consulting a guide.

MAKING PRETTY LINKS WITH DNS
Another thing you could do if you want to share something with other people, is set up a domain. If you have the ability to change the DNS on a website, you can add a TXT record, with:
dnslink=/ipfs/$SITE_HASH
This way, people can access your folder from a MUCH shorter URL.
For example: /ipns/anonmatch.com/

You can (I think) also link your DNS to an IPNS hash. This is convenient, if for example you lose access to your host machine; in that case you just have to change the hash, and wait for the DNS to propagate.
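To sanity-check a dnslink once it's set (domain is hypothetical):

dig +short TXT example.com     # should include: "dnslink=/ipfs/<your hash>"
ipfs dns example.com           # resolves the TXT record to the /ipfs/ path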

HELPFUL ADD OPTIONS
Uses the filestore instead of the blockstore data structure. This means that, rather than making a copy of your file and doubling its hard drive space usage, it only creates a small amount of metadata, about 1/1000 of the size of the file you're adding.
As this feature is experimental, this is not available to use by default.
To enable this feature: Type in ipfs config Experimental.FilestoreEnabled true and then restart your daemon.

Changes the hashing algorithm used to hash your files. Try "blake2b-256" for a nice speed boost. (It will be the new default.)

Puts your files in a nice top-level directory, in which each file can be accessed with a file path after this directory's hash. In other words, it turns
into
which makes it easier to find, index, and navigate your files. Use this every time you add a directory!
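Pulled together, the options described above look something like this (file name is made up; the boolean config may need the --json flag depending on version):

ipfs config --json Experimental.FilestoreEnabled true    # then restart the daemon
ipfs add --nocopy season01.mkv                           # filestore: reference the file in place
ipfs add --hash=blake2b-256 -w season01.mkv              # faster hash + wrapped in a directory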

Jesus christ dude give it a rest with the howto and pictures, it's taken up like half the thread. Delve deeper into how it works for your own knowledge, rather than re-phrasing the same intro stuff over and over. Re: windows users, the videos here are perfect, they're pitched at just the right level.

All the central authority is in charge of is blacklisting unwanted content from the index. It's a central point of trust but anyone can individually override it for themselves or mirror the index and host a clone with the blacklist removed. A web of trust would decentralize it more but the added complexity wouldn't improve the service, so there's no need for it.

They won't, the cartel has proven this in the pantsu.cat vs nyaa.si war they started. They're desperate and are clinging to any authority that will bend to their will. That's one of the main reasons why we need to make a site where the database is distributed and easily able to run locally.


I feel like we should wait to meme IPFS to normalfags until we have an index site. Where would you tell them to post their hashes? I'll make a legit guide as a reference and to walk people through everything once we have something to show them.

Honestly they should probably just use one of the gui tools

Shift+right-click on the folder you put ipfs.exe in, choose "open command window here", type "ipfs daemon", and don't close the window

You can still copy the TT model, just scrape all the groups' RSS feeds and then have people contribute mappings between IPFS:BTIH. It takes less effort to verify than to create a mapping.

Any way of rephrasing it to be easily understood?

...

/ipfs/QmXny7UjYEiFXskWr5Un6p5DMZPU87yzdmC3VEQcCx9xBC
he seems to be implementing some very involved datatype, and there may be big chunks of code in there directly from js-ipfs? But there's absolutely 0 documentation that I can find
github.com/cakenggt/ipfs-foundation-frontend

p.s. for those demanding noob-friendly ui, this dude agreed but doesn't seem to be actively developing it ... I dunno if it works or what
gitlab.com/siderus/Lumpy

The modern web was a mistake.

i decided to see how well ipfs works with other apps, so i created a hash to work with the 3ds's homebrew app, freeshop
/ipfs/QmQZUET9xWpbYrEePHTx1qGkzb2pK9xDDxYkQX6LuekfM2
here is the hash if you want to enter it manually, just make sure you add a gateway of your choice.

and since this is a relatively small file, please pin it so more people can access it when i am offline

Has there been any work on group chats over multicast? You want something with the properties of IRC:
It would be useful to have, not just for chats but to use like you used IRC servers for stuff like XDCC and botnet control.

Questions: Which of these runs ipfs daemon even after the computer has rebooted?
ipfs daemon
ipfs daemon &
(crontab -l 2>/dev/null; echo "@reboot ipfs daemon") | crontab -
Maybe
sudo cat

1. Download the zip file on dist.ipfs.io and uncompress it
2. use shortcut Windows+R and type `Shell:AppsFolder` in the box
3. Move ipfs.exe from the decompressed folder into the apps folder
4. open notepad and save the script below as ipfs.bat
@echo off
title This is the IPFS daemon
start ipfs daemon
pause
5. use shortcut Windows+R and type `Shell:Startup` in the box
6. Move the ipfs.bat that you made into the startup folder
7. Restart your computer and enjoy
Is the batch file correct, and is the app folder part of the %path%?

For those who want to use systemd instead of crontabs:
github.com/ipfs/examples/tree/master/examples/init#systemd

Aren't those mutually exclusive?

No. For example, with OTR:
They can only prove you signed A shared secret, not any of the messages or that you had a conversation with them
With ring signatures: You make a 2-party ring signature. They know they didn't sign it, so you must have. Anyone else doesn't know if you or them signed it.
Unencrypted: You send a message from IP X over TCP. You know where it came from, but you can't prove it.

Orbit has the first two out of those three. I don't think there are any exclusive chats or groups yet, but they are planned for IPFS at least.

are there many channels/users on orbit yet, or is it still just #ipfs and not much else?

Is fingerprint chunking implemented yet?

Dumb idea:
Using IPFS to avoid giving people views.
1. Create a greasemonkey script that redirect certain youtube IDs to IPFS videos
2. Create an SQL database to keep track of the video-IPFS dictionary
3. Use the SQL database to update the local browser DB inside greasemonkey
4. greasemonkey will submit new videos to the SQL database for them to add to the list

This?
If so you've been able to do --chunker=rabin for a while, I don't know what the default is.

What kind of morons are these people? I'd really like a source

Or just use youtube-dl.

looking through the source, here
github.com/ipfs/go-ipfs/blob/master/importer/chunk/parse.go
github.com/ipfs/go-ipfs/blob/master/importer/chunk/splitting.go
it doesn't look like rabin is the default -- the default is to use uniform-sized (1024 * 256) blocks. I don't know why, though, they've had rabin fingerprinting for ages, so it's not experimental, and from what I understand it's good for both intra- and inter-file block dedup. Probably it takes more cpu cycles and adding is already slow enough ...
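For what it's worth, the chunker is selectable per add, so you can compare the two on the same file (file name made up; --only-hash just computes the hash without storing anything):

ipfs add --only-hash movie.mkv                   # default fixed-size 256 KiB chunks
ipfs add --only-hash --chunker=rabin movie.mkv   # content-defined (rabin) chunking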

That's disk I/O bottlenecked though.

Use youtube-dl to download, throw it into IPFS, make a database of YouTube ID to IPFS hash. Now every time someone gives you a link from a channel that doesn't deserve views, it shows the IPFS file instead

You wouldn't use a SQL database in this case, you would use a key-value store. A simple JSON array would be good so users can easily use a local copy of the database.


Use --hash blake2b-256 for a big speed improvement.

They're planning to make that the default soon as well.

But how do you update the key-value pair?

Pubsub ( ipfs.io/blog/25-pubsub/ ). It's still experimental and inefficient, but it's what orbit-db ( github.com/orbitdb/orbit-db ) uses, and they plan on improving it a lot in the long term. In fact, I would probably recommend just using orbit-db, it seems to be built for things like this and is already written in js so you could embed it in the script directly.
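If you want to poke at pubsub directly, the experimental commands are roughly (topic name made up; every participating node needs the flag):

ipfs daemon --enable-pubsub-experiment &
ipfs pubsub sub memes            # subscribe, blocks and prints incoming messages
ipfs pubsub pub memes "hello"    # from another terminal/node: publish to the topic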

This is the #1 thing wrong with ipfs and has been for years. The --help text is fucking useless

heyo, it's the guy who posted all that porn from the last thread
i decided to upload my own porn folder on ipfs, it's about 50 GB. most files on there have a traceable link on pornhub, so they are /gif/ ready.
/ipns/QmVm4jMdZnewAAU3QPoUBJ6jpjjicRWsfcjfD7c47rf1KC
(the ipns link redirects you to the current hash, as i hear that is the best way to use ipns right now)

Are these obscure fangames of an old german commodore 64 mario bootleg acceptable?
QmcwwKfLxasARUu9WwJWGKxRUFQi2b4EXX7KBoGWw3K3r2

QmULYqRfvgNQWqVCECenGTsjWSwt21tuptn3h39qdUoZFQ

How are you making these, are they sharded?

I may have added them with the --nocopy option though I'm not sure.
Do these werk?
/ipfs/QmcwwKfLxasARUu9WwJWGKxRUFQi2b4EXX7KBoGWw3K3r2/Gianas%20Return%20Dreamcast%200.9.0%20RARE.rar
/ipfs/QmULYqRfvgNQWqVCECenGTsjWSwt21tuptn3h39qdUoZFQ/GianaWorlds_Installer

Those are the same hashes; they work in my local gateway just fine but they don't work with `ipfs get` for some reason. I can even do an ls on them but not a get. Weird. I think no-copy automatically enables something else, maybe that's the issue; I think it was `--raw-leaves`, but that's unrelated to sharding.

Is this true or false?
`ipfs config Experimental.ShardingEnabled `
if it's true I think only people who also have it enabled can get it, that should be the default eventually but not yet
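For anyone checking their own node, something like:

ipfs config Experimental.ShardingEnabled                  # prints true or false
ipfs config --json Experimental.ShardingEnabled false     # flip it off, then restart the daemon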

"The age of men will return;
and they're not gonna get their computer-grid, self-driving car, nano-tech, panopticon in place fast enough..."

QmSvqE2RdRX1X1UrtxSaNV2yWGuZCKLrpB9BjoCcVYipKr

Both FileStoreEnabled and ShardingEnabled are set as true on my end.
Does
QmPTVvMjMfZVrkVbkZE8hYU3W4JsrR4kQrxifJfkXLuLpV
exhibit the same problem as my other hashes?
Does enabling Sharding/Filestore on your end fix anything?

Yes, they work fine in the local gateway and public gateway, ls fine, but don't `get`.

Enabling sharding alone more than likely would fix the problem on my end, but then my hashes would have the same problem for other people, and I don't think you can revert easily after it's been enabled. Filestore hashes should work fine for everyone whether they have it enabled or not; enabling it just lets you create them, you don't need it on to retrieve them. For sharding, though, you need it enabled to retrieve sharded hashes.

This is all I found on it
rbt.asia/g/thread/60020717

I-I've been tricked!

...well fugg, I've had that sharding thing enabled since late May so all my recent stuff is sharded already. Is it possibru to seed already sharded files without having sharding enabled on my end at least?
Would disabling sharding now retroactively change my pinned hashes?

so is it better to leave sharding off? i enabled sharding on my hash here

Probably better to leave it off for public hashes until it leaves the experimental phase, then they'll make the decision to either drop it or make it the default.


I'm not sure, I haven't messed with it myself and I don't know what is and isn't supported with that feature specifically. I know the archive team uses it but I don't follow the blockstore stuff that closely.

I guess you could test it by adding a new hash you're sure nobody else has, disabling sharding, and then starting the daemon again; then see if you can access it via the gateway or another node. Don't check with the public gateway first though, because it will cache the content for some time; make sure you only check whether you can get it after you disable.

Another thing to test is to rehash something you already hashed and see if the hashes are different (they should be); if they are and the content hasn't changed, you should be back to normal.

I disabled sharding and now I got these.
Does 'get' work on them?
QmNMRDSGo7hBKsx4SkaHMZhYH8xhA6TXKMK3hNzQiVxLLZ
QmbacetkCBAWEt3hv1M4s9kLJrsQKHH5bFdptbLD6YnP5A

Works with get for me now.

alright, updated the ipns hash 4everyone

Directory sharding only changes directory hash, not file hash.

I'm trying to read about CRDTs but my brain isn't working. I feel there is great potential in this but I can't get the wheels in my head turning.
ipfs.io/blog/30-js-ipfs-crdts.md

There are some much more comprehensive notes here if you haven't already seen them. Plus haadcode maintains a gh page with links.
github.com/pfrazee/crdt_notes
Seems like it'll be a necessary piece of the "ipfs wiki / user-contributed site" puzzle.

Right now it seems like the IPFS devs who are into CRDTs are either working on orbit or else an image-tagging datatype that seems kind of like a universal hyper-booru.
en.wikipedia.org/wiki/International_Image_Interoperability_Framework
github.com/ipfs/notes/issues/240

How do I search for files on it, given that I don't know their hashes beforehand?

there used to be something called ipfs-search, but it's dead now. the creator did leave a hash of it though
/ipfs/QmXA1Wiy3Ko29Q54Sq7pkyu8yTa7JmPokvgx4CXtQBWirt
it is a 215GB index though, so make sure you got some space

Well, it is a lot of directories. Isn't there a tracker just like in torrent sites? Where I just put the filename and it gives me some hashes?

Well get on with it already!

How can you make that into a greasemonkey script for future use?

that's what ipfs-search used to be before it shut down. you could probably download the website's source code and try to run it offline, although i haven't tried that
github.com/ipfs-search/ipfs-search

I'm not too familiar with greasemonkey scripts but I imagine you would use something like browserify or webpack to bundle everything into a single js file.

New gateway up, and here's a nice bluegrass album.

ipfs.borg.moe/ipfs/QmVbkjWGnrTXQxtvtfnQLy7CbHVvKPKhu9V2oRWspiFk31

Would anyone be interested in an IPFS nasheed collection? I have all the official IS ones with intact metadata.

yes??????????????????

I might want to get a VPN first though. It's not illegal, but it might piss some people off. How do you run IPFS over Tor? Is it even possible?

Badly. It's really, really, really not meant for Tor.

So, faggots, is there a hash site for IPFS filesharing? Say it is so.

Nice I used to use the old gateway you had up.
Is CJDNS worth installing? If I install it and visit your blog will it transparently load over CJDNS instead of IPv4? Are there special CJDNS addresses people need to use? I wonder if it's possible to use IPFS over CJDNS so you get distributed content married to distributed routing. A globally unified IPFS meshnet sounds very promising.

see

Integration is planned, though it looks like they don't have the spec done yet.
github.com/ipfs/notes/issues/14

CJDNS addresses look just like IPv6 addresses to any application, so IPFS doesn't need to integrate it.

I don't know if it's actually doing anything because I haven't tested it yet, but when I start the IPFS daemon it shows my cjdns ipv6 address as a listening addr. I assume it's working, but I haven't disabled ipv4 to test it exclusively. If someone else wants to set up a cjdns-exclusive ipfs node I'll try to pull content from you.

CJDNS is an autism-fuelled VPN. You can't get in unless someone else agrees to peer with you.

Just to be clear, in my diagram every link sprouting from computers 1 and 2 will have CJDNS access to the internet? Will 2 give internet access to 5 over Ad-Hoc, and in turn to 3?


Now that the dev team has hundreds of millions of dollars from the filecoin ICO we might actually see some progress on it.

That's pretty cool how transparent the integration is. I might play with it over IPFS later.


You only need outbound peers to connect, and there's plenty of people publicly posting their keys and ISP IPs. You need them to add your own keys for you to accept inbound traffic; for example, you can ssh into another computer but you can't get ssh'ed into. Yes, it limits your hosting capabilities, but I'm assuming community members will be more than happy to add you if you join their IRC for a week.

What do you mean? If they peer with each other, they'll route packets destinated for each other. If you run an outbound proxy on 1 or 2 any node that can access 1 or 2 can use their proxy.

That's what I was asking. If anyone in my LAN besides 1 or 2 want to generate new keys/IP addresses they're free to do so because they'll always be able to peer with my WAN nodes for outbound traffic. Is there any reason why I wouldn't want to regularly generate new IPs on my LAN nodes for increased privacy? My inbound/outbound WAN traffic can be handled by the static IPs on 1 and 2.

No. The CJDNS link is hyperboria.borg.moe. CJDNS is inside the IPv6 private address space (so like 192.168.x.x, but it's an IPv6 /8 so it's fucking monstrously huge).

If they peer correctly, yes.

These are very good videos. Very helpful, succinct, etc. Good work.

Hmm after reading the docs it looks like you're right.

This makes no sense. By chaining your own nodes to an integrated one over the internet like in my diagram it completely defeats the goal of cutting off (((reprehensible behavior))) unless people blacklist a node further upstream and destroy all their connections too.

Found the sekret-club autism.

With a DHT for peer discovery, configurable IP rotation for outbound traffic, a way to tag old dead outbound IPs to prevent DHT peers from downloading them, and an optional second static IP for inbound traffic, CJDNS would be faster and easier than i2p and almost as private (i2p could run on top if needed). Add OpenNIC for DNS, IPFS for content delivery, and Ethereum for computing, DNS, authentication, money, and tokens, and we've got ourselves a really nice darknet.


You should put that link on your CJDNS Public Peering page so people know. Otherwise people would have no idea it exists.

Good idea. Fixed.

Also, regular old DNS and IPFS already work on top of IPFS.

*on top of cjdns

There was a way to add a TXT record to ipns which I used to provide the /g/ ipfs page a few years back. Don't recall the details but it should be googleable.

It runs over UDP right now but the plan is to connect heterogenous meshnets

To elaborate: it's not another darknet like tor or i2p, it's a replacement for ipv4/6, it doesn't make sense to run all traffic anonymously

I recall there being a userscript that made links like clickable, anyone still have it?

github.com/ipfs/ipfs-companion

Thanks.

can we archive directly to IPFS somehow?

We need a script that archives web pages to several services at the same time, for redundancy and to save labor.

We would archive to IPFS,GNUnet,wayback,archive.is and others all at the same time with the press of a single button.

you can get the zip bundles from archive.is and upload them

When js-ipfs is done, will we still need gateways?

Yeah, if you just want to share the content through HTTP without making a webpage. Just like you might want to share a file on a Pomf clone, you could do that with an IPFS gateway link.

discuss.ipfs.io/t/roadmap-for-production-readiness-of-various-features/1023
The important bits:
>Yeah, we're working on this. We have an IPNS publisher and resolver that uses pubsub for all of its operations: github.com/ipfs/go-ipfs/pull/40473 Its not finished yet, but anyone who wants to help push that forward should go try it out, review the code, comment, ask questions, etc.

I seriously hope filecoin turns out to be a scam that the ipfs team timed perfectly (riding on the crypto wave) to make mega $$ off coin retards in order to further their operations.

How about this?
github.com/victorbjelkholm/ipfscrape

That would be insanely counterproductive, especially since nobody in the world would ever want to associate with them, let alone work for them.

And widespread adoption of Filecoin means even more widespread adoption of IPFS. I pray it's not a scamcoin, because it has far more to offer than just ICO money. I'd go so far as to say that it is the other half of the equation in decentralization: IPFS is the means, Filecoin is the motivation.

Thought I'd bump this since it's really the only big thread on Holla Forums anymore. No one has seeded my contributions to my knowledge either. Here's a question

In my crontab I have the following:

@reboot ipfs daemon >>/dev/null 2>&1

but multiple tests show that this doesn't bring up the daemon. Why?

You seriously can't have that in your crontab for some reason.

...

What is it? I've legit grabbed everything in every ipfs thread we've had and host it for a long time. How do you know nobody else is hosting it?

I can't decide which is more cancerous.

Cancer. Any other question?

Cron commands need a full path; cron runs with a minimal PATH (usually just /usr/bin:/bin), so it won't find ipfs on its own.
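So something along these lines should behave better (the path is a guess, check it with `which ipfs`; IPFS_PATH matters too if the repo isn't in that user's home):

@reboot /usr/local/bin/ipfs daemon >> /tmp/ipfs.log 2>&1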

if you don't like any of them, what do you use?

I feel like discourse is a better place for generic questions and notes, issue trackers should be for actual issues with the project, not questions relating to the project. That being said there's got to be a better solution than discourse, at least they didn't just do what most people do (rely on a subreddit).

The biggest issue too was having multiple github project pages, 1 for notes, one for questions, they might have had 1 for support, that's too many and it's not presented well with the github issue tracker. I honestly don't know what the best solution to this would be, traditional forums aren't even great for this imo.

Maybe they should use the wiki feature on github for the FAQ and discourse+irc for support.

i fucking hate webdevs

A pain in the ass for everyone involved, bbs style is better for this kind of thing and discourse is closer to that than a mailing list
At that point why not just use something that is closer to bbs by default.

email has always sucked shit for mass communication, it's perfect for 1:1 conversations and maybe small groups, but not as some ass backwards relay for an entire project

Nope.

I've figured out a simple and minimalist way to decentralize searching over IPFS. You still need a centralized crawler, but they can easily be replaced.
To crawl:
To search:
It's reasonably fast, the index will be cached by IPFS. It uses only existing software. It's quick to implement.
Something similar has been done with bittorrent, but IPFS is much better suited for this.
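The crawl/search steps didn't survive the copy-paste, but purely as a guess at the shape of it (index format, paths, and everything else here are made up, not the poster's actual method):

ipfs add -r ~/shared | awk '{print $3 "\t" $2}' >> index.tsv    # crawl: record name<TAB>hash
ipfs name publish /ipfs/$(ipfs add -q index.tsv)                # republish the index over IPNS
ipfs cat /ipns/<crawler peer id> | grep -i "search term"        # search: grep someone's cached index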

Just use nginx, jesus.

Nginx is a web server.

Should I make a new thread? This one has no image, it's annoying as fuck.

Wouldn't hurt anyone.

this one is barely a month old, it might get deleted. but if you do, use the image from

New:

Discourse was made by Jeff Cuckwood. It is far worse than issues. At least issues had a semi usable UI.


Hello Jeff Atwood, please kill yourself and quit shilling your bullshit software.


If you think Reddit is bad, discourse is basically if Reddit got assfucked by Facebook. Then 9 months later inside Atwoods fat ass he shits out Discourse. That's what it is.


Kill yourself. He and his boipussy friend aren't making improvements at all. It's shit and so is your post.

Formatting was fucked.

Qmd63MzEjASAAjmKK4Cw4CNMCb8NqSbL6yiVRfYnhMBT1H

QmXr7tE6teZgkzdcy5L4PM421FP3g3MpWUEGYfXJhEqfBb

How about you start a thread to shit talk discourse instead of doing it here.