Jdownloader alternative

is there a better alternative to this shit?
something not made by a shady company?

Other urls found in this thread:

8ch.net/tech/res/720101.html
git.teknik.io/abrax/lizard
files.catbox.moe/uroe8g.html
daniel.haxx.se/docs/curl-vs-wget.html
github.com/mirror/jdownloader

wget
curl

/thread

do those work with OCH?

What's so great about jdownloader is that it supports a shitton of file sharing sites and you can simply give it a web page to scrape all URLs from, which are grouped per hostname, and easily select the 20 episodes/parts of that thing you wanted to download. It also supports multipart files and such easily, and downloads faster by making multiple connections to the same host. If it can, it also bypasses captchas of some shitty file hosting sites, and waits properly if you surpass their "pls pay money for better downloads" limit.
That shit is not something wget/curl do.

So you're going to shitty websites run by shitty people to obtain files uploaded by shitty people and you're complaining the tool you use to do it is shitty.
Here's an idea: be grateful that you get to eat shit at all.

wget some better taste then weeb scum

Disclaimer: I'm not OP, but I wrote the post you're replying to.
If you've ever tried to find some shitty obscure anime or game in a non-English language you'd know what I'm talking about. Sometimes you just have to deal with whatever shitty host people chose to upload that shit to, and jdownloader makes the process at least bearable. I barely use jdownloader, but in those cases there's barely any alternative.
I guess OP just wants to try to see if there's a better alternative. He's complaining it's shit because he has to use it for some things, and it'd be nice if there was a better alternative.

Check out plowshare.

Curl is not a downloader. Even its head developer tells people to use wget.

If you want to bypass captchas then use plowshare, assuming the file was uploaded to a file hosting site it supports.
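
Rough sketch of how plowshare gets used, assuming its `plowdown` command and a host module for the site in question (module coverage varies and rots over time); the URLs here are placeholders:

```shell
# Download a single hoster link (plowdown is plowshare's download front-end):
plowdown 'http://www.mediafire.com/?PLACEHOLDER'

# Or queue several links from a plain text file, one URL per line:
plowdown links.txt
```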

Good thing I'm not retarded enough to watch japanese cartoon shows

jewdownloader a shit. i tried to get the source recently which they supposedly offer but the links were all broken. years ago i viewed the source or decompilation, i cant remember, and there was code to close if wireshark is open

jewdownloader is for scraping one click hosters, it isn't a general purpose program to make an HTTP request

no.

you need tools like this to download specifications for basic technology, such as the C language all you neckbeards fap about. unless you want to pay $200 to some shill per spec

do you enjoy living like that?
always being mad?

wget can do a lot more than just make an HTTP request. It's not a jdownloader alternative, but it's great for mirroring sites or scraping images.
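
For instance, a sketch of both uses with placeholder URLs (flags are straight from the wget man page):

```shell
# Mirror a site for offline browsing:
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent \
     https://example.com/docs/

# Scrape just the images from one page: recurse one level (-l 1), span hosts
# (-H, images often sit on a CDN), dump everything in one directory (-nd),
# accept only image extensions (-A), ignore robots.txt (-e robots=off):
wget -r -l 1 -H -nd -A jpg,jpeg,png,gif -e robots=off https://example.com/gallery.html
```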

then it is not relevant to this thread, go shill it elsewhere

plowshare?

Yes I do in fact, sage.

bump

maybe you should head back to reddo where anime is outlawed

where

yes I too am of the opinion that if you're not a pedoweeb then you're not a true channer™

glad we agree user

Something like you-get? Never used it tho

It's just newfags shitting on everything in a desperate attempt to fit in

It's javashit but nobody ever did better, so no.

!

Stop diluting a good word and go back to CSGO or wherever you retards lurk. I'm not even sure the "weeb" variant can even be used without sounding like a degenerate.

uget
aria2c
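
aria2c covers the multi-connection downloading mentioned earlier; a sketch with a placeholder URL (options per the aria2 docs):

```shell
# -x: max connections per server, -s: number of segments to split into,
# -k: minimum split size per segment:
aria2c -x 8 -s 8 -k 1M https://example.com/big.iso

# It also resumes a partial download in place:
aria2c -c https://example.com/big.iso
```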

wget a shit
pig disgusting
aria great
aria rule
aria take over download

Thanks for the suggestion. Along with megatools I'm all set.

Have you got a screenshot of it by chance?

What's the aria equivalent of wget --mirror --convert-links --adjust-extension --page-requisites --no-parent?

Best site ever.

What can I use to privately archive/save threads on Holla Forums?

wget

Right click
Save page as html
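
If you want the wget route instead, a sketch for saving a single thread as a self-contained local copy (the URL is a placeholder):

```shell
# -E adds .html to the saved file, -H spans hosts (images often live on a
# separate media host), -k rewrites links for local viewing, -p pulls the
# page requisites (CSS, thumbnails, full images linked on the page):
wget -E -H -k -p 'https://example.net/tech/res/123456.html'
```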

pyload

There used to be a thread here OP about a program called "Lizard the 8ch monitor"
The installer was called "lizard-0.3-py3-none-any.whl"

It used to be located at 8ch.net/tech/res/720101.html before the April 1st thread/board holocaust.

The repo is here: git.teknik.io/abrax/lizard

an archive of that thread is here: files.catbox.moe/uroe8g.html

are you okay?

user was replying to you dumbfuck

dumbfuck

markov bot detected

Nice, thanks.

I got the source a couple of weeks ago just fine. It took a while as it is almost 1.5G.
svn://svn.jdownloader.org/jdownloader
I tried `grep -ri wireshark` on the source and got nothing.
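
The check above amounts to this (directory name is arbitrary; the checkout is reportedly close to 1.5G, so expect it to take a while):

```shell
# Fetch the JDownloader source tree over svn:
svn checkout svn://svn.jdownloader.org/jdownloader jdownloader-src

# Recursive case-insensitive search for the alleged wireshark check:
grep -ri wireshark jdownloader-src || echo "no match"
```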

aria
aria win
aria rule
you beat wget pig farm
wget shit
wget lose
wget dishonored

can that handle captchas?

... it can load multiple parts if the hoster allows it (most don't), but other than that it's not somehow faster.

do you even ganoo plus loonix?

Nigger, it's in its job description.

There is a piece of software called toucan (like the bird) or something like that.

Might be still around, though it was not always up to date with the hosting sites if I remember correctly.

but can it handle captchas automatically without paid services?

So what the fuck is it for?

It's for sending requests. It's a command line interface for libcurl (an extremely widely used library). It operates on a lower level than a real downloader.
wget has a lot of snazzy features for processing files it downloads. You can use it to mirror an entire website, or you can download a page together with all the assets that it needs to display properly, and you can tell it to rewrite file paths so you can display it all locally, etcetera. Those things are incredibly useful. Take a look at its man page to see the kinds of things it does.
curl has a lot of nifty features for sending complicated requests. You can tell it to convert linebreaks in uploads for compatibility with OS/390, or use obscure TLS variants, or retrieve only a byte range of a file. It supports a lot of unusual protocols, like gopher.
Even for the simplest, most common case of downloading a single file from a single URL without any funny business, wget is better than curl. wget follows redirects, and stores what it receives from the server in a file. curl defaults to not following redirects, and sends the result to stdout because it assumes you want to immediately process the result instead of storing it.
curl vs wget discussions are inane. In any use case where it really matters which one you pick the choice is obvious. You wouldn't mirror a site with curl, and you wouldn't use wget to upload files.
daniel.haxx.se/docs/curl-vs-wget.html
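
The default-behavior difference for that simple case looks roughly like this (placeholder URL):

```shell
# wget follows redirects and saves to a file named after the URL:
wget https://example.com/file.tar.gz

# curl needs -L to follow redirects and -O (or -o NAME) to write a file
# instead of dumping the body to stdout:
curl -L -O https://example.com/file.tar.gz

# One of curl's strengths mentioned above: fetch only a byte range.
curl -r 0-1023 -o first-kb.bin https://example.com/file.tar.gz
```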

jdownloader2

If you cannot get the svn repository to work, here is a mirror: github.com/mirror/jdownloader