Movie Pirate Sites Disappear From Google’s Search Results

...

Other urls found in this thread:

torrentfreak.com/pirate-sites-disappear-from-googles-search-results-161209/
copyright.gov/fair-use/
copyright.gov/fair-use/more-info.html
en.wikipedia.org/wiki/Fair_use

torrentfreak.com/pirate-sites-disappear-from-googles-search-results-161209/

Google has been delisting sites for some time now, I mean they even delisted this very site because of SJWs whining.

I use Yandex, not Google; I thought this was intrusive anyway. While Google has removed
all pages of Holla Forums except the homepage, until now they have never gone in for whole-site removal over copyright infringement, insisting on removing only the problematic pages, even for The Pirate Bay, which is still listed at this point.

To absolutely nobody's surprise

You're making a big mistake; DuckDuckGo states in its TOS that it will work with the authorities. A good alternative would be Searx or Startpage.

100% of Holla Forums is infringing copyright. Do you honestly believe that the images on the threads are reproduced with permission of the copyright holders?

They've been doing this for over a decade

What the fuck happened to this board, it gets more underage and retarded every day.

When the problem is search engines removing results, the logical answer is to get results from multiple search engines.
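A rough sketch of the idea in python (the instance URLs are placeholders, and it assumes searx instances with the JSON output format enabled, which many instances don't have):

#!/usr/bin/env python3
# metasearch.py - query several search engines and merge the results,
# so no single engine's delisting decides what you get to see.
# The instance URLs below are placeholders; swap in instances you trust.
import sys
import requests

INSTANCES = [
    "https://searx.example.org",  # placeholder
    "https://searx.example.net",  # placeholder
]

def metasearch(query):
    seen, merged = set(), []
    for base in INSTANCES:
        try:
            resp = requests.get(base + "/search",
                                params={"q": query, "format": "json"},
                                timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # one engine failing (or censoring) shouldn't kill the search
        for result in resp.json().get("results", []):
            url = result.get("url")
            if url and url not in seen:
                seen.add(url)
                merged.append((result.get("title", ""), url))
    return merged

if __name__ == "__main__":
    for title, url in metasearch(" ".join(sys.argv[1:])):
        print(title + "\t" + url)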

startpage uses google results and they aren't even the full google results, it sucks.

Pirate Bay and others are still listed; usually they just remove certain links.

Every single site on the web has to work with the authorities. Any site that tells you they don't is lying to you.

absolutely this

Although I agree that it's not good criticism of DuckDuckGo, that doesn't apply to onion services and such.

The main reason I don't use DuckDuckGo is that its founder also founded the Names Database, a service that aggressively farmed e-mail addresses. I don't trust him.

How can you live with yourself?

Black isn't a color; it's an absence of color.

Page 10 is extremely painful


DDG has an onion service.

Yeah, but Holla Forums isn't a hardcore pirate site like The Pirate Bay; as far as copyright is concerned, we're no worse than DeviantArt.

Except you literally can't
You always get the first and last piece of every file

searx all day mate

The Pirate Bay is not a piracy site. The Pirate Bay is a search engine.

i can search everything on startpage

Back before magnet links pirate bay used to actually host the torrent files. They were a primary source. torrentz, by contrast, hosted nothing themselves and was only a search aggregator for actual hosts like pirate bay and kickass torrents.
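To illustrate the difference: a magnet link is nothing but the info-hash plus an optional display name and trackers, so an index can point at a torrent without hosting any file at all. A quick python sketch (the hash and tracker below are made up):

#!/usr/bin/env python3
# Build a magnet URI from an info-hash: this is all an index needs to publish
# once it stops hosting .torrent files, since clients fetch the metadata from peers.
from urllib.parse import quote

def make_magnet(info_hash_hex, name=None, trackers=()):
    uri = "magnet:?xt=urn:btih:" + info_hash_hex
    if name:
        uri += "&dn=" + quote(name)
    for tracker in trackers:
        uri += "&tr=" + quote(tracker, safe="")
    return uri

if __name__ == "__main__":
    print(make_magnet("0123456789abcdef0123456789abcdef01234567",  # made-up hash
                      name="example.iso",
                      trackers=["udp://tracker.example.org:1337/announce"]))  # made-up tracker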

Nope, always just been a search engine.

I don't watch movies anyway, even if they were legit free downloads.

Fair use applies to a lot of the content because it's either being used as satire, for the purposes of reporting news, or isn't being used to market additional content. That said, there is still some content which is probably on the wrong side of copyright law. Even so, there's a reason Holla Forums hasn't been sued to hell and back. Hint: it's not the Google delisting.

Because it's a dead site filled with 4chan rejects?

Google results are completely rigged these days anyway.
Page 1 is all paid listings (unless there aren't any available)

I'm happy to be a deplorable. 4chan is a garbage website.

Why don't trackers use IPFS so that there's no server to take down?
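For what it's worth, publishing to IPFS is trivial if you already run a daemon. A minimal sketch, assuming a local go-ipfs daemon on the default API port (5001); the file path is whatever you pass on the command line:

#!/usr/bin/env python3
# Add a file to IPFS through a local daemon's HTTP API.
# Assumes a daemon listening on the default API address (127.0.0.1:5001).
# The content is then addressed by its hash, so there's no single server to
# take down -- but somebody still has to keep it pinned.
import sys
import requests

API_ADD = "http://127.0.0.1:5001/api/v0/add"

def ipfs_add(path):
    with open(path, "rb") as fh:
        resp = requests.post(API_ADD, files={"file": fh}, timeout=60)
    resp.raise_for_status()
    return resp.json()["Hash"]

if __name__ == "__main__":
    print("content id:", ipfs_add(sys.argv[1]))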

Google as a general-purpose search engine has been dead for a while now.

Bing has better image and video search. DDG has the least ads and filtered content.

...

Youtube-Mp3 is still there.

Here's a nice script; I forgot where I got it, but it will turn anything into an mp3. I use it on youtube videos fairly often.

#!/bin/bash
###############
# Convert all files in the current directory into 192k mp3 files
# using ffmpeg
###############
for FILE in *; do
    ffmpeg -i "$FILE" -ab 192000 "$FILE.mp3"
done

It could def use improvement, but it's better than using some faggy website to do what should be done locally.

nigga, that doesn't mean shit. You can claim whatever the fuck you want; that won't prevent you from getting your ass sued.
The burden is on you to prove you aren't infringing. That's how cucked fair use is.

I use convert2mp3.net to download videos or audio from youtube. you can choose the file format too.

i wrote a script to automate it, you give it the link and it'll grab the audio / video.

#! /usr/bin/env python3
# md.py downloads youtube music using convert2mp3.net
import sys
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common import exceptions
import requests
import logging

logging.basicConfig(level=logging.DEBUG, format=" %(asctime)s - %(levelname)s - %(message)s")
logging.disable(logging.CRITICAL)


def save_file(link, filename):
    # save the file at link to the specified filename
    print("\ndownloading file from:\n" + link)
    resp = requests.get(link, stream=True)
    total_length = resp.headers.get("content-length")
    try:
        resp.raise_for_status()
    except requests.RequestException:
        print("error downloading file...")
        return
    chunk_size = 4096
    with open(filename, "wb") as out_file:
        print("saving to filename:\t" + filename)
        if total_length is None:
            out_file.write(resp.content)
        else:
            print()
            dl = 0
            total_length = int(total_length)
            for data in resp.iter_content(chunk_size):
                dl += len(data)
                out_file.write(data)
                print("\r" + str(int(100 * (dl / total_length))) + "%", end="")
    print("\ndownload complete!")


def download_file(link, file_format):
    # download the video at the youtube link, using convert2mp3.net,
    # in the specified file format
    # FIREFOX_PATH = "/usr/bin/firefox"
    PHANTOMJS_PATH = "/usr/bin/phantomjs"
    CONVERT_URL = "http://convert2mp3.net/en/index.php"
    print("downloading " + file_format + " for video at link:\t" + link)
    # browser = webdriver.Firefox()
    browser = webdriver.PhantomJS(PHANTOMJS_PATH,
                                  service_args=['--ignore-ssl-errors=true', '--ssl-protocol=TLSv1'])
    browser.set_window_size(1280, 800)
    # browser.ignoreSynchronization = True
    browser.get(CONVERT_URL)
    # browser.ignoreSynchronization = False
    print("got page:\t" + CONVERT_URL)

    print("finding urlinput...")
    link_input_elem = browser.find_element_by_id("urlinput")
    link_input_elem.send_keys(link)

    print("finding dropdown button...")
    # format_button_elem = browser.find_element_by_class_name("btn.dropdown-toggle.btn-default")
    format_button_elem = browser.find_element_by_css_selector("button[data-toggle='dropdown']")
    # print(browser.page_source)
    print("clicking dropdown button...")
    format_button_elem.click()

    print("finding file format link text...")
    format_elem = browser.find_element_by_link_text(file_format)
    print("clicking file format link text...")
    format_elem.click()

    print("finding submit button...")
    submit_button_elem = browser.find_element_by_css_selector("button[type='submit']")
    print("clicking submit button...")
    print("waiting for converter service...")
    try:
        submit_button_elem.click()
    except exceptions.WebDriverException as e:
        debug_ss_filename = "md_error_screenshot.png"
        print("\nerror occurred while clicking submit button...")
        print(e)
        print("saved screenshot of error to " + debug_ss_filename)
        browser.save_screenshot(debug_ss_filename)
        browser.quit()
        return

    try:
        print("finding download link...")
        # print("waiting for converter service...")
        # download_button = WebDriverWait(browser, 60).until(EC.presence_of_element_located((By.CLASS_NAME, "btn.btn-success.btn-large")))
        download_button = WebDriverWait(browser, 60).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "a.btn:nth-child(5)")))
        dl_url = download_button.get_attribute("href")
        filename_label = browser.find_element_by_css_selector(".alert > b:nth-child(1)")
        filename = filename_label.text + "." + file_format
        save_file(dl_url, filename)
        browser.quit()
    except exceptions.TimeoutException as e:
        print("\nconverter service timed out, couldn't find the download button...")
        print(e)
        browser.quit()
        return


USAGE_MESSAGE = "usage: ./md.py link (format)"

arg_len = len(sys.argv)
if arg_len != 2 and arg_len != 3:
    print(USAGE_MESSAGE)
    sys.exit(0)
link = sys.argv[1]
file_format = "flac"
if arg_len == 3:
    file_format = sys.argv[2]
download_file(link, file_format)

That's exactly what fair use does, nigger. The reason sites like YouTube get away with cuckoldery is because people let them. The fair use doctrine spells out what qualifies as fair use. If they sue you, just sue them back for lawyer's fees.


Why not ytdl?

because I didn't write ytdl.

youtube-dl --extract-audio --audio-format mp3


1 fucking line

./md.py link format

also 1 line. or are you a no-coder not woke enough to realize there is code behind youtube-dl?

yea

There's also code behind that python compiler you're using.

which doesn't change my point in the slightest.

Ixquick still exists, and it does the same thing, with better results and a UI that doesn't rape your eyes.

Still doesn't justify reinventing the wheel, apart from the learning experience, of course.

Ixquick? Really? It uses results from Qwant and Yahoo, both of which are really shitty engines. Searx, on the other hand, lets you choose whatever engines you want from a much larger selection. Personally I find its Pointhi style quite appealing and easy on the eyes, but if you don't like it, you can always write your own CSS to suit your taste.

agreed, i made it while playing with selenium. next i'm going to make a scraper to grab all videos on a channel, and automatically download new ones when they upload.

Then what are you doing on Holla Forums when Jim worked with Hiroshima Gook Moot on 2ch when they were datamining their users and leaking ips?

...

...

Because that didn't happen, try harder.

>>>Holla Forums

>this poster's opinion does not align with my own interpretation of the Four Freedoms, therefore he should be gassed
>>>/oven/

The system is overwhelmingly stacked in favor of the rights holder(s). Only a court verdict determines whether it's fair use. Until then, you're infringing for all anyone cares (i.e. guilty until proven innocent).
And unlike basement dwellers on a Chinese cartoon imageboard, functional members of society can't afford to spend countless hours and lots of money fighting a lawsuit that may even cost them their job if their line of work is involved in the lawsuit.

Most people will just comply with takedown notices and move on, unless it's a bogus one (e.g. forged, claimed work doesn't appear, sender has no rights to claimed work). Contesting a legitimate one is asking for trouble.

Liking all those tasty fallacies fam.

Broken court system aside, it's not Fair Use's fault no one stands up for Fair Use laws. Also we're talking about them suing you, not the other way around. A counter-suit is relatively easy to file.

Stay mad. The reality is that only a court ruling can determine whether something is fair use or not. Until then, your work may get taken down at any time.

idiot


IP is a legit thing, fucktards.

no-code larper tbh. smh tbh fam.

>copyright.gov/fair-use/
>copyright.gov/fair-use/more-info.html
>en.wikipedia.org/wiki/Fair_use
I am not mad user. Disappointed perhaps. Amused at how misinformed you are, for sure. However angry I am not.

Why? Their surveillance is worse than U.S. surveillance.

The user you replied to is right. Only a court can adjudicate once and for all if something is Fair Use.

Protip: Very little of what's on Holla Forums is Fair Use. I can't imagine any judge okaying almost any meme as a legal transformative use instead of illegal derivative use.

Copyright law is almost wholly parasitic and is a cancer on the Internet and on chans. This is what we're like when crippled. Imagine how amazing we'd be without those chains binding us.

EVERY SINGLE TIME a fag uses wikipedia as a source, it undermines rather than supports his point.

Sounds like something a redditor might say user.

Google's censoring a lot lately.

Fucking awesome. Let the GoogleWeb be the new TV, let the normalshits have enough Web 2.5 trash to satisfy their appetite, and let anyone with two braincells to rub together get the right software to connect to the real 1990-2005 Wild West Internet.

This is funny because I normally only use jewgle for finding warez I cannot find on the traditional sources I frequent. They are really good at finding warez when you search correctly. Also, they started doing this a few years ago already, but if you search for specific keywords like {name of movie} dvdrip they will give you results anyway.

youtble-dl -f bestaudio

Play it with a real media player.

youtube-dl

fuck, I'm spoiled by tab-completion

...

NOBODY COULD HAVE PREDICTED THIS!
YOU HEAR ME?
NOBODY!

Just use bookmarks, for fuck's sake.

down the memory hole

...