Trippy images break neural networks

A new paper just came out (arxiv.org/abs/1610.08401). Neural networks are commonly used to recognize image content. It turns out that if you take ANY image and superimpose a special trippy-looking perturbation on top, these neural networks get completely confused. For example, see the sock and the dog both classified as Indian elephants.
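The core idea is that one precomputed perturbation works against most images. A minimal numpy sketch of just the *application* step (the paper actually computes the perturbation iteratively; the random `v` and the `eps` bound here are stand-ins for illustration):

```python
import numpy as np

def apply_universal_perturbation(image, v, eps=10.0):
    """Add one precomputed perturbation v (same shape as the image),
    rescaled so the max per-pixel change is at most eps, keeping it
    quasi-imperceptible. eps is a made-up bound for this sketch."""
    v = v * (eps / max(np.abs(v).max(), 1e-8))
    return np.clip(image.astype(np.float32) + v, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224, 3)).astype(np.uint8)
v = rng.normal(size=(224, 224, 3)).astype(np.float32)  # stand-in, NOT the paper's v
adv = apply_universal_perturbation(img, v)
print(int(np.abs(adv.astype(int) - img.astype(int)).max()))  # at most 10
```

The point is that `v` is fixed once and reused for every input, which is what makes the attack practical as an "overlay".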

What does it likely affect?
- Porn filters on Google images and Youtube.
- Yahoo's porn filter (open_nsfw.gitlab.io/).
- Facebook face detection.
- Driver-less cars to some degree.
- Many more things, but my mind can only think about porn.

It means someone could probably make the Internet full of porn. But why would anyone do that?

It would be interesting to know how to use this precisely: if it can break facial recognition, for example, it could be used to avoid Facebook's data collection.

Was Pokémon meme magic trying to tell us something?
bulbapedia.bulbagarden.net/wiki/Animals_in_the_Pokémon_world

This is well-known, at least outside of "OMG THE SINGULARITY!!!!!" circles. There is this fable (I don't know if it actually happened) of an NN sound classifier that learned to rely on the resonance of the room and stopped working when moved.

Pretty cool that somebody finally stepped up and implemented it, maybe the hype bubble will deflate a little now.

Wait, people can't see the obvious Indian Elephant in both photos?

Why is it called a neural network. Do they make fake neurons now?

Stupid Marketers.

One possible mitigation strategy would be trying to remove the overlay, perhaps by blurring the image (e.g. with a Gaussian blur) or some sort of frequency-domain fuckery. It'd be interesting to know whether there are frequencies one could remove from an image that would mitigate this without preventing the NN from working at all.
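The blur idea can be sketched in plain numpy: a separable Gaussian blur is a low-pass filter, so it should smear out a high-frequency overlay. This is a toy demonstration (random noise standing in for the adversarial perturbation), not a claim that blurring defeats this particular attack:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(image, sigma=1.0):
    """Separable Gaussian blur: convolve each row, then each column."""
    k = gaussian_kernel(sigma, int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              image.astype(np.float64))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

rng = np.random.default_rng(1)
clean = rng.uniform(0, 255, size=(64, 64))
noisy = clean + rng.normal(0, 8, size=(64, 64))  # stand-in for an overlay
# after blurring, the noisy image is much closer to the (blurred) clean one
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(blur(noisy) - blur(clean)).mean()
print(err_after < err_before)  # True
```

Whether the classifier still works on the blurred input is exactly the open question the post raises.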

Ask some dead guys from the 50s.
They wanted to create a mathematical model of neurons to study intelligence and learning. When you connect those models together you get a neural network.

Neural networks consist of units that are connected to each other and have individually simple behavior for forwarding signals. The units are similar to real neurons.
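Such a unit boils down to a weighted sum plus a nonlinearity. A minimal sketch (the sigmoid and the hand-picked weights are just one common choice for illustration):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial 'unit': weighted sum of its inputs plus a bias,
    squashed through a nonlinearity (here a sigmoid)."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# toy unit: weights chosen by hand, fires strongly on the first input
out = neuron(np.array([1.0, 0.0]), np.array([2.0, -2.0]), 0.0)
print(round(out, 3))  # sigmoid(2.0) ≈ 0.881
```

Wire many of these together in layers and you have a neural network; the "learning" part is adjusting the weights.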

You might be a cyborg. For that matter I might be as well, considering how often I fail at the captcha.

search:
"imagemagick" "fft"
it's indeed possible already, but you'd better have ImageMagick built with the `--with-hdri` option

en.wikipedia.org/wiki/Frank_Rosenblatt

Stop the fucking presses.

artificial neural network is the complete name: simulated models of biological neural networks

gee, that's so deep. I couldn't have imagined it

Hey hiro, wanna try some snowcrash?

Yeah guys I totally can't tell there is a sock and a dog in those pics, I only see elephants.

Perhaps the neural network tested has achieved sentience.

But it's running on a shared server, and somewhere on that server someone is hosting a chan, and on that chan someone shitposted "you're thinking of a pink elephant". And now this poor AI can't help but think of elephants and it sees elephants in everything now. Thank god that shitposter wasn't the brown-pill poster, their AI probably would have reformatted drives to erase itself.

This is much more effective than random noise. Pic related.

Nope.

The technique works with my hopefully very real neurons. Before I expanded the second image, I didn't know what OP was talking about. After doing so, I can see very clearly that there is an overlay.

No shit. Everyone who knows anything about computer science or infosec already knew all this AI bullshit would be vulnerable to these kinds of things without even having to enter the field of AI. This is another case of fucktards imposing new unproven technology wherever they can stick it, and is no different than IoT. But ignore what everyone who knows shit is saying, because it's the FUTURE. Who cares whether it's ready. Now is the time for AI based censorship, prosecution, transport, medicine, etc.

1000 times this

There is a very simple mitigation for this.

So this is taking an image, and adding a small amount of, for lack of a better term, targeted noise.

So you take your image and deliberately add noise before running it through the classifier. Do it a couple of times with different noise sets if you find detection accuracy drops too badly. But the amount of noise adversarial images add is (pretty much by definition) small, so generally you don't even need to do that.

This moves finding an adversarial image from something trivial to something really difficult (& nondeterministic to boot).
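The defense described above can be sketched as a noise-and-vote wrapper around any classifier. The `classify` function here is a hypothetical stand-in for whatever model is being defended, and the toy threshold classifier exists only so the sketch is runnable; this is the poster's suggestion, not a guaranteed defense:

```python
import numpy as np

def noisy_ensemble_predict(classify, image, sigma=8.0, trials=5, seed=0):
    """Classify several independently noised copies of the input and
    majority-vote the resulting labels, as the post above suggests."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(trials):
        noised = np.clip(image + rng.normal(0, sigma, image.shape), 0, 255)
        votes.append(classify(noised))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# toy classifier: thresholds the mean pixel value (illustration only)
toy = lambda img: "bright" if img.mean() > 127 else "dark"
img = np.full((8, 8), 200.0)
print(noisy_ensemble_predict(toy, img))  # bright
```

An attacker now has to find a perturbation that survives random noise they can't predict, which is the "nondeterministic to boot" part.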

The future is going to be an interesting time where faggots realize that their lust for technology like this will be the end of them.

Still completing NSA captchas, you dirty 4channer

bump

pls email [email protected] if you're a cat named sakamoto and want a cute furret to lick your paws
drill hair and fugly faces

Oh. Minako.

Probably not if you try to EQ bass into them.

If they do, it's very illegal.

/qa/ is a fucking joke

It's not a sports car, it's a grand touring car. By all means get a 280ZX if that's what your heart desires. It's a great car that is well built and thought out.

Is this simulated epilepsy?