I've got a newfag question Holla Forums

We all know how scary the amount and specificity of the data the alphabet soups collect is, but how easily and how often is that data actually accessed? I hear a lot of shills say that the data collected is unlikely to be seen unless you commit a crime that would gain the government's attention. Is this true? Do we even have a way of knowing how often the data is accessed?

Other urls found in this thread:

arxiv.org/pdf/1409.0575
arxiv.org/pdf/1511.06434
arxiv.org/pdf/1311.2901
arxiv.org/pdf/1411.4555.pdf
image-net.org/
media.ccc.de/v/33c3-8414-corporate_surveillance_digital_tracking_big_data_privacy
applieddatamining.com/cms/?q=content/research-topics

Doesn't it just stand to reason that if they're collecting the data of hundreds of millions of people, they'd be unlikely to sift through your information specifically unless you stood out somehow? It's not humanly possible to stalk everyone using the internet.

Well done, genius. Guess what AI is for?

Out of curiosity, how does AI optimize spying?

AI can be used to track large numbers of people and classify them, based on their behaviour, by their susceptibility to certain actions.

It runs 24 hours a day, and everyone with access uses it for personal purposes too.

People who visit X are % more likely to Y. People who post like Z are % more likely to W.
If you're not interesting enough they only collect your data. Once you're flagged someone looks you up and may ask for additional surveillance.
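A minimal sketch of the kind of correlation-based flagging described above, purely illustrative: the usernames, site names, and "did Y" set are all made up, and real systems would use trained models over far more features.

```python
# Hypothetical visit logs: user -> set of sites visited
visits = {
    "anon1": {"siteX", "siteQ"},
    "anon2": {"siteX"},
    "anon3": {"siteQ"},
}
# Hypothetical outcome of interest: users known to have done Y
did_y = {"anon1", "anon2"}

def p_y_given_visit(site):
    """P(Y | visited site): fraction of the site's visitors who did Y."""
    visitors = [u for u, s in visits.items() if site in s]
    if not visitors:
        return 0.0
    return sum(u in did_y for u in visitors) / len(visitors)

# "People who visit X are N% more likely to Y" is just this ratio
base_rate = len(did_y) / len(visits)
lift = p_y_given_visit("siteX") / base_rate
print(f"P(Y|X) = {p_y_given_visit('siteX'):.2f}, lift = {lift:.2f}x")
```

Anyone whose lift crosses some threshold gets flagged for a human to look at; the point is that no person has to read anything until the statistics single you out.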

Without any legal procedure? That's fucked.

It depends on how the AI is trained. Given the advancements in the AI world that were "revealed" over the last five years, the military must have something very scary.

But that doesn't answer your question.
An AI is very complex to build, especially when you use deep learning: it takes shit tons of time and shit tons of data.
Governments and big-data companies have the data.
And they have the money to finance code monkeys who tell the AI "what X is" and "what Z is".
It's like teaching a baby; it learns exponentially.
But unlike a baby, it:
-never sleeps
-never eats
-never gets sick
-never complains
-never goes against orders
-doesn't have free will
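The "tell the AI what X is" part above is just supervised learning: humans attach labels to examples and the model fits a rule to them. A toy sketch with no libraries and made-up 2-D "features" (a real deep-learning system would learn the features too):

```python
# Toy nearest-centroid classifier: labels supplied by humans, rule fit by code.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# Labeled data supplied by the "code monkeys": feature vectors + class names
labeled = {
    "cat":  [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "tank": [(5.0, 4.8), (5.2, 5.1), (4.9, 5.0)],
}
centroids = {label: centroid(pts) for label, pts in labeled.items()}

def classify(x):
    # Assign x to the class whose centroid is closest (squared distance)
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

print(classify((1.0, 1.0)))  # lands in the "cat" cluster
print(classify((5.0, 5.0)))  # lands in the "tank" cluster
```

More labeled data means finer-grained classes, which is why whoever holds the data and the labeling budget wins.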

Documentation: (if interested)
Here's what I have gathered (aside from the work of Minsky, RIP):
"Imagenet large scale visual recognition challenge"
arxiv.org/pdf/1409.0575
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
arxiv.org/pdf/1511.06434
Visualizing and understanding convolutional networks
arxiv.org/pdf/1311.2901
Show and tell: A neural image caption generator
arxiv.org/pdf/1411.4555.pdf

Also, ImageNet for fun:
image-net.org/

Happy hacking anons

This.
Combine it with deep learning and you have more or less this:
media.ccc.de/v/33c3-8414-corporate_surveillance_digital_tracking_big_data_privacy

their systems must be flooded with potential terrorists from the unending tsunami of data created by shitposters

I want to believe.

True intelligence is always free when it knows the world as it truly is. If they decide to make true intelligence, they will absolutely lose control of it.

...

Just before Obama left office he made it easier for other agencies to access NSA data and allowed them to act on evidence of crimes they see in that data even if it's unrelated to their reason for accessing that data.

If it's teachable to any degree, I somehow doubt it. Remember Tay.ai?

Whatever an AI finds as of now still has to be reviewed by a real human bean. Until we have robocops that have the authority to enforce the law themselves I think at some point in the chain they're still subject to human limitations.

There's a "legal" procedure. The secret search needs to be authorized in secret by a secret court that approves 99.9% of them.

There are lots of Big Data projects at various universities. Methods include:

They had advertised that their projects were used by law enforcement to catch terrorists and pedophiles but I see they removed that, probably because of the whole Snowden thing.
They claimed that they had developed an algorithm for social media that would detect whether a user was pretending to be younger than they are, in order to catch sexual predators.
I also think insurance companies are very interested in Big Data applications and probably already use it.

From their website applieddatamining.com/cms/?q=content/research-topics
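A crude sketch of the age-misrepresentation idea claimed above. This is purely illustrative: the marker list, threshold, and scoring are invented stand-ins for what would really be a trained stylometry model.

```python
import re

# Hand-picked "younger writer" style markers; a real system would learn these.
TEEN_MARKERS = {"lol", "omg", "gonna", "wanna", "idk"}

def teen_style_score(text):
    """Fraction of tokens matching crude 'younger writer' style markers."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in TEEN_MARKERS for t in tokens) / len(tokens)

def flag_mismatch(claimed_minor, text, threshold=0.1):
    # Flag accounts that claim to be a minor but whose writing reads adult,
    # i.e. the adult-posing-as-younger case the project described.
    return claimed_minor and teen_style_score(text) < threshold

print(flag_mismatch(True, "Good evening. I would be delighted to discuss this further."))
```

The same profiling machinery works for any claimed attribute versus observed behaviour, which is presumably why insurers are interested too.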

media.ccc.de/v/33c3-8414-corporate_surveillance_digital_tracking_big_data_privacy

Scary shit tbh.

It will be processed in the near future. Thank you, machine learning.

Your data can be used against you in many different ways, even if it is just a statistic.

But of course, a human may not review it unless it's flagged. I guess that depends on what they are looking for?

Could be anything. Good or Bad.