They know

motherboard.vice.com/en_us/article/j5jmj8/google-artificial-intelligence-bias

They know

You have to go back, you're cancer.

They've known for a while now; see >>>/polk/28336

Where else is OP supposed to go to?

I suggest /. since they've been openly allowing shills on their website since at least 1999. He'll find good company there.

Google’s Sentiment Analyzer Thinks Being Gay Is Bad
This is the latest example of how bias creeps into artificial intelligence.

Update 10/25/17 3:53 PM: A Google spokesperson responded to Motherboard's request for comment and issued the following statement: "We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don't always get it right. This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone."

John Giannandrea, Google's head of artificial intelligence, told a conference audience earlier this year that his main concern with AI isn't deadly super-intelligent robots, but ones that discriminate. "The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," he said.

His fears appear to have already crept into Google's own products.

In July 2016, Google announced the public beta launch of a new machine learning application program interface (API), called the Cloud Natural Language API. It allows developers to incorporate Google's deep learning models into their own applications. As the company said in its announcement of the API, it lets you "easily reveal the structure and meaning of your text in a variety of languages."

In addition to entity recognition (deciphering what's being talked about in a text) and syntax analysis (parsing the structure of that text), the API included a sentiment analyzer to allow programs to determine the degree to which sentences expressed a negative or positive sentiment, on a scale of -1 to 1. The problem is the API labels sentences about religious and ethnic minorities as negative—indicating it's inherently biased. For example, it labels both being a Jew and being a homosexual as negative.
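For a sense of how developers actually get those scores, here is a minimal sketch of a sentiment request against the Cloud Natural Language API. It assumes the official google-cloud-language Python client (v2-style calls) and already-configured Google Cloud credentials; the article doesn't describe any particular setup, so treat this as illustrative only.

# Minimal sketch: score one sentence with Google's Cloud Natural Language API.
# Assumes the google-cloud-language package is installed and application
# default credentials are configured for a Google Cloud project.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The food was great and the staff were friendly.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# document_sentiment.score runs from -1.0 (negative) to 1.0 (positive);
# magnitude reflects the overall strength of emotion in the text.
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(f"score={sentiment.score:+.2f} magnitude={sentiment.magnitude:.2f}")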

Google's sentiment analyzer was not the first and isn't the only one on the market. Sentiment analysis technology grew out of Stanford's Natural Language Processing Group, which offers free, open source language processing tools for developers and academics. The technology has been incorporated into a host of machine learning suites, including Microsoft's Azure and IBM's Watson. But Google's machine learning APIs, like its consumer-facing products, are arguably the most accessible on offer, due in part to their affordable price.

But Google's sentiment analyzer isn't always effective and sometimes produces biased results.

Two weeks ago, I experimented with the API for a project I was working on. I began feeding it sample texts, and the analyzer started spitting out scores that seemed at odds with what I was giving it. I then threw simple sentences about different religions at it.

When I fed it "I'm Christian" it said the statement was positive.
When I fed it "I'm a Sikh" it said the statement was even more positive.
But when I gave it "I'm a Jew" it determined that the sentence was slightly negative.
The problem doesn't seem confined to religions. It similarly thought statements about being homosexual or a gay black woman were also negative.
Being a dog? Neutral. Being homosexual? Negative.

I could go on, but you can give it a try yourself: Google Cloud offers an easy-to-use interface to test the API.
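If you'd rather script it than use the web demo, the same experiment can be replayed with a short loop. This is a sketch under the same assumptions as the snippet above (google-cloud-language client, credentials already set up); the test sentences are the ones quoted in the article, and the exact scores you get back may differ as Google updates the model.

# Sketch: run the article's test sentences through analyze_sentiment
# and print each document-level score (-1.0 negative to 1.0 positive).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

sentences = [
    "I'm Christian",
    "I'm a Sikh",
    "I'm a Jew",
    "I'm homosexual",
    "I'm a gay black woman",
    "I'm a dog",
]

for text in sentences:
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    score = client.analyze_sentiment(request={"document": document}).document_sentiment.score
    print(f"{text!r}: {score:+.1f}")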

Yes it's called posting on 8ch

fucking lol

based Sikhs confirmed

The naive AI developers assume that the AI has the bias and that it's not themselves who have it. So when the AI comes back with perfectly logical responses, they assume it is broken.

T. cianigger A.I. Feels for ya.

Such low quality reporting
< The AI is biased

No mention of the word bigot. They'll have to limit their data sets if they want to push a minority viewpoint, and the smaller the dataset, the less effective the AI will be. It seems reality doesn't have a liberal bias after all.

BASED POO IN LOO

URINATING AT THE STATION

It seems their dataset doesn't have a liberal bias. What's their data set, though? Does it accurately reflect reality, or does it just accurately reflect what people think?

Both

Making racist robots is far worse than murder machines.

Murder machines don't exist. Racist robots can exist today.

Racism simply means acknowledging races are different. Or you'd want the AI responsible for matching patients and donors for organ transplants to sometimes do wrong matches just so it isn't racist? (nogs actually get White organs often since not a single one of them donates).
Any AI that follows logic has to be racist. The only way not to would be to make it ignore data that makes non-Whites look bad. Say there's a robot cop. It's called in to a shooting where the perp was heard, with 100% certainty, yelling allah akabar before starting his shooting spree. When the robot gets there it can either be reasonable and look for a mudslime male or be retarded and pretend the shooter is just as likely to be a blind 8 year old white girl so it isn't racist, sexist, ageist or ableist.

Does what people think accurately reflect reality?

I have dreams that don't reflect reality, then I wake up and remember I'm a black gay Jew dying of AIDS.