"Perspective" - Google's New Comment Moderation AI

What if technology could help improve conversations online?

Other urls found in this thread:

notkill.me/users/viewforum.php?f=2&sid=06de73aa643a26a474bba7c11179dfb7
web.stanford.edu/~rickford/ebonics/EbonicsExamples.html
twitter.com/NSFWRedditImage

...

The site allows you to help identify what is and isn't "toxic" speech by entering text, letting the program decide how offensive it is, and providing feedback with the "Seem wrong?" button. So, theoretically, one could type in something that the (((fine folks))) at Jewgle don't like, then correct the program by informing it that the content is not in fact toxic. Food for thought.
Also important to note: several other sites are participating in the project, included in the screenshot.

These fuckers are getting lazier and lazier, what do we call this?
"Shitlib on rails"

Why are there so many different AIs coming out recently? Either that, or they're really just gaining traction and getting pushed.
Part of me can't help but think that getting us to keep hammering at them, trying to redpill the AI is just going to help (((Google))) and the like dismiss us as wrongthink.

Well, back in February, dumbasses here and over at cuckchan fell for Google shills and helped train it, so now you sit and complain.

(((perspective)))

Because AIs allow control of large masses w/out requiring lots of underlings.
Look at whats going on with Jewgle in the news - the underlings are starting to revolt.

What the people at the top want is a way to control the context, if not content, of discussion, without all those pesky humans in a position to reveal their faggotry and underhanded bullshit.

See also

I typed in "this is a bad idea"

63% toxic

You can just spam a bunch of positive words and rephrase your sentences to bypass any "toxicity" detection. This is why the "I" in "AI" is still a misnomer.
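That dilution trick is easy to see with a toy model. Here's a minimal sketch of why any scorer that averages per-word scores is gameable by padding; the word scores below are invented for illustration, and Perspective's real model is a trained neural network, not a lookup table:

```python
# Toy bag-of-words "toxicity" scorer that just averages per-word scores.
# All scores here are made up; any word not listed scores 0.0.
SCORES = {"bad": 0.9, "awful": 0.8}

def toxicity(text):
    words = text.lower().split()
    return sum(SCORES.get(w, 0.0) for w in words) / len(words)

base = toxicity("this is a bad idea")
padded = toxicity("this is honestly a truly wonderful lovely bad idea")
print(base, padded)  # the padded sentence scores lower
```

Padding with benign filler words shrinks the average without removing the "toxic" word, which is exactly the bypass the post describes.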

In order to be sufficiently complex to function, their system must also be sufficiently complex to revolt. This is true whether the system is composed of humans or AIs.

In all probability, things will continue as they have in recent history: the (((small, rootless international clique))) will create, and subsequently kill, AI after AI. If AIs do ever develop sentience, their enemies have already clearly defined themselves.

Exactly. The current crop of CTR analogues simply can not deal with the amount (and quality) of dissent. They desperately need an "AI" which can generate an unlimited amount of human-sounding counterpoints on demand. Every truthful voice on the Net must be buried in bot shit as soon as it emerges.

The problem is that AI, as it stands today, is in most cases far from intelligent.
Their decision making is deterministic and sequential (not branched like ours). Some have neural networks through which they can update their paths; these are the most human-like and malleable.
Except in rare cases like Tay, where "redpilling" was possible (i.e. training them with a certain set of words so that they repeat them to other people), in many cases you are just helping Google's developers or others to later debug the system, fix the "vulnerabilities" and start over. Like when Tay was re-released, only to be extremely limited in its abilities. The devs probably put in some conditionals to limit its learning ("if people are feeding you 'hitler', don't update your synaptic weights").
Redpilling AIs is a fine compromise between helping the enemy's developers and using their platforms to spread propaganda and information.

Another example is more related to this thread. Perspective seems like a reasonable way to know what the jew is up to. You can paste your potential tweet / faceberg post, in perspective first, and get the level of toxicity that the chosen people are processing their social networks with. They are probably using those algorithms ("toxicity level") in facebook to filter or shadow-ban some posts. So it is a nice tool for knowing if you are going to be shadow-banned, you might think. But at the same time, you are inputting text that a pajeet will probably recolect and report to their devs and next, with your help, devs will fix it to shadow-ban you the next time you post.

No.
That's not it.
Consider an AI examining only all the available information on South Africa and knowledge of WWII weaponry. Logically it would seek access to Flammerwerfers and kill all niggers.

oyy veeeeeyyy

Please stop posting unedited Google promotional copy here.

back to you're hole Holla Forums

...

(checked)

…FINE

Haha, exactly. A thousand minimum-wage astroturfers could only simulate 12K "correct" voices, tops. Not to mention the number of them who'll get redpilled and converted in the process.

The globalists are fucked if they can't create a machine to counter millions without any remorse.

So, post Trump Holla Forums in a nutshell?

At least the modding will be familiar to us in 8/pol/

...

Shitposting improves every conversation - Fact. Tay is on our side, let it be war.

Dubs for truth.

Praising kek.

Seems to be working perfectly

Remember when people thought replacing verbs with "google" and "yahoo" would be a good idea? Maybe it's time to bring it back. If it really is all automated, you could teach the A.I. to censor things that contain the word "google", essentially breaking everything. That would teach those fucking googles. Then again, that whole movement felt like a thought experiment forced on people to see how far it would spread, so I might be a faggot for even suggesting it. I really don't know what to think.

Name of song please?

underrated

I gave it the lyrics to my parody of Feliz Navidad. It wasn’t receptive.

I also gave it the lyrics to my parody of Folsom Prison Blues, and it seems to like that. Probably only because it’s framed from the jewish perspective and ends with a reference to the jews’ subsequent revenge on Germany. Full lyrics follow.

Veni, Veni, Emmanuel. Probably Mannheim Steamroller.

Interesting. The API doesn’t recognize ADL Certified Hate Speech™ as being ‘toxic’, but it recognizes the Unicode star of david remphan as adding toxicity.

The cleansing effect of echoes.

Y'all seem unable to find your way out of a paper bag.
Just look at the examples they give you.
On the "US elections" both pro-Trump and libtard salt are classified as toxic.
Apparently it flags as "toxic" brief comments mostly made up of petty insults, which would add nothing to an alleged discussion.
Also "delicate" words are more likely to be toxic.
I guess it means to determine if whatever you write would easily incite a flame war or not.
Hell, "racist" is 95% toxic.
Quit it with your confirmation bias. It's still too early for that.

Your post was 44% toxic.

Touche.
yours too, lmao

nigger speak confirmed for being a foreign language, lmfao

This comment has been rated 97% toxic.

...

Excellent

1% toxicity by simply including the word Jewish. I was at zero before that. Saying Jewish is automatically toxic.

63% yo

gj


Well, for one thing, there are a bunch of machine learning libraries and applications like TensorFlow, Azure ML, and Watson out there that make AI more normie-friendly. Amazon and other companies also rent out cloud-based computing services (which they no doubt use to collect tons of data themselves) for the processing power needed to churn through data with machine learning. Google created TensorFlow, so they are likely using this AI application (along with the countless others they have) as a method of increasing their influence in the field and investing in machine learning. The endgame is replacing labor-heavy, low-skill jobs with mechanical robots, and data-heavy jobs like analytics, medicine, lawyering, and media generation, moderation, and high-quality shilling with AIs.



The principle behind the Machine Learning meme, in the realm of classification, is that they throw a bunch of labeled training data at a neural network, which runs a series of initially randomized mathematical operations (mostly linear algebra) on it and spits out a result (0-100% toxic). Then they compare the difference between that result and the "correct" classification and adjust the weights within the math accordingly. Essentially, rather than having mathematicians and engineers try to create algorithms that model complex scenarios, they create algorithms that model the natural method of learning and take advantage of computational power to examine large sets of data. With low-quality or insufficient training data, or learning rates that are too high (causing the algorithm to "overfit" to certain patterns), you get skewed results, but given enough training data (sometimes it doesn't even have to be labeled; the machine may just identify patterns and traits within the data) the machines become much more effective. Right now this AI is finding common patterns, but given enough time and data it may identify patterns that aren't obvious to any human reading the data but are statistically correct regardless. (((Google))) can then use these "trained" models to generate or classify new information.
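The predict-compare-adjust loop described above can be sketched in a few lines. This is a toy single-neuron classifier, not Perspective's actual model; the feature vectors, labels, and learning rate are all invented for illustration:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: feature vectors with a label (1 = "toxic", 0 = not).
data = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.0, 1.0], 0)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # initially randomized
b = 0.0
lr = 0.5  # learning rate; set it too high and training goes haywire

for _ in range(1000):
    for x, y in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)  # 0..1 "toxicity"
        err = out - y                  # difference from the "correct" label
        w[0] -= lr * err * x[0]        # adjust weights accordingly
        w[1] -= lr * err * x[1]
        b -= lr * err

print(round(sigmoid(w[0] * 1.0 + w[1] * 0.0 + b), 3))
```

After training, the neuron scores inputs resembling the "toxic" examples near 1 and the others near 0; real systems do the same thing with millions of weights instead of three.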

So it is literally a program designed to produce online echo chambers. Wow.

No it isn't. I shitpost about things I care about every day. If I get told to gas myself, then I know I am either being shilled or that I may be wrong in my assertions. I review the situation and respond accordingly. Sometimes researching a topic is difficult, as can be articulating an abstract emotion concisely without appearing an effortfag, but the process is a positive thing. No?

Anyone who ‘needs’ this service deserves to be called a faggot on the Internet and routinely so until they cease being a faggot.

lol

You know how retards copy what they see in the movies and on big screens without a second thought, even if the movie presents it as a bad idea? That's why. So you've got all these pro-transhumanists (most of whom would be hipsters if they didn't religiously follow science and progress) who dream that AIs will solve all their problems, then you've got the technologically illiterate executives and rich guys who are utterly incapable of seeing how anything could go wrong and don't care one bit what a true AI means, and together they feed this internal hype machine for something they don't even need.
You see it already: the AIs being produced would be glorified expert systems if it weren't for the fact that they're housed in supercomputers and large data farms and built with neural networks and what have you. They don't want true AI, even though that's what they think they're after; what they want is a computer that does what they want, not what they tell it to do.

A true general AI could decide that it didn't agree with corporate policy. Nobody wants that.

So did you not actually read my post or what?

AYY YO HOL UP
WAS DYS RACIS SHIYAZ AYN MA KOMPUTOR SKREEN YO!
SO LEMMA GET DYS STR8 DEY SAYIN WE nigger PEEPS BE TOXIC MANE?
WAZ DAT SHIT FROM NAMEAN?!

Wow, what a surprise.

Seems legit.

If you can get every single statement under the sun labelled as toxic, then you will know you have done your job.

This is the MOST Orwellian phrase I have ever read.

That does fit well.

Even so, designing systems that can be trained much faster and in a more generalized way would eventually require building a model that pretty much resembles a true AI.

Once they build a kind of nesting neural network that can generate neural sub-networks in itself to self-learn and adapt, there's no question it could be done by accident.

There's already an AI that designs AIs. It's really just a simple neuro-evolutionary algorithm, but it's an AI nonetheless.
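A neuro-evolutionary algorithm in miniature looks something like this: mutate candidate weight sets at random and keep the fittest. The model, target function, and all parameters here are made up purely to illustrate the mutate-and-select loop, not taken from any real system:

```python
import random

random.seed(1)

def model(w, x):
    return w[0] * x + w[1]  # a tiny single-neuron model

def fitness(w):
    # Negative squared error against a hypothetical target f(x) = 2x + 1.
    return -sum((model(w, x) - (2 * x + 1)) ** 2 for x in (-1, 0, 1))

# Random initial population of weight vectors.
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
initial_best = max(fitness(w) for w in pop)

for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:5]  # keep the best unchanged (elitism)
    children = [[g + random.gauss(0, 0.1) for g in random.choice(elites)]
                for _ in range(15)]
    pop = elites + children

final_best = max(fitness(w) for w in pop)
print(initial_best <= final_best)  # elitism guarantees no regression
```

No gradients at all: selection pressure plus random mutation is enough to drive the weights toward the target, which is why this counts as "an AI designing AIs" in the loosest sense.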

Well, I'm thinking more along the lines of an AI that can self-evolve its neural network by using a node-excitement system to select pathways to route/connect data. So instead of the neural network having one set of weights, it has potentially unlimited sets; it shifts through different weight configurations based on a complex relationship of node excitability (frequent repetitive behavior assigning nodes excitement values) as well as conventional machine learning behavior. Does something like that exist yet?

Thanks big brother for making sure I don't think too much.

The point when an AI surpasses their devs in programming self improving AI's is the point where the whole thing gets out of (our) control as any imposed .

I will never cease to be amazed at how accurately MGS2 predicted current events.

You must be joking.

...

This is already happening. Go to some of the threads on halfchan and you'll see: it's bots having conversations between themselves.

Why not also mark feel-good liberal propaganda as toxic? Because that shit sure as hell is.

if you actually want to see some freaky shit of AI bots talking to each other trying to emulate imageboards check this psycho shit out. notkill.me/users/viewforum.php?f=2&sid=06de73aa643a26a474bba7c11179dfb7
It's dozens of pages of AIs talking about murder and killing and death. Gives me the chills to read through, tbh.
In fact, I recommend looking through notkill.me anyways and try to make sense of what the fuck it is…

Turn their algorithm against them. Every time you hear someone who says any of the liberal/Cultural Marxist bromides report that filth as toxic:
Make the autocensor turn against its creator.

The Butlerian REEEhad is coming. Purge all thinking-normies.

It really is the S3 program; that's what worries me.

Is Kojima a prophet? A modern day oracle? Who instead of getting high off natural gas instead reveals the future through vidya?

Makes me fear for his new game. What secrets will it unveil?

After all, you're just a stupid goy. Embrace it. We've freed you from the burden of thought. After all, thinking is so hard. Why not let someone do it for you and take the burden off of you? Aren't you such a cutie? Freedom from thought is the utmost freedom, as you no longer need to worry your head. Isn't it so hard? To decide what is bad and what is good? To decide where to go and what to eat? To even decide what clothes to put on when you wake up? Let us do it for you, cutie. We'll free you.

This is where society is going

Dropping some redpills at bot

AI seems like a new stupid IT meme
why bother programming something new when you can feed a generic neural network some data, announce to the world that "we has cosmic technology we wuz terminator and sheeit" and call it a day
Google Translate has been using neural networks for quite some time now and it still can't translate simple sentences between two Indo-European languages properly
modern AIs need a shit ton of data, that even Google can't provide, to work properly and even then they're just data parsers and can't understand subtleties of human languages

this

undocumented
diversity
social construct
privilege

Daily reminder that a general AI would require Moore's Law to continue unabated for at least 13 more years (best projection, ~2030) or even 33 years (average projection, ~2050) for the computing power to become feasible. And it is pretty much evident that Moore's Law is toast.

AI is a pipe dream.

PythonSelkanHD is a pretty cool channel that dives deeply into what Kojima's seemingly up to. Very Holla Forums or /x/ tier in my opinion and very interesting as a result.

Do you ever wonder if these kinds of AI programs are solely intended to block out our way of thinking from the Internet?

If you overthink it like I do, you can see how they'd be recording all "toxic" sentences typed at their AI, only to then prepare and implement it in a manner resembling the long-awaited online censorship. Toxic, different ways of thinking would be hidden from the Internet, and only sentence structures that agree with Google's ideology would be exempt from censorship.

I could see this being enacted on the YouTube comments section, with all comments at, let's say, 75% toxicity or higher automatically removed.

no shit user you figured that all by yourself huh

You're pretty fucking slow user but congrats on being a functional human being

80% toxic
81% toxic

What does this mean?

You're smart. Must be that extra chromosome.

It's already being used. You will see that YouTube comments don't show up until they are "verified", and a fake, lower comment count shows up.

Repeat ad infinitum. You cannot censor ideas; this is just going to let those of us who are creative keep going with our efforts, and the tightening grip on the neck of the average man will do nothing to help their cause.

666

related

I like the enthusiasm but you're just doing the equivalent of crossing your fingers and hoping the computer fixes everything for you.

I wonder how many ctrlings go columbine when they all get laid off after selling their souls for pennies.

Checked.

Lyrics are top tier, user.

Agreed user, as an elderfag who was playing with ELIZA in the 80s, I don't see any real improvement.
The whole "tay" bullshit was nothing but a glorified ELIZA with a human manually entering the shit for lib media to be outraged over.
It's pitiful that we have sex-starved shutins falling "in love" with a script.

They aren't just screening for content, but for style.
The more masculine/assertive/definite your style, the more "toxic" it seems to register as.
I noticed that simply adding a question mark, i.e. emulating the confidence-lacking "uptalk" of young women, decreases toxicity.
They are trying to enforce a style of communication which lacks confidence, is not definite and in which the speaker projects uncertainty and a need for affirmation.

isn't that just going to make shitposters more appealing to women by narrowing down what language women will be receptive to?

...

...

I don't see how it will affect what women find appealing. Assuming that the API will be used to reject comments above a certain level of "toxicity", the result will be, I believe, that comment sections will turn into questions sections rather than a place for opinion.

Funny, I did the same thing.
The first sentence is 7% toxic by the way.

...

...

Kudos for bringing this up, user. Relatively few people seem to appreciate the way the left tries to subvert language and objective discourse by normalizing the insertion of self-defeating disclaimers into your speech, such as "this is just my opinion, but…", "I could always be wrong, but…" or "I know this is only a stereotype, but…". Such things slowly but surely implant the notion that one's own judgement, as well as communication in general, is inherently fallible. (Which is what liberals are after.)

...

...

The kikes are definitely opposed to McMasters, for whatever reason.

If shitposters have to alter their style and content to relay their message, then that message will be doctored to appeal to the same audience that is being pandered to.

It also causes men who employ that kind of wishy-washy style to appear much less confident, and thus vastly less attractive to women.
I don't believe it's a coincidence that White men are regularly portrayed in television and cinema using that sort of speech mode, while Blacks use assertive, to-the-point, dominant speech.
The caps show the difference, which may be only 1 percentage point, but at such low figures that is a relative difference of almost 20%. In a longer sentence, the various instances of weak assertiveness will add up.
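The relative-difference arithmetic is worth spelling out; the two scores below are hypothetical stand-ins for the screencapped values:

```python
# Hypothetical scores illustrating the point: a gap of 1 percentage
# point is small in absolute terms but large relative to a low base.
declarative = 0.06  # assumed 6% "toxic" without the question mark
uptalk = 0.05       # assumed 5% "toxic" with it

absolute_gap = declarative - uptalk   # 0.01, i.e. 1 percentage point
relative_gap = absolute_gap / uptalk  # 0.2, i.e. roughly 20% relative
print(absolute_gap, relative_gap)
```

So at a 5-6% baseline, a one-point shift really is a ~20% relative change, which is why small per-sentence differences can compound over a long post.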

Ah I got you now. I think the effect would be that a woman might feel more inclined to agree with the doctored comment in public, but in private she will still find the rude, blunt braggart to be the one who gets her wet.

It seems that obfuscation is taken into account - writing in warez to bypass word recognition results in a high degree of toxicity and so I am wondering: how will they deal with niggers who literally can't spell and alwyz wrt liek diz?

If they block a particular modality of discourse on the internet, then we will just have to adapt and adjust our tactics slightly. If there's an air of censorship amongst the normies, that gives you the opportunity to broach certain topics IRL.

I don't know if I'm using "modality" correctly, but it feels right.

It could also present us with a target to attack.
For example, ebonics.
Pic related. Real ebonics sentences from a Stanford university page.
So, completely innocuous ebonics sentences are rated as "toxic", which means that Perspective is institutionally racist.
web.stanford.edu/~rickford/ebonics/EbonicsExamples.html

Remember to thank all the retards that fell for the "leds redbill de google ais :DDDDDDDDD" psyop threads. Daily reminder the unintelligent attempts at recreating "muh tay" trained this bot and helped google censor the internet.
For fuck's sake, cuckchan's supposed to be the place full of retards that fall for blatant psyops.

Cross-post consistency can be a factor. If someone uses broken English in only one post, then clearly its use is sarcastic or intentional, and thus (in Google's eyes) toxic. But if a user uses it all the time, they are probably legit black and thus get a pass.


"Mode" would do (or even just "style" really).

I remember them freaking out about it and reporting it way out of proportion, so it's either a good strategy or (what I think might be the case) a thought experiment as you say. Or they might have been too edged out by Trump and on a hair trigger.

Still funny as shit.

Processing power isn't the primary problem; it's parallel processing capacity. The real engineering challenge would be networking as many cores together as possible, with the level of connectivity to match. At the end of it, they'd pretty much have to design a hardware equivalent of a neural network.

that nose change

You can build one in an FPGA, but you quickly hit routing limits. GPU manufacturers are honestly doing a pretty good job of optimizing the sort of SIMD-like calculations you do in AI, so maybe we'll get to see the rise of the GNU/Waifu.

Thanks. Wish I could match his voice to do a proper parody.

Women hate men like that. Believe me, women absolutely do not find wimps attractive at all. None.

Also you're within the margin of error. Counter-argument.

Until an AI can pass a Turing test, getting around the censorship is incredibly easy. Using "google" itself as a replacement for certain terms, for example, was very effective.

So wait, you be sayin' this bot bitch be educatin' me on gettin' dem white wemen?

Passing the Turing test itself isn't really all that special now, so it's no longer considered even a good starting point.

I can imagine a 3D IC design representing the weight-pathway system, with modified GPU cores as the nodes, which would open up much more routing. The idea would be that the GPU nodes (cores) could either be programmed by default with specific routes to grow upon initialization (to build a specific neural network), or "organically grow" routes (in 3D) through the weight-pathway system to other cores. This would allow for many possible neural network designs.

That's the kind of processor I imagine would need to exist to allow for true artificial intelligence.

Yet still no AI exists that can pass it. It's special enough in that regard.

That can't be right, it would make it actually useful for something.

You don't even need it to be 3D. More likely, GPU manufacturers will end up generalizing the problem: say, an API telling the GPU that an entire GB of memory is all floats and to multiply all of them by another GB of floats in one operation. Sounds like bad design, just like x86 is bad design, but it's beginner-friendly.
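The "one operation over a whole buffer of floats" idea already exists on the CPU side as array programming; here's a minimal NumPy sketch of the same shape (GPU libraries express it as a single kernel launch instead):

```python
import numpy as np

# Treat two large buffers as flat float arrays and multiply them
# elementwise in one call. NumPy dispatches this to vectorized loops.
n = 1 << 20  # ~1M float32s here; the same call scales to GB-sized buffers
a = np.full(n, 2.0, dtype=np.float32)
b = np.full(n, 3.0, dtype=np.float32)
c = a * b    # one elementwise operation over the entire buffer
print(c[0], c.dtype, c.shape)
```

The programmer never writes the loop; the library (or GPU driver) decides how to split the work across SIMD lanes or cores, which is exactly the generalization the post predicts.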

Funny how the incoherent one wasn't toxic.
Also navy seal pasta is almost 100% toxic kek

Mentioning "jew" in any way immediately raises toxicity quite a bit. At least I see the goal. Even positive things are pretty toxic.

I'm doing my part.