AI has taken over. Censoring the written word and images/video is trivial now. It's not just a matter of site operators 'deleting' content: negative feedback can be generated in huge volume, in a believable way, with AI in order to turn people off to certain ideas.
People have an instinct to fit in. Their opinions in the moment can be, and very often are, influenced by simple expressions of opinion like a one-sentence comment, especially when many such comments all agree.
This is the cutting edge of the media. Instead of merely presenting ideas and implying their popularity, ideas can now be presented with their popularity not just implied but asserted, backed by fabricated evidence. The fact that people emotionally identify with expressions as simple as snippets of text is being exploited for enormous leverage over their minds.
It's as simple as this: people see that they can post anything they want online; they see their expression appear in the public setting; they see people they know respond, and they can respond in turn.
This builds the very deep expectation that anything they see in the same place, or a similar place, was made by a person more or less like them.
But that isn't necessarily true in every case.
The implications of AI for society are complex, but the core of it is simple: anyone with enough computing resources can influence public opinion through the internet in an extremely powerful way, one that exceeds any other technique in history by far.
How can you deny that this is at the very least a possibility? The technology and resources have existed for a long time, and such practices would be extremely profitable. There is no possible argument against these facts.
Basically, the internet has become, and is increasingly becoming, a thought-feedback machine that puts down "unpopular" ideas and shoves them into places with no audience.
Let me just summarily counter some obvious objections:
1) The straw-man examples that Microsoft has been advertising and posting all over the internet (especially on Holla Forums/4chan), the "nazi chatbot" and the "pretty awful image interpreter," will be cited. "THE CUTTING EDGE IS NOT THAT SHARP," they will say. But these examples are not the cutting edge. They are toy versions of the real thing, allotted a trivial amount of computing power and opened up to the public in order to guide perceptions of the state of the art.
The examples brought forth by universities are closer to par, but their funding and resources are astronomically outweighed by those of private business and government. You have to take that into account when estimating the state of the art.
2) CAPTCHAs and similar tests are easily solved with computing power that is not expensive for large companies or governments; they serve only to stop people with less computing power from doing what people with more computing power already can. Such challenges are generated algorithmically, so they can be partially cracked algorithmically, and the remaining specific cases can be analyzed with AI to produce an approximate solution that is likely to be correct.
3) The most obvious path to refutation (what the shills would say first) is "people think for themselves, they are very smart!!!!!!!"
Maybe in isolation, but it takes a strong will to go against the grain.
We all want to fit in and will go to great lengths to do so.
If you really want to make a place for yourself, you need to know when to say 'when'.
You know it.
This point is the razor. Some people will delude themselves; some will admit their own flaws. Any ideas on how to get around this problem are absolutely critical to the spread of the truth. So if you can, make a point of expressing your thoughts on it!