By Leonid Bershidsky
They should be regulated in the same way as TV stations and newspapers, which vet the information they publish.
There's something in common between the amazing story of "Nicole Mincey," the pseudonymous Twitter user with 146,000 followers who was retweeted by President Donald Trump and then disappeared overnight along with a few other online personae, and a recent prank by a Berliner frustrated with his inability to get Twitter to remove hate speech. The common element is the obvious solution to both problems, which rarely surfaces in discussions of trolling, fake news and cyberbullying.
Social networks should be obliged to ban anonymous accounts. If they refuse to do so voluntarily, government regulators should force the issue.
Nicole Mincey was apparently a fake African American identity that helped sell Trump-related merchandise online. It was part of an enterprise supported by pro-Trump social media posts from several fake accounts representing people whose backgrounds, looks (illegally used stock photos, actually) and views might appeal to potential buyers. The whole scam blew up after the Trump retweet prompted the owner of the stock photos to look into the matter. But how many other pro-Trump and anti-Trump accounts on Twitter and Facebook are actually fake? How do we figure out which of the famous internet echo chambers are even real? Is there a way to make sure real people are not regularly misled and confused by the purveyors of fake opinions who are just trying to sell a bootlegged MAGA cap?
The German story also involves a retweet by a top government official -- Justice Minister Heiko Maas. In a video Maas tweeted this week, Shahak Shapira, an Israeli-born satirist and musician living in Berlin, explains that he tried to flag about 300 tweets violating Germany's hate speech laws to Twitter, but the few replies he received alleged that the posts didn't go against the platform's policy. Shapira then traveled to Hamburg, where Twitter's German office is located, and spray-painted the tweets on the pavement in front of the office building. "Jewish pigs," one said. "If you hate Muslims, retweet," said another. The accounts that tweeted this used pseudonyms, of course.
Germany has a recently passed law obliging social networks to delete hate speech within 24 hours of its being reported. With the link to Shapira's video, Maas also tweeted a report on a government-funded study showing that Twitter deletes only 1 percent of hate-speech posts after they're reported by users, while Facebook erases 39 percent of such posts and YouTube 90 percent. All three platforms delete almost 100 percent of the posts after being contacted again via e-mail. "#HeyTwitter, that's not enough!" Maas wrote.
Both with Mincey and with the racist tweets in Germany, it took particularly persistent users to draw attention to spurious and offensive content. The networks, though they profess a willingness to fight fakes, cyberbullying and other abuses, aren't particularly proactive about it, and they have a plausible explanation: They cannot police their vast user bases, and they need a lot of help.
But there's an easy answer to that defense. Neither "Mincey" nor most of the tweets Shapira sprayed on the pavement in Hamburg would have been possible had Twitter required identifying information from users before allowing them to create accounts. The platform's anonymity -- its privacy policy specifically allows pseudonyms and multiple accounts -- gives bigots, swindlers and bullies a sense of impunity. It's not clear what else it does for users; after all, the accounts with the most followers -- those of public personalities and journalists -- are, as a rule, verified by Twitter. People don't attach much value to anonymous opinions. They may appreciate an account that specializes in a certain kind of content, or even an interesting bot -- but what would be the harm in identifying their creators?