Four steps we’re taking today to fight online terror
Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.
While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.
Today, we are pledging to take four additional steps.
First, we are increasing our use of technology to help identify extremist and terrorism-related videos. […] We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.
Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. […] We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants. This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists.
Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.
Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme […]
We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter—to establish an international forum to share and develop technology and support smaller companies and accelerate our joint efforts to tackle terrorism online.
Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right. Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.
At this point, it should be fucking clear, that they aren't actually targeting any ISIS or whatever recruitment videos, but essentially everything they and their experts consider "hate speeech".
Expect a massive takedown of everything on Youtube and other Jewgle services, that goes against the MSM, against the chosen narrative, or is generally considered fake news or wrongthink by the left.
DO NOT TRUST THE JEWGLE
DO NOT USE JEWGLE SERVICES
DO NOT HAVE A JEWGLE ACCOUNT
or if you really need to, ffs never give them any of your real information