In the aftermath of several deadly mosque attacks within the span of a few weeks, and the hate-filled content shared across social media in their wake, it’s time Facebook, Twitter and YouTube better monitored their content and put measures in place to stem this influx of hate.
Live streams of two deadly mosque attacks in Christchurch, New Zealand remained online for hours after the attacks were carried out, specifically on Facebook and YouTube, from which they were then shared across other social media platforms. As The Conversation reported, “the quick and seemingly unstoppable spread of this video typifies everything that is wrong with social media: toxic, hate-filled content which goes viral and is seen by millions.”
It’s time that social media platforms were held accountable for better monitoring of such hateful content. While most of them (Twitter, Facebook, YouTube, Google, Snapchat) are part of the European Commission’s #NoPlace4Hate program, which commits them to removing hateful content within 24 hours, this is not enough to combat the influx of hate, and stricter measures need to be put in place immediately.
Social media platforms could do this in a few specific ways: designing better detection tools, enabling easier, manual ways to take down hateful content, and limiting the ability to share such content.
This is an industry-wide problem that needs to be addressed as a whole. It’s time Facebook, Twitter, YouTube and the many other social media platforms stepped up and invested in better content moderation to stop hate-filled content from spreading online.
Sign our petition and tell social media platforms to stop the spread of hateful content online.