
UK Lawmakers Attack Social Networks for Failure to Remove Illegal Content

By Çiler Ay on Tue, 30 Jan 2018

UK lawmakers recently accused Google, Twitter and Facebook of allowing racism and hate speech to spread on their platforms. The accusations come from a parliamentary committee report, which alleges that the networks' sluggish removal of abusive and illegal content shows they prioritize profit over user safety.

The Claim: Removing Abusive Content on Social Media is Too Slow

The Home Affairs Committee noted that the networks' size and global reach give them a responsibility to abide by the law, a responsibility they have failed to meet. The committee pointed out, for example, that YouTube copyright strikes are carried out almost immediately, while hate-inciting videos remain online (and continue to be monetized). In some cases, the committee found that terrorist recruitment material remained hosted on the networks even after it had been flagged. When the networks have the resources to remove certain kinds of content swiftly, lawmakers say, they should likewise ensure the content they host complies with the law.

Another criticism the report raised is that the responsibility to report harmful content falls on users; according to lawmakers, the big social networks effectively outsource their moderation to the users and law enforcement agencies monitoring the sites for extremist and illegal content.

This isn't the first time lawmakers have had harsh words for the big networks. Months ago, Germany's Justice Minister Heiko Maas proposed fines of up to 50 million euros for networks that fail to remove illegal content. According to Maas, Twitter deleted only 1% of flagged illegal content, while Facebook deleted just 39%.

How The Big Networks Try to Protect Social Media Users

Just days after news of the Home Affairs Committee report broke, Facebook announced it was adding 3,000 new members to its community monitoring team. Citing the need for a more streamlined service, Facebook is working to make reporting content faster and easier. Zuckerberg has also expressed plans to integrate AI into the company's content moderation efforts to catch problematic content.

Twitter’s system of filtering out harmful or illegal content is reactive; users can self-moderate by defining keywords they don’t want to see (and auto-hiding tweets from accounts with default avatars), but it doesn’t stop there.

A few months ago, Twitter cracked down on abuse by placing users in a temporary time-out for swearing at verified accounts. Users originally assumed this was an automatic process, triggered by the use of politically incorrect or abusive language, though Twitter confirmed it uses a mix of user reports and its own team to aggressively monitor content. When it comes to permanent bans or suspensions, though, the platform still relies on user reporting rather than its own team seeking out illegal content.

On YouTube, responsibility again falls on users: viewers flag inappropriate videos, while users and content creators alike report abusive comments.

How AI Can Protect Social Media Users

Social networks can and should integrate systems that don't put the onus on users to report content. Not only does reporting take time, but by design it requires users to be exposed to the very abusive content they should be protected from. With automated comment moderation powered by artificial intelligence, abusive content can be removed almost immediately, before users ever see it.
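
As an illustration, here is a minimal sketch of what pre-publication moderation could look like. The classifier below is a toy stand-in trained on a handful of made-up examples (a real system would use a large model trained on millions of labelled comments); the point is the flow: every comment is scored before it is shown, rather than after a user complains.

```python
# Toy sketch of proactive comment moderation: score each comment
# BEFORE publication instead of waiting for user reports.
# A tiny scikit-learn classifier trained on made-up examples stands
# in for a production-scale machine-learning model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "have a great day", "thanks for sharing", "interesting point",
    "you are an idiot", "i will hurt you", "get lost, loser",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = acceptable, 1 = abusive (toy data)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def moderate(comment: str, threshold: float = 0.5) -> bool:
    """Return True to publish, False to hold the comment for review."""
    p_abusive = model.predict_proba([comment])[0][1]
    return p_abusive < threshold

for text in ["nice write-up, thanks", "you are an idiot"]:
    print(text, "->", "publish" if moderate(text) else "hold for review")
```

Because the check happens in the publishing path, an abusive comment is held back without any reader ever being exposed to it.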

Maas also proposed that social networks offer a 24/7 service for moderating their platforms. While round-the-clock moderation is tough for human teams given the volume of content posted online every minute, automated comment moderation can provide that protection easily.

Our artificial intelligence comment moderation tool helps brands and networks big and small keep their communities safe. It works across your social media and digital platforms, so you can provide round-the-clock protection for your fans with fast, simple integration. Developers of social platforms can integrate our tools to keep their entire networks free from spam, abuse and illegal content within a minute of it being posted.
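
To make the integration idea concrete, here is a hypothetical sketch of how a platform might call a moderation service from its comment-posting path. The endpoint URL, request payload and "abusive" response field below are illustrative assumptions, not a description of our actual API.

```python
# Hypothetical integration sketch: a platform-side hook that asks a
# moderation API to score each new comment before it goes live.
# The endpoint, payload shape and "abusive" response field are
# illustrative assumptions, not a real vendor API.
import requests

MODERATION_URL = "https://api.example-moderation.com/v1/score"  # placeholder

def on_new_comment(comment_text: str) -> str:
    """Publish the comment only if the moderation service clears it."""
    resp = requests.post(
        MODERATION_URL,
        json={"text": comment_text},
        timeout=2,  # fail fast so posting stays responsive
    )
    resp.raise_for_status()
    if resp.json().get("abusive", False):
        return "hidden"     # held back before any reader sees it
    return "published"
```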

With more and more content posted each day, and new platform features for expressing ourselves, the problem of moderating content will only grow more difficult without the proper tools. But by embracing AI and machine learning, social networks can stay ahead of the curve and provide a safe place for their users.

Tags: Social Media, Content Moderation