
Instagram uses AI to filter profanity on comments

By Lena Harris on Tue, 30 Jan 2018

In a recent Wired profile, Instagram co-founder Kevin Systrom discussed his desire to rid Instagram, and the internet at large, of abuse. Months earlier, Twitter co-founder Ev Williams expressed similar concern about the toxic nature of online discussion. As social networks grow in popularity, they face ever-larger volumes of comments to moderate in order to keep their platforms safe for users.

But does social media really need to be cleaned up? And if so, how can platforms do it without infringing on users' freedom of speech?

Why It’s Important to Block Hateful Comments

Scrubbing a network or digital community clean of abusive comments matters both to users and to the brands that call those networks home. For users, toxic comments stifle conversation: cyberbullying is a common tactic trolls use to intimidate others into silence. The inability to block hateful comments can also harm users' mental health. Large networks and communities therefore have a responsibility to protect their users.

Cleaning up comments is also necessary for the health of the brands that promote themselves on a network, which in turn affects the health of the network itself. Failing to clean up Instagram comments, for example, would dissuade brands from paying to promote posts: why spend ad dollars on negative engagement? This makes reputation management tools essential for networks like Instagram, YouTube, and Facebook.

Finally, spam is prevalent across social media platforms and often slips past their native filters. Spam is dangerous because it can lure users into handing over their data, deliver malicious software to their devices, or simply flood discussions with off-topic comments generated by bots.

What Kevin Systrom and Other Social Media Leaders Are Doing

Social networks have been making great strides in protecting their users from abuse. Instagram, for example, adapted DeepText, Facebook's text-understanding software, to proactively identify and block hateful comments. Kevin Systrom and his team trained it on real Instagram comments to moderate according to the platform's community guidelines.
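DeepText itself is proprietary, but the general approach it represents, training a classifier on comments that humans have labeled against community guidelines, can be sketched briefly. The following is a minimal illustration in Python using scikit-learn with made-up training data; it is not Instagram's actual pipeline:

```python
# Minimal sketch of guideline-based comment classification.
# Illustrative stand-in for systems like DeepText, which use far larger
# labeled datasets and deep neural models rather than this toy setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical comments labeled by human moderators (1 = violates guidelines).
comments = [
    "You are a wonderful person",
    "Great photo, love the colors",
    "Nobody wants you here, loser",
    "You should just disappear",
]
labels = [0, 0, 1, 1]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# New comments are scored and hidden if flagged.
for comment in ["love this", "get lost, nobody wants you"]:
    flagged = model.predict([comment])[0]
    print(comment, "->", "hide" if flagged else "show")
```

The key design point is that the model learns the community's own standards from moderator decisions, rather than relying on a fixed list of forbidden words.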

Facebook, meanwhile, has been using a mix of artificial intelligence and human moderators to block hateful comments and illegal content. This includes drastically expanding its team of content moderators earlier this year. Facebook and Instagram both use AI and natural language processing to respond immediately to problematic posts, such as those that indicate imminent self-harm. In addition to these AI solutions, both platforms offer moderation tools for brands such as keyword blacklists.
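Keyword blacklists are the simplest of these brand-facing tools: any comment containing a banned term is hidden automatically. Here is a minimal sketch, with an illustrative term list rather than any platform's real implementation:

```python
import re

# Hypothetical list of terms a brand has banned from its comment sections.
BLACKLIST = {"spam", "scam", "loser"}

def is_blocked(comment: str) -> bool:
    """Hide the comment if it contains any blacklisted term as a whole word."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLACKLIST for word in words)

print(is_blocked("What a great photo!"))       # False -> comment stays visible
print(is_blocked("This is a SCAM, everyone"))  # True  -> comment is hidden
```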

As for YouTube, the platform has had a bumpy start with proactive content cleanup, because it must also answer to advertisers and sponsors, who don't want to fund content that is overtly controversial or unsuitable for young audiences. Early this summer, a great swath of YouTubers found their content demonetized overnight, an event dubbed the "adpocalypse" whose effects creators are still feeling today.

The problem of algorithmically protecting YouTube users from unsuitable content isn't new. Earlier this year, the platform came under fire for hiding LGBTQ+ related content from younger viewers regardless of whether any sexual content was present. Many saw this as silencing marginalized voices, which points to one of the biggest challenges of moderating a huge network: biases and politics can still come into play, even when employing AI.

The Challenge to Block Hateful Comments

It's difficult to keep everyone happy when a platform is so big. Twitter, for example, has struggled over the past year to encourage growth by better policing the content posted to the platform. Despite its efforts, everyone seems to be angry at the platform for different reasons. Some users, feeling Twitter wasn't doing enough to stifle hate, left for Mastodon. Others felt the opposite: to them, Twitter was so bent on policing content that voices were being censored unfairly. They, too, left the platform, this time for Gab.

How could Twitter simultaneously police too much and too little? Moderation across a platform is messy business, and it is often seen as political when clear guidelines and consistent enforcement aren't in place. Leaders of these communities must keep their efforts consistent, and they are responsible for any biases their moderation software picks up from humans. While AI is the inevitable future of moderation, it performs best when guided by, and used in tandem with, a human team.

When social platform populations rival those of nations, it’s important to consider the balance of free speech versus protection against hate.

Using a Personal Artificial Intelligence Assistant for Moderation

One way to use artificial intelligence without silencing a community is to deploy customized, rather than one-size-fits-all, AI moderators. Instagram already lets users block hateful comments according to custom terms they set themselves. Other networks can and should follow suit.
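In code terms, custom moderation simply scopes the banned-term list to an account instead of the whole platform, so each community enforces its own standards without affecting anyone else's. A hedged sketch follows; the class, accounts, and terms are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class CustomFilter:
    """Per-account comment filter; each owner curates their own banned terms."""
    banned_terms: set[str] = field(default_factory=set)

    def ban(self, term: str) -> None:
        self.banned_terms.add(term.lower())

    def allows(self, comment: str) -> bool:
        text = comment.lower()
        return not any(term in text for term in self.banned_terms)

# Two accounts with different standards: neither filter affects the other.
gaming_page = CustomFilter()
gaming_page.ban("noob")

recipe_blog = CustomFilter()
recipe_blog.ban("disgusting")

print(gaming_page.allows("what a noob move"))   # False: hidden on this page
print(recipe_blog.allows("what a noob move"))   # True: fine on this page
```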

With Smart Moderation, users can do just that across all their social media profiles. The software is easy to train according to your community's needs and your moderation approach. Using the tool, you can moderate comments on each of your profiles, on YouTube, Facebook, Instagram and more, from a single dashboard, then let the software moderate automatically.

If you want to protect your community on Instagram, YouTube, Facebook, your website or somewhere else using artificial intelligence, try Smart Moderation for free today.

Tags: Social Media, Artificial Intelligence