
Did Facebook’s Trending News Influence the Election?

By Lena Harris on Tue, 30 Jan 2018

To say that the world was shocked by the results of the United States’ presidential election is an understatement. Toward the end of an election that upended expectations at every turn, predictions indicated that a Clinton win was imminent, only for the forecasts to flip within hours of the polls closing.

Unsurprisingly, the media has been blamed for constructing a pre-election reality that suggested the opposite of what turned out to be true. Pollsters like Nate Silver received the most blame, including in a lengthy essay by Nathan Jurgenson, a researcher for Snapchat, criticizing a culture of “factiness.” None of this is to say that Trump shouldn’t have won; merely that it shouldn’t have been such a shock that he did. Some people are now resolving to avoid the news altogether.

The journalism industry in general has been getting a bad rap lately, and even Facebook, which is reluctant to call itself a media company but serves as a significant news source for its users, is under scrutiny. Confidence in Facebook as a news platform began to erode in late spring, when a Gizmodo exposé suggested the platform suppressed news from conservative sources in its “trending news” feature. Facebook responded by firing its news curation team and relying exclusively on an algorithm to surface stories, claiming that without human editorializing the system would be unbiased.

Things didn’t go so smoothly. Just days after it had seemingly resolved the bias controversy, Facebook’s “trending news” section featured a hoax story claiming Fox News had fired host Megyn Kelly for sympathizing too much with Clinton. Facebook apologized, but more unsettling is that the platform has continued to feature false stories: over a three-week period, the Washington Post logged “five trending stories that were indisputably fake and three that were profoundly inaccurate.”

Many of Facebook’s hoax stories are sourced from hyper-partisan, obscure websites (and often plagiarized from one another), feeding the hard-to-satiate information diet that has come under fire since the election. It’s an ironic mishandling of the initial criticism against Facebook: the company first tried to suppress news that was likely untrue or biased, only to be accused of bias in doing so. It then revised its system, and the result was a proliferation of partisan stories, many of them pulled from hundreds of unabashedly pro-Trump websites run by teens in Macedonia, per the Guardian.

For what it’s worth, Facebook founder Mark Zuckerberg says the claim that Facebook’s fake news influenced Trump’s win is “a crazy idea.”

What’s the takeaway? It’s obvious: a system run by algorithms without human intervention isn’t infallible; AI and humans should work together for the best results. The relationship between them should also be transparent. Even when Facebook employed human editors, the mystery around exactly how stories were chosen for the feature likely stoked critics’ skepticism. Removing humans entirely didn’t fix the cause of the problem; Facebook just avoided it and took the easy way out.
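
To make that concrete, here is a minimal, hypothetical sketch of one way to combine the two: an automated classifier acts on its own only when it is highly confident, and anything uncertain is escalated to a human reviewer. This is an illustrative assumption written in Python; it is not tied to Facebook’s systems or to any particular product’s API, and the names and thresholds are made up.

```python
# Hypothetical sketch of the "algo-human sweet spot":
# the model acts alone only when it is very confident,
# and everything uncertain lands in a queue for a human editor.

from dataclasses import dataclass, field
from typing import List

AUTO_ACTION_THRESHOLD = 0.95  # act automatically only above this confidence
HUMAN_REVIEW_THRESHOLD = 0.5  # below this, publish without intervention


@dataclass
class Item:
    text: str
    score: float  # model's confidence (0..1) that the item violates policy


@dataclass
class ModerationResult:
    auto_removed: List[Item] = field(default_factory=list)
    needs_human_review: List[Item] = field(default_factory=list)
    published: List[Item] = field(default_factory=list)


def triage(items: List[Item]) -> ModerationResult:
    """Route each item: auto-remove, send to a human, or publish."""
    result = ModerationResult()
    for item in items:
        if item.score >= AUTO_ACTION_THRESHOLD:
            result.auto_removed.append(item)        # high confidence: act now
        elif item.score >= HUMAN_REVIEW_THRESHOLD:
            result.needs_human_review.append(item)  # uncertain: a person decides
        else:
            result.published.append(item)           # low risk: let it through
    return result


if __name__ == "__main__":
    queue = [
        Item("Buy followers now!!!", 0.99),
        Item("This headline seems fabricated...", 0.72),
        Item("Great article, thanks for sharing.", 0.03),
    ]
    decisions = triage(queue)
    print(len(decisions.auto_removed), "removed,",
          len(decisions.needs_human_review), "sent to a human,",
          len(decisions.published), "published")
```

The design choice is the point: the algorithm handles the obvious cases at scale, while a person stays in the loop for exactly the ambiguous stories that got Facebook into trouble.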

When it comes to keeping people informed, your brand’s social media profiles shouldn’t be moderated solely by an algorithm, either. With Smart Moderation, you can hit that algo-human sweet spot easily: this artificial intelligence-based comment moderation tool can erase spam and abusive comments within a minute of their being posted. Once all the nonsense is taken care of in real time, you can focus your efforts on your customers’ and fans’ sincere engagement.

While Smart Moderation works automatically, don’t make Facebook’s mistake: you don’t want to miss out on all the good feedback and consumer insight in your comments section!

Tags: Social Media, Facebook