Facebook has a solution to all the toxic dross on its site – wait, it's not AI?

No, it's human janitors toiling away, cleaning up wads of hate and terror incitement

Facebook is once again trying to scrub clean its public image after it was criticized for allowing extremism to spread on its social media platform.

“Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them,” the company declared in a blog post on Thursday.

“Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role – and we don’t want Facebook to be used for any terrorist activity whatsoever,” it admitted.

The post goes on to describe five areas where its artificially intelligent software could help squash the spread of terrorist propaganda on its network – emphasis on could:

  • Image matching: If an image being uploaded matches one previously seen – a photo of a known terrorist, say, or a still from a terrorism video – it is removed before it ever reaches the platform, which also stops other accounts from uploading the same material (see the fingerprint-matching sketch after this list).
  • Language understanding: Facebook is experimenting with analyzing text that human reviewers have flagged for praising terrorist organizations such as ISIS and Al Qaeda, in the hope that its software can learn to automatically detect such material in the future (see the classifier sketch after this list).
  • Removing terrorist clusters: When a page, group, post or profile is identified as supporting terrorism, it is used to weed out related material – for instance, by working out whether an account is friends with a high number of accounts already disabled for terrorism, or shares the same attributes as those accounts.
  • Recidivism: Detecting and shutting down fake accounts repeatedly created by offenders whose previous accounts were removed for spreading terrorist material.
  • Cross-platform collaboration: Stopping the same terrorist accounts from reaching Facebook's other apps, including WhatsApp and Instagram, by sharing user data between the platforms.
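
Facebook hasn't said exactly how its image matching works, but the standard industry approach is to fingerprint known-bad media and compare every upload against that blocklist – perceptual hashing, of the kind Microsoft's PhotoDNA popularized. Here's a minimal Python sketch of the idea; the function names are ours, and a plain SHA-256 digest stands in for a perceptual hash, which in a real deployment would have to survive resizing and re-encoding:

```python
import hashlib

# Hypothetical blocklist of fingerprints taken from media previously
# removed for terrorism. Real systems use perceptual hashes that survive
# resizing and re-encoding; a plain SHA-256 digest stands in here.
known_bad_hashes: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest identifying an uploaded photo or video."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_takedown(media_bytes: bytes) -> None:
    """Record the fingerprint of media a human moderator removed."""
    known_bad_hashes.add(fingerprint(media_bytes))

def should_block_upload(media_bytes: bytes) -> bool:
    """True if an upload matches known terrorist media, so it can be
    rejected before it ever appears on the platform."""
    return fingerprint(media_bytes) in known_bad_hashes

# Once moderators remove a file, identical re-uploads are blocked on sight.
register_takedown(b"bytes of a removed propaganda image")
assert should_block_upload(b"bytes of a removed propaganda image")
```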
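
As for language understanding, what the post describes amounts to: train a classifier on text human reviewers have already flagged, then score new posts with it. Something like the following scikit-learn sketch – Facebook's actual models aren't public, and the training strings below are placeholders rather than real data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts human reviewers have already judged: 1 = praises a terrorist
# organization, 0 = benign. These strings are placeholders, not real data.
train_posts = ["placeholder post reviewers flagged as praising a terror group",
               "placeholder post reviewers judged perfectly ordinary"]
train_labels = [1, 0]

# Bag-of-words features plus logistic regression: a deliberately simple
# stand-in for whatever models Facebook actually runs.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

# Score a new post; anything over a threshold is routed to human review
# rather than deleted automatically.
score = model.predict_proba(["some new post to screen"])[0][1]
if score > 0.9:
    print("route to counter-terrorism review queue")
```

Note the final step is a review queue, not an automatic deletion – which is exactly the context problem Facebook concedes below.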

Facebook has good intentions, but its systems are not yet advanced enough to carry out the tasks above reliably. Algorithms still struggle to understand the broader context that makes content harmful, and whether something should be considered terrorism at all.

So for now, despite boasting about how its AI could solve its problems, Mark Zuckerberg’s empire will instead rely on human users to report harmful accounts and terrorist content. Over 150 people are employed by the California giant to focus on countering terrorism, we're told.

It's a similar situation with the spread of fake news. There are plans to use AI to help weed out false information, but for now Facebook is relying on human brains to flag clickbait. ®
