Moderated AI: Humans supplementing AI, and not the other way round

Humans assist AI with content filtering and with training on subjective topics, such as hate speech delivered as sarcastic comments. This is not so much AI assisting humans as a reciprocal, mutualistic relationship between the two.

Case Study: Facebook’s Filters

Facebook employs thousands of human content moderators and has struck partnerships with human fact-checking organizations, because AI alone cannot do the job: sophisticated bad actors keep finding ways around the platform’s automated filters.

“Human judgment would be required in certain situations, like a comment (“nice jacket”) that could be either an earnest compliment or a sarcastic insult, depending on the context. And when it comes to some of Facebook’s thornier problems, like trying to enforce consistent hate speech policies, human intervention may always be necessary.”
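The pattern described above can be sketched as a confidence-threshold escalation loop: the model handles clear-cut cases automatically, routes ambiguous ones (the “nice jacket” territory) to a human reviewer, and keeps the human’s verdict as new training data, which is the reciprocal half of the relationship. This is a minimal illustration, not Facebook’s actual system; the thresholds, `classify`, and `ask_human` callables are hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Hypothetical thresholds; a real system would tune these per policy.
AUTO_REMOVE = 0.95  # above this score, remove automatically
AUTO_ALLOW = 0.05   # below this score, allow automatically


@dataclass
class Moderator:
    """Human-in-the-loop filter: confident scores are acted on automatically,
    ambiguous ones are escalated to a human, and the human's label is queued
    as training data so the model improves over time."""
    classify: Callable[[str], float]   # returns P(violation); hypothetical model
    ask_human: Callable[[str], bool]   # human reviewer's verdict; hypothetical
    training_queue: List[Tuple[str, bool]] = field(default_factory=list)

    def review(self, comment: str) -> str:
        score = self.classify(comment)
        if score >= AUTO_REMOVE:
            return "removed"
        if score <= AUTO_ALLOW:
            return "allowed"
        # Ambiguous case: escalate to a human and record the decision
        # for future retraining.
        verdict = self.ask_human(comment)
        self.training_queue.append((comment, verdict))
        return "removed" if verdict else "allowed"
```

For example, a comment scoring 0.5 is escalated: `Moderator(classify=lambda c: 0.5, ask_human=lambda c: True).review("nice jacket")` returns `"removed"` and queues the comment for retraining.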
