Facebook's moderation problems are only going to get worse this year


Last summer, a video of Mark Zuckerberg appeared on Instagram in which the Facebook boss appeared to claim that he was in possession of the stolen data of billions of people and had complete control over their lives, secrets and future. The clip turned out to be an art project rather than an attempt to spread misinformation. Even so, Facebook declined to remove it, saying the video didn't violate its policies.

That episode revealed how ill-equipped big tech companies are to handle the consequences of fake media generated by Artificial Intelligence (AI), popularly referred to as deepfakes. Facebook's battle with fake news continues even today, and is expected to assume new proportions in 2020. Looked at closely, though, it's not entirely Facebook's fault: deepfakes are very difficult to moderate, not because they can't be spotted, but because they span such a huge category that any effort to 'check' AI-edited videos and images risks sweeping up a whole lot of harmless content as well.

Facebook has announced a moderation policy covering deepfakes

Even though Facebook announced that it will ban deepfakes, as highlighted above, placing a blanket ban on them would also mean removing popular content such as artificially aged faces and gender-swapped Snapchat selfies. Banning any type of politically misleading deepfake content would take tech companies back to the political moderation problems they've been trying to tackle for years. And since there is no quick-fix algorithm that can automatically detect AI-edited content, any ban would create more work for human moderators. In other words, Facebook will have a lot on its hands.

Regardless, this seems to be the direction that major platforms, including Reddit and Facebook, are taking. Both recently announced moderation policies covering deepfakes. However, instead of stamping out the format entirely, they are taking a narrower focus. In a statement, Facebook said it will remove manipulated and misleading media that has been created with machine learning or AI in a way that isn't evident to an average person, and that might make people think the subject said words they never did. But the company emphasised that the ban will not cover satire or parody, or misleading edits created with conventional means, such as the Nancy Pelosi video that went viral last year, which was slowed down to make her appear to slur her words.

Plenty of loopholes

Experts believe such policies have plenty of loopholes. Facebook's ban covers only media edited to change what someone said. That means a deepfake video showing a politician shaking hands with a terrorist, burning the American flag or taking part in a white nationalist rally would not be removed, something Facebook itself has confirmed.

Although these omissions are the most glaring of all, they illustrate how difficult it is to separate deepfakes from the rest of a platform's content, and how hard platform moderation is as a whole.