Facebook is making a number of efforts to keep its users happy, and the latest is an artificial intelligence (AI) system it is now developing to automatically flag offensive material in live streams. Speaking to reporters, Joaquin Candela, Facebook's director of applied machine learning, said the company is working on an “algorithm that detects nudity, violence, or any of the things that are not according to our policies.”
Before this announcement, the company had largely relied on its users to report offensive material, which was then reviewed by human employees. But users have often argued that this process is not effective enough, given the urgency with which some material needs to be removed from the Facebook community. Just this past July, Facebook removed a live-recorded video posted by Lavish Reynolds (a Facebook user) showing the shooting of Philando Castile (her boyfriend) by police. The video was taken down after some users reported it as offensive; following a series of protests by other users, Facebook restored it, this time with a warning to users who wanted to watch it. This raised the question: what is offensive, and what is a legitimate broadcast of a perceived human-rights violation?
Fast forward to the November US presidential election, and Facebook was again accused of doing little to fight fake news, which may have affected the way the election turned out. In response, the company eventually announced a series of solutions to the problem.
Facebook's use of AI to flag unwanted material is still at the research stage. To achieve it eventually, Candela said, “One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.” In any case, human expertise is still needed for this to be effective.
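Facebook has not described this system in detail, but the two requirements Candela names, fast automated detection plus sensible prioritization for human review, can be pictured as a simple severity-ordered queue. The sketch below is purely illustrative: every function name, score, and threshold is an assumption, not Facebook's actual implementation.

```python
import heapq

# Illustrative sketch: flagged live streams are queued by the
# classifier's confidence score so human reviewers see the most
# likely policy violations first. All values here are assumptions.

REVIEW_THRESHOLD = 0.6  # assumed confidence cutoff for human review

def prioritize(flagged_streams):
    """Return stream ids needing human review, most severe first.

    Each item is (stream_id, score), where score in [0, 1] is the
    classifier's confidence that the stream violates policy.
    """
    heap = []
    for stream_id, score in flagged_streams:
        if score >= REVIEW_THRESHOLD:
            # Negate the score so the highest-confidence flags pop first.
            heapq.heappush(heap, (-score, stream_id))
    return [stream_id for _, stream_id in sorted(heap)]

# Example: three flagged live streams; "b" falls below the threshold.
queue = prioritize([("a", 0.95), ("b", 0.4), ("c", 0.7)])
print(queue)  # ['a', 'c']
```

The point of the prioritization step is exactly what Candela describes: the machine narrows tens of millions of items down to a ranked list short enough for policy experts to act on quickly.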
Facebook had fired the human team tasked with tackling fake news on its platform, only for a bigger problem to arise later in the year around the US elections. It is not clear whether Facebook plans to re-hire the team, or even expand it, to effectively carry out the task of ridding the site of fake news and other unwanted material.
Each week, Facebook receives tens of millions of such reports from users. To cope, the company built an automated process that detects duplicate reports and routes them to the right desk, and nothing more. But when you have 1.8 billion users, it might be time to let that automation do much more.
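Deduplication of this kind is typically done by keying reports on the reported content so that thousands of reports about the same video collapse into a single review ticket. The sketch below assumes a simple content-hash approach; the names and data shapes are hypothetical, not Facebook's actual pipeline.

```python
import hashlib

# Illustrative sketch of duplicate-report detection: many users report
# the same piece of content, so reports are keyed by a hash of the
# content id and collapsed into one ticket with a report count.

def dedupe_reports(reports):
    """Collapse reports of the same content into one ticket each.

    `reports` is a list of (content_id, reason) tuples; the result
    maps a content hash to one representative ticket plus a count of
    how many users reported that content.
    """
    tickets = {}
    for content_id, reason in reports:
        key = hashlib.sha256(content_id.encode()).hexdigest()
        if key in tickets:
            tickets[key]["count"] += 1
        else:
            tickets[key] = {"content_id": content_id,
                            "reason": reason,
                            "count": 1}
    return tickets

reports = [("video123", "violence"), ("video123", "violence"),
           ("photo456", "nudity")]
tickets = dedupe_reports(reports)
print(len(tickets))  # 2: the duplicate reports collapse into one ticket
```

A per-ticket report count is also useful downstream: heavily reported content can be pushed up the review queue, which is the kind of "do much more" the article hints at.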