In its mission to cultivate a secure and user-friendly environment, social media giant Facebook has launched a new initiative. The company has begun developing an artificial intelligence (AI) system tailored specifically to identify offensive material in live streams.
Joaquin Candela, Facebook’s director of applied machine learning, shared the company’s vision for the system, explaining that it could detect nudity, violence, or any other content that violates Facebook’s community standards. The algorithm is Facebook’s latest effort to improve safety on its platform.
Until now, Facebook’s main line of defense has been user reporting: users flag offensive content, which human moderators then review. While this method has been largely effective, it has drawn criticism. Many users have voiced discontent with its speed and reliability, given that some material warrants immediate removal.
A clear instance of this was an incident in July, when a video depicting the shooting of Philando Castile was streamed by Facebook user Lavish Reynolds. Facebook initially removed the video after several user reports flagged the content as offensive. After widespread protests, however, the social network restored the video, this time with a warning for viewers. The incident prompted a reevaluation of what distinguishes an offensive video from a legitimate broadcast of a human rights violation.
Further controversy surrounded the company during the United States presidential election in November, when Facebook was criticized for its passive stance against the proliferation of fake news. Critics argued that this flagrant deception may have influenced the election’s outcome. In response, Facebook has outlined a series of measures aimed at suppressing false information.
Though still in the research phase, using AI to isolate offensive content lays the groundwork for a new era of content moderation at Facebook. Candela offers insight into the approach: “One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.” However effective the AI becomes, human expertise is still needed for the final judgment.
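To make Candela’s two requirements concrete, here is a minimal Python sketch of that kind of pipeline. The `score_frame` classifier and the 0.8 flagging threshold are hypothetical stand-ins, not details of Facebook’s actual system: a fast model scores incoming frames, and high-scoring clips go into a priority queue so human reviewers see the most likely violations first.

```python
import heapq
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass(order=True)
class FlaggedClip:
    priority: float                       # negated model score: highest score pops first
    stream_id: str = field(compare=False)
    timestamp: float = field(compare=False)

class LiveStreamModerator:
    """Sketch of a two-stage pipeline: a fast classifier flags, a human decides."""

    def __init__(self, score_frame: Callable[[bytes], float], threshold: float = 0.8):
        # score_frame stands in for the computer-vision model; the 0.8
        # threshold is an arbitrary illustration, not a known Facebook value.
        self.score_frame = score_frame
        self.threshold = threshold
        self._queue: list[FlaggedClip] = []

    def ingest(self, stream_id: str, timestamp: float, frame: bytes) -> None:
        # Stage 1: the vision algorithm must be fast enough for live video.
        score = self.score_frame(frame)
        if score >= self.threshold:
            heapq.heappush(self._queue, FlaggedClip(-score, stream_id, timestamp))

    def next_for_review(self) -> Optional[FlaggedClip]:
        # Stage 2: the most likely violation goes to a human expert, who
        # applies the written policy and makes the takedown decision.
        return heapq.heappop(self._queue) if self._queue else None

# Usage with a stub model that flags everything at 90% confidence:
moderator = LiveStreamModerator(score_frame=lambda frame: 0.9)
moderator.ingest("stream-42", timestamp=12.5, frame=b"\x00\x01")
print(moderator.next_for_review())  # FlaggedClip(priority=-0.9, ...)
```

The priority queue is the key design choice here: with millions of concurrent streams, the bottleneck is human attention, so ordering the review queue by model confidence gets the most urgent content in front of an expert first.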
Prior to this development, Facebook dismissed the human team it had assigned to combat fake news, a move that fueled further controversy over misinformation during the U.S. election. Whether Facebook plans to reinstate that team to purge the platform of fake news and other inappropriate content remains unconfirmed.
Each week, Facebook handles a vast volume of user reports, and it has developed an automated process to expedite requests and direct them to the appropriate team; a simple version of such routing is sketched below. With 1.8 billion users, however, that process demands further enhancement, so augmenting it with AI seems a sound plan for building a safer digital community.
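As a rough illustration of that kind of automated triage, a report router can be as simple as a lookup table mapping a report’s category to a specialist queue. The category names and team names below are hypothetical, not Facebook’s actual taxonomy:

```python
from collections import defaultdict, deque

# Hypothetical categories and team names, for illustration only.
ROUTING = {
    "nudity": "adult-content-team",
    "violence": "graphic-content-team",
    "fake_news": "misinformation-team",
}

team_queues: dict[str, deque] = defaultdict(deque)

def route_report(report_id: str, category: str) -> str:
    """Send a user report to the matching team's queue; unknown
    categories fall through to a general review queue."""
    team = ROUTING.get(category, "general-review-team")
    team_queues[team].append(report_id)
    return team

print(route_report("r-1001", "violence"))   # graphic-content-team
print(route_report("r-1002", "spam"))       # general-review-team
```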