Facebook has introduced a set of changes aimed at repairing its dented image. Following years of growing criticism and backlash, the social network said on Tuesday that it will no longer allow graphic images of self-harm on its platform.
Social media companies have been heavily criticized for doing too little to curb the spread of hate speech, self-harm, and other harmful content. Facebook and other big tech firms say they employ artificial intelligence systems to moderate posts that go against their policies. However, neither the AI systems nor the thousands of human moderators have managed to stem the flow of content that violates the platforms' policies.
In June, the world mourned the death of 50 people who were shot in an attack livestreamed on Facebook. Facebook was criticised for not responding promptly, and traces of the video lingered because its AI system could not wipe it off completely: the footage had been reshared over and over, making it difficult for the system to detect every copy.
Facebook also said on Tuesday that it would make self-harm-related images harder to search for on its photo-sharing app, and would do everything possible to keep them out of the recommended content in the app's Explore section.
Ahead of World Suicide Prevention Day, Twitter and Facebook said that content related to self-harm or suicide would no longer be reported as abusive, reflecting a recognition that the subject has long been stigmatized. The World Health Organisation estimates that close to 800,000 people worldwide die by suicide each year, roughly one person every 40 seconds.
Aside from employing AI systems to moderate posts on its platform, Facebook uses a team of moderators who watch live broadcasts to look out for violent acts such as suicide or genocide.
Governments worldwide are combating the spread of hate speech, self-harm content, misinformation, and pornography, all of which are fast gaining ground on social media. The platforms are often blamed for enabling these problems because the AI systems they employ cannot do the entire job without human intervention.
Amazon revealed last month that it plans to show helpline information to customers after reports that some users on its platform had searched for nooses and other harmful items. Google, Facebook, and Twitter already display suicide-prevention helpline numbers.