Exactly a month ago, tech giants Facebook, Google, Twitter and Microsoft unanimously agreed to take down extremist content within two hours of it being uploaded. Facebook has reported that its use of artificial intelligence to remove such terrorist-related posts has been working.
However, there are terrorist groups other than Al Qaeda and Islamic State, which is why more work needs to be done. The automated system was specifically designed to detect the language and style of the known terrorist groups Al Qaeda and IS, and may have difficulty identifying those of emerging groups. Monika Bickert, Facebook's Global Policy Chief, and Brian Fishman, Head of Counter-terrorism Policy, wrote in a report:
‘A system designed to find content from one terrorist may not work for another because of language and stylistic differences in their propaganda. But we hope, over time, that we may be able to responsibly and effectively expand the use of automated systems to detect content from regional terrorist organisations too.’
Impressively, it is the automated system, not user reports, that has detected 99% of the extremist posts removed. This is a win for Facebook, given that Mark Zuckerberg had expected the system to take many years to fully develop.
Initially, the social media giant relied on human moderators and software to determine which content to remove, which made it quite easy for extremist content to spread. With AI, however, such content is automatically detected and removed immediately.
The technology reaches its verdict by matching videos and photos against those already used by terrorist groups. The automated system also detects when deleted material has been reposted and removes it immediately.
Also in use are ‘digital fingerprints’. This system checks pictures and videos for similarities with already-flagged material, as well as the frequency of certain words and phrases. Once content is flagged as similar, it is deleted within an hour of being posted, and in some cases extreme material never goes live.
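At its simplest, the fingerprinting described above can be sketched as hash-based matching: compute a digest of each uploaded file and compare it against digests of already-flagged material, alongside a count of known flagged phrases in the post's text. This is a minimal illustrative sketch, not Facebook's actual system; the flagged-hash set, the phrase list and all function names are hypothetical, and production systems use perceptual hashes that survive re-encoding rather than a plain cryptographic hash.

```python
import hashlib

# Hypothetical database of digests taken from already-flagged media.
# (This entry is the SHA-256 digest of the bytes b"test", for demonstration.)
FLAGGED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

# Hypothetical phrases whose frequency would be monitored.
FLAGGED_PHRASES = ["join the cause", "martyrdom operation"]

def fingerprint(data: bytes) -> str:
    """Return a digest acting as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

def matches_flagged_media(data: bytes) -> bool:
    """True if the uploaded file is byte-identical to flagged material."""
    return fingerprint(data) in FLAGGED_HASHES

def flagged_phrase_count(text: str) -> int:
    """Count occurrences of known flagged phrases in a post's text."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in FLAGGED_PHRASES)
```

A real deployment would combine these signals (media match, phrase frequency, account history) before deciding whether to block the post outright or queue it for human review.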
A new German law that came into force in October allows social media firms to be fined for failing to remove extremist content within twenty-four hours. Now that Facebook has achieved this with an automated system, more will certainly be expected of it, for instance tracing the people behind the handles or pages that promote such propaganda. Some hate speech, however, still requires human review before it is judged extreme, because the automated system cannot fully understand how language is used.
While Facebook and other social media firms are working to put an end to the promotion of hate speech and extremist material, bad actors will no doubt seek other means to bypass the AI systems in place.