Facebook Deploys Artificial Intelligence as a Counter Terrorism Measure

News reports suggest that Facebook’s new artificial intelligence program will be deployed to stop graphic images, such as those of a terrorist attack, from being uploaded. The move appears to be a response to complaints that the company has done too little to keep terrorist content off its site. The new strategy would help remove such content from the platform.

Artificial intelligence will be used in conjunction with human moderators, who evaluate content on a case-by-case basis, although developers believe it will be deployed on a larger scale over time.

One of the first uses of the application is identifying uploads that breach Facebook’s terms of use, such as videos and photographs of killings, beheadings, shootings and other atrocities, and stopping users from posting them on the platform. This comes on the heels of a recent increase in terror attack content, raising the crucial question of what exactly social media firms are doing to curb such material.

In a report late Thursday, Facebook described how the artificial intelligence system would, over time, teach itself to identify red-flag phrases used to promote terrorist organisations.

The same process could help identify users who are linked to pages or groups with ties to terrorist organisations, or who repeatedly visit such pages and create false accounts to disseminate extremist content. The projection is that the technology will eventually handle everything on its own, although human moderators are still very much needed.

Brian Fishman, head of Facebook’s policy management team, was quoted as saying that the company had a team of about 150 experts working in roughly thirty languages to carry out the reviews. Facebook has come under heavy criticism for seemingly not doing enough to monitor its website for videos and pictures uploaded by terrorist groups.

A month ago, the British prime minister, Theresa May, was quoted as saying that she would press internet companies such as Facebook to do more to monitor and stop extremist content. According to her, extremist ideology cannot be given a free breeding ground. Mrs May made this statement in the wake of the Manchester concert bombing, which claimed the lives of 22 people.

This demand from the British prime minister is, however, at odds with the realities of these social media platforms. According to J. M. Berger, a fellow at the International Centre for Counter-Terrorism, a big part of the challenge for firms like Facebook is defining what qualifies as terrorism, which must go beyond statements supporting groups like ISIS.

The big question on many lips, however, is: will it be effective, or will it be too extreme a measure? What exactly is the aim? Is it to discourage people from joining terrorist groups, or simply to stop them from posting extremist content on the platform?