
Facebook explains how it uses AI to catch terrorists online

By James Walker     Jun 16, 2017 in Technology
Facebook has publicly detailed its approach to combatting terrorism on its platform. The company addressed recent criticism it has received in the wake of terrorist attacks, stating "there's no place" on Facebook for terror and hate.
The company published a detailed news post this week that provides the most complete explanation yet of the approach it takes when dealing with terrorism.
It's the first in a series of planned posts titled "Hard Questions." Facebook plans to use the articles to start public discussions around the difficult decisions it faces each day, including the impact of fake news, the role of social media in democracy and what happens during "digital death."
In its post on terrorism, Facebook described keeping its users safe as "an enormous challenge." It said that its 2 billion global users and 80 supported languages make finding and removing terrorist content a major problem. The company is working on several new automated systems to alleviate some of the issues, including the introduction of AI-powered monitoring tools.
Facebook is now combatting the online presence of some terrorist groups using natural language understanding and image matching technologies. These neural networks are able to detect previously identified terrorism photos and videos when they're uploaded, preventing them from being spread by different users. Another neural network is actively learning how to spot posts that praise or advocate known terror organisations, allowing them to be flagged to human operators.
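The image-matching step described above can be sketched as a fingerprint lookup: each upload is hashed and checked against a database of previously identified material before it can spread. This is a minimal, hypothetical illustration only; the function names are invented, and real systems use perceptual hashes that survive re-encoding and cropping, not the cryptographic hash used here to keep the sketch self-contained.

```python
import hashlib

# Hypothetical database of fingerprints of previously identified images.
KNOWN_BAD_HASHES = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an uploaded image.

    A real deployment would use a perceptual hash so that re-encoded or
    slightly altered copies still match; SHA-256 only matches exact bytes.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def register_known_image(image_bytes: bytes) -> None:
    """Add a previously identified image to the match database."""
    KNOWN_BAD_HASHES.add(fingerprint(image_bytes))

def matches_known_image(image_bytes: bytes) -> bool:
    """Check an upload against the database before it propagates."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

In this sketch, once one copy of an image has been flagged and registered, any identical re-upload by a different account is caught immediately, which is the "preventing them from being spread by different users" behaviour the post describes.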
While AI will help to reduce the impact of terrorism posts on the platform, Facebook noted that it needs to tread carefully. The context of each image is hugely important. Some "terrorist" photos, such as a photo of an armed man wearing an ISIS flag, may be posted by other users who oppose the groups represented.
News reports, terrorist criticism and reverse propaganda attempts could all include this kind of image. Facebook's AI systems need to be able to decide whether to act on a specific photo based on the account that posted it. There are also wider contextual factors to consider that can indicate the probability that the post was created by hostile terror groups.
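One way to picture this contextual weighing is as a score that combines signals about the posting account, so that the same flagged image is treated differently when shared by a news organisation than by a previously flagged account. The signal names and weights below are entirely invented for illustration; Facebook has not published how its systems weigh context.

```python
# Hypothetical illustration: combine contextual signals about the posting
# account into a single score, rather than acting on the image alone.
# Positive scores suggest hostile intent; negative scores suggest news
# reporting or criticism of the group. All weights are made up.
WEIGHTS = {
    "account_previously_flagged": 0.5,
    "text_praises_group": 0.4,
    "is_news_organisation": -0.6,
    "caption_condemns_group": -0.4,
}

def posting_context_score(signals: dict) -> float:
    """Sum the weights of every signal that is present and true."""
    return sum(WEIGHTS[key] for key, value in signals.items()
               if value and key in WEIGHTS)
```

Under this toy model, a news outlet condemning an attack scores below zero and is left alone, while a previously flagged account praising the group scores above zero and is routed to human reviewers.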
Facebook is also deploying other systems that are capable of flagging new accounts created by banned terrorists, assessing the likelihood that an individual user could be radicalised and linking terrorist profiles across the company's app portfolio. Whether posts are uploaded to Facebook, Instagram or WhatsApp, the company wants to be able to identify them.
Facebook's publication of these details marks a significant change in the company's attitude towards terror reporting. Tired of being accused of enabling terrorists to retain a presence online, the company has decided to act by revealing more information on its ongoing efforts to combat the threat. Its overall message is simple: "There's no place on Facebook for terrorism."
"We remove terrorists and posts that support terrorism whenever we become aware of them," the company said. "When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities."
Facebook said the wider aim of its Hard Questions series is to demonstrate that it's serious about answering questions on the role of social media in society. The company stressed its commitment to appropriately managing the "responsibility and accountability" that come with its huge scale.
Facebook is also open to hearing ideas about the posts themselves. If you want to hear how the social network responds to some of the biggest issues it faces, you can send your suggestions to the address given in Facebook's Hard Questions announcement post, where more details can be found.