
Facebook states its commitment to stamping out 'all' hate speech

By James Walker     Jun 28, 2017 in Technology
Facebook has explained how it responds to and removes hate speech while moderating its network. The company outlined how it defines hate speech and treads the fine line between thwarting abuse and censoring the right to free speech.
Facebook tackled the topic of hate speech in a "Hard Questions" news post this week. The company is using the series to inform users how it responds to some of the most complex issues it faces. It's Facebook's attempt to appear more open and transparent as it comes under fire for its approaches to fake news, terrorism and, in this instance, hatred.
The most fundamental issue in tackling hate speech is working out how to define it. Facebook said its current working definition is "anything that directly attacks people" based on their protected characteristics. These are primarily fixed personal attributes, such as race, ethnicity, religion, sex and gender identity.
Facebook illustrated the complexities of defining hate speech by highlighting how differently two countries legislate on the subject. In the U.S., even offensive speech is broadly protected by the Constitution's guarantee of free expression. Germany has much tighter regulation, with restrictions on inciting hatred through speech that can carry prison time. Facebook has to navigate between the varied and often polarised laws of the countries in which it operates.
When removing hate speech, context is critical to whether a post is taken down. Facebook said it routinely runs into trouble when a post includes a phrase that may be interpreted as hateful by one group of users but humorous by another. These include region-specific slang and words which have different meanings around the world.
The company acknowledged that sometimes it gets things wrong, removing posts that are invoking a hateful phrase in criticism of the hatred or to "reclaim" the word. As moderators only see the text of posts, without the wider context of the user's profile and personality, they have to make a decision based on its specific content. Facebook said its mistakes are "deeply upsetting" and "cut against the grain of everything we are trying to achieve."
In total, Facebook takes down over 288,000 posts containing hate speech every month. These include threats against individuals, denouncements of entire ethnic groups and attempts to incite violence in the wider community. Facebook said it is an "open platform for all ideas" and expressions but will not tolerate "hateful and ugly" messages.
With Facebook now beyond the two billion user mark, it is having to moderate content at a scale never before attempted. Artificial intelligence systems are under development to assist its community moderators. For now, though, the effort remains a primarily human one, "built on the eyes and ears of everyone" on the platform.
Facebook is planning to hire another 1,500 people to join its 3,000-strong enforcement team, helping it to better protect its users. The company said it is "committed to improving" by developing new models to identify hate speech and proactively explaining its decisions as it progresses.