
Op-Ed: Twitter to ban dehumanizing speech

By Ken Hanly     Sep 25, 2018 in Internet
Twitter executives Del Harvey and Vijaya Gadde call the new proposed rules part of a continuing effort to promote healthy conversations on Twitter and limit harms caused by talk on the platform.
Dehumanizing speech
The executive post reads: “Language that makes someone less than human can have repercussions off the service, including normalizing serious violence." Once the new rules come into effect the Twitter Rules will have an added sentence: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.”
The post then defines its terms: Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).
Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
Rules overlap with common provisions against hate speech and racism
Twitter has faced complaints in the past that its rules applied only to race in isolation and to certain protected classes. The new rules, however, will apply to all groups, closing loopholes under which hate speech against black children was permitted while white men were protected. The new rules should also help address situations where social networks have fueled racial violence.
Twitter asking for comments on the new rules until October 9th
Twitter is not just relying on experts in setting out the new rules but is also asking for input from users, perhaps in imitation of the US federal rule-making process. Users are asked to submit concerns or "examples of speech that contribute to a healthy conversation but may violate this policy".
The entire statement, plus a form for feedback, can be found here. Replies to the feedback questions are limited to 280 characters. After all, this is Twitter feedback.
Some comments
Should Twitter be trying to encourage healthy conversations, or should it simply serve as a platform for conversations, the sharing of views, and links? In jurisdictions with hate speech laws, Twitter may feel compelled to ban some accounts, after first warning them, in order to avoid lawsuits. Even here, however, if Twitter were more concerned about free speech, it would not act until law enforcement authorities threatened to act. It could obtain legal opinions on a specific issue rather than simply reacting to criticism.
Censorship of Twitter
Twitter has to abide by the laws of the countries where it operates. Four countries consider Twitter conversation so unhealthy that they ban the service altogether: North Korea, China, Iran, and Turkmenistan. Other countries have had occasional bans or have blocked specific sites, as discussed in Wikipedia. After a successful complaint by a government official, a company, or others about an illegal tweet, Twitter notifies users in that country that they cannot see it, or sometimes the authorities themselves block it. Even where Twitter is banned, users find ways to circumvent the ban.
Wikipedia suggests that Twitter also censors itself: "According to the Terms of Service agreed upon by users of Twitter, the web site may suspend accounts, temporarily or permanently, from their social networking service. One such example is on 18 December 2017 where it banned the accounts belonging to Paul Golding, Jayda Fransen, Britain First, Traditionalist Worker Party...In 2018, Twitter rolled out a Quality Filter, nicknamed QFD. It hides content and users deemed "low quality" from search results, and limits their visibility, leading to accusations of "shadow banning". After conservatives claimed it censors users from the political right, Alex Thomson, a VICE writer, confirmed that many prominent Republican politicians had been "shadow banned" by the Quality Filter.[3] He later reported that Twitter announced that they would issue a "fix" in the near future.[4]" Leftist groups and individuals have also complained of being shadow banned.
A problematic example
After a recent terrorist attack in London, U.S. Rep. Clay Higgins, a Louisiana Republican, called for the slaughter of radicalized Muslims: "Hunt them, identify them, and kill them," he declared. "Kill them all. For the sake of all that is good and righteous. Kill them all." This outburst passed through Facebook's workers, who were deleting offensive speech. However, a post by poet Didi Delgado of Black Lives Matter drew a punishment: her account was disabled for a week and her post removed. It said: "All white people are racist. Start from this reference point, or you’ve already failed." While this happened on Facebook, the same discrimination could happen on Twitter. No doubt Twitter would claim that there are no "protected categories" under the new rules, but in practice this could be different. In theory, the new rules would ban both posts.
The solution to the problem, in my view, would have been not to ban either post. Users who do not like such posts can ignore them, answer them, or block the offending user. Twitter should give preference to causing the least harm to free speech and leave it to the authorities to consider ways of negating any negative effects the platform's use has on society at large.
This opinion article was written by an independent writer. The opinions and views expressed herein are those of the author and are not necessarily intended to reflect those of DigitalJournal.com
More about Twitter, twitter moderation, Hate speech