
YouTube to ban ads on hateful and demeaning videos

By James Walker     Jun 2, 2017 in Technology
YouTube has announced a revision of its ad policies that will see all monetisation revoked from videos with hateful, incendiary and demeaning content. The company has also addressed videos which use characters from family entertainment "inappropriately."
In a blog post today, YouTube said it has been called upon by advertisers and content creators to take a "tougher stance" on certain kinds of video on its platform. Today, it said it would step up to the task by making it harder for infringing publishers to make money from their work.
In particular, YouTube is working to remove hateful content from its platform. The company has provided an updated and clearer definition of what it considers to be a "hateful" video. Under the new rules, any video that falls within this definition will be deemed ineligible to earn revenue from ads.
YouTube now defines hateful content as material designed to "discriminate, disparage or humiliate" any individual or group based on distinguishing aspects of their identity or background. These include ethnicity, nationality, religion, disability, sexuality, gender and other characteristics that are commonly targeted by people looking to systematically marginalise or discriminate against other individuals.
YouTube recognised that its systems and guidelines "aren't perfect." The new policies have been constructed after thousands of hours of conversations with advertisers. They come after YouTube has been criticised for placing ads alongside videos which have shown hateful content. The subsequent public outcries have caused some major advertisers to withdraw from the platform.
In a direct appeal to advertisers and publishers today, YouTube said it is working to improve the situation and create a healthier ecosystem. It's adopting new approaches based on input from the community, designed to help brands ensure their material only appears next to videos which they're willing to support.
"We recognize there is still more work to do," YouTube said. "We know we have to improve our communications to you, our creators. We also need to meet our commitment to our advertisers by ensuring their ads only appear against the content they think is suitable for their brands."
As well as hateful content, YouTube is also targeting creators who publish "incendiary and demeaning" videos on its platform. This new rule is designed to reduce the influx of videos that "gratuitously" attempt to incite, annoy or show disrespect towards individuals and groups.
YouTube's taking a stance on another growing form of controversial content too. The company announced it will remove ads from videos that make inappropriate use of family entertainment characters. These videos include characters from shows designed for young audiences that have been placed into highly sensitive narratives with distinctly adult themes.
YouTube will be taking a hard line against this form of content, recognising the risks it entails. Any video that depicts a recognisable character from a family-oriented show engaging in "violent, sexual, vile or otherwise inappropriate behaviour" will have its ads removed. The rules apply to all creators, even if the video is intended as comedy or satire rather than deliberate provocation.
The new policies should help to make YouTube a safer place to consume content. They're an obvious attempt to demonstrate that Google is taking the issue seriously and is aware of new forms of hateful content. The company noted that while some videos may still meet its terms of service, the requirements for advertising are more stringent and the rules will be strictly enforced.