The U.K. government has tried, without much success, to convince social media companies and other online content providers to self-police so that young and vulnerable people are not exposed to inappropriate content. The government is now set to draw up legislation that will impose stiff penalties on social media companies, as well as on technology firms such as those that operate Internet browsers, if they are found to have failed to protect users from harmful content.
The proposed law arises from the publicized case of 14-year-old schoolgirl Molly Russell, who committed suicide after, as her parents have stated, viewing online material on depression and suicide. Much of the material was said to have been accessed via Instagram.
Under the specifics of the new legislation, the government's Department for Digital, Culture, Media and Sport will establish an independent watchdog that will write a “code of practice” for technology and social media companies to follow. The proposal will go out for public consultation for a 12-week period.
Already the proposal has attracted supporters and detractors. Supporters are pleased that measures will be put in place to help minimize the exposure of young and vulnerable people to material that could feed anxiety or depression. Opponents fear that the proposals will place restrictions on free speech.
In terms of precedent, Germany's NetzDG law came into effect at the beginning of 2018. It places similar controls on technology firms and applies to companies with more than two million registered users in the country. The European Union is also considering tighter controls, specifically a prohibition on hosting or sharing terror videos.
Speaking with the BBC and taking a break from the complexities of Brexit, government minister Jeremy Wright said: “The era of self-regulation for online companies is over. Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough.”
