“Big Box” social media portals, such as Facebook, YouTube and Twitter, have come to play a significant role in our lives, guiding how we interact with our friends and colleagues and how we share information with those we know (and sometimes those we don’t).
Suddenly our every thought matters, and we upload the most minute and private details of our personal lives. In doing so, we place a huge amount of trust in these websites not to abuse this information or make it more widely available than we intend. We greedily peruse the thousands of photos our friends upload and stand ready to click on any link that shows up in our news feed.
It is undeniable that these portals have changed the inherent definition of social interaction, as we hide behind our computers yet somehow feel closer than ever to our friends and followers.
But have you ever stopped to question social media’s commitment to our well-being?
While these sites, on the surface, are vibrant and innovative platforms for staying connected to people and the world around you, there is a darker, sometimes even nefarious side that often goes unnoticed.
Community standards and acceptable use policies may give us the warm and fuzzy feeling that somebody is watching out for us, but these are little more than words without proper enforcement. With all the recent press regarding pornography and child pornography on Facebook, I’m left wondering: What is being done to enforce these standards?
Thanks to the porn spam attack that hit Facebook on Nov. 14-16, 2011, it is more or less common knowledge that the social network’s enforcement policy relies on live human moderation of content flagged by community members via the “report/mark as spam” option. That mechanism failed during the attack because users tended to comment on the offending posts rather than report them.
This raises the question: Is this passive moderation enough to deserve my trust as a user? And more importantly: Whose responsibility is it to keep the porn off Facebook?
In a given week, some 52.4 percent of the blatantly pornographic images newly identified for blocking by the filtering company NetSpark’s Dynamic Graphic Inspection mechanism are found hosted on Facebook’s servers, as are 75 percent of the newly detected images flagged as problematic and in need of further inspection. Given the sheer volume of photos uploaded daily to the social media portal, one can assume these images are just a drop in the bucket.
Facebook’s total active subscriber base is inching toward the 1 billion user mark, and a 2012 projection of $4 average annual revenue per user translates to $2.5 billion in profits for Facebook, all because of our blind faith in the security of the “world” the social network has built for us. With this in mind, is it really so ridiculous to expect Facebook to take a more proactive stance in enforcing its “Statement of Rights and Responsibilities,” which explicitly states, “You will not post content that: is hateful, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence”?
Automated detection software already inspects uploaded images against the National Center for Missing &amp; Exploited Children’s database of known child pornography images in an attempt to control the distribution of child porn over the social network. Yet the content is still uploaded and accessible before it is detected and removed, leaving me to ask, “Why?”
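For readers curious how this kind of database matching works, the sketch below shows the general idea in Python. It is a minimal illustration, not Facebook’s actual system: real deployments use robust perceptual hashes (such as Microsoft’s PhotoDNA) that still match resized or re-encoded copies, whereas this sketch uses a plain cryptographic hash for simplicity. All function names and database entries here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited images.
# In a real system these would be perceptual hashes supplied by a
# clearinghouse such as NCMEC, not SHA-256 digests of placeholder data.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Check an upload against the known-image database before it is published."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES
```

The point of the sketch is the ordering: the check runs before the image is published, rather than after a user flags it, which is exactly the proactive stance the column is asking for.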
If a solution exists that can scan image uploads at high volume and accurately confirm that they do not violate the most severe of content prohibitions, why is this technology not being implemented? When did Facebook delegate to its subscriber community the job of flagging inappropriate content for inspection, and what emotional or mental harm is this causing those users, particularly the thousands of underage users active on the site?
Perhaps Facebook should look to YouTube for direction in this arena. The video-sharing portal has developed a number of safety features, including Safety Mode and the YouTube for Schools educational video library, to ensure that appropriate content can be accessed reliably and responsibly, without the risk of exposure to harmful content.
I don’t deny Facebook’s contribution to the Internet, or its usefulness as a social connector. But at the end of the day, I have to say I feel a bit let down by its general disregard for social responsibility and hope that my ad-click revenue will be put to better use in 2012.