Facebook faces new privacy criticism, moderators can see info

By Leigh Goessl     Mar 6, 2012 in Internet
A new controversy over Facebook and privacy emerged after it was discovered that the network's outsourced content moderators can see user information that was previously suggested to be hidden from view.
Recently, Gawker exposed how Facebook handles content that is either questionable, offensive or breaks terms of service.
The process is not internal, as many users probably assume, but is handled by outsourced workers. The social network giant allegedly pays these moderators $1 an hour to scan offensive content reported by users.
Moderators are outsourced workers
Facebook hires moderators through oDesk, and these workers are responsible for examining flagged content and then deleting it, ignoring the flag, or escalating it; if it is escalated, the content goes back to California, where a Facebook employee takes action.
Gawker's article highlighted some of the mystery behind the network's 'policing' and outlined how Facebook's moderation process works. The information was shared by a discontented former moderator, who felt Facebook is "exploiting the third world."
In Facebook's defense, for years various companies have farmed out this sort of task to outsourced workers, and in some instances, unpaid volunteers. It is not an uncommon practice in the digital world. However, presumably most users would expect a level of privacy and diligence in safeguarding user information.
It's like "looking at a friend's" page
New information has surfaced suggesting that Facebook hasn't been, perhaps not surprisingly, protecting user privacy as much as the network has implied. Facebook has a long history of privacy-centric controversies.
According to The Telegraph, Facebook said in response to the Gawker piece, “No user information beyond the content in question and the source of the report is shared.”
It turns out that statement may not mean exactly what it sounds like it means. Apparently, it depends on your definition of "content."
Moderators apparently see information beyond the questionable, often disturbing, content that Facebook users routinely report. According to one former moderator, he was able to see the name of the user who uploaded the reported content, the subject of the image or the person tagged in the photo, and the person who filed the report. Reportedly, there are no security measures in place to prevent moderators from capturing screenshots or from looking up additional information about the user online, as one former moderator admitted to doing.
Amie Derkaoui, 21, of Morocco, showed The Telegraph screenshots of exactly what moderators are able to see when they evaluate flagged content. Derkaoui claims moderators could take screenshots if they chose, and said what he saw amounted to a great deal of personal information, describing it as like "looking at a friend's Facebook page."
Derkaoui also said he was not "explicitly told" the oDesk client he was working for was Facebook.
Facebook defines 'content'
Facebook responded to The Telegraph by saying, “On Facebook, the picture alone is not the content. In evaluating potential violations of our rules it is necessary to consider who was tagged and by whom, as well as additional content such as comments…Everything displayed is to give content reviewers the necessary information to make the right, accurate decision.”
A separate Telegraph report further highlights the privacy issues. The publication notes that no real vetting is done on the people hired; individuals work from home and do not appear to undergo criminal background checks.
Yet they are able to see information that many users might place behind Facebook's privacy settings. And there is no way to stop moderators from taking these images and republishing them on the web. As security specialist Graham Cluley told The Telegraph, "By sharing information about a Facebook account holder, there is obviously the potential for abuse and blackmail."
Privacy issues
Over time there have been numerous controversies relating to Facebook's practices and philosophies regarding privacy.
Which gives pause to wonder whether Facebook has learned anything from previous backlashes, episodes that have angered everyone from privacy advocates to users and drawn the attention of government agencies.
If moderators are disgruntled enough about their low pay, is there cause for concern that some unsavory individuals might try to earn a few extra bucks by screen-capturing user information?
After all, Gawker had reported, "As a sort of payback, Derkaoui gave us some internal documents."
It is not a big stretch to wonder how many others serving in this role could be amassing information that users assume is protected internally by Facebook.
Derkaoui told The Telegraph, “Facebook has to increase these wages. One dollar an hour is the lowest wage at oDesk and I believe it must be the worst salary paid by Facebook. They also have to recruit people to do this job from around the world, not only those from the third world… And they need to keep users’ data private too."
Media reports say neither Facebook nor oDesk has commented on the moderation process or the rate of pay. The network also did not confirm or deny whether it is still using oDesk's services for moderation. However, a Facebook spokesperson did say, "These contractors are subject to rigorous quality controls and we have implemented several layers of safeguards to protect the data of those using our service."