Deepfakes have a history of being used to manipulate users into thinking something is real when it is not – such as spreading fake political propaganda that can damage a candidate’s reputation. Many social media users had hoped that Twitter’s new policy would seek to outlaw deepfakes entirely.
A recent example of a deepfake was the video put together by the U.K. Conservative Party about a Labour Party opponent. The video was edited to make it appear that Keir Starmer, the Labour Party’s Brexit minister, was too stumped to answer a straightforward question about an EU deal. The video was condemned by most neutral observers as an affront to truthful political discourse, as the New York Times reported.
It is for reasons like this that the Twitter policy is disappointing to many. Damien Mason, digital privacy advocate at ProPrivacy, tells Digital Journal that Twitter’s approach to protecting users from deepfakes is not nearly enough. He also urges Twitter users to speak up about the proposed policy.
Mason begins by looking at the positives: “It’s commendable that Twitter has committed itself to tackle deepfakes, which are an inherent invasion of privacy. Not only do they imitate our likeness without consent of the subject, but they can also be humiliating, misleading and have serious ramifications on politics and other sensitive areas.”
However, there are clear weaknesses with what is being proposed: “Unfortunately, the social media’s approach is severely lacking, with one foot firmly out the door at all times. Twitter stays intentionally vague on when it will remove manipulated media, requiring deepfakes to be physically threatening or already having caused enough harm after the fact.”
He adds that Twitter’s policy only goes so far: “Labeling the material is less than half the battle and does nothing to help the true victims of such attacks. It helps to avoid political catastrophes, such as defaming government officials with doctored videos but ignores the larger scope of deepfake content creators.”
Mason does acknowledge the complexities at play between freedom of expression and individual privacy: “It’s a difficult balance between free speech on a platform and moderating such non-consensual, invasive and personal content. I would suggest that Twitter perfect its flagging and report system regarding Deepfakes, allowing victims to submit their request and unquestionably take harmful material regarding themselves off the site.”
Mason encourages anyone with an interest in this subject to contact Twitter: “Definitely get your voice heard on the matter,” is his succinct advice.