This is very much a visual culture. You’re hit with tens of thousands of images per minute, real and fake. AI is at the forefront of this bombardment. The plague of fake images, brought to you courtesy of the misinformation industry, is all AI.
This is also nothing like a smart culture. You’re allowed to be an idiot because it’s expected of you. As usual, the warnings arrived well before the fact, and were ignored. The legal situation is as blurry and unfocused as ever.
All of a sudden and as usual, everything everyone was warned about years ago is now a problem. Monotonous, isn’t it?
There’s a major quality issue with the deepfakes. Australian Associated Press has a very good, clear article about how wrong these AI fakes can be. The problem touches everything about AI fake images, including their much-vaunted training methods.
These AI pictures are truly absurd. The Wright brothers are replaced with what looks like Tweedledum and Tweedledee. There’s no resemblance at all to the real Wright brothers.
The point about AI training is simple. You’d think these lazy image-makers would have trained the AI to at least compare its output with the real images. Apparently, that’s too much trouble.
The market reach of fake AI images is pretty much universal. That’s not good news for anyone trying to promote anything, including themselves.
By a strange coincidence, this brings us back to the question of who owns images of people. The people own their own images. They’re very much part of top-tier proof of identity, and that shouldn’t even be questioned as a legal ownership right.
…But who owns fake images if they’re given different names? As long as you’re not infringing on someone’s identity, it should be OK, right?
Not necessarily. The famous Taylor Swift deepfakes are a case in point. In this case, the images are close enough to the real person, and they do actual damage to that person.
Facial recognition is well known as a core, must-have human social skill. If it looks enough like Taylor Swift, you’re likely to think that it is Taylor Swift. Damage is automatically done by the publication of the images. Even if the sole instruction is “make an image of an attractive brunette” and the image is generated innocently, it’s still a potential problem.
To explain: AI is trained on large numbers of images. People who appear in a lot of imagery are unavoidably included in the training materials. Something that looks like Taylor Swift is inevitable.
Add a bit of lowbrow nastiness and the desire to get money out of porn-obsessed morons, and you get porn attached to anyone’s face. Hard to understand? No, it isn’t.
There are forensic ways of managing this sort of thing. Think of it as a “forensic blockchain for images”. You can establish fairly easily whether an image is too much like a particular person. You could even use AI to cross-check a too-similar image against known faces before publication. It really is an “image by numbers” thing, quite simple.
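To make the “image by numbers” point concrete, here is a minimal sketch of one way a pre-publication similarity check could work, using perceptual hashing in Python. The file names, the threshold, and the choice of the third-party Pillow and imagehash libraries are illustrative assumptions, not a description of any actual forensic system.

```python
# A minimal sketch of an "image by numbers" pre-publication check,
# using perceptual hashing. Assumes the third-party packages
# Pillow and imagehash (pip install pillow imagehash).
from PIL import Image
import imagehash

def too_similar(candidate_path: str, reference_path: str, threshold: int = 8) -> bool:
    """Return True if the candidate image is suspiciously close to the reference.

    A perceptual hash reduces an image to a short fingerprint; visually
    similar images produce fingerprints that differ in only a few bits.
    The threshold here is an assumption and would need tuning in practice.
    """
    candidate = imagehash.phash(Image.open(candidate_path))
    reference = imagehash.phash(Image.open(reference_path))
    # Subtracting two hashes gives the Hamming distance (number of differing bits).
    return (candidate - reference) <= threshold

# Hypothetical file names, for illustration only.
if too_similar("generated_image.png", "known_person.png"):
    print("Too similar: flag for human review before publication.")
else:
    print("No close match found.")
```

A real forensic pipeline would compare against a database of many reference images, and face-specific embeddings would catch resemblances that simple hashing misses, but the principle stands: images reduce to numbers, and numbers can be compared before anything is published.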
Meanwhile, back in Dumb Fakes Land, nobody’s thinking about things like this. Fake images are replacing influencers, photographers, and reality.
There are serious problems with deepfaking anything, including privacy violations, breach of commercial image copyright, and way too many et ceteras. If you deepfake a trademark, use it without authorization, or go beyond “fair use”, you may have just published a million-dollar lawsuit or several.
Remember, these are unquestionably bona fide legitimate privacy and property issues. The publishers and the AI don’t have a toenail clipping to stand on, even in theory. All they can hope for is that the images don’t match too closely under scrutiny.
Dumb, it is. If anyone thinks people will miss an opportunity to make money out of a deepfake, they’re out of their minds. …Which sorta raises the question of why bother with deepfakes at all.
There is a market for this garbage. It’s new. It’s cute. It’s stunningly predictable. It’s quick. It’s cheap. It’s godawful, therefore it’s mainstream media. It’s lowest common denominator, therefore it’s good.
This is as dumb as getting AI to do your accounts. You are literally assuming that an automated system can tell the difference between fraud and real numbers. In this case, you’re assuming that people whose lives are based on their images won’t fight tooth and nail to protect those images.
AI deepfakes are also a major source of toxicity on social media. Nobody seems to be too fussed that hate campaigns are based on a lot of fake imagery and spin. The endlessly remarked-upon issue that X is now probably inhabited by as many bots as people doesn’t seem to matter much.
It’s commercial suicide, but what’s new? Bots don’t buy sponsors’ products, but that’s obviously OK with someone. Bots don’t get threatened round the clock, either. The odd but real picture is that non-existent people are now generating fake images at the expense of publishers.
We’re now at the black hole formation stage of fakes. These things can destroy their reason for existence already. We now have an artificially stupid technology which can put itself and its publishers out of business and create liabilities every second. Happy?
_____________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.