As AI becomes more mystical, it becomes more dangerous. It’s an odd spectacle to watch the tides of pure babbling ignorance come in while vast numbers of AI risks are identified on an almost hourly basis.
These risks come with serious dollar values. Even now, well before a true AI social and business environment has emerged, major risks are showing up routinely.
Big Tech may well believe its own publicity. By now, it more or less has to believe it because there’s that much money tied up in it.
Nobody else with the slightest level of familiarity with AI believes it at all. Even the G7 doesn’t.
Let’s look at a few issues, shall we? Say, biometrics?
The incredibly enthusiastic and yet strangely disingenuous science and cottage industry of security biometrics is promoting itself with way too few questions being asked.
There’s nothing particularly secure about biometrics in any application.
Biometrics are quite literally who you are for ID purposes. As such, they demand high-priority privacy safeguards, in theory, if not in practice.
Online security providers, including Google and Microsoft 365, are actively promoting this recognition tech and potentially spraying unprotected biometrics all over the internet.
They are supposedly a set of “unique identifiers.” They’re also symptoms of the outdated ecology of thought infesting this whole new class of tech.
For instance:
Biometrics were unique about 30 years ago.
Unique, except Hollywood has been copying them in movies for that many years. They’re at cut-and-paste level now.
Unique, except the same set of numbers and vectors can be used as the basis for any number of deepfake copies and tweaks.
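The “numbers and vectors” point can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s actual matching code: the template values and threshold are invented, but the underlying idea is real — a stored biometric template is just a vector, matching is a similarity comparison, and nothing in the math distinguishes the enrolled owner from anyone else holding a copy or a lightly tweaked derivative of that vector.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors of equal length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical face/fingerprint template stored at enrollment.
enrolled = [0.12, 0.85, -0.33, 0.47]

stolen  = list(enrolled)                # attacker's exact copy
tweaked = [v * 1.01 for v in enrolled]  # lightly perturbed, deepfake-style variant

THRESHOLD = 0.95  # invented acceptance threshold for this sketch

print(cosine_similarity(enrolled, stolen) >= THRESHOLD)   # accepted
print(cosine_similarity(enrolled, tweaked) >= THRESHOLD)  # also accepted
```

Both the copy and the tweak pass the check: the matcher scores vectors, not people, which is why a leaked template is reusable in a way a leaked password at least theoretically isn’t — you can rotate a password, not a face.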
You’re already at the forensic level of biometrics at this very early point in the discussion. Every pixel could be a conviction. Fun, isn’t it?
These biometric basics are called “fingerprinting” by understandably skeptical and very wary experts.
Obviously, AI that can do lifelike Zoom calls and fake job interviews with convincing deepfakes couldn’t possibly steal your biometrics, now, could it?
This is an unfashionable viewpoint.
Everything must be wonderful with AI, biometrics, and rampant cybercrime because some pitiful lost dork in Big Tech PR says so.
Biometrics are being touted as the cure for fraud, not the most likely cause of emerging and future fraud.
It’s a bit like saying the best place to put a live minefield is in your living room.
That’s because biometrics and AI are already very much in your living space.
AI, so conscientiously, stupidly, and deliberately deregulated in the US, will be the main driver and fulcrum of cybercrime in the future. That’s what this is all about, in so many ways. Corrupt AI agents, AI fraud, biometric ID theft, and the rest of the avoidable garbage of mismanaged tech are on their way.
To quote Mr Cohen, “Hallelujah.”
It’s hard to be idealistic. The internet was supposed to plug in humanity. It plugged in criminals, morons, and worse still, politicians. AI doesn’t even need actual people to operate.
These are the questions for Big Tech:
How can you even pretend any of these issues are being managed at all? Where’s your proof?
AI is deregulated, so what are the legal protections?
Do we have to sue you every five minutes to get even theoretical security? Eight billion people in a single class action could be interesting. It’d be like Bleak House on a slightly larger scale.
What rights, if any, do people have if their biometrics are stolen and used to commit crimes?
What do the ineffectual, senile, and decrepit laws say about any possible instance of things like that, if anything at all?
If a deepfake crashes the markets, who kisses it better?
At least try to pretend to know what you’re doing occasionally.
_______________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
