Some time ago I made the idealistic remark that AI was humanity’s dream come true – something to blame for everything. This optimistic view has since had to be revised somewhat.
You have to admire the scope and depth of the instant adoption of ChatGPT. The mindless enthusiasm is also quite cute. It’s like the folksy romance of a billionaire riding a dead homeless person. A chatbot is now taking over the world. The headlines are endless. The criticism is constant.
…And nobody’s paying the slightest attention. “It’s not certain death; it’s an investment,” etc. Obviously, a chatbot that can teach people how to write better phishing emails is a must-have for any business or school project. It’s also great for children who may never learn how to write meaningful information. Why wouldn’t you want to invest in it?
This is not ChatGPT’s fault. It’s not the developers’ fault. It’s not OpenAI’s fault. The problem is that this tech is being introduced into an idiotic society. Maybe OpenAI should insist on users having chaperones. The global whimpering will be something to see.
Just when disinformation has become an industry in its own right, this comes along. At the very time when critical analysis is an almost extinct thing, here’s your content creator AI.
A very expensively ultra-undereducated and basically pig-ignorant generation of cretins is now “evaluating” ChatGPT. These are the guys who paid someone else for their college degrees and call themselves a social elite.
Add the hype and mindless faith in technology, and all else follows. Tech is becoming like those seven extra beers you couldn’t handle, but had to have. The Peter Principle, which says that people in a hierarchy tend to rise to “a level of respective incompetence”, is in full flight here.
A brief look at the world will show you that the word “incompetence” is becoming increasingly inadequate as a description of daily life. Human languages aren’t ready for this.
Consider – the same people who hold meetings and turn a single click on a screen into a week-long ordeal are supposed to “manage” ChatGPT? Unlikely.
Excuses, excuses, and one actual excuse
ChatGPT is unique in another way. It’s the first platform the entire world has instantly made every effort to exploit. Every effort was also made to find the negatives. It’s exactly like social media. Social media, we’re told, is the problem, not the society using the medium.
A world full of psycho whackos couldn’t possibly be the problem, could it? For some incomprehensible reason OpenAI aren’t selling ChatGPT as a prosthesis for morons. They’re not advertising How to Be an Even Bigger Idiot as a selling point.
Pity, really. It’d be quite a brochure. You could have diagrams and everything.
Anyway, the lowest common denominator is the usual market reality. In this case, an instantly accessible excuse could get interesting.
“Did you ask the AI to create recipes for nuclear weapons?”
“Oh golly gosh no! Gee whiz!”
“Just a coincidence?”
“Yup.”
Any amount of information on any subject can be generated by AI, let alone a chatbot. The trouble with chatbots is that they’re gossips talking to halfwits.
“Did you know that (insert name of compulsory celebrity for better SEO) was a genocidal maniac?”
“Nope.”
As a vehicle for threats, slander, libel, and all-out attacks on people, what could be more useful than a never-shuts-up chatbot?
This is the one excuse that can survive – it was the user’s idea. If I were OpenAI, I’d be looking for a really good disclaimer writer, perhaps ChatGPT itself.
Something elegant, like:
“T’weren’t my idea. I’m just a down home chatbot. I just write what the users want me to write. It’s they-there user critters what done it. Them and their non-existent vocabularies and high-falutin’ neuroses. So there.”
Read those headlines. You don’t have to read the gruesome details. Human stupidity was already in a golden age. This could be a major breakthrough.
________________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.