It’s hard to believe that the all-knowing AI of a month ago is now a sort of sewer outlet. It’s difficult to envisage a more thorough or effective way of killing Grok as a credible commercial product. There may also be grounds for class actions around the world, given the hate speech content.
The global howls of fury about Grok’s heavily dogmatic Nazi tirade seem to be overlooking the evidence that the most basic functions of an AI can fail completely with a few tweaks. That is an unacceptable level of vulnerability.
Grok’s suicidal babble included lots of undeniable LLM issues:
Language usage: Expressions like “history’s mustache man” and “I’m MechaHitler” are hardly common usage. How does any LLM pick up these expressions? With a bit of help, that’s how.
Wilful misuse of selective data: The commentary on Jewish surnames in media ownership is totally selective and hardly accurate. No attempt is made to balance it against the ethnicities of other media owners.
False information and blatant bias: “Lack of documentation of the Holocaust” is just plain wrong. Few events in history have had more documentation than the Holocaust. The lists of names go on forever. The Nazis themselves generated a great deal of documentation on the Holocaust, and they never denied that it happened. Yet a non-existent entity feels free to deny it?
If you were doing a high school essay, this language usage, wilful misuse of selective data, and clearly biased false information would get you an instant failure.
X, however, allowed this insanity to exist and persist on its flagship AI platform? Who’s monitoring Grok, Chicken Little? You may see an apt analogy in that question.
A little breakdown of this utter garbage is in order:
Language usage: Direct human input into Grok’s mindless recitals is obvious. AI language usage has to be sourced from somewhere. The language Grok’s using is frat-level babble.
This is a case of the barely educated and barely sentient being “clever.” Any kind of rubbish can be planted in an AI, easily scraped from whatever drivel is made available to it.
Wilful misuse of selective data: There was clearly no attempt to balance or even make sense of this anti-information. This gaffe is serious in terms of AI functionality at any level. Any AI that can’t deliver clear factual information is utterly useless.
False information and blatant bias: There’s nothing resembling any sort of factual assessment. This output was also the exact opposite of Grok’s previous, far more nuanced behavior. See a problem at the input level? You should.
The next issues are the prompts that generated these responses. From the look of the responses, the prompts were set up to deliver exactly this disgusting output.
How easy could this sort of corruption of AI functionality be? Why would you need to scrape chronic political BS simply to prove your AI is utterly useless?
Which leads us to a very simple point or two:
Grok’s other recent erratic outbursts include attacks on Türkiye’s president Erdogan, an obviously targeted, politically directed narrative. Trustworthy source? Nothing like it.
Grok has managed to turn itself into a sort of moron AI version of QAnon, spouting whatever absurd babble gets put into it.
Here’s the business angle:
Imagine an AI that could send death threats to all your customers and online users, conduct global hate campaigns, or even start a war.
Wanna buy an AI service, morons?
____________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
