It’s a cynic’s dream. AI layoffs have started decimating Big Tech workforces, all supposedly in the name of focusing on AI. Microsoft has the dubious honor of making most of the headlines, removing 3% of its global workforce while saying it is streamlining to focus more on AI development and remain competitive.
Critics say it’s more about saving money and making money by ditching employees. The “hate the employees” motif is entrenched in corporate and conservative environments. That’s not AI’s fault.
This culture was around long before AI and fits in nicely with middle management’s more traditional “hate the customers and humans in general” role, which dates back to the Middle Ages at least.
What’s new about this mess is that you never had to trust your whole business to a medieval serf. You do have to trust your AI, and you’re spending a lot of money to trust it. The absurdity of that level of blind faith seems lost on everyone.
AI errors are very common. Even the insurance industry is creating new coverage for “AI mishaps”. Lloyd’s has launched a new package to help companies manage the fallout from “errors or hallucinations,” according to the Financial Times.
Insurance companies insure on the basis of likely risk. That insurance won’t be cheap: AI risk levels could be anything, so premiums will inevitably go up.
A quick overview for the techno-awestruck:
The whole purpose of business management is to oversee operations. If you don’t have people overseeing all aspects of AI operations, you’re asking for trouble.
If you’ve ever written code, run a business, managed anything in a highly litigious environment, and/or done business accountancy, you know you need to see risks coming.
I have done all of those things over many years, and the motto remains “Trust Nothing and Check Everything”.
The risk of failure at any given moment rises in direct proportion to the amount of information being handled. With AI, which handles enormous volumes of information constantly, that risk is a no-brainer.
Any system can and will fail, and those failures can even involve malicious agents. It’s not exactly a reassuring look.
Ironically, Microsoft published a taxonomy of failure modes in AI agents just last month.
AI is not infallible or anything like infallible. Nobody ever said it was, by the way.
Management problems need to be controlled at the earliest possible stage, before they become systemic, not at the disaster level.
Cost-cutting can be totally illusory. Replacing employees with an expensive liability doesn’t make a lot of sense. You also instantly add the job of overseeing the AI to your management team.
Can you fire an AI? Yes, at great expense, and the time needed to find a replacement will cost you too.
Even if you outsource the AI, your costs and your liability for collateral damage won’t go away.
Let’s get real, shall we? The introduction of AI is already chaotic. It doesn’t need to get any worse.
_________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
