Australian/American Software as a Service (SaaS) giant Atlassian laid off 1,600 people this week in what appears to be a reluctant and painful repositioning exercise.
Coding is core business for Atlassian. Software as a Service is a huge sector sitting right in the centre of the storm created by AI coding, and Atlassian is naturally trying to adapt to an emerging and somewhat neurotic market.
The big picture is chaotic, to put it politely. Understandably, the many dimensions of AI coding are creating havoc and obvious indecision in business software development. AI can write code, sure, but there are many related, potentially expensive issues.
There’s an irritatingly familiar back story to this mess. The recent big selloff in software development stocks underlines a further fundamental problem. The market seems to think AI will do it all.
It can’t, it won’t, and it shouldn’t. This generation of AI is barely potty-trained. It’s clunky, and it’s error-prone. Just tacking on an LLM and expecting vibe coding to do it all is far beyond absurd. It’s dangerous.
If you think someone’s semiconscious, underqualified level of literacy instantly translates into telling AI to write great code, you’re not doing a lot of thinking. Of course, it won’t turn out pristine, perfect code for all occasions. You might get ballpark, but you’re a long way from business-standard trustworthy code.
AI isn’t particularly literate anyway. Pedantic, yes. Inflexible, yes. A linguistic slip in a prompt can surface as a syntax failure in code. This is nitpicking at a truly obsessive level, but if you don’t pick the nits properly, your code won’t run at all. Imagine an entire language as an opportunity for coding bugs.
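To make that concrete, here's a minimal sketch of how one missing character kills a program outright. The helper name `check_snippet` and the snippets themselves are invented for illustration; the point is simply that a parser has zero tolerance for nits.

```python
def check_snippet(src: str) -> str:
    """Try to parse a snippet of Python; report whether it survives."""
    try:
        compile(src, "<snippet>", "exec")
        return "ok"
    except SyntaxError:
        return "syntax error"

# One character is the difference between running and not running at all.
print(check_snippet("if paid == True:\n    pass"))  # ok
print(check_snippet("if paid = True:\n    pass"))   # syntax error
```

No amount of near-miss literacy gets partial credit here; the code either parses or it doesn't.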
Software as a service is essentially the customization of software for business purposes. It can’t be a guessing game. It has to work well within the operational metrics and performance demands of businesses. That’s what SaaS is all about.
There’s a certain karmic irony in the fact that, so soon after the software selloff, AI coding is now creating havoc in big businesses like Amazon. If you’re seeing dollar signs heading for the exits, bingo.
Add to this the equally ironic fact that AI has a newly discovered talent for finding coding bugs. At the same time, Anthropic has created a code review tool to manage AI coding quality. What a coincidence.
If you’re somehow getting the impression from this rhapsody of realism that AI needs strict supervision, you’ll at least avoid going broke.
An enchanting narrative for the curious about how much damage a simple glitch in software can do:
I supervised a project that issued notices trying to extract statutory fees and document lodgements from the merry burghers of Sydney. The recipients were accountants, lawyers, and corporate managers. We trustingly issued 40,000 notices to the people who had already paid and lodged their documents, but not to the ones who hadn’t. It was as much fun as it sounds.
At the same time, the database was erasing old data when it entered new data. It was bliss, and it took weeks to fix. It almost derailed the project entirely.
We never got an answer as to exactly how this dog’s breakfast happened, but could a few lines of code have done it? Yes. Did we get threatened with lawsuits? Of course. Feeling better about your coding options? Point made.
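Could a few lines do that? Here's a hypothetical sketch, nothing like the actual system, with invented names and data, showing how a single misplaced `not` in a filter inverts an entire mailing list:

```python
# Invented records for illustration only.
records = [
    {"name": "paid-up accountant", "lodged": True},
    {"name": "overdue manager", "lodged": False},
]

def recipients(records, buggy=False):
    """Select who gets a compliance notice."""
    if buggy:
        # The bug: the missing 'not' selects exactly the wrong people.
        return [r["name"] for r in records if r["lodged"]]
    # Intended behaviour: chase only those who haven't lodged.
    return [r["name"] for r in records if not r["lodged"]]

print(recipients(records))              # ['overdue manager'] — intended
print(recipients(records, buggy=True))  # ['paid-up accountant'] — what shipped
```

One word, and every notice goes to the wrong half of the city.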
Meanwhile, back at the software situation:
You don’t have to go back to writing code on stone tablets.
You do need an absolutely idiot-proof, properly tested regime for managing code quality.
You will definitely need SaaS as a built-in fixer.
Do NOT trust AI coding to be some sort of fairy god-agent for your business. Check everything ruthlessly.
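What does "check everything ruthlessly" look like in practice? At minimum, a test that pins down the behaviour you actually asked for before anything ships. The function `fee_due` below is a hypothetical AI-written helper under review; the assertions are the sketch of a regime, not a full one.

```python
# Hypothetical AI-generated helper under human review.
def fee_due(lodged: bool, paid: bool) -> bool:
    """Chase only those who have neither lodged nor paid."""
    return not lodged and not paid

# The regime: exhaustively check the cases that matter, every time,
# before a single notice goes out the door.
assert fee_due(lodged=False, paid=False) is True   # genuinely overdue
assert fee_due(lodged=True, paid=True) is False    # fully compliant
assert fee_due(lodged=True, paid=False) is False   # lodged, fee pending elsewhere
print("all checks passed")
```

Boring, mechanical, and exactly the kind of thing that would have caught a 40,000-notice inversion on day one.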
_________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.
