
The healthcare playbook for reliable AI

The enterprise AI industry has a dirty secret. Despite billions in investment and breathless coverage of each new model release, most AI projects never make it out of the lab. Research from Gartner puts the failure rate somewhere north of 80%. The pattern is so consistent it has become a cliche: impressive demo, enthusiastic pilot, quiet death.

Photo courtesy of Kousik Rajendran.

Opinions expressed by Digital Journal contributors are their own.

Kousik Rajendran thinks he knows why. And his explanation has nothing to do with model architecture or training data. It has everything to do with a lesson he learned building healthcare software in Coimbatore nearly a decade ago.

In 2016, his team at Healtho5 Solutions pushed an update to their patient engagement platform. Hours later, a hospital administrator called. The system had sent appointment reminders to patients who had already been discharged. Nobody was hurt. But the incident crystallized something for Kousik about the difference between software that works and software that works reliably.

“In most software, a bug is an inconvenience. In healthcare, a bug can mean a patient misses a critical follow-up. The stakes are different. That changes how you build.”

Healtho5 built MedEngage, a platform helping healthcare providers automate patient communications and track outcomes. The technical challenges went beyond typical enterprise software: integration with electronic health record systems that varied wildly across providers, strict regulatory requirements around patient data, and near-perfect reliability requirements, because gaps in communication could affect health outcomes.

The experience forced a certain discipline. Every automated action needed an audit trail. Systems needed to degrade gracefully rather than fail completely. Human oversight was built into workflows from the start, not added as an afterthought. Edge cases that seemed unlikely still got handled because in healthcare, unlikely events happen daily.

Kousik carried these lessons to Amazon Web Services in 2020, where he spent nearly five years as a Principal Solutions Architect working with healthcare and life sciences clients. The role offered a view into how the largest healthcare organizations approached AI adoption. What he saw confirmed his suspicions about why so many projects fail.

The successful organizations shared certain traits. They treated AI like critical infrastructure rather than an experiment. They invested in data quality before worrying about model sophistication. They built for maintainability from day one. They planned for failure, designing systems that could detect problems, alert humans, and recover gracefully. In other words, they followed the playbook that healthcare regulations had forced Kousik to learn years earlier.

“Healthcare does not forgive mistakes the way other industries do. If your e-commerce recommendation engine has a bad day, someone sees an irrelevant product. If your patient communication system has a bad day, someone might miss a screening appointment.”

Today Kousik runs Aivar Innovations, an AI services company he co-founded to help enterprises bridge the gap between experimentation and production. The company has two platforms: Convogent AI for voice applications and Velogent AI for process automation. Its approach borrows directly from healthcare principles.

The first principle: observability by design. Every AI system needs comprehensive logging and monitoring from the start. Teams need visibility into not just what the system is doing but why it made specific decisions. In healthcare, Kousik had to explain automated actions to regulators, providers, and sometimes patients. That discipline, he argues, is valuable everywhere.
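The principle can be sketched as a thin audit layer that records not just what an automated system did but why. This is an illustrative example, not Aivar's actual code; the decorator name, the reminder rule, and the log fields are all assumptions.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai.audit")

def audited(action):
    """Record an audit entry for every automated decision, including its rationale."""
    def wrap(fn):
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_log.info(json.dumps({
                "id": str(uuid.uuid4()),           # ties the log entry to one decision
                "action": action,
                "ts": time.time(),
                "decision": result["decision"],
                "rationale": result["rationale"],  # the "why", for later review
            }))
            return result
        return inner
    return wrap

@audited("send_reminder")
def decide_reminder(patient):
    # Hypothetical rule: never message patients who are no longer active.
    if patient.get("status") != "active":
        return {"decision": "skip", "rationale": f"status is {patient.get('status')}"}
    return {"decision": "send", "rationale": "active patient with upcoming visit"}
```

Because every decision emits a structured entry with a rationale, a reviewer (or a regulator) can reconstruct why the system acted without rerunning it.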

The second: graceful degradation. Systems need to keep functioning in reduced capacity when components fail or encounter unfamiliar situations. Fallback behaviors. Timeout mechanisms. Clear escalation paths to human operators.
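A minimal sketch of that chain, with all names and thresholds assumed for illustration: try the primary model under a timeout, fall back to a simple rule when the model is slow or unsure, and escalate to a human queue when neither can be trusted.

```python
import concurrent.futures

def classify(text, model, timeout_s=2.0):
    """Primary model first; rule-based fallback on timeout or low confidence;
    explicit escalation to a human when neither result is trustworthy."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    try:
        label, confidence = pool.submit(model, text).result(timeout=timeout_s)
        if confidence >= 0.8:                 # assumed confidence threshold
            return {"label": label, "source": "model"}
    except concurrent.futures.TimeoutError:
        pass  # degrade rather than fail outright
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
    if "urgent" in text.lower():              # crude but predictable fallback rule
        return {"label": "urgent", "source": "fallback_rule"}
    return {"label": None, "source": "human_review"}  # clear escalation path
```

The key property is that every path returns something actionable: the caller always learns both an answer and how much to trust it.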

The third: continuous validation. Production systems need ongoing testing against real-world data, not just development datasets. Performance drifts as patterns change. Systems need to detect drift before it becomes a problem.
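One common way to make drift detection concrete is the population stability index (PSI), which compares a feature's distribution in production against the reference sample the model was validated on; values above roughly 0.2 are conventionally treated as significant drift. A minimal sketch, with the binning scheme and threshold as simplifying assumptions:

```python
import math

def psi(reference, production, bins=10):
    """Population Stability Index: how far a production sample's distribution
    has shifted from the reference sample the system was validated against."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def share(sample, i):
        # Fraction of the sample falling in bin i; the floor avoids log(0).
        hit = sum(1 for x in sample
                  if edges[i] <= x < edges[i + 1] or (i == bins - 1 and x == hi))
        return max(hit / len(sample), 1e-6)

    # Production values outside the reference range simply lose bin mass,
    # which itself registers as drift.
    return sum((share(production, i) - share(reference, i))
               * math.log(share(production, i) / share(reference, i))
               for i in range(bins))
```

Run against fresh production data on a schedule and alert when the index crosses the threshold, so drift is caught before it becomes a visible failure.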

The approach appears to be gaining traction. Aivar works with clients across financial services, healthcare, logistics, and manufacturing. The growth has attracted attention from investors, with Bessemer Venture Partners among those backing the company in its seed round.

One detail that often comes up in conversations about Aivar: it is headquartered in Coimbatore, a city in Tamil Nadu better known for textiles than technology. In an industry clustered around Bangalore, Hyderabad, and the Bay Area, the choice seems counterintuitive. Yet Aivar is part of a growing wave of AI startups attracting early-stage capital regardless of geography. Kousik sees advantages in his location. Lower talent costs extend runway. The engineering pool is smaller but more stable, with less of the constant attrition that plagues major tech hubs. And there is value in distance from the hype cycle.

“When you are in a major tech hub, there is pressure to chase whatever is trendy. Every conversation is about the latest model. In Coimbatore, we focus on what actually works for customers.”

Kousik sees the broader market at an inflection point. The technology has matured. Large language models have made natural language interfaces practical. Computer vision accuracy is production-ready. Costs have dropped dramatically. Investor interest in AI services continues to grow, reflecting a bet that implementation will matter as much as innovation. The capability gap, as Kousik puts it, has closed. But the deployment gap remains wide open.

His prediction: the next phase of enterprise AI will be defined by operational maturity, not model sophistication. The winners will be organizations that treat AI as infrastructure, that invest in the unglamorous work of integration and monitoring and maintenance. The skills that matter will shift from building impressive demos to deploying systems that run reliably month after month.

It is a perspective shaped by years of building software where reliability was not optional. Healthcare regulations and consequences forced a discipline that most industries lack. Kousik is betting that discipline is exactly what enterprise AI needs. The lessons from environments where systems cannot fail may prove more valuable than any breakthrough in model architecture.

Written By

Jon Stojan is a professional writer based in Wisconsin. He guides editorial teams consisting of writers across the US to help them become more skilled and diverse writers. In his free time he enjoys spending time with his wife and children.
