As enterprises race to adopt AI, many are discovering that experimentation alone isn’t enough to deliver real business value. Legacy systems, fragmented data, and outdated architectures continue to limit how far AI initiatives can scale, often stalling promising ideas at the pilot stage.
To discover more, Digital Journal spoke with Arun “Rak” Ramchandran, CEO of digital engineering company QBurst.
Ramchandran shares his perspective on why modernizing the data foundation is critical to AI success, the misconceptions leaders have about replacing legacy systems, and the practical steps organizations can take to move from AI fascination to measurable ROI, without disrupting core business operations.
Digital Journal: Why do legacy systems hold companies back from fully leveraging AI, and what’s the biggest misconception leaders have about fixing them?
Arun “Rak” Ramchandran: Legacy systems often suffer from messy data and workflows that aren’t ready for AI’s speed and scale. Without a proper Data Foundation, AI cannot provide meaningful results; when context is missing, it can end up amplifying errors instead. The biggest misconception, however, is that modernization requires a total replacement. Leaders often assume they must retire an old system before standing up a new one, when integrations and wrappers can make old systems talk to new AI. Technology today has evolved to support exactly this scenario.
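The wrapper idea Ramchandran describes can be sketched in a few lines. This is an illustrative example only, not QBurst’s implementation: a hypothetical legacy order system with numeric status codes and free-text notes is wrapped by an adapter that returns a clean, contextual payload an AI agent or LLM tool call could consume. All class and field names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical legacy record: numeric status codes and messy free text.
@dataclass
class LegacyOrderRecord:
    order_id: str
    status_code: int   # legacy numeric status codes
    raw_notes: str     # inconsistently formatted free text

STATUS_NAMES = {0: "pending", 1: "shipped", 2: "cancelled"}

class LegacyOrderSystem:
    """Stand-in for an old system of record (e.g., an ERP or mainframe)."""
    def __init__(self):
        self._db = {
            "A-100": LegacyOrderRecord("A-100", 1, "  SHPD 2024/01/03 dock7 "),
        }

    def fetch(self, order_id: str) -> LegacyOrderRecord:
        return self._db[order_id]

def order_status_tool(system: LegacyOrderSystem, order_id: str) -> dict:
    """Wrapper that normalizes legacy output into a structured payload,
    so an AI layer gets context instead of raw codes."""
    rec = system.fetch(order_id)
    return {
        "order_id": rec.order_id,
        "status": STATUS_NAMES.get(rec.status_code, "unknown"),
        "notes": rec.raw_notes.strip(),
    }

print(order_status_tool(LegacyOrderSystem(), "A-100"))
```

The legacy engine stays untouched; only the thin adapter changes when the AI side evolves, which is what makes this a low-risk first step.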
While this approach lets organizations begin adopting AI, the risk for many enterprises is falling into the Retrofitting Trap: accumulating AI Debt by bolting GenAI onto systems that weren’t designed for it. This is why so many initiatives stall at the pilot phase – they lack the underlying architecture to scale. We are moving from a period of AI Fascination to AI Accountability, where the focus is no longer on which LLM to use, but on how to demonstrate tangible ROI.
This challenge is compounded by a lack of foundational readiness. Many organizations rush to experimentation while bypassing essential investments in data strategy, data engineering, and governance. Without modernized data foundations and clear control frameworks, AI initiatives remain isolated PoCs, rather than enterprise capabilities.
Breaking this pattern requires a shift to AI-first design. Instead of asking where AI can be added, organizations must design systems with AI outcomes in mind from day one, by aligning architecture, data flows, and governance to support intelligent automation at scale.
Practically, this starts with data engineering. Building robust, well-governed data pipelines and models upfront creates the conditions for AI to scale sustainably. When the foundation is right, AI moves from experimentation to impact.
DJ: What are the practical steps companies should take to modernize legacy systems without disrupting core business operations?
Ramchandran: The most critical step is prioritizing the modernization of the Data Foundation. This could mean unifying all your data – structured or unstructured – from various silos into a common data lake, where it can be normalized to a standard. That unification is what allows enterprises to leverage AI in a transformative way, with the most measurable impact.
More practically, though, companies should start with Internal Augmentation in areas where homogeneous data already exists. Departmental use cases – AI agents in HR, Finance, or Engineering (coding and testing, for example) automating specific tasks without touching the core legacy engine – are scenarios where a start can be made that demonstrates impact. This creates a safe zone for innovation and drives adoption, a critical cultural factor in the success of AI initiatives.
Eventually, organizations also need to think about how to foster responsible AI. This requires enterprises to set up systems for AI governance, security, and impact measurement as they scale.
Here are three steps that can prepare organizations as they move toward scaled, transformative success with AI and agentic workflows.
- Prioritize Data Foundation Modernization: For organizations operating on legacy architectures, the first step is modernizing the data foundation to enable metadata, lineage, and data quality metrics for siloed data. This ensures agents have the contextually rich, explainable data they need. Integrated platform offerings in the cloud and GenAI-based tools have made this modernization journey faster and more straightforward. While using GenAI with legacy architecture is possible, the token requirement for meaningful results would be extremely high and cost-prohibitive.
- Establish Enterprise Knowledge Layers: Organizations that have not modernized their systems will have a great deal of accumulated knowledge that is undocumented. Building knowledge layers to capture this transient, accumulated knowledge within the system is the second high-priority task. This is the missing layer in many organizations’ AI adoption journeys. Again, thanks to GenAI, these knowledge sources need not always be documents; they can also be audio, video, or images, making capture and sourcing quicker and more feasible.
- Define Agent Boundaries and Ways of Work: The third step is to ensure that agents adhere to the best practices and security and compliance requirements currently followed by the organization, the industry, and the community. Governance frameworks, security policies, and observability frameworks enable agents to think and act effectively within the boundaries and established ways of the organization’s work.
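The third step above – agent boundaries with governance and observability – can be illustrated with a minimal sketch. This is an assumption-laden toy, not any vendor’s API: a policy object defines an allow-list and a record limit, and every agent action is checked against it and written to an audit log so it is observable after the fact.

```python
from datetime import datetime, timezone

# Illustrative agent-governance sketch; all names are assumptions.
class AgentPolicy:
    """Boundaries an agent must operate within."""
    def __init__(self, allowed_actions: set, max_records: int):
        self.allowed_actions = allowed_actions
        self.max_records = max_records

class GovernedAgent:
    """Checks every requested action against policy and logs it."""
    def __init__(self, policy: AgentPolicy):
        self.policy = policy
        self.audit_log = []  # observability: one entry per attempt

    def act(self, action: str, record_count: int) -> bool:
        permitted = (action in self.policy.allowed_actions
                     and record_count <= self.policy.max_records)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "records": record_count,
            "permitted": permitted,
        })
        return permitted

policy = AgentPolicy({"read_invoice", "draft_reply"}, max_records=100)
agent = GovernedAgent(policy)
print(agent.act("draft_reply", 10))    # within boundaries -> True
print(agent.act("delete_invoice", 1))  # outside allow-list -> False
```

In a real deployment the policy would come from the organization’s governance framework and the log would feed an observability platform, but the shape – check first, record everything – is the point.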
DJ: How important is data readiness compared to infrastructure upgrades when preparing for AI adoption?
Ramchandran: A lesson I’ve carried through every tech cycle – from the early days of mobile in 2009 to the cloud revolution – is that you cannot automate chaos. AI is only as powerful as the data feeding it. QBurst is driving growth for our customers by ensuring that the boring but essential work is done, namely Data Estate Modernization and Advanced Data Engineering.
Enterprise AI adoption also lags consumer AI for a reason: governance, security, and compliance are non-negotiable. These are not obstacles to work around, but requirements to build for. Organizations must establish trust frameworks that include guardrails, GenAI observability, and explainability. This also establishes the need for modern infrastructure that allows uniform governance, security, and monitoring across disparate teams driving independent projects and initiatives. Upgrading infrastructure to that extent, or leveraging pre-integrated, standardized environments in the cloud, can address most of this need.
DJ: Can companies become AI-ready without completely replacing legacy systems, and what role do hybrid or cloud-based approaches play?
Ramchandran: As I said earlier, technology today does allow a quick start on AI adoption by building AI scaffolding around a legacy system. This works for finite, specific use cases, usually limited to a departmental or role-specific scope.
The true power of AI is unleashed when one uses it to transform the way work is done. To do this, while some legacy apps may be retained for the time being, rethinking the Data Estate and architecting it with an AI-first approach is critical. Transformative AI cannot succeed without a modern approach to data. Even when using pre-trained models, data unification that allows correlation of data with its specific context is essential.
Thankfully, data technology today allows for an easier modernization approach. Hybrid data lakes, for example, allow one to retain data in the backend databases of legacy applications while bringing shadow copies into integrated repositories for central processing and analytics. Modern cloud platforms allow accelerated setup and integration of such data lakes in a cost-effective way, considerably improving the time to production for most enterprise AI initiatives.
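The shadow-copy pattern behind a hybrid data lake can be sketched simply. In this toy version – table names, columns, and the in-memory SQLite stand-in are all assumptions – the legacy application keeps its own database untouched, while a sync job reads rows out and normalizes them into a common schema for the central repository.

```python
import sqlite3

def sync_shadow_copy(legacy_conn: sqlite3.Connection) -> list:
    """Read from the legacy backend and emit rows in the common
    schema used by the central data lake (a shadow copy)."""
    rows = legacy_conn.execute(
        "SELECT id, customer, amount FROM orders"
    ).fetchall()
    return [{"id": r[0], "customer": r[1], "amount_usd": float(r[2])}
            for r in rows]

# Stand-in for a legacy application's backend database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("O-1", "Acme", 120.0), ("O-2", "Globex", 75.5)])

lake = sync_shadow_copy(conn)
print(lake)
```

A production pipeline would add incremental change capture, scheduling, and lineage metadata, but the core idea is the same: the legacy system of record keeps serving its application while analytics and AI read from the unified copy.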
DJ: What risks do organizations face if they delay modernizing legacy systems in an increasingly AI-driven market?
Ramchandran: They face a massive opportunity cost. This is a once-in-a-generation chance to break away from the pack, and fast movers will benefit the most in terms of market share. Organizations that wait will be stuck in the AI Fascination phase while competitors move to AI Accountability and real ROI.
The new age is not about adding an AI layer to your applications. People are no longer talking just about building AI capabilities and wrappers around applications; now, applications are meant to be surfaced inside AI environments. Essentially, AI is becoming the UI, with apps living inside GenAI interfaces, and agentic commerce is becoming real very fast. This is a fundamentally new architecture, with AI at the center taking over the UI. Building AI as a wrapper around legacy applications will only drag initiatives further from delivering these capabilities.
Also, Intelligence and execution are becoming abundant, and as abundance increases, value shifts from effort to outcomes. AI fundamentally breaks the logic of hourly billing. This is why the industry is moving toward outcome-based models. Metrics such as tickets resolved without human intervention or workflows completed end-to-end by AI provide clear, measurable value. These models treat capability as software, not labor, which can be described as “service-as-software.”
Approaches like Managed Agents and service-as-software offer a more sustainable path forward. They shift the focus from paying for effort to paying for intelligent results, enabling predictable costs, continuous improvement, and shared upside from automation. Managed Agents allow human engineers and AI agents to work together toward business goals, while service-as-software makes value measurable through outcomes rather than hours spent.
In an AI-driven world, the most aligned commercial models are those that reward results, not effort, creating a win-win for both enterprises and service providers.
