GenAI is playing a steadily growing role in the world of work. The technology offers firms clear advantages, but it also presents challenges.
To learn more, Digital Journal caught up with Arun “Rak” Ramchandran, President and Global Head – GenAI Unit, Hi-Tech & Professional Services at Hexaware. In the interview, Ramchandran touches on some of the difficulties that come with adoption and related AI challenges.
Digital Journal: What is your take on the current adoption rate of GenAI in enterprises?
Arun Ramchandran: The promise of GenAI is real, but the pace of enterprise-level adoption is slower than anticipated. 2024 was expected to be the year of production-level deployments, with full-scale enterprise implementations of GenAI. Instead, it has remained primarily a year of proofs of concept (POCs), albeit more advanced ones focused on applications that generate real value. Enterprises are cautious, mainly due to concerns about data readiness, security, and governance mechanisms, so the transition is happening gradually.
DJ: Why do you think enterprise adoption has lagged behind expectations?
Ramchandran: There’s a gap between the intent and the adoption of GenAI in enterprises. While many CIOs recognize its importance—a significant majority say it’s crucial for their operations—the actual implementation rate is low. Several factors contribute to this: data security concerns, enterprise data readiness, regulatory and compliance issues, and the overall complexity of integrating GenAI into existing business models. That said, we’re beginning to see more maturity in the way enterprises approach GenAI. They’re now asking for help moving POCs into production, which signals a shift in mindset.
DJ: How is GenAI technology evolving, particularly in enterprises?
Ramchandran: We’re observing two technological shifts. The first is a move toward smaller models and experimentation with the open-source ecosystem. Enterprises are starting to adopt smaller, more domain-specific language models that are more efficient and tailored to tasks like customer service or code generation. These smaller models are gaining traction because they’re easier to implement and don’t require the massive computational resources of the largest LLMs. There is also growing interest in open-source GenAI models, with Meta’s Llama being a prime example.
Open-source models are becoming competitive with closed-source alternatives, offering enterprises more flexibility and cost-effectiveness.
The second shift concerns unlocking enterprise data and getting it ready for use with GenAI models, as well as improving the accuracy and relevance of the output. Techniques like Retrieval-Augmented Generation (RAG), fine-tuning, pre-training, and the incorporation of human feedback all demand a high degree of enterprise data readiness, and that evolution is now happening in a big way.
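To make the RAG idea above concrete, here is a minimal sketch of the retrieval step: given a user question, it ranks a toy document store and assembles the augmented prompt that would be sent to a language model. The word-overlap scoring, the sample corpus, and the prompt template are all simplified illustrative stand-ins, not any specific vendor’s implementation; a production system would use embedding similarity over an indexed enterprise data store.

```python
import re

# NOTE: toy RAG sketch. The corpus, scoring, and prompt template
# below are illustrative placeholders, not a production pipeline.

def tokenize(text):
    """Lowercase the text and extract alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (a stand-in
    for the embedding similarity a real RAG system would use)."""
    scored = [(len(tokenize(query) & tokenize(doc)), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query, documents):
    """Assemble the augmented prompt that would go to the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Claim 1042 was approved on June 3 for water damage.",
    "Quarterly revenue figures improved across all regions.",
    "Claim 1042 covers the policyholder's kitchen renovation.",
]

prompt = build_prompt("What is the status of claim 1042?", corpus)
print(prompt)
```

Only the two claim-related documents end up in the prompt; the unrelated revenue document is filtered out, which is the core of how RAG grounds a model’s answer in enterprise data.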
DJ: Which industries are seeing the most adoption of GenAI?
Ramchandran: We’re seeing traction in industries that are either highly data-driven or disrupted by technology. High-tech companies, for instance, are using GenAI because it’s core to their operations, while sectors like insurance, financial services, and healthcare are adopting it due to their reliance on structured data. These industries are using GenAI for tasks like claims processing, customer service, and financial modeling. Retail, media, and telecom sectors are also picking it up, as they’re facing technological disruption and can benefit from the efficiencies GenAI offers in areas like personalized marketing and e-commerce optimization.
DJ: What about the industries that are slower to adopt GenAI?
Ramchandran: Sectors that are more physical or heavily regulated, such as agriculture and oil and gas, are adopting GenAI at a slower pace. These industries typically face more logistical challenges when it comes to integrating AI, and they also have strict regulatory requirements that complicate the implementation of AI technologies.
DJ: How do you see the future of GenAI impacting employee productivity and job roles?
Ramchandran: GenAI is more of an augmentation tool than a job-replacing technology. While there’s some fear that it could eliminate jobs, especially in coding or customer service, the reality is that GenAI will likely enhance employees’ productivity rather than replace them. Employees who learn to work alongside these tools will be more effective.
For instance, engineers using GenAI to generate or review code can accomplish more, but the technology isn’t at a stage where it can autonomously replace entire teams. Companies are already integrating GenAI into daily operations, from generating proposals to automating workflows, and employees are benefiting from the increased efficiency.
DJ: What are the biggest challenges facing GenAI adoption today?
Ramchandran: Data readiness, security, and regulatory concerns are three of the biggest barriers. Most enterprises are realizing that they don’t have their enterprise data ready for use by GenAI models. Enterprises also need to ensure that their data is protected and comply with regulations like GDPR or HIPAA. These concerns are particularly relevant in industries like healthcare and financial services, where privacy and regulatory compliance are critical.
DJ: What do you think about the role of regulation in the development of GenAI?
Ramchandran: Regulation is both a challenge and an opportunity. Different regions are taking different approaches—Europe’s AI Act is more prescriptive, while the U.S. is focused on self-regulation with stricter state-level rules like California’s AI Bill. Enterprises need to be aware of these regulations as they deploy GenAI. There are specific rules around the training of models and data usage, and enterprises must ensure compliance to avoid risks. We believe it is important to help clients by providing guidance on how to implement AI responsibly while navigating this regulatory landscape.
DJ: How do you see the competition between open-source and closed-source GenAI models evolving?
Ramchandran: Open-source models are quickly catching up with their closed-source counterparts. Meta’s Llama model, for example, has sparked interest in the open-source community, and we’re seeing more enterprises experiment with these alternatives. Open-source models are not only cost-effective but also offer flexibility in customization, making them an attractive option for enterprises that want more control over their AI solutions. In the future, I expect the competition between open-source and closed-source models to intensify, yet both approaches will coexist, offering different benefits based on the use case.