At ManageEngine, the agents are not all human.
CEO Rajesh Ganesan refers to them as digital employees.
These AI-powered agents are designed to perform defined tasks within enterprise systems, such as triaging support tickets or filling out RFPs. Each agent is issued a user identity, assigned access permissions, and governed by a set of rules, much like a human employee. The system tracks what they can see, what they can do, and which applications they are allowed to interact with.
This workflow is not theoretical — it’s already operating at global scale.
ManageEngine, which is part of Zoho Corp, provides IT management software to more than 280,000 organizations in nearly 200 countries.
The company runs on Zoho’s cloud infrastructure, where it has built its own portfolio of enterprise IT tools, including a shared data platform and an internal AI development layer.
Ganesan has been with the company since the early 2000s and now oversees one of the most widely deployed software suites in enterprise IT.
In an interview with Digital Journal in Toronto at one of the company’s user conferences, Ganesan kept coming back to the idea that accountability, not automation alone, will determine how well companies integrate AI into their operations.
As more enterprises adopt agentic forms of AI, where systems can take action on their own, he believes a critical focus should be on governance.
Ganesan touched on these themes during his keynote at the event, where he outlined the challenges of AI readiness: companies that are ready for agentic AI think beyond implementation and focus on how to govern it, measure it, and build accountability into every layer of deployment.
“Just throwing all the available AI is not going to solve your problems,” he said. “You need to have clarity.”
Accountability has to scale with automation
ManageEngine has been investing in machine learning for more than a decade, gradually expanding the scope of what AI can do across its product suite.
Recent advances in large language models have enabled the company to explore more complex, semi-autonomous tasks. But even as capabilities evolve, structure remains the constant. Whether applied to customer-facing features or internal systems, AI is governed with the same level of intention and oversight.
“We will build digital employees the same way you build a human employee,” Ganesan said. “We will create the identity. We will create the entitlements. We will define what data they can see, what actions they can take, and we’ll track it.”
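Ganesan did not share implementation details, but the pattern he describes (an issued identity, explicit entitlements, and a record of every action) can be pictured as a simple data structure. The sketch below is a hypothetical illustration in Python; the names and fields are assumptions, not ManageEngine’s code.

```python
# Hypothetical sketch of a "digital employee": an identity, explicit
# entitlements, and an audit trail. Illustrative only, not ManageEngine's code.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DigitalEmployee:
    agent_id: str                # identity, issued like a human user account
    allowed_apps: set[str]       # applications the agent may interact with
    allowed_actions: set[str]    # actions the agent may take
    readable_data: set[str]      # data scopes the agent may see
    audit_log: list[dict] = field(default_factory=list)

    def act(self, app: str, action: str, data_scope: str) -> bool:
        """Check entitlements before acting, and record the attempt either way."""
        permitted = (
            app in self.allowed_apps
            and action in self.allowed_actions
            and data_scope in self.readable_data
        )
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "app": app,
            "action": action,
            "data_scope": data_scope,
            "permitted": permitted,
        })
        return permitted


# Example: an agent entitled to triage tickets and nothing else.
triage_bot = DigitalEmployee(
    agent_id="agent-ticket-triage-01",
    allowed_apps={"servicedesk"},
    allowed_actions={"read_ticket", "assign_ticket"},
    readable_data={"tickets"},
)
print(triage_bot.act("servicedesk", "assign_ticket", "tickets"))  # True
print(triage_bot.act("payroll", "read_record", "salaries"))       # False, but logged
```

The point of the structure is less the permission check itself than the trail it leaves: every attempted action, permitted or not, is attributable to a named agent.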
That same mindset extends to how the company uses AI behind the scenes.
One example is content. ManageEngine has made a policy decision not to use generative AI to write documentation, support material, or marketing copy. All content is authored by people, then reviewed and polished using AI tools.
“All our content should be original,” Ganesan said. “We will write our own, but to spruce the content up, we will go to GenAI systems.”
This kind of clarity is what Ganesan sees as essential for responsible AI integration. It’s not just about what the technology can do, but how and where it should be applied. In both customer-facing systems and internal processes, the goal is to avoid unchecked automation and build guardrails that reflect the company’s values.
AI should help you do better work, but it should also reflect how you want that work to be done.
That principle runs through ManageEngine’s decisions. AI is not being adopted for speed alone — it’s being shaped to align with the company’s long-standing emphasis on user trust, control, and product integrity.
For Ganesan, that means resisting shortcuts, defining boundaries, and making sure every deployment is accountable to both business goals and the people behind them.
Testing with purpose
Not every AI tool delivers the same return right now. Ganesan’s team has tested a variety of code-generation platforms across its engineering workflows, finding them helpful in some areas, such as UI development, and less productive in others.
“For user interface, it’s able to do a good job. It’s about 70% efficient,” he said. “But for the rest of the code we already have, like fixing a bug, it’s only a 1-2% increase in productivity.”
Ganesan noted that things are evolving quickly and that capabilities will likely improve, but he emphasized the importance of running experiments to understand where AI can actually deliver value.
“Unless we try all these experiments, there is no way for us to know where to even apply them, or not,” he said. “Some areas are useful, and in others we are not seeing results yet.”
This kind of measured approach is not about resisting innovation. It is about making sure the business context comes first.
“You have to blend data with instinct,” Ganesan said. “You cannot just go with blind faith.”
Unlearning is part of the job
When asked what he wishes he had known earlier in his AI journey, Ganesan did not hesitate.
Between 2012 and 2022, his teams invested heavily in building specific models for sentiment analysis, anomaly detection, and recommendation engines. Each use case had its own system, its own maintenance, and its own pipeline.
“We were building a different model for everything,” he said. “And we thought that was the right approach.”
Then came large language models. Suddenly, one generalized system could handle many of the same tasks, with less complexity.
“We should have guessed it long back,” Ganesan said. “We would have diverted a lot of our efforts and investments into something else.”
Rather than defending sunk costs, he sees it as a reminder that adaptability matters more than being early.
ManageEngine is still testing where AI delivers meaningful results and where it doesn’t, and Ganesan is watching the progress carefully.
“We are yet to see the same levels of productivity that are being claimed,” he said. “But it is coming. And we have to be ready to shift when the time is right.”

Culture is the infrastructure
The AI systems Ganesan is building are not separate from the human systems that surround them. If anything, he believes the biggest risk in AI adoption is ignoring how much culture matters.
He points to cybersecurity as an example.
Ganesan said companies often treat it as the responsibility of a specialized team, but that mindset can fall short. At ManageEngine, the goal is to make security a shared responsibility across the organization.
Concentrating accountability in one team can create blind spots and leave other employees disconnected from risk. While Ganesan didn’t describe past failures, the shift toward broader responsibility reflects a belief that security works best when everyone is engaged.
“Security is everybody’s responsibility inside the organization,” he said. “Don’t treat it as an outsourced function.”
To reinforce that mindset, the company uses contextual training and real-time interventions. If an employee tries to paste personal information into a chat, the system blurs the text and displays a warning. If an employee clicks a potential phishing link, they see an immediate alert.
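The article does not detail how those interventions work under the hood, but the core idea (scan outgoing text for patterns that look like personal information and interrupt before it leaves) is simple to sketch. The toy example below is a hypothetical illustration; the patterns and function names are assumptions, not ManageEngine’s system.

```python
# Toy illustration of a real-time check on a chat message: look for patterns
# that resemble personal information and warn the user before sending.
# Patterns and names are assumptions for illustration only.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def check_message(text: str) -> list[str]:
    """Return the kinds of personal information detected in a chat message."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]


message = "My card is 4111 1111 1111 1111, email me at jane@example.com"
findings = check_message(message)
if findings:
    # In the scenario Ganesan describes, the UI would blur the text and show a warning.
    print(f"Warning: possible personal information detected ({', '.join(findings)})")
```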
“When it happens 20 times, 30 times, people subconsciously learn,” he said. “Before clicking, they think.”
The same philosophy applies to AI.
Without structure, training, and shared responsibility, even the best technology can create risk. Ganesan does not believe in handing over control to the machine. He believes in designing systems where AI makes people better, and where people, in turn, are equipped to lead.
“AI is going to be extremely, extremely useful in how application systems are architected in the future,” he said. “But the fundamentals will not change.”
The future of enterprise systems may lean toward autonomy, but that only makes accountability more essential to how they are designed, deployed, and led.
Digital Journal is a media partner of ManageEngine UserConf Toronto.
This article was created with the assistance of AI. Learn more about our AI ethics policy here.