Before he led multimillion-dollar transformations or architected human-centred AI strategies for the federal government, Jean-Paul Lalonde was in a classroom in Sudbury, Ontario, helping students with physical disabilities find the right technology to access education.
“I was teaching adaptive technologies, basically trying to find the technologies to bridge the gap to level the playing field for these students,” says Lalonde, now chief information and data officer at the Impact Assessment Agency of Canada (IAAC). “It is an extremely fulfilling job, probably one of my most fulfilling jobs.”
That early experience of empowering others with tech has shaped every career decision since. Whether leading strategy at the Canada Mortgage and Housing Corporation (CMHC) or launching generative AI tools to combat financial crime at FINTRAC, Lalonde says he’s always been chasing that same feeling: “In every single one of my jobs, I was always looking to make that difference.”
Bridging tech and trust in the public sector
At IAAC, Lalonde is guiding the agency through another ambitious transformation. This time, it’s about applying artificial intelligence to one of the most complex and politically sensitive mandates in government: environmental impact assessment.
“Our job is to look at all the major projects in Canada — pipelines, offshore drilling, mines — and find a balance of advancing these projects and preserving the environment,” he explains. That includes meaningful consultation with Indigenous communities, which his team has made more accessible through redesigned digital platforms.
But faster doesn’t mean automated. Lalonde is clear that AI isn’t replacing the human judgment at the heart of these decisions — it’s supporting it. In fact, that’s the core of his strategy: using AI to enhance, not override, the critical thinking, values, and responsibilities that define public service.
One of the biggest barriers to adoption, he says, is fear — fear among staff that AI is coming for their jobs.
“When we built a model specifically to say we are not going to replace you… all of a sudden that fear started to diminish,” he says. “Then you could get to that conversation where the user is like, okay, I’m going to use this thing. I’m actually going to expand my capability.”
By tackling that fear head-on, Lalonde is opening the door to meaningful, sustained adoption that prioritizes people over hype and maintains public trust.
AI isn’t a tool — it’s a capability
What makes Lalonde’s approach unique is his emphasis on AI literacy across all roles, not just among developers or technical teams.
“We don’t see [AI] as technology. We see it as a business capability,” he says.
His team uses a four-level capability matrix to guide adoption:
Level 1 – Awareness and experimentation: Staff are introduced to AI through simple, practical tasks — summarizing documents, rewording text, drafting emails — using tools like ChatGPT, Copilot, and other government-supported platforms. The goal is to build confidence through hands-on use.
Level 2 – Structured use and analysis: Users begin learning how to properly capture and structure information in AI platforms. This includes applying AI to analysis and research tasks, drafting more complex content, and managing the lifecycle of documents or communications with AI assistance.
Level 3 – Workflow automation with AI agents: At this stage, users create custom AI agents that can handle repeatable, time-consuming tasks. Multiple agents can be “chained” together to automate workflows, reducing repetitive work and increasing productivity.
Level 4 – System integration: Advanced users and teams embed these AI-driven workflows into backend systems, aligning automation with core organizational operations and maximizing efficiency at scale.
But the guardrails are just as important as the tools. “You’re not only giving the people the tools and the skills, you’re giving them the accountability,” he says. “There’s always a human in the loop at the end. And they’re the ones signing off on the work, not the AI.”
A model for the future — shared and evolving
Lalonde’s model has already sparked international interest, earning him invitations to join networks in the UK and Dubai and to deliver a keynote address in Singapore. He envisions a future where organizations publicly commit to human-centred AI approaches — possibly even using a certification to attract mission-aligned talent.
“This is kind of just the beginning,” he says. “I’d like to make this perhaps even a national standard.”
And while he acknowledges the risks, he remains focused on education as the most powerful defence. “The genie is out of the bottle,” he says. “So the best thing we can do is just teach people how to use it properly.”
Learn more at CIOCAN’s Peer Forum
Jean-Paul Lalonde will share his full framework and lessons learned during his session, Amplifying Human Potential: A Human-Centred Approach to AI Adoption, at the CIO Association of Canada’s Peer Forum. His talk is a call to action for CIOs to move beyond generic AI strategies and embrace human-centred leadership in a fast-changing world.
“I think every CIO should be a level three in my guide,” he says. “This is not for their developers anymore. This is for them.”
The session offers both a roadmap and a rallying cry for Canada’s IT leaders to take the reins on AI adoption, with humanity at the core.
Digital Journal is the official media partner of the CIO Association of Canada.

This article was created with the assistance of AI. Learn more about our AI ethics policy.
