Canada is in the middle of deciding how artificial intelligence will shape its economy, public services, and social systems.
This week’s federal report, Engagements on Canada’s next AI strategy: Summary of inputs, captures the ideas and tensions animating Canada’s AI debate as the government works toward a national strategy expected in 2026.
The document summarizes what the government heard during a 30-day public consultation that ran from Oct. 1 to 31, 2025, along with 32 reports submitted by the 28 members of the AI strategy task force. It represents the first formal step toward a renewed national AI strategy.
According to the summary, more than 11,300 submissions were received from individuals and organizations across the country, spanning business, academia, civil society, the arts, and the public sector.
Innovation, Science and Economic Development Canada (ISED) describes the process as the largest public consultation the department has undertaken.
The report summarizes feedback and key themes. It offers an early view of how Canadians and experts are thinking about AI’s economic, social, and regulatory implications, while leaving many of the harder decisions still ahead.
“What this stage tells us is that the government is approaching AI as a broad economic and societal issue, not just a technology file,” says Benjamin Bergen, CEO of the Canadian Venture Capital and Private Equity Association (CVCA) and a member of the AI strategy task force.
“The consultation reflects a real effort to understand how AI fits into Canada’s broader economic and public policy landscape.”

What the summary highlights
The report organizes feedback into several focus areas: research and talent; AI adoption across industry and government; commercialization; scaling Canadian firms; safety and trust; education and skills; infrastructure; and security.
Within those sections, the summary points to recurring concerns about attracting and retaining AI talent, moving AI beyond pilot projects, protecting Canadian intellectual property, and strengthening domestic compute and data infrastructure.
It also captures caution about environmental impact, job disruption, privacy, and reliance on foreign-controlled platforms.
The feedback reflects both optimism about AI’s potential and skepticism about its risks, without attempting to reconcile that tension.
“That tension is real, and it shouldn’t be seen as a contradiction,” Bergen says. “It reflects maturity in how AI is being discussed in Canada.”
What’s missing
For Elena Yunusov, executive director of the Human Feedback Foundation, reading the consultation summary alongside the expert submissions raises questions about what gets lost in writing the summary.
“When I compared the expert submissions to the summary, it felt like many of the more concrete ideas were toned down,” she says. “That may be expected in a summary, but it does mean the real trade-offs aren’t visible yet.”
One example Yunusov points to is open source, which she sees as a practical way to lower costs, reduce dependence on single vendors, and make AI tools more accessible across sectors. Several task force submissions reference open-source approaches in the context of cybersecurity, infrastructure, and accessibility. That emphasis, she says, is lost in the summary itself.
“What stood out to me was that open source came up again and again in the expert submissions, in different contexts, but it didn’t really make it into the summary,” Yunusov says. “That’s concerning, because it’s not controversial or binary. It’s not an either-or choice.”
Yunusov recently explored this argument in an article for the Centre for International Governance Innovation, where she described open source as a potential “third path” for Canada, distinct from dependence on foreign platforms or costly attempts to build everything domestically.
“If we’re serious about sovereignty, we have to define it in our own interests,” she says. “We’re not going to replicate everything domestically, and we shouldn’t try. Open source lets us collaborate globally while still building agency and independence.”

Who benefits, and who risks being filtered out
Yunusov raises a broader concern that the choices behind the summary may hint at which organizations are included or excluded from Canada’s AI strategy.
While the summary frequently references commercialization, scaling, and productivity, it pays less attention to civil society organizations and nonprofits, despite their role in delivering public services and shaping public trust.
“We’ve overindexed on commercialization,” she says. “That matters, but nonprofits and civil society are largely missing from the strategy conversation, even though they’re on the front lines delivering services and building public trust.”
The report does reference equity, inclusion, and trust as priorities, but offers limited detail on how those principles would translate into adoption support, access to compute, or funding for organizations outside traditional business or research categories.
“You don’t build public trust by talking about AI,” Yunusov says. “You build it by putting AI to work for people in ways they can see and feel. Nonprofits are uniquely positioned to do that because they’re not chasing returns. They’re trying to solve real problems.”
From consultation to execution
As the process moves from consultation to strategy, both Yunusov and Bergen point to clear signals they will be watching for.
For Yunusov, one of the clearest indicators will be whether stated principles translate into access.
“What I’ll be looking for is whether the strategy actually supports the organizations that people rely on every day,” she says. “If nonprofits and small organizations are still excluded from access to compute, funding, and adoption support, then we’ll know this stayed at the level of principles.”
What Bergen wants to see is how the strategy moves from intent to action.
“A strategy starts to take real shape when it’s accompanied by clear signals on execution,” he says. “That means procurement frameworks that create real customers for Canadian AI firms, capital and tax policies that support scaling and IP retention, and infrastructure decisions that address compute, power, and data constraints.”
Bergen says progress will show up when AI policy starts shaping how money is invested and how governments buy AI, not just in more consultation.
The report says ISED used AI tools, including Canada-based Cohere, to help review thousands of submissions, speeding up what would normally be a manual review.
Even so, Digital Journal has heard from other leaders in the AI ecosystem who say the overall pace of the strategy still feels slow relative to how quickly AI is being adopted in practice.
For now, the summary sets expectations without resolving them. Whether the forthcoming strategy delivers on those expectations will determine how consequential this first step turns out to be.
Final shots
- The consultation shows that Ottawa is listening. The strategy will show what it heard, and it will be judged by which ideas make it into execution and which quietly fall away.
- Signals to watch include who gets access to compute, procurement, and adoption support, not just who is mentioned in principle.
- The real test will be how Canada’s AI strategy reshapes incentives and infrastructure.
