As AI reshapes industries and daily life, the Canadian government is taking new steps to ensure the technology develops in a responsible and secure way.
In its latest move to strengthen AI governance, the federal government has refreshed its AI advisory council, launched a new risk-focused advisory group, and added more companies to its voluntary AI code of conduct. The goal is to balance innovation with responsibility, ensuring AI development doesn’t outpace the safeguards needed to keep it in check.
On March 6, François-Philippe Champagne, minister of innovation, science and industry, announced a series of initiatives aimed at strengthening AI oversight and governance.
The measures include updating the membership of the Advisory Council on Artificial Intelligence, launching the Safe and Secure Artificial Intelligence Advisory Group, and releasing an implementation guide for managers overseeing AI systems.
Strengthening AI governance and expertise
The Advisory Council on Artificial Intelligence, originally created in 2019, provides recommendations to the government on AI strategy, economic growth, and responsible development. The refreshed council will be co-chaired by Diane Gutiw, vice-president and global AI research lead at CGI, and Olivier Blais, co-founder and vice-president of decision science at Moov AI.
The newly launched Safe and Secure Artificial Intelligence Advisory Group will be chaired by Yoshua Bengio, scientific director of Mila. This group will focus on assessing risks associated with AI and advising the newly established Canadian AI Safety Institute on research priorities related to AI safety.
Ongoing investments in AI safety and innovation
The federal government has positioned AI as a strategic priority, committing $2.4 billion in Budget 2024 to support AI-related initiatives. This funding aims to enhance computing infrastructure, accelerate safe AI adoption, and provide skills training for workers adapting to AI-driven changes.
Canada has also been active in international AI policy discussions, contributing to global efforts on establishing AI safety standards and participating in international AI safety conferences.
But beyond policy, oversight is also about industry buy-in. To that end, six more organizations — CIBC, Clir, Cofomo Inc., Intel Corporation, Jolera Inc., and PaymentEvolution — have signed onto Canada's Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, joining 40 others committed to responsible AI development.
Industry leaders commit to responsible AI development
Industry leaders are voicing their support for the voluntary AI code of conduct and the government’s latest AI initiatives.
“Artificial intelligence is one of the most transformative technologies of our time, and there is no doubt that it is here to stay,” Champagne said in a statement.
“As AI technology continues to evolve, our government is committed to making sure that Canadians can benefit from it safely and that companies are developing it responsibly. The measures announced today are a positive step forward in securing an AI ecosystem that works for — and in the interests of — all Canadians.”
Companies that have signed onto the voluntary code say responsible AI governance is critical for building trust and fostering innovation.
“Generative AI tools provide exciting opportunities for our bank to foster innovation, enhance productivity, and do more for our clients,” said Dave Gillespie, executive vice-president, infrastructure, architecture and modernization at CIBC.
“As we continue to leverage generative AI tools thoughtfully across our bank to help deliver on our client-focused strategy, we’re pleased to be a signatory to the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems and to support the ongoing development of a robust, responsible AI ecosystem in Canada.”
Intel Canada also reaffirmed its commitment to ethical AI practices.
“AI’s transformative potential comes with a responsibility to develop and deploy it ethically,” said Asma Aziz, general manager of Intel Canada.
“We are honoured to stand at the spearhead of this change alongside the Government of Canada, driving a future where AI is not just powerful, but also ethical and sustainable. At Intel, we are committed to a strong AI strategy that prioritizes trust, transparency, and governance, ensuring our technology not only drives innovation but also creates meaningful, positive impact for businesses, communities, and society at large.”
Canada’s AI leadership on the global stage
In 2017, Canada became the first country in the world to introduce a national AI strategy.
Since 2016, Canada has invested over $4.4 billion into AI and digital research infrastructure, supporting AI research, computing, and skills development. In 2024, the government introduced the Canadian AI Safety Institute as a new initiative to focus on AI risks and governance.
With a growing network of industry, research, and policy stakeholders engaging in AI governance, the country is reinforcing its commitment to ensuring AI serves the public good while remaining competitive in a rapidly evolving global landscape.
