AI is pretty universal at this point. You plug a question into ChatGPT and get a quick roundup of online research in your niche. Or, you enjoy a suite of AI-powered features on your Google Pixel phone. Perhaps you’re a business owner who has used these online tools to help write a long email or draft a blog post.
Pretty harmless, right? That depends on how you use generative AI.
What happens when this tech is applied to higher-stakes industries and pursuits like defence? Every organization and country opens itself to some level of risk. Canadian computer scientist and deep learning pioneer Yoshua Bengio has urged caution and called for democratic governance in the age of AI, specifically in light of Canada’s latest voluntary AI code of conduct.
What is the code, and how does it intend to protect Canadians from the risks of AI? Let’s dive in.
Canada’s Voluntary AI Code of Conduct
The code’s standards and guidelines invite companies to take a robust, responsible approach to using AI in their organizations while collaborating and sharing information.
The main goal? Implement strong risk management systems to protect companies from cyber attacks and from potentially biased, discriminatory decisions.
Measures and principles
The code highlights measures to be undertaken, grouped under the broader principles listed below:
Accountability:
- Comprehensive risk management framework, including policies, procedures, and staff training
- Information sharing on risk management
- Multiple lines of defence like audits
Safety:
- Comprehensive assessment of potential adverse impacts like risks for misuse
- Proportionate measures to mitigate the risk of harm
- Guidance on appropriate system usage for managers and developers
Fairness and equity:
- Curated datasets for training and bias identification
- Diverse testing methods to mitigate the risk of bias
Transparency:
- Publishing limitations and capabilities of the system
- A reliable and available method to detect AI-generated content
- Publishing training data descriptions and risk mitigation instructions
- Clearly identifying systems as AI to avoid misconceptions of human activity
Human oversight and monitoring:
- Harmful use monitoring
- Database of incident reports
Validity and robustness:
- Testing methods prior to deployment
- Adversarial testing to identify vulnerabilities
- Cybersecurity risk assessment
What experts are saying
Industry leaders have largely welcomed the voluntary code, particularly noting that it is voluntary rather than mandatory. Still, it gives some experts pause.
“I do believe your code of conduct will guide us into making the right decisions at the right moment for the right reasons.”
- Isabelle Hudon, President of the Business Development Bank of Canada (BDC)
“We don’t need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say ‘come build here.’”
- Tobi Lutke, CEO of Shopify
“We now welcome the introduction of a Code of conduct that supports the development and management of responsible and robust AI systems to maintain customer trust. We are supportive of other brands being held to the same standards.”
- Anne Thériault, Vice President, Legal, CISO, DPO and Assistant Secretary, Coveo
Read Canada’s latest voluntary AI code of conduct here.
