ISED Releases Companion to Proposed AI Law: Timelines, Guidelines, and Enforcement
The number of companies that have adopted artificial intelligence (“AI”) has more than doubled since 2017. The speed of AI growth, coupled with its increasing use, has led to concerns about a lack of oversight of the industry. Recently, a number of high-ranking AI experts penned an open letter calling on companies to pause their AI developments for six months in order to establish and implement a set of shared safety protocols for advanced AI design and development.
Governments are beginning to step in to regulate the industry. The European Union, United Kingdom, United States, and most recently China have begun to propose or implement AI regulations to curb and control the potential negative effects of AI.
Canada is no different. Last summer, Canada introduced the Artificial Intelligence and Data Act (“AIDA”) as part of Bill C-27, which is currently before Parliament. In parliamentary debates, the AIDA has been criticized for lacking real substance, since it leaves key definitions and requirements to regulations that have not yet been drafted. Perhaps in response to this criticism, last month Innovation, Science and Economic Development Canada (“ISED”) released a companion policy to the AIDA (the “AIDA Companion”), which provides further detail on how the AIDA would work.
This bulletin provides a summary of the main points in the AIDA Companion and offers some key takeaways for businesses currently involved in developing and using AI systems.
A Two-Year Timeline
The AIDA Companion emphasizes that the Government will take an agile approach to AI regulation, working in close collaboration and consultation with stakeholders to develop and evaluate regulations. The current timeline for the implementation of the initial set of AIDA regulations is expected to proceed as follows:
- Consultation on regulations (6 months)
- Development of draft regulations (12 months)
- Consultation on draft regulations (3 months)
- Coming into force of initial set of regulations (3 months)
According to this timeline, the AIDA's provisions would not come into force until two years after Bill C-27 receives Royal Assent (2025 at the earliest).
Greater Clarity on High-Impact Systems
The proposed AIDA primarily focuses on the regulation of “high-impact AI systems”. One of the main criticisms of the legislation is that it does not define this term. While the AIDA Companion also refrains from providing a precise definition, it puts forward a list of key factors to help businesses determine which AI systems will be considered high-impact. These factors include:
- Evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- The severity of potential harms;
- The scale of use;
- The nature of harms or adverse impacts that have already taken place;
- The extent to which it is not reasonably possible to opt out of the system for practical or legal reasons;
- Imbalances of economic or social circumstances, or age of impacted persons; and
- The degree to which the risks are adequately regulated under another law.
In addition to this set of factors, the AIDA Companion identifies a few specific types of AI systems that are of particular interest to the Government due to their potential impact. They include screening systems affecting access to services or employment; biometric systems used for identification and inference; systems that can influence human behaviour at scale (e.g., online content recommendation systems); and systems critical to health and safety.
A Focus on Harm and Biased Output
The AIDA states that a person responsible for a high-impact system must establish measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.
- Harm is defined as any physical harm, psychological harm, damage to property, or economic loss to an individual. The AIDA Companion adds that some individuals or groups of individuals, such as children, might be more susceptible to harm from high-impact AI systems and will necessitate specific risk mitigation efforts.
- Biased Output means content that is generated, or a decision, recommendation or prediction that is made, by an AI system that adversely differentiates, directly or indirectly without justification, in relation to an individual based on the prohibited grounds of discrimination in the Canadian Human Rights Act. Under the AIDA, biased output does not include AI systems that differentiate for the purpose of reducing disadvantages suffered by a group of individuals due to discrimination.
It will be interesting to see how ISED will approach the problem of biased output in AI systems developed through machine learning (often referred to as “black-box” systems because of the difficulty of interpreting them). Without a clear view into the decision-making process, it is often difficult to determine whether such systems are biased. To this end, the Office of the Privacy Commissioner of Canada recently released a blog series discussing the pros and cons of various models of algorithmic fairness, and practical approaches to measuring and increasing fairness based on the output of such systems.
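To illustrate the kind of output-based measurement the OPC blog series describes, the sketch below computes one common fairness metric, the demographic parity gap, purely from a system's decisions. This is a hypothetical illustration only: the metric choice, function names, and data are the authors' assumptions and are not drawn from the AIDA, the AIDA Companion, or the OPC guidance.

```python
# Illustrative sketch only: one common way to assess a black-box
# system from its output alone is to compare the rate of favourable
# outcomes across demographic groups ("demographic parity").

def selection_rate(outcomes):
    """Share of favourable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups.

    A gap near 0 suggests the groups are treated similarly on this
    one metric; it does not, by itself, establish that the system
    is free of biased output in the AIDA's sense.
    """
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical screening decisions (1 = approved, 0 = rejected)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 5 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 approved
print(demographic_parity_gap(group_a, group_b))  # 0.25
```

As the OPC series notes, no single metric captures fairness completely; in practice, several such measures are computed and weighed against the context in which the system is used.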
We Have Guiding Principles – But Specific Measures Have Yet to Be Developed
Neither the AIDA nor the AIDA Companion list specific measures that businesses will need to implement to identify, assess and mitigate the risks of harm or biased output prior to a high-impact system being made available for use. However, the AIDA Companion provides a list of six guiding principles that will be used to develop future obligations for high-impact AI systems. The principles are intended to align with international norms on the governance of AI systems and include:
- Human Oversight & Monitoring: people managing the operations of high-impact systems must be able to engage in meaningful oversight. This means that high-impact AI systems must have a baseline level of interpretability. Proper oversight will also require measuring and assessing the AI systems and their output.
- Transparency: the public should be provided with appropriate information about how a high-impact system is being used so they can understand the capabilities, limitations and potential impacts of the system.
- Fairness and Equity: businesses building high-impact systems must be aware of the biases and potential discriminatory outcomes their systems can produce and take action to mitigate the risk of these outcomes for both individuals and groups.
- Safety: high-impact systems must be proactively assessed to identify and mitigate harms stemming from their use, including through reasonably foreseeable misuse.
- Accountability: organizations must put in place internal governance processes and policies to ensure compliance with all legal obligations applicable to high-impact AI systems in the context in which they will be used.
- Validity and Robustness: AI systems should perform consistently according to their intended objective and should be stable and resilient in a variety of circumstances.
According to the AIDA Companion, measures will be tailored to the context and risks associated with specific activities carried out by a business in the lifecycle of a high-impact AI system. For example, a company that designs a high-impact AI system will not have to comply with the same measures as a company that makes an AI system available for use. The measures or obligations imposed on each company will be proportionate to the risk associated with their activities regulated under the AIDA.
Two Stage Approach to Oversight and Enforcement
Enforcement has been a serious topic of discussion, since the current draft of the AIDA provides for fines of up to the greater of $25,000,000 and 5% of global revenue.
The AIDA Companion provides an in-depth explanation of the AIDA’s oversight and enforcement strategy. Enforcement will likely develop over two stages. When the AIDA first comes into force, enforcement will focus on helping businesses comply through education and the establishment of guidelines. The Government’s intention is to provide businesses with sufficient time to adjust to the new framework.
Once businesses have had time to adjust to the new framework, the Government will make use of more stringent measures to ensure compliance, including administrative monetary penalties, prosecution of regulatory offences and prosecution of true criminal offences. Monetary penalties will be applied by the Minister of Innovation, Science and Industry, while the Public Prosecution Service of Canada will be responsible for the prosecution of an offence under the AIDA or its regulations. It is important to note that, while the AIDA already sets out certain criminal offences, the current draft only allows for the creation of an administrative monetary penalty scheme. The exact penalties will need to be determined by regulations following a consultation process.
Key Takeaways for Businesses
- The Government intends to take an agile and flexible approach to regulating AI in close collaboration with stakeholders.
- If the AIDA is passed, it will be at least two years before specific rules and measures are implemented. During this time, the Government intends to conduct consultations with stakeholders and publish draft regulations.
- Enforcement of the AIDA is proposed to take place in two stages: (1) a collaborative and education-focused approach, and (2) more stringent measures, including monetary penalties and prosecution.
- The specific definition of high-impact AI systems has not yet been determined, but will likely include consideration of key factors, such as risks of harm, scale of use, and the vulnerability of impacted persons, as described above.
- While no specific rules and measures have yet been proposed, the AIDA Companion lists six guiding principles to underpin the development of specific regulations. These principles are (1) human oversight & monitoring, (2) transparency, (3) fairness & equity, (4) safety, (5) accountability, and (6) validity & robustness.
- The rules would apply differently to various players in the AI space, including those involved in the development, commercialization, and operation of AI systems.
Sources:
- The state of AI in 2022—and a half decade in review | McKinsey
- Pause Giant AI Experiments: An Open Letter | Future of Life Institute
- The AIDA has been introduced as part of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, short titled as the Digital Charter Implementation Act. Its status can be tracked on LegisInfo.
- Privacy Tech-Know blog: When worlds collide – The possibilities and limits of algorithmic fairness, Part 1 and Part 2 | Office of the Privacy Commissioner of Canada
by Robbie Grant and Sam Kelley (Articling Student)
A Cautionary Note
The foregoing provides only an overview and does not constitute legal advice. Readers are cautioned against making any decisions based on this material alone. Rather, specific legal advice should be obtained.
© McMillan LLP 2023