The Dawn of AI Law: The Canadian Government Introduces Legislation to Regulate Artificial Intelligence in Canada

This article is part of our Bill C-27 Business Insights Series: Navigating Canada’s Evolving Privacy Regime, written by McCarthy Tétrault’s multidisciplinary Cyber/Data team. This series brings you practical and integrative perspectives on Canada’s Bill C-27: Digital Charter Implementation Act, 2022 and how your organization can stay ahead of the curve.


Following a global trend in artificial intelligence (“AI”) regulation, which has gained momentum in the U.S. and the EU, Canadian legislators appear poised to move beyond policy frameworks developed by private and public bodies and adopt hard law. On June 16, 2022, Bill C-27 was introduced in Parliament for its first reading. Titled the Digital Charter Implementation Act, 2022, Bill C-27 is the second attempt at reforming federal privacy law in Canada. It follows Bill C-11, which died on the order paper when the 2021 election was called. While much of Bill C-27 is borrowed from Bill C-11, the new bill substantially differs from its predecessor in several areas. One of the most significant changes in Bill C-27 is the introduction of the Artificial Intelligence and Data Act (“AIDA”), an entirely new law that aims to regulate the development and use of AI in Canada. If adopted, the AIDA would become Canada’s first law specifically dedicated to regulating AI.

Canada’s AI regulation landscape prior to Bill C-27

Bill C-27’s AIDA represents a significant change to the framework governing Canada’s AI sector. Canada does not currently have a comprehensive legal framework to regulate AI. In the public sphere, the Directive on Automated Decision-Making (the “Directive”) imposes a number of requirements, primarily related to risk management, on the federal government’s use of automated decision systems.

To date, privacy law has been the vehicle of choice for regulating AI systems. Bill 64, which will reform Quebec’s privacy law, is an example of this approach. Among its measures are transparency requirements for public bodies and private enterprises that make decisions based exclusively on automated processing. It imposes an obligation on the user of an automated process to explain what personal information was used to render a decision, as well as the decision’s general rationale. Bill 64 also creates a general obligation for businesses to perform “privacy impact assessments” to evaluate projects that involve the acquisition or development of information systems or electronic delivery systems involving the processing of personal information.[1] These assessments may be required in certain contexts involving AI systems.

The original text of Bill C-11 contained similar obligations. Through the Consumer Privacy Protection Act (“CPPA”), Bill C-11 would have required businesses to make available “a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have significant impacts on them”. However, the Bill C-11 provisions were quite limited compared to the AIDA now proposed by Bill C-27: whereas the AIDA is a standalone regime entirely dedicated to AI systems, Bill C-11 would have regulated “automated decision systems” through a limited number of privacy law provisions. In addition to introducing the AIDA, Bill C-27 retains the requirements that Bill C-11 would have put in place. It will thus be interesting to see the interplay between the two proposed federal regimes—one applicable to AI systems and the other to automated decision systems—since they could overlap and produce conflicting obligations and outcomes.

Bill C-27: What is changing?

In its preamble, Bill C-27 recognizes the importance of “trust” in the digital and data-driven economy as the key to ensuring its growth and fostering a more inclusive and prosperous society. In order to strengthen the trust of Canadians in AI systems, the AIDA would create significant new governance and transparency requirements for businesses that use and develop them. It would also grant the federal government extensive regulatory powers over the AI sector. The focus of the AIDA is on “regulated activities,” defined as “designing, developing or making available for use an artificial intelligence system or managing its operations” as well as “processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system” in the course of international or interprovincial trade and commerce.[2] The scope of application of the AIDA thus intentionally leaves room for the enactment of provincial AI laws. The AIDA also sets out specific requirements for “high-impact systems,” a subset of AI systems. This approach of governing AI systems based on their level of risk is likely inspired by the EU’s approach, as discussed below. While the AIDA does not define what constitutes a high-impact system, it states that the federal government will establish a formal definition through regulation at a later date.

Below is an overview of the AIDA’s new requirements.

Governance

  • System assessment: Anyone responsible for an AI system must assess whether it is a high-impact system, in accordance with regulations to be made by the federal government.[3]
  • Risk management: Anyone who is responsible for a high-impact AI system must establish measures to identify, assess and mitigate the risks of harm or biased output[4] that could result from the use of the system.[5]
  • Monitoring: Anyone responsible for a high-impact AI system must establish measures to monitor compliance with the risk management measures, as well as their effectiveness.[6]
  • Data anonymization: Anyone carrying out a regulated activity who processes or makes available for use anonymized data in the course of that activity must establish measures with respect to the manner in which the data is anonymized and the use or management of anonymized data (see the illustrative sketch following this list).[7]
  • Record keeping: Anyone carrying out a regulated activity must keep records describing in general terms the measures they establish for risk management, monitoring, and data anonymization as well as the reasons supporting their assessment of whether a system is a “high-impact system.”[8]
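
To make the anonymization obligation more concrete, below is a minimal illustrative sketch in Python of the kind of documented measure a business might adopt: dropping direct identifiers and replacing quasi-identifiers with keyed hashes. The AIDA does not prescribe any particular technique, all field names and choices here are our own assumptions, and a keyed-hash approach is closer to pseudonymization than true anonymization; a real compliance program would require considerably more rigour.

```python
import hashlib
import hmac

# Hypothetical sketch only: the AIDA does not prescribe an anonymization
# technique. Direct identifiers are dropped outright; quasi-identifiers are
# replaced with salted HMAC-SHA256 digests so values cannot be read back.
SECRET_SALT = b"store-and-rotate-this-key-securely"  # assumed to be managed out of band
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
QUASI_IDENTIFIERS = {"postal_code", "birth_year"}

def anonymize_record(record: dict) -> dict:
    """Return a copy of `record` with identifiers removed or hashed."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # removed entirely
        if field in QUASI_IDENTIFIERS:
            out[field] = hmac.new(SECRET_SALT, str(value).encode(), hashlib.sha256).hexdigest()
        else:
            out[field] = value  # non-identifying fields pass through
    return out

print(anonymize_record({"name": "Jane Doe", "email": "jane@example.com",
                        "postal_code": "H3B 0A2", "purchase_total": 42.50}))
```

Under the record-keeping requirement above, a business would then document, in general terms, choices like these: which fields it treats as identifiers, how the key is managed, and why the measures are considered adequate.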

Transparency

  • Publication requirements for developers[9]: Anyone who manages or makes available for use a high-impact system must publish, on a publicly available website, a plain-language description of the system that includes an explanation of:
      ◦ how the system is intended to be used;
      ◦ the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make;
      ◦ the mitigation measures established as part of the risk management requirement; and
      ◦ any other information prescribed by regulation.
  • Notification in the event of harm: Anyone responsible for a high-impact system must notify the Minister as soon as possible if the use of the system results in, or is likely to result in, material harm.[10]

Ministerial orders

The Minister designated by the Governor in Council to administer the AIDA is granted significant powers to make orders and regulations, as follows.

  • Record collection: The Minister may order anyone to provide records related to system assessment, risk management, monitoring measures, and data anonymization.[11]
  • Auditing: If the Minister has reasonable grounds to believe that anyone has contravened any of the aforementioned requirements or an order made under their record collection powers, they may require an audit to be conducted. The Minister may then order that person to implement any measures to address issues revealed by the audit report.[12]
  • Cessation: The Minister may order that any person who is responsible for a high-impact system cease using it or making it available for use if there are reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm.[13]
  • Publication: The Minister may order that anyone responsible for a high-impact system, or who engages in a regulated activity, publish information related to any of the requirements listed above on a publicly available website. However, the Minister may not require the disclosure of confidential business information. The Minister may also publish, on a publicly available website, information relating to a party’s use of a system if there are reasonable grounds to believe that the use gives rise to a serious risk of imminent harm and publication is essential to prevent that harm.[14]
  • Disclosure: The Minister may disclose information they obtain to other public bodies such as the Privacy Commissioner or the Human Rights Commission, for the purpose of enforcing other laws.[15]

Significant penalties for non-compliance

The AIDA would also introduce penalties significantly greater in magnitude than those found in Bill 64 or the EU’s General Data Protection Regulation.

  • Administrative monetary penalties: The federal government will have the ability to establish an administrative monetary penalty scheme for violations of the AIDA and regulations made under it.[16]
  • Fines for breaching obligations: It is an offence to contravene the governance or transparency requirements above. On indictment, breaching those obligations can result in fines of up to the greater of $10,000,000 and 3% of gross global revenues; on summary conviction, the fine is up to the greater of $5,000,000 and 2% of gross global revenues (the “greater of” formula is illustrated in the sketch after this list). For individuals, the court may impose a discretionary fine or, on summary conviction, a fine of not more than $50,000.[17]
  • New criminal offences related to AI systems: The AIDA proposes creating new criminal offences: (i) knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system; (ii) knowingly or recklessly designing or using an AI system that is likely to cause harm, where the use of the system causes such harm; and (iii) causing a substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public. For businesses, committing these offences can result, on indictment, in a fine of up to the greater of $25,000,000 and 5% of gross global revenues; on summary conviction, the fine is up to the greater of $20,000,000 and 4% of gross global revenues. For individuals, on conviction on indictment, the court may impose a discretionary fine, imprisonment of up to five years less a day, or both. On summary conviction, an individual may be liable to a fine of not more than $100,000, imprisonment of up to two years less a day, or both.[18]
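
As referenced above, the fine ceilings all follow the same “greater of a fixed amount and a percentage of gross global revenues” structure. As a purely arithmetic illustration (the revenue figures below are hypothetical), a short Python sketch:

```python
def fine_ceiling(fixed_cap: int, revenue_pct: float, gross_global_revenue: float) -> float:
    """The ceiling is whichever is greater: the fixed amount or the revenue share."""
    return max(fixed_cap, revenue_pct * gross_global_revenue)

# Governance/transparency breach prosecuted on indictment:
# greater of $10,000,000 and 3% of gross global revenues.
print(fine_ceiling(10_000_000, 0.03, 2_000_000_000))  # 60000000.0 (the 3% prong dominates)

# The same breach prosecuted summarily: greater of $5,000,000 and 2%.
print(fine_ceiling(5_000_000, 0.02, 100_000_000))     # 5000000 (the fixed floor dominates)
```

Because the percentage prong scales with the size of the business, the ceiling grows with gross global revenues, which is what makes these penalties larger in magnitude than those in Bill 64 or the GDPR for large enterprises.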

The Consumer Privacy Protection Act

In addition to proposing the AIDA, Bill C-27 also seeks to enact general privacy measures that are relevant to AI. Bill C-27 re-introduces the CPPA, which imposes several transparency-related requirements on makers and users of automated decision systems.[19]

The CPPA would require organizations to make available, in plain language, “a general account of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them”.[20] It would also require organizations that use automated decision systems in a way that could have a significant impact on individuals to provide those individuals, upon request, with an explanation of the decision-making process. Notably, these requirements apply to all users of automated decision systems, meaning that organizations whose AI systems are not considered high-impact under the AIDA would still be required to comply. These transparency obligations are quite similar to those set out in Bill 64, although Bill 64 goes a step further by giving individuals a right to “submit observations to a member of the personnel of the enterprise who is in a position to review the decision”.[21]

The EU Act

There are key similarities between Bill C-27’s approach to AI regulation and the EU’s proposed Artificial Intelligence Act (“EU Act”). Introduced in April 2021, the EU Act was one of the first attempts to comprehensively regulate artificial intelligence and has influenced how other jurisdictions approach AI regulation. Its stated objective is to improve the functioning of the EU market by laying down harmonized rules governing the development, marketing, and use of AI. To that end, the EU Act establishes minimum standards aimed at addressing the risks associated with AI without unduly constraining or hindering innovation. Both the EU Act and Bill C-27 adopt a risk-based approach to AI regulation.

The EU Act classifies AI systems into categories of risk, with the intensity of regulation corresponding to the risk level: (i) unacceptable risk; (ii) high risk; and (iii) low/minimal risk. The high-risk category is subject to multiple regulatory requirements similar to those imposed on high-impact systems by Bill C-27, including requirements relating to documentation and record-keeping, human oversight, and transparency. Low/minimal-risk systems, by contrast, are largely exempt from regulation, except for the duty to comply with basic transparency requirements. This limited requirement for low-risk systems is comparable to the transparency obligations imposed on all automated decision systems by the CPPA. Notably, the current draft of the AIDA does not include an outright ban on AI systems carrying unacceptable risks. This differs from the EU Act, which strictly prohibits certain types of AI systems, including manipulative or exploitative systems that can cause harm, “real-time” remote biometric identification used in public spaces by law enforcement, and all forms of social scoring systems.

The EU Act gives a possible indication of how the government may go about defining a high-impact AI system. The EU Act identifies two categories of high-risk AI systems. The first category comprises AI systems intended to be used as safety components of products, or as products themselves, that are required to undergo an ex-ante third-party conformity assessment. This category would include AI systems used in critical infrastructure, medical devices, mobile devices, Internet of Things products, toys, and machinery. The second category comprises listed AI systems used for a purpose that poses a risk to health, safety, and/or fundamental rights, such as the freedom of assembly or the right to non-discrimination. The preamble to Bill C-27 recognizes the importance of ensuring that AI systems are consistent with international standards to prevent harm, a further indication that the EU Act will influence the federal government’s approach to regulation. We note, however, that the scope of the AIDA might be more limited than that of the EU Act, since the former’s definition of AI systems only includes technological systems that process “data autonomously or partly autonomously”,[22] whereas the latter does not require any degree of autonomy. In fact, the EU Act’s definition only requires AI systems to be developed using certain listed techniques, including machine learning, logic- and knowledge-based approaches, and statistical approaches.[23] Thus, simple imitations of human capacity, or systems programmed as decision trees, might be excluded from the AIDA but included in the EU Act (see the illustrative sketch below). Whether this nuance translates into notable differences in enforcement remains to be seen.
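
To make the definitional gap concrete, consider a hypothetical, fully hand-coded screening tool like the sketch below. Every rule is fixed by a programmer and nothing is learned from data, so it is arguable that the system does not process data “autonomously or partly autonomously” within the AIDA’s definition; yet it generates a decision using exactly the kind of rules-based technique the EU Act’s Annex I lists. The scenario and thresholds are entirely invented for illustration.

```python
# Hypothetical loan-screening decision tree. All rules and thresholds are
# hard-coded by a programmer; the system learns nothing from data, but it
# still produces a "decision" in the sense captured by the EU Act's
# technique-based definition (logic- and knowledge-based approaches).

def screen_applicant(income: float, debt_ratio: float, prior_default: bool) -> str:
    if prior_default:
        return "decline"
    if income >= 80_000:
        return "approve" if debt_ratio < 0.40 else "refer to human reviewer"
    return "approve" if debt_ratio < 0.25 else "decline"

print(screen_applicant(income=95_000, debt_ratio=0.30, prior_default=False))  # approve
```

Whether such purely rule-based processing counts as “partly autonomous” under the AIDA is the open interpretive question the comparison above raises.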

Conclusion

Bill C-27, if adopted into law, is set to have a significant impact on businesses by creating new requirements for those who make, use, or work with AI. The bill imposes several new obligations on the AI sector, backed by serious penalties for non-compliance. Given its novelty, the bill could also influence how other jurisdictions develop their AI laws. Considering the anticipated long road before the EU Act is finalized and adopted by the European Parliament and Council, it is even conceivable that Canada could leapfrog the EU and become the first jurisdiction in the world to adopt a comprehensive legislative framework regulating the responsible deployment of AI.

The federal government has given itself a large degree of flexibility in how it will implement and enforce the provisions of Bill C-27 that relate to AI. Many of the bill’s specific obligations and requirements for the AI sector will be clarified only after its passage, through regulations and government orders. This is possibly an indication that the federal government intends to take an adaptive and dynamic approach to regulating AI, a rapidly evolving sector. Nonetheless, looking to other jurisdictions such as the EU can give businesses a sense of how the regulatory framework might evolve. Businesses may also leverage existing soft-law frameworks on AI, such as the Responsible AI Impact Assessment Tool developed by the International Technology Law Association, in order to begin adopting best practices.

To learn more about how our Cyber/Data Group can help you navigate the privacy and data landscape, please contact national co-leaders Charles Morgan and Daniel Glover.

 

[1] Sections 21 and 110 of Bill 64.

[2] Section 5(1) of the AIDA.

[3] Section 7 of the AIDA.

[4] The AIDA defines a “biased output” as “content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. […]”

[5] Section 8 of the AIDA.

[6] Section 9 of the AIDA.

[7] Section 6 of the AIDA.

[8] Section 10 of the AIDA.

[9] Section 11 of the AIDA.

[10] Section 12 of the AIDA. The AIDA defines “harm” as “(a) physical or psychological harm to an individual; (b) damage to an individual’s property; or (c) economic loss to an individual.”

[11] Sections 13-14 of the AIDA.

[12] Section 15 of the AIDA.

[13] Section 17 of the AIDA.

[14] Sections 18 and 28 of the AIDA.

[15] Section 26 of the AIDA.

[16] Section 29 of the AIDA.

[17] Section 30 of the AIDA.

[18] Sections 38-40 of the AIDA.

[19] The CPPA defines an “automated decision system” as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”.

[20] Section 62(2)(c) of the CPPA.

[21] Section 110 of Bill 64.

[22] The AIDA defines an “artificial intelligence system” as “a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.”

[23] The EU Act defines an “artificial intelligence system” as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Annex I provides: “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.”
