
One Step Closer to AI Regulations in Canada: The AIDA Companion Document

Alongside a flurry of important artificial intelligence (“AI”) news in recent weeks (OpenAI’s GPT-4…, Midjourney V5…, and back-to-back announcements by Google and Microsoft that generative AI will soon be integrated into their productivity tools…), the Federal Government released preliminary guidance on the forthcoming Artificial Intelligence and Data Act (“AIDA”).

In a previous blog, we summarized the key provisions of AIDA, which was introduced in Parliament in June 2022 through Bill C-27, alongside the Consumer Privacy Protection Act and the Data Protection Tribunal Act. Our colleague Barry Sookman also recently published a detailed analysis of AIDA here.

AIDA seeks to set out clear requirements for the responsible development, deployment and use of AI systems by the private sector. Its stated goal is to put in place a rigorous regulatory framework governing the responsible adoption of AI systems and limiting the harms those systems may cause, including the reproduction and amplification of biases and discrimination in decision-making. Such harms may, in turn, propagate considerable systemic harms and erode public trust in the technology, which could have a chilling effect on the development of AI.

This is an important mission. However, one of the main criticisms commentators have levelled at AIDA, aside from the fact that it was introduced without prior consultation of either the public or industry, is that its current draft leaves many essential components to be laid out in future regulations. For instance, what constitutes a “high-impact system”, a core concept of AIDA subject to specific requirements, is left to be determined at a later stage by regulations to be adopted by the Governor in Council and enforced by a newly created “Artificial Intelligence and Data Commissioner”, a senior official of Innovation, Science and Economic Development Canada (“ISED”), the department over which the Minister of Innovation, Science and Industry presides.[1]

We do not have drafts of these regulations yet, but on March 14, 2023, ISED published a Companion Document for AIDA that provides insight into the Federal Government’s thinking when it comes to AI regulations (the “AIDA Companion Document”). In this blog, we present some of the main takeaways of the AIDA Companion Document.

Timeline for Regulations

The AIDA Companion Document aims to offer reassurance in two ways:

  • By acknowledging that Canadians are apprehensive about the risks posed by AI systems and assuring them that the Federal Government has a plan to make those systems safe; and
  • By reaffirming to researchers and innovators that the regulations are not aimed at chilling innovation or constraining good-faith actors.

To accomplish these goals, ISED proposes an agile approach to developing the regulations, one that involves various stakeholders and keeps tabs on the latest developments in AI technology (which, as we have seen in the last few months, can unfold at lightning speed).

The AIDA Companion Document promises an open and transparent process for the forthcoming drafting and adoption of AIDA regulations, with ISED proposing broad and inclusive consultations with the public and key stakeholders such as academics, AI business leaders, and civil society. In addition, ISED gives us for the first time a tentative timeline for the adoption of AIDA regulations. According to the AIDA Companion Document, following royal assent of Bill C-27,[2] the implementation of AIDA regulations will occur through a sequence of steps:

  • Consultations on regulations (6 months);
  • Development of draft regulations (12 months);
  • Consultation on draft regulations (3 months); and
  • Coming into force of initial set of regulations (3 months).

Assuming Bill C-27 is finalized in the next few months, this proposed timeline, which adds up to roughly 24 months, means that AIDA regulations would not enter into force before 2025 at the earliest. This is reassuring for two important reasons: (1) it provides stakeholders with a channel to offer recommendations and feedback on the regulations before they come into force; and (2) it renders the coming regulatory framework more predictable for industry participants. Companies that are developing or making AI systems available for use, and those that plan to do so in the coming years, will benefit from monitoring this regulatory process closely, especially if their AI systems may be considered “high-impact” (more on that below).

Moreover, even after the coming into force of AIDA regulations, ISED indicates in the AIDA Companion Document that it proposes a gradual approach to enforcement: the first years of AIDA would be dedicated to “education, establishing guidelines, and helping businesses to come into compliance through voluntary means”, so as to avoid sudden shocks to the ecosystem. Thus, even if AIDA enters into force in 2025, AIDA and its regulations will likely not be fully enforced (especially their criminal offence provisions) before 2026 or 2027, based on the AIDA Companion Document. This staggered approach to implementation is reminiscent of the approach taken by the Quebec legislature with the new Law 25 (previously Bill 64), which was adopted in September 2021 but whose provisions have been gradually entering into force over a three-year period between September 2022 and September 2024.

ISED appears to be attempting a delicate balancing act: demonstrating its commitment to adopting a comprehensive legal framework for AI that is consistent with the EU’s proposed Artificial Intelligence Act (the “EU AI Act”), while at the same time reassuring market players that, consistent with the current US “hands-off” approach, it does not want to unduly stifle innovation.

Content of Regulations

In addition to a clearer implementation timeline, the AIDA Companion Document also provides us with substantive information about what we can expect from the regulations themselves. As mentioned above, AIDA contains few details about the specific types of AI uses on which the Federal Government intends to impose the strictest requirements: the so-called “high-impact systems”. Without giving us an exhaustive list, the AIDA Companion Document includes some examples of systems that are of interest to the Federal Government:

  • Systems that screen people for services or employment (a domain recently addressed by a new New York City regulation imposing mandatory algorithmic bias audits[3]);
  • Biometric systems used for identification and inference (interestingly, these are also targeted by Quebec’s new Law 25 (previously Bill 64) and its amendments to the rules governing biometric databases);
  • Systems that can influence human behaviour at scale (such as AI-powered online recommendation systems); and
  • Systems critical to health and safety (such as self-driving cars).

We note that this list does not specifically mention generative AI systems such as ChatGPT (text generation) or Midjourney (image generation). Later in the AIDA Companion Document, however, ISED refers to AI systems that “perform generally applicable functions – such as text, audio or video generation” and suggests that the developers of those systems would need to document and address risks of harm and bias in their systems. Developers that do not continue to manage such systems once they are in production would have different obligations than those directly involved in their daily operations. This is significant, as it suggests that ISED may consider certain generative AI systems to be “high-impact”, although it is not explicit in that regard.

In Europe, some legislators have adopted a similar position. In the Common Position on the EU AI Act published in November 2022, the Council of the European Union proposes that certain enhanced obligations imposed on providers of high-risk systems (such as transparency obligations) also apply to providers of “general purpose AI.” Some lawmakers have gone even further, proposing that large language models like GPTs should be outright classified as “high-risk” under the EU AI Act.[4] In response, big tech companies have begun to lobby for the exact opposite: an exception that would exclude developers of general purpose AI from the risk-based framework altogether.[5] The final position adopted by EU legislators will be of importance on this side of the Atlantic, as ISED confirms in the AIDA Companion Document that it will monitor developments in foreign jurisdictions to ensure international alignment among regulatory frameworks.

For now, ISED lays out for the first time a non-exhaustive list of factors that may influence the “high-impact” classification of AI systems under the proposed regulations:

  • Severity of potential harms and nature of already occurring harms;
  • The scale of use;
  • Evidence of risks of harm to health and safety, and risk of adverse effects on human rights;
  • Imbalances of economic or social circumstances;
  • Difficulty in opting out from the AI system; and
  • The degree to which the risks are adequately regulated under another law.

Under AIDA, businesses that design, develop or “make available for use” high-impact AI systems will face the strictest obligations, and will notably be accountable for ensuring that employees implement mechanisms to mitigate the risks of such systems. However, as mentioned above, businesses that are only involved in the design or development of a high-impact AI system and that have no ability to monitor the system post-development would have different obligations from those who remain responsible for its operation. In short, ISED states that the degree of responsibility of an actor in the AI value chain will be “proportionate to the level of influence that an actor has on the risk associated with the system.” Identifying the role (or roles) a business plays with respect to an AI system will therefore be a key component of any AI compliance initiative, as it will determine which requirements the business is subject to, and thus its regulatory costs and risks.

AIDA is currently thin on detail about the specific obligations that attach to “high-impact” systems, as those are also left to be defined in the coming regulations. However, ISED discloses in the AIDA Companion Document the principles that will guide the development of those obligations:

  • Human Oversight & Monitoring;
  • Transparency;
  • Fairness and Equity;
  • Safety;
  • Accountability; and
  • Validity & Robustness.

These are similar to other principles of responsible AI published in the last few years, notably the ones proposed by the US National Institute of Standards and Technology, which we have discussed in a previous blog and which are analogous to the principles set out in the iTechLaw Responsible AI: Global Policy Framework, discussed here and here.

As an aside, “making available for use” is a broad concept, but the AIDA Companion Document includes an important clarification: models and tools published by researchers as open-source software will not be considered to make an AI system available for use, since models published this way are not complete, “fully-functioning” AI systems. However, it is not clear who exactly would be considered a “researcher” by ISED, and whether models published by industry players (such as Meta’s LLaMA, a foundational large language model), as opposed to academics, would also benefit from that exception.

Enforcement

Violations of the obligations laid out in AIDA will be addressed through three different mechanisms: (1) administrative monetary penalties; (2) prosecution of regulatory offences; and (3) criminal charges. The AIDA Companion Document provides few additional details on this topic, but ISED reminds us that AIDA would be enforced (except for criminal offences) by the newly created AI and Data Commissioner. One point that ISED emphasizes is that prosecution of criminal offences under AIDA would be conducted exclusively by the Public Prosecution Service of Canada (“PPSC”); the minister in charge of AIDA would be able to refer cases to the PPSC but would play no further role in criminal prosecutions. Finally, ISED indicates that enforcement activities would not be carried out in a vacuum: it plans to involve external experts to support the administration and enforcement of AIDA, use independent auditors to conduct audits, and appoint an advisory committee.

Conclusion

It is evident from the principles articulated in the AIDA Companion Document that the Federal Government recognizes the transformational shift that AI is poised to bring to our lives. The emphasis on transparent and inclusive consultations, human rights considerations, an evolving and agile approach, and strict requirements focused on high-impact systems are steps in the right direction. However, we must also acknowledge that many grey areas remain that will not be resolved until we finally have the draft regulations in hand, particularly when it comes to the definition of high-impact systems and the precise contours of the obligations of different actors in the AI lifecycle. Despite the reassuring tone of the AIDA Companion Document, it remains alarming that the Federal Government appears committed to the adoption of an AI legislative framework that contemplates sanctions of up to $25 million or 5% of global revenue for violations of AIDA even before stakeholders have been given an opportunity to clarify such threshold questions as “which high-impact systems will be subject to AIDA?” and “what compliance obligations will apply to those systems?”.

As Canadians await further details on AIDA’s full content, businesses that are already developing, deploying or using AI systems (or are contemplating doing so) should keep a close watch on regulatory developments and consider proactively implementing responsible AI principles in their AI projects based on available Canadian and international guidelines. The members of McCarthy Tétrault’s Cyber/Data Group are experienced practitioners who can assist with all complex technology law matters, including AI compliance.

 

[1] S. 33(1), AIDA.

[2] Bill C-27 remains at second reading in Parliament and has not yet reached the committee stage. It might nevertheless receive royal assent this year.

[3] Known as the Automated Employment Decision Tool Law, it went into effect on January 1, 2023, but its enforcement has been postponed until April 15, 2023. See https://venturebeat.com/ai/for-nycs-new-ai-bias-law-unanswered-questions-remain/

[4] https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/

[5] https://techcrunch.com/2023/02/23/eu-ai-act-lobbying-report/
