
Responsible AI: European and North American Perspectives on the new EU draft AI Regulation

On June 29, 2021, the Human Technology Foundation, in partnership with Ernst & Young, IEEE, ITechLaw and McCarthy Tétrault, hosted a webcast titled "Responsible AI: European and North American Perspectives on the new EU draft AI Regulation" on the European Commission's draft regulation on artificial intelligence ("AI").

Panelists Kilian Gross (Head of Unit on AI Policy Development and Coordination, DG Connect, European Commission), Nye Thomas (Executive Director, Law Commission of Ontario) and Alpesh Shah (Senior Director Global Business Strategy & Intelligence, IEEE Standards Association) were joined by moderator Charles Morgan of McCarthy Tétrault to discuss the underlying regulatory framework, as well as the advantages and disadvantages of the European Commission’s approach to AI regulation.

The Draft Regulation

On April 21, 2021, the European Commission released a draft regulation on the use of AI (the “Draft Regulation”), which follows up on the European Commission’s white paper “On Artificial Intelligence – A European approach to excellence and trust”, published in February 2020.  The Draft Regulation proposes a legal framework for AI systems by setting out harmonized rules alongside a coordinated plan to strengthen AI uptake and innovation across the EU member states, while guaranteeing EU citizens’ rights and safety.

The Draft Regulation takes a broad human-centric and risk-based approach to AI, with the aim of strengthening trustworthy AI technology development, investment, and innovation across the EU.  As the first-ever comprehensive legal framework on AI, it has been compared to the European General Data Protection Regulation (“GDPR”).

Mr. Gross, head of the European Commission's AI policy unit that prepared the Draft Regulation, started the discussion by giving a high-level overview of the purpose and operational framework of the Draft Regulation.  He highlighted that the European Commission's approach to AI began with the paradigm that AI systems present an opportunity for businesses, governments, and citizens.  Since AI is a rapidly developing field, and in order to future-proof the Draft Regulation, the Commission sought to create a single, flexible and technology-neutral definition of AI systems, linked to the concrete list of AI techniques found in Annex I of the Draft Regulation:

‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

This "future-proofing" approach to defining AI systems is intended to allow users to be reasonably certain as to whether a given technique falls within the definition of AI systems, while removing barriers to adding new AI techniques to the Draft Regulation as they are developed over time.

Two key features of the Draft Regulation, highlighted by Mr. Gross, are the human-centric and risk-based approaches to AI systems that form the core of the regime.  Using these approaches, the European Commission developed a single regulatory framework capable of being used in both the public and private sectors.  The framework requires proponents or regulators to categorize AI systems by use case into risk categories that range from minimal or no risk (the majority of AI use cases) to unacceptable risk (a minority of use cases that contradict European values), and to take the appropriate next steps to ensure compliance:

[Figure: Categorization of AI systems by level of risk.  Source: Presentation, Kilian Gross, June 29, 2021]

Examples of unacceptable-risk use cases include the use of AI for subliminal manipulation resulting in physical or psychological harm, exploitation of children or persons with mental disabilities resulting in physical or psychological harm, general purpose social scoring, and remote biometric identification for law enforcement purposes (unless for a specifically exempted use).

Mr. Gross emphasized that the purpose of the Draft Regulation is to support responsible and trustworthy AI systems development through a dynamic regulatory environment that evolves as the technology develops into the future.  A copy of his presentation, generously shared by the European Commission, can be found here.

Finding the Right Model of Regulation

The Draft Regulation is a uniquely European piece of legislation that proposes to regulate the public and private sectors under a "one size fits all" model that sets ground rules and manages risk in a manner that cuts across sectors.  Mr. Gross explained that the rationale behind this approach was the recognition that the specific problems with AI systems (such as concerns about transparency, explainability, accountability, safety, reliability and non-discrimination) are essentially the same regardless of sector, and the concern that a sectorial approach would result in slower and more piecemeal regulatory development.  Mr. Shah of the IEEE Standards Association agreed that, given the rate of AI systems innovation, the approach of starting with a foundation of broad, human-centered, and harmonized principles would allow various sectors to develop standards and certifications in a practical manner.  Contextual modification of the general framework, he continued, would allow sector-specific development while achieving cross-sector clarity.  Mr. Shah also outlined several relevant IEEE initiatives in support of the Draft Regulation, including the IEEE 7000 series of AI ethics and technical standards and the Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), both of which concretely address the principles of human-centricity, transparency, accountability, algorithmic bias and privacy.

Under the Draft Regulation, public and private sector uses are distinguished primarily through use cases and risk analyses.  Risk is assessed by looking at the end use and impact of the AI technique, which will vary across sectors even where public and private actors deploy the same device or system.  For example, AI systems used in policing to generate evidence are generally categorized as high risk, whereas the same techniques, if used in a private sector setting, may not carry the same high-risk label.

The European Commission also designed the Draft Regulation to work together with industry standards developed by international standards bodies, such as the IEEE Standards Association.  Moreover, Mr. Gross emphasized that harmonization with the GDPR is particularly important, since the collection and use of personal data by AI systems in Europe will need to comply with the GDPR.

Mr. Thomas indicated that it is uncertain whether a "one size fits all" regulatory regime (applicable to both the public and private sectors) would be permissible under Canada's constitutional framework.  As such, the question of how AI systems ought to be regulated, so that any regulations are consistent yet appropriately different across the public and private sectors, is particularly relevant in the Canadian context.  Lawmakers will need to ensure that public sector legal rules that have no parallel in the private sector (for example, rules relating to rights or public participation in Canadian criminal or administrative law) are not watered down to achieve a uniform rule for both public and private sector actors.  Further, a number of questions arise where public sector actors use private sector technology, such as "should the same disclosure obligations apply?" and "what is the impact of such transparency and explainability obligations on confidentiality and commercial trade secrets?"  These issues, along with questions about how AI systems will fit into Canada's privacy regime, are ones that Canadians will have to grapple with as AI techniques become more commonplace.

Advantages and Disadvantages of Risk-Based Approaches to Regulation

The panel concluded with their thoughts on the advantages and disadvantages of risk-based approaches to AI system regulation.  Mr. Thomas pointed out that there is a fundamental tension between promoting an industry or innovation and managing risk.  An advantage of the risk-based approach to regulation is that it allows proponents and regulators to tailor their procedures to specific circumstances and adapt mitigation strategies to address the risks that are actually present.  A disadvantage is that the proliferation of standards and tools can quickly lead to a complicated regulatory environment.

Other areas of uncertainty with a risk-based approach that were identified by Mr. Thomas relate to the following questions:

  • who should be the party assessing risk – the proponent or an independent regulatory body? and
  • what obligations attach to the identification of specific risk levels?

Mr. Thomas acknowledged that these are complicated questions that Canadian lawmakers will have to think through when developing an approach to AI systems.  Compliance costs are also an issue to be balanced, so that safety and risk management do not stifle innovation.

All three panelists agreed that, even though the right regulatory balance will be learned over time, human centricity – that is, a focus on human dignity and agency – needs to be a paramount feature of any regulation.  This ensures that AI systems are deployed in the service of citizens and that important decisions are not made without meaningful human control.

To learn more about AI systems and their regulation, see Responsible AI: A Global Policy Framework by ITechLaw and related posts on our TechLex blog.

To stay up to date on AI-related and other technology law developments, subscribe to our TechLex blog. To learn more about how our Cyber/Data Group can help you navigate the privacy and data landscape, please contact national co-leaders Charles Morgan and Daniel Glover.
