
Privacy and Cybersecurity Law

Coverage and commentary on developments in data protection.


Stretching the boundaries through artificial intelligence: the European proposal for a dedicated regulation. A risk-based perspective.

By Giangiacomo Olivi
September 10, 2021
  • Europe
  • General

A wide scope of application.

The European Commission’s “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence”, published on April 21, 2021 (the proposal), represents an opportunity to reaffirm the role of the European Union in defining global standards and in promoting artificial intelligence that is reliable and consistent with the values and interests underlying the European Union itself. The proposal thus offers a chance to recreate the so-called “Brussels effect” already experienced with the GDPR: European legislation seen and used as a model at, one might say, the global level.

The European Commission has chosen the instrument of a regulation, rather than a directive, to ensure that the new rules are applied as uniformly as possible throughout the Union. The regulation would also reach non-European entities, including large and well-known US and Chinese players, who will be subject to its rules on artificial intelligence.

The proposal would apply to anyone who places on the market or puts into service artificial intelligence systems in the European Union, regardless of whether they are established within the Union or in a third country. It would also apply to providers and users of artificial intelligence systems located in a third country, if the output produced by the system is used within the European Union:

“This Regulation applies to (a) provider(s) placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) users of AI systems located within the Union; (c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”.

The classification of artificial intelligence systems.

This broad subjective scope of application is counterbalanced by a much narrower objective scope.

The proposal maintains and adopts the risk-based approach already recommended in the White Paper on Artificial Intelligence published on February 20, 2020.

Artificial intelligence systems are classified into four categories:

  • Unacceptable risk – systems that present an unacceptable risk to the health and safety of individuals, such as systems that condition human behavior through subliminal techniques or the exploitation of vulnerabilities;
  • High risk – such as systems intended to be used for the recruitment or selection of personnel;
  • Low risk – such as chatbots;
  • Minimal or negligible risk – such as anti-spam filters.

The proposal prohibits the use of artificial intelligence systems presenting an unacceptable risk.

The remaining rules of the proposal concern and apply only to high-risk systems, with the exception of the transparency requirements prescribed for low-risk artificial intelligence systems.

No rules are introduced for artificial intelligence systems that present a negligible level of risk, even though the European Commission itself has specified that the vast majority of artificial intelligence systems fall into this last category. It is almost as if the Commission wants to demonstrate its trust in artificial intelligence and confirm that it has no intention of over-regulating (which would create an indirect disincentive to the development and use of artificial intelligence systems). We shall see whether the European Union succeeds in keeping faith with this approach…

The prevention and risk management approach.

High-risk artificial intelligence systems – the only ones to be regulated by the proposal – are identified according to two different criteria.

As a general rule, an artificial intelligence system is considered high-risk if it has the following two characteristics:

  • It is intended for use as a safety component of a product, or is itself a product, covered by the European Union harmonization legislation listed in Annex II to the proposal;
  • The product – whose safety component is the artificial intelligence system – or the artificial intelligence system itself as a product is required to undergo a third-party conformity assessment before being placed on the market or put into service, pursuant to the European Union harmonization legislation listed in Annex II to the proposal.

There is also a special criterion: all systems identified in the list attached as Annex III to the proposal are high-risk. The classification in this case is based on the intended use of the intelligent system, and the European Commission is empowered to update the list periodically.

All high-risk artificial intelligence systems, regardless of the criterion by which they are identified, are subject to the obligations set out in the proposal. These compliance obligations aim to prevent and manage risks to the health and safety of individuals, rather than to compensate for harm once it occurs. Indeed, the proposal contains no rules on liability for the actions and omissions of intelligent systems. The rules proposed by the European Commission consist only of obligations of compliance, vigilance and control, which apply throughout the entire life cycle of the artificial intelligence system and to all the actors involved in the chain – from the developer to the final user – who are accordingly responsible for meeting them.

In short, the proposal does not take an ex-post approach based on risk remediation, but rather an ex-ante approach based on risk prevention, identification and management.

However, the European Commission has already confirmed that it will soon address liability for artificial intelligence: the resulting regulatory framework will then most likely be complete. It remains to be seen whether it will also be sufficient to convince those who still harbor doubts about the usefulness and trustworthiness of artificial intelligence tools.

***

This is an extract from the chapter “Artificial Intelligence: context, perspectives and boundaries between different rights” by Giangiacomo Olivi, published in the volume “AI Anthology: legal, economic and social aspects of artificial intelligence”, edited by Ginevra Cerrina Feroni, Carmelo Fontana and Edoardo Raffiotta and published by Il Mulino. The volume collects the speeches given at the event on artificial intelligence organized by the Italian Data Protection Authority on April 19-20, 2021.

Eager to know more? Take part in our AI Survey (open until September 17, 2021) and stay tuned for our AI Whitepaper later this autumn!


The survey is anonymous: we will publish only aggregated results, which cannot be attributed to any individual respondent.


About Giangiacomo Olivi

Giangiacomo Olivi is a partner in Dentons’ Milan office, Europe Co-head of the Data Privacy and Cybersecurity group and Europe Co-head of the Media sector group. He is a member of the global Intellectual Property and Technology practice.


