Artificial intelligence feeds on data: both personal and non-personal. It is no coincidence, therefore, that the European Commission’s “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence”, published on April 21, 2021 (the Proposal), has several points of contact with the GDPR.
The GDPR’s use as a model for the proposal is visible in numerous aspects:
- Scope of application, which extends beyond Europe’s borders
- Sanctions system, which mirrors that of the GDPR but adds a further penalty of up to €30 million or 6 percent of total annual worldwide turnover of the preceding financial year, whichever is greater, for violations of the prohibition on artificial intelligence systems presenting an unacceptable risk and for violations of the rules on training data sets
- Obligations of transparency towards users regarding the documentation and self-certification of compliance with the applicable regulations
- Creation of a European collegial body, the “European Artificial Intelligence Board”, which recalls (in its name, composition and prerogatives) the “European Data Protection Board”
Furthermore, the proposal sets out an obligation (sanctionable by the highest penalty) to use high-quality datasets to train and instruct artificial intelligence systems. To be considered a high-quality dataset, the personal data included therein must be processed in accordance with the GDPR.
The compliance verification immediately brings to mind the impact assessment pursuant to Article 35 of the GDPR. The obligation to report serious incidents or malfunctions recalls the obligation to notify data breaches. The need for human involvement, supervision and control, and the requirement to be able to explain the functioning of an artificial intelligence system, have already been imposed (although ‘only’ for automated processing from which consequences may arise for data subjects) by Article 22 of the GDPR.
These obligations of security assessment, security by design, and ethics by design will mean that those involved are held to ever-increasing levels of accountability and will require ever-stronger forms of collaboration. This is a fundamental goal of both the proposal and the GDPR. Proceeding from the assumption that technology is neither good nor bad in itself, but depends on how people use it, this collaborative model can and should characterize the evolution of regulation in the technological (and strategic) sphere, to the point of being applied by legal systems that, at least in some respects, may seem incompatible.
Artificial intelligence not only crosses over into data protection law, but also extends to other areas of law, such as intellectual property, competition, consumer protection, and insurance, to name but a few.
In fact, there are no boundaries for artificial intelligence when it comes to law. Just as, on closer inspection, there should be no talk of an ‘artificial intelligence law’. Due to its intrinsic ability to replace human activity, artificial intelligence is, by its very nature, transversal, and therefore touches on all areas of law. In fact, it goes even further, because it requires breaking down the very boundaries of law itself to create a bridge with technology.
The proposal sets out transparency obligations (applicable also to low-risk artificial intelligence systems), in an attempt to find the right compromise between the need for artificial intelligence to be explainable and intelligible to non-professional (and often sceptical) users, and the difficulty – even for the developers of intelligent systems themselves – of fully understanding and explaining how artificial intelligence actually works.
Breaking down boundaries?
The proposal has started to break down legal boundaries: one can only hope that it will also succeed in (helping to) foster greater collaboration among nations.
Thanks to the Commission’s initiative, the European Union is now leading the race to adopt legislation dedicated to artificial intelligence. However – as intended by the Commission – it cannot limit itself to leading. It must engage other nations in a collaborative dialogue, so that the regulations they ultimately (inevitably and necessarily) adopt are coherent and benefit all economic operators.
This is an extract from the chapter “Artificial Intelligence: context, perspectives and boundaries between different rights” by Giangiacomo Olivi, published in the volume “AI Anthology: legal, economic and social aspects of artificial intelligence”, edited by Ginevra Cerrina Feroni, Carmelo Fontana and Edoardo Raffiotta and published by Il Mulino. The volume collects the speeches made during the event on artificial intelligence organized by the Italian Data Protection Authority on April 19-20, 2021.
Eager to know more? Take part in our AI Survey (open until September 17, 2021) and stay tuned for our AI Whitepaper later this autumn!
The survey is anonymous: we will publish only aggregated results, which cannot be attributed to any individual respondent.