We held a Dentons Privacy Community webinar on Data Protection in the Age of Artificial Intelligence. Monika Sobiecki (Senior Associate, Dentons), Giangiacomo Olivi and Antonis Patrikios (Partners, Dentons), and Antony Habayeb (Founder and CEO of Monitaur) gave their thoughts on the developing landscape and we wanted to share some key takeaways from the session.
Where should we start with Data Protection in the Age of Artificial Intelligence?
- As global investment in Artificial Intelligence (AI) rises, and with implementation of AI systems predicted to increase over the next 5 years, the EU's approach is to promote a trustworthy, human-centric model.
- This will shape future AI legislation in Europe, and in particular how data processing – the key component of AI – is regulated.
- Regulatory approaches are fast evolving at both EU and member state level, and organisations are already considering how to implement governance frameworks to ensure that AI they deploy is compliant.
How is the EU regulatory framework on Artificial Intelligence developing?
- There is currently a patchwork of laws not drafted with AI in mind. For example: consumer law; competition law; product safety laws; equality laws; and regulations governing medical devices.
- There is appetite at the EU level to plug these gaps with new legislation. The European Commission published a White Paper entitled On Artificial Intelligence – A European approach to excellence and trust (February 2020), which reveals the direction of travel.
- Forthcoming legislation will provide a formal definition of AI and a mechanism for identifying “high risk” AI. It will also provide a framework for pre-assessment and mitigation of risks before AI is developed or deployed.
- Legislation will draw on the guidelines published by the High-Level Expert Group on AI: Ethics Guidelines for Trustworthy AI (April 2019). Under the Guidelines, trustworthy AI should be lawful, ethical, and robust.
- The Guidelines also set out 7 requirements that AI systems should meet in order to be deemed ‘trustworthy’. They are:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Societal and environmental well-being; and
- Accountability.
- In July 2020, the High-Level Expert Group on AI released The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for organisations to carry out a self-assessment. Currently, compliance with the Guidelines and completion of the ALTAI are both entirely voluntary.
How is the UK regulatory framework on Artificial Intelligence developing?
- The UK has no specific law governing AI. Whether a targeted regulatory approach will emerge from the consultation on the Government’s recently published National Data Strategy remains to be seen.
- The regulatory framework in the UK is best described as ‘Privacy Plus’; meaning data privacy laws (currently the GDPR and the Data Protection Act 2018), plus AI-specific guidance issued by the Information Commissioner’s Office (ICO).
- The ICO has published 3 key pieces of guidance:
- Big data, artificial intelligence, machine learning and data protection (2017)
- Explaining decisions made with AI (June 2020)
- Guidance on AI and data protection (July 2020)
- The guidance maps existing data protection requirements onto specific AI-related issues. For example, data processing must be ‘fair’, and the ICO considers that this requirement means AI systems need to address problems of statistical accuracy and non-discrimination.
- In contrast to the EU framework, the ICO does have enforcement powers relating to the way in which data privacy laws interact with its guidance. Whilst the guidance is only “best practice” for complying with data privacy laws, it will certainly inform the ICO’s approach to audits and enforcement.
- The ICO is continuing to publish material on this, and in particular on accountability measures it expects from organisations deploying AI. These materials should be read alongside the ICO’s recently published Accountability Framework which applies to all organisations processing personal data.
- It remains to be seen whether other supervisory authorities across the EU will follow the ICO’s lead in bringing AI-related issues within the existing data protection regulatory framework.
How can organisations ensure transparency, accountability, and auditability at the operational level?
- When approaching AI, and in particular risk assessments and DPIAs, a common concern of privacy professionals is a knowledge gap between regulatory requirements and practical implementation.
- In short, many privacy professionals do not have the necessary technical understanding of AI to assess systems accurately. Privacy professionals fear that if they do not understand the systems they are advising on, then achieving compliance with, for example, transparency, fairness, human intervention or accountability requirements (let alone demonstrating it) becomes impossible.
- Concerns about how to accurately mitigate risk, a general lack of expertise in AI, and the increased costs of new infrastructure demands can act as a handbrake on implementing AI, which can stifle innovation and leave organisations trailing the market.
- There are also dangers in rushing ahead: for example, deploying an AI decisioning process without an audit trail to underpin it exposes the business to the risk of regulatory penalties and claims.
- However daunting AI may seem, the good news is that for the majority of AI systems which are likely to be implemented in the near future, organisations can establish transparency and assurance controls to mitigate risks.
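To make the idea of an audit trail concrete, here is a minimal sketch of a transparency control: a tamper-evident log that records each AI decision together with the model version that produced it. All names here (the `log_decision` helper, the `credit-model-1.2.0` version string, the input and output fields) are illustrative assumptions for this sketch, not any particular vendor's implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log, model_version, inputs, output):
    """Append an audit record for a single AI decision.

    Each record captures what went in, what came out, which model
    version produced it, and when. The record also stores a SHA-256
    hash chained to the previous record, so later tampering with
    earlier entries is detectable.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "prev_hash": log[-1]["hash"] if log else None,
    }
    # Hash the canonical JSON form of the record (before adding its own hash).
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

# Example: record two decisions from a hypothetical scoring model.
audit_log = []
log_decision(audit_log, "credit-model-1.2.0", {"income": 42000}, {"score": 0.81})
log_decision(audit_log, "credit-model-1.2.0", {"income": 18000}, {"score": 0.34})
```

Even a simple chained log like this gives compliance and audit teams something to interrogate after the fact: which model version made a given decision, on what inputs, and whether the record has been altered since.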
Dentons collaboration with Monitaur
- Dentons is looking to work with Monitaur, a Machine Learning Assurance company, to help risk, data governance, compliance and audit departments establish necessary policies, standards and controls that ensure that AI is transparent, compliant, fair, accountable and safe.
- Monitaur’s software ensures that every decision made by, and every version of, an AI system is recorded, auditable and proactively monitored in a way that makes demonstrating compliance and managing risk straightforward and accessible to non-technical risk, compliance and audit individuals.
- Given many organisations are uncertain about how to map existing policies and the (currently) high-level AI requirements onto granular data protection accountability measures, we are looking forward to collaborating with Monitaur to assist our clients.