Guidance on Artificial Intelligence and Data Protection
For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.
For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.
Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks to guide the development of AI technologies that meet leading data protection standards.
Below, we have compiled a list of official guidance recently published by authorities around the globe.
- 1/17/2022 – Government of Ontario, “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”
The Government of Ontario released six beta principles for the ethical use of AI and data-enhanced technologies in Ontario. The principles aim to align the use of data-enhanced technologies in government processes, programs, and services with ethical considerations, prioritizing those considerations throughout.
- 12/12/2022 – Cyberspace Administration of China, Regulations on the Administration of Deep Synthesis of Internet Information Services
http://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm (in Chinese) and
http://www.cac.gov.cn/2022-12/11/c_1672221949570926.htm (in Chinese)
The Regulations target deep synthesis technology, meaning algorithms that synthesize text, audio, video, virtual scenes, and other network information. The accompanying Regulations FAQs state that providers of deep synthesis technology must provide safe and controllable safeguards and comply with data protection obligations.
- 9/26/2021 – Ministry of Science and Technology of China (“MOST”), New Generation of Artificial Intelligence Ethics Code
http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
The Code aims to integrate ethics and morals into the full life cycle of AI systems; promote fairness, justice, harmony, and safety; and avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code sets out specific ethical requirements for the design, deployment, and maintenance of AI technology.
- 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.
- European Telecommunication Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”)
The ISG SAI has published standards to preserve and improve the security of AI. Its work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI systems themselves from attack.
- 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
The EU Commission proposed a new AI Regulation – a set of flexible and proportionate rules that would address the specific risks posed by AI systems, intending to set the highest global standard. As an EU regulation, the rules would apply directly across all EU Member States. The proposal follows a risk-based approach and calls for the creation of a European Artificial Intelligence Board to support oversight and enforcement.
- 4/5/2022 – French Data Protection Authority (“CNIL”), AI and GDPR Compliance Guide and Self-Assessment Tool
https://www.cnil.fr/fr/intelligence-artificielle/ia-comment-etre-en-conformite-avec-le-rgpd (in French)
https://www.cnil.fr/fr/intelligence-artificielle/guide (in French)
In its newest set of resources for AI compliance, the CNIL offers a step-by-step guide to GDPR compliance when using AI. The CNIL also published a self-assessment tool for AI systems, which allows organizations to assess the maturity of their AI systems with regard to the GDPR, along with best practice guidance.
- 9/3/2020 – CNIL, Whitepaper and Guidance on the Use of Voice Assistants
https://www.cnil.fr/sites/default/files/atoms/files/cnil_livre-blanc-assistants-vocaux.pdf (in French)
This whitepaper explores legal and technical considerations for developers and businesses that may deploy voice assistant technology in light of recent developments in AI. It also includes best practices and recommended approaches.
- 5/24/2022 – Federal Office for Information Security, Towards Auditable AI Systems whitepaper
This paper emphasizes the need for methods to audit AI technology in order to help ensure trustworthiness and support the integration of emerging AI standards. It seeks to strengthen the auditability of AI systems by proposing a newly developed certification scheme.
- 6/15/2021 – Federal Financial Supervisory Authority (“BaFin”), “Big Data and Artificial Intelligence”
This paper sets out key principles and best practices for the use of algorithms and AI in decision-making processes in the financial sector.
- 5/6/2021 – Federal Office for Information Security (“BSI”), “Towards Auditable AI Systems”
This whitepaper addresses current issues with and possible solutions for AI systems, with a focus on the goal of auditability and standardization of AI systems.