For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.
For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.
Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks to guide the development of AI technologies that meet leading data protection standards.
Below, we have compiled a list* of official guidance recently published by authorities around the globe.
- 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
The Code aims to integrate ethics and morals into the full life cycle of AI systems; promote fairness, justice, harmony, and safety; and avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code sets out specific ethical requirements for the design, development, and maintenance of AI technology.
- 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.
- 7/14/2021 – European Commission’s Joint Research Center (“JRC”), AI Watch – AI Standardisation Landscape
This report describes ongoing standardization efforts on AI and aims to contribute to the definition of a European standardization roadmap.
- European Telecommunication Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”) Standards
The ISG SAI has published standards to preserve and improve the security of AI. The work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself from attack.
- 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
The EU Commission proposed a new AI Regulation – a set of flexible and proportionate rules that will address the specific risks posed by AI systems, intending to set the highest global standard. As an EU regulation, the rules would apply directly across all European Member States. The regulation proposal follows a risk-based approach and calls for the creation of a European enforcement agency.
- 9/3/2020 – French Data Protection Authority (“CNIL”), Whitepaper and Guidance on the Use of Voice Assistants
https://www.cnil.fr/sites/default/files/atoms/files/cnil_livre-blanc-assistants-vocaux.pdf (in French)
This whitepaper explores legal and technical considerations for developers and businesses that use voice assistant technology in light of recent AI developments. It also includes best practices and recommended approaches.
- 6/15/2021 – Federal Financial Supervisory Authority (“BaFin”), “Big Data and Artificial Intelligence”
This paper provides key principles and best practices for the use of algorithms and AI in decision-making processes.
- 5/6/2021 – Federal Office for Information Security (“BSI”), “Towards Auditable AI Systems”
This whitepaper addresses current issues with and possible solutions for AI systems, with a focus on the goal of auditability and standardization of AI systems.
- 8/18/2021 – Office of the Privacy Commissioner for Personal Data (“PCPD”), “Guidance on the Ethical Development and Use of Artificial Intelligence”
This guidance discusses ethical principles for AI development and management while also highlighting recent developments in AI governance around the globe. The guidance also includes a helpful self-assessment checklist in its appendix covering businesses’ AI strategy and governance, risk assessment and human oversight, development and management of AI systems, and communication and engagement with stakeholders.
- 9/28/2021 – INDIAai, “Mitigating Bias in AI – A Handbook For Startups”
INDIAai, a government-based initiative, published this formalized framework for startups. The handbook identifies different risk factors that may lead to bias in AI.
- 7/15/2021 – Data Security Council of India (“DSCI”), “Handbook on Data Protection and Privacy for Developers of Artificial Intelligence in India”
The handbook establishes guidelines for responsible and ethical AI development in line with the applicable data protection framework. While the handbook does not provide technical solutions, focusing instead on the ethical and legal objectives to pursue when designing AI systems, it does provide a checklist of questions and good practices for developers to keep in mind during the design process.
- 2/24/2021 – National Institution for Transforming India (“NITI Aayog”), “Responsible AI”
In this paper, the government think tank outlines the ethical and legal framework for managing AI technology. The paper also includes a self-assessment guide for AI usage in its annex.
- International Organization for Standardization (“ISO”) – ISO/IEC JTC 1/SC 42 Standards
Together with the International Electrotechnical Commission (“IEC”), ISO has published a number of AI standards in recent years. The newest standards, published in March of 2021, provide background about existing methods to assess the robustness of neural networks. Additional AI standards are currently under development.
- 8/4/2021 – Ministry of Internal Affairs and Communications (“MIC”), AI Network Society Promotion Council Report
https://www.soumu.go.jp/main_content/000761967.pdf (in Japanese)
The report highlights recent trends in AI utilization as well as efforts to promote secure and reliable social implementation of AI.
- 10/20/2020 – Personal Data Protection Commission (“PDPC”), “Compendium of Use Cases”
This compendium illustrates, through a number of use cases, how different organizations have effectively aligned their AI governance practices with the Model Framework.
- 1/21/2020 – PDPC, “Model AI Governance Framework (Second Edition)”
along with “Implementation and Self-Assessment Guide for Organizations” https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGIsago.pdf
The Model Framework focuses on internal governance, decision-making models, operations, and customer relationship management. The Implementation Guide provides useful industry examples, best practices, and specific practice guides for the deployment of AI.
- 8/11/2021 – Personal Information Protection Commission (“PIPC”), Summary Report on Data Protection Regulatory Sandbox
https://www.pipc.go.kr/np/cop/bbs/selectBoardArticle.do?bbsId=BS074&mCode=C020010000&nttId=7487#LINK (in Korean)
The report highlights data protection considerations arising from 133 cases, including in particular the use of robotics and new technologies such as unmanned moving objects.
- 7/20/2021 – PIPC, “AI Personal Information Protection Self-checklist”
The checklist provides guidelines for the protection of personal information gathered and used by artificial intelligence. The checklist includes requirements and guidelines for each stage of development of AI systems and is meant to account for the flexibility and continuous change of AI technology.
- 1/12/2021 – Spanish Data Protection Authority (“AEPD”), “Audit Requirements for Personal Data Processing Activities Involving AI”
This paper aims to help evaluate the regulatory compliance of AI systems by providing methodologies and control objectives to be included in data protection audits of processes that incorporate AI components or solutions.
- 9/15/2021 – Turkish Personal Data Protection Authority (“KVKK”), Recommendations When Processing Personal Data Using AI
https://kvkk.gov.tr/SharedFolderServer/CMSFiles/25a1162f-0e61-4a43-98d0-3e7d057ac31a.pdf (in Turkish)
This guidance discusses fundamental principles for AI development and management, and advises developers, manufacturers, and service providers on privacy by design and data minimization approaches.
- 9/22/2021 – UK Secretary of State for Digital, Culture, Media & Sport (“DCMS”), “National AI Strategy”
The UK Government announced its National AI Strategy, which aims to invest and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy, and ensure the UK governs AI effectively.
- 7/20/2021 – Information Commissioner’s Office (“ICO”), Beta Version of AI and Data Protection Risk Toolkit
The ICO recently released the beta version of its AI and Data Protection Risk Toolkit, which contains risk statements to help organizations using AI assess the risks of their processing practices. The toolkit provides suggestions and practical steps for technical and organizational measures to mitigate risks and demonstrate compliance with applicable data protection laws. It also includes references to other core resources. The final version of the toolkit is planned for release in December 2021.
- 5/5/2020 – ICO, “Explaining Decisions Made with AI”
This detailed guidance, released by the ICO in cooperation with the Alan Turing Institute, gives businesses practical advice on explaining AI decision-making processes and their effects, and on the considerations necessary for compliance with existing data protection laws.
- 12/18/2020 – ICO, “Six Things to Consider When Using Algorithms for Employment Decisions”
The blog post explores the risks and opportunities of AI in an employment context and highlights key points for businesses to consider before implementing algorithms for hiring purposes.
- 7/30/2021 – Department of Homeland Security (“DHS”), “Artificial Intelligence and Machine Learning Strategic Plan”
The strategic plan of DHS’ Science and Technology Directorate (“S&T”) outlines goals for ensuring that AI/ML research, development, testing, evaluation, and departmental applications comply with statutory and other legal requirements and sustain privacy protections, civil rights, and civil liberties. It also advises stakeholders on recent developments in AI/ML and the associated opportunities and risks.
- 5/5/2021 – Electronic Privacy Information Center (“EPIC”), New National Artificial Intelligence Initiative Office Website
The White House launched AI.gov, the new website of the National Artificial Intelligence Initiative Office, featuring policy priorities, reports, and news regarding AI.
- 4/19/2021 – Federal Trade Commission (“FTC”), “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI”
In this blog post, the FTC offers guidance for companies in their use of AI, specifically instructing them to show transparency and accountability when employing new algorithms.
- 4/8/2020 – FTC, “Using Artificial Intelligence and Algorithms”
In this blog post, the FTC outlines best practices when relying on algorithms and highlights key principles such as transparency, fairness, accuracy, and accountability.
- 9/9/2019 – National Institute of Standards and Technology (“NIST”), “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”
Following an executive order directing federal agencies to develop international standards to promote and protect innovation and public confidence in AI technologies, NIST published this plan. The plan intends to provide guidance regarding priorities and appropriate levels of engagement in matters of AI standards.
*While extensive, this list is not meant to be exhaustive. We will do our best to update it from time to time and add new guidance as it becomes available.