Guidance on Artificial Intelligence and Data Protection

Image by geralt from Pixabay.

For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.

For data protection professionals, however, AI also presents a range of risks, as it shifts the processing of personal data to complex computer systems with often opaque processes and algorithms.

Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks to help organizations develop AI technologies that meet leading data protection standards.

Below, we have compiled a list* of official guidance recently published by authorities around the globe.

Canada:

  • 1/17/2022 – Government of Ontario, “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”
    https://www.ontario.ca/page/beta-principles-ethical-use-ai-and-data-enhanced-technologies-ontario
    The Government of Ontario released six beta principles for the ethical use of AI and data enhanced technologies in Ontario. The principles set out objectives for aligning the use of data enhanced technologies in government processes, programs, and services, with ethical considerations prioritized.

China:

  • 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
    http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
    The Code aims to integrate ethics and morals into the full life cycle of AI systems; promote fairness, justice, harmony, and safety; and avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code also sets out specific ethical requirements for the design and maintenance of AI technology.
  • 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
    https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
    The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.

E.U.:

  • European Telecommunications Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”)
    https://www.etsi.org/committee/1640-sai
    The ISG SAI has published standards to preserve and improve the security of AI. Its work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself against attack.
  • 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
    https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
    The European Commission proposed a new AI Regulation – a set of flexible and proportionate rules addressing the specific risks posed by AI systems and intended to set the highest global standard. As an EU regulation, the rules would apply directly across all EU Member States. The proposal follows a risk-based approach and calls for the creation of a European Artificial Intelligence Board.

Hong Kong:

  • 8/18/2021 – Office of the Privacy Commissioner for Personal Data (“PCPD”), “Guidance on the Ethical Development and Use of Artificial Intelligence”
    https://www.pcpd.org.hk/english/resources_centre/publications/files/guidance_ethical_e.pdf
    This guidance discusses ethical principles for AI development and management while also highlighting recent developments in AI governance around the globe. The guidance further includes a helpful self-assessment checklist in its appendix covering businesses’ AI strategy and governance, risk assessment and human oversight, development and management of AI systems, and communication and engagement with stakeholders.

India:

  • 9/28/2021 – INDIAai, “Mitigating Bias in AI – A Handbook For Startups”
    https://indiaai.s3.ap-south-1.amazonaws.com/docs/AI+Handbook_27-09-2021.pdf
    INDIAai, a government initiative, published this handbook as a framework for startups. The handbook identifies risk factors that may lead to bias in AI.
  • 7/15/2021 – Data Security Council of India (“DSCI”), “Handbook on Data Protection and Privacy for Developers of Artificial Intelligence in India”
    https://www.dsci.in/sites/default/files/documents/resource_centre/AI%20Handbook.pdf
    The handbook establishes guidelines for responsible and ethical AI development in line with the applicable legal data protection framework. While the handbook does not provide technical solutions, focusing instead on the ethical and legal objectives to pursue when designing AI systems, it does provide a checklist of questions and good practices that developers should keep in mind during the design process.
  • 2/24/2021 – National Institution for Transforming India (“NITI Aayog”), “Responsible AI”
    http://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf
    In this paper, the government think tank outlines an ethical and legal framework for the management of AI technology. The paper further includes a self-assessment guide for AI usage in its annex.

International:

  • International Organization for Standardization (“ISO”) – ISO/IEC 38507:2022
    https://www.iso.org/standard/56641.html
    Together with the International Electrotechnical Commission (“IEC”), ISO has published a number of AI standards in recent years. The newest standard, published in April 2022 and titled “Governance implications of the use of artificial intelligence by organizations,” provides guidance for the governing bodies of organizations regarding the use and implications of AI.

Japan:

  • 4/8/2022 – Ministry of Economy, Trade, and Industry (“METI”), Artificial Intelligence Introduction Guidebook for Small and Medium Sized Companies
    https://www.meti.go.jp/policy/it_policy/jinzai/AIutilization.html (in Japanese)
    The Guidebook offers SMEs guidance on how to prepare for and begin using AI in their businesses, with practical steps to support decision-making.
  • 2/15/2022 – Ministry of Internal Affairs and Communications (“MIC”), Guidebook on Cloud Services Using AI
    https://www.soumu.go.jp/main_content/000792669.pdf (in Japanese)
    The Guidebook summarizes the points to keep in mind when developing and providing AI cloud services, with a focus on earning user trust and addressing data collection requirements.
  • 1/28/2022 – METI, Governance Guidelines for Implementation of AI Principles
    https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20220128_2.pdf
    METI released an updated version of its Governance Guidelines for Implementation of AI Principles, which outline an AI governance framework covering risk analysis, system design, implementation, and evaluation, along with practical examples.
  • 8/4/2021 – MIC, AI Network Society Promotion Council Report
    https://www.soumu.go.jp/main_content/000761967.pdf (in Japanese)
    The report highlights recent trends in AI utilization as well as efforts to promote secure and reliable social implementation of AI.

Saudi Arabia:

  • 4/27/2022 – Saudi Food and Drug Authority (“SFDA”), “Guidance on Review and Approval of AI and Big Data based Medical Devices”
    https://beta.sfda.gov.sa/sites/default/files/2021-04/SFDAArtificial%20IntelligenceEn.pdf
    The Guidance sets out the requirements for obtaining a Medical Devices Marketing Authorization for AI-based medical devices within the KSA. It applies to standalone software medical devices that diagnose, manage, or predict diseases by analyzing medical big data using AI, as well as to AI software configured with hardware.

U.S.:

  • 7/30/2021 – Department of Homeland Security (“DHS”), “Artificial Intelligence and Machine Learning Strategic Plan”
    https://www.dhs.gov/sites/default/files/publications/21_0730_st_ai_ml_strategic_plan_2021.pdf
    The strategic plan of DHS’ Science and Technology Directorate (“S&T”) outlines its goals of ensuring that AI/ML research, development, testing, evaluation, and departmental applications comply with statutory and other legal requirements and sustain privacy protections and civil rights and liberties for individuals. It further advises stakeholders on recent developments in AI/ML and the associated opportunities and risks.
  • 5/5/2021 – Electronic Privacy Information Center (“EPIC”), New National Artificial Intelligence Initiative Office Website
    https://www.ai.gov/
    As reported by EPIC, the White House launched its new National AI Initiative Office website, AI.gov, featuring policy priorities, reports, and news regarding AI.
  • 4/19/2021 – Federal Trade Commission (“FTC”), “Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI”
    https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai
    In this blog post, the FTC offers guidance for companies in their use of AI, specifically instructing them to show transparency and accountability when employing new algorithms.
  • 4/8/2020 – FTC, “Using Artificial Intelligence and Algorithms”
    https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms
    In this blog post, the FTC outlines best practices when relying on algorithms and highlights key principles such as transparency, fairness, accuracy, and accountability.

Additional Industry Whitepapers, Bulletins, and Recommendations:

  • 3/16/2022 – National Institute of Standards and Technology (“NIST”), “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence”
    https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
    In this Special Publication, NIST analyzes the challenges of AI bias and offers detailed socio-technical guidance for identifying and managing it.
  • 1/26/2022 – Information Technology Industry Council (“ITI”), Recommendations on NIST AI Risk Management Framework
    https://www.itic.org/documents/artificial-intelligence/ITICommentsonAIRMFConceptPaperFINAL.pdf
    In response to the AI Risk Management Framework concept paper released by NIST, ITI published a series of recommendations to improve the framework and encouraged NIST to align it with prior work as well as with standards currently under development in international standards bodies.
  • 1/20/2022 – European Institute of Innovation & Technology (“EIT”), AI Maturity Tool
    https://ai.eitcommunity.eu/ai-maturity-tool/
    The EIT published a web-based AI maturity tool that allows businesses to assess how prepared they are to use AI and, in the future, to compare their maturity level with that of other organizations.
  • 1/18/2022 – Information Technology Industry Council (“ITI”), Recommendations on AI-enabled Biometric Technologies
    https://www.itic.org/documents/artificial-intelligence/ITICommentsBiometricTechRFIFINAL.pdf
    ITI released a series of recommendations addressed to the U.S. Government regarding the use of AI and biometric technologies, elaborating on governance programs and practices that may be useful to consider in the context of biometric technologies, including with regard to performance auditing and post-deployment impact assessment.
  • 12/14/2021 – National Institute of Standards and Technology (“NIST”), “AI Risk Management Framework Concept Paper”
    https://www.nist.gov/system/files/documents/2021/12/14/AI%20RMF%20Concept%20Paper_13Dec2021_posted.pdf
    NIST has developed for public review a concept paper for the Artificial Intelligence Risk Management Framework (“AI RMF”), intended for voluntary use and to address risks in the design, development, use, and evaluation of AI products, services, and systems. NIST stated that it intends to release the AI RMF 1.0 in early 2023.
  • 7/14/2021 – European Commission’s Joint Research Centre (“JRC”), Report
    https://publications.jrc.ec.europa.eu/repository/handle/JRC125952
    The JRC published this report on the AI standardization landscape. The report describes ongoing AI standardization efforts and aims to contribute to the definition of a European standardization roadmap.
  • 9/9/2019 – National Institute of Standards and Technology (“NIST”), “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”
    https://www.nist.gov/artificial-intelligence/ai-standards-federal-engagement
    Following an executive order directing federal agencies to engage in the development of technical standards that promote and protect innovation and public confidence in AI technologies, NIST published this plan. The plan provides guidance on priorities and appropriate levels of federal engagement in AI standards.

*While extensive, this list is not meant to be exhaustive. We will do our best to update this list from time to time, and add new guidance as it becomes available.