
Guidance on Artificial Intelligence and Data Protection


For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.

For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.

Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks to guide the development of AI technologies that meet leading data protection standards.

Below, we have compiled a list* of official guidance recently published by authorities around the globe.

China:

  • 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
    http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
    The Code aims to integrate ethics and morals into the full life cycle of AI systems; promote fairness, justice, harmony, and safety; and avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code sets out specific ethical requirements for the design and maintenance of AI technology.
  • 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
    https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
    The guide highlights ethical risks associated with AI and provides basic requirements for preventing AI ethical security risks.

E.U.:

  • 7/14/2021 – European Commission’s Joint Research Center (“JRC”), AI Watch – AI Standardisation Landscape
    https://publications.jrc.ec.europa.eu/repository/handle/JRC125952
    Most recently, the JRC published this report on the AI standardisation landscape. The report describes ongoing AI standardisation efforts and aims to contribute to the definition of a European standardisation roadmap.
  • European Telecommunication Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”) Standards
    https://www.etsi.org/committee/1640-sai
    The ISG SAI has published standards to preserve and improve the security of AI. Its work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself against attack.
  • 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
    https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
    The European Commission proposed a new AI Regulation: a set of flexible and proportionate rules intended to address the specific risks posed by AI systems and to set the highest global standard. As an EU regulation, the rules would apply directly across all Member States. The proposal follows a risk-based approach and calls for the creation of a European enforcement agency.

France:

  • 9/3/2020 – French Data Protection Authority (“CNIL”), Whitepaper and Guidance on the Use of Voice Assistants
    https://www.cnil.fr/sites/default/files/atoms/files/cnil_livre-blanc-assistants-vocaux.pdf (in French)
    This whitepaper explores legal and technical considerations for developers and businesses that use voice assistant technology, in light of recent developments in AI. It also includes best practices and recommended approaches.

Germany:
