
THE WHITE HOUSE’S BLUEPRINT FOR AI BILL OF RIGHTS

Image by David Mark from Pixabay.

In 2021, the global artificial intelligence (AI) market was estimated to be worth between USD 59.7 billion and USD 93.5 billion. Going forward, it is expected to expand at a compound annual growth rate of 39.4% to reach USD 422.37 billion by 2028.

However, as financial and efficiency incentives drive AI innovation, its adoption has also given rise to potential harms. For example, Amazon’s machine-learning specialists discovered that their recruiting algorithm learned to penalize resumes that “included the word ‘women’s,’ as in ‘women’s chess club captain.’” In effect, Amazon’s AI system “taught itself that male candidates were preferable.”
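
To illustrate the general mechanism (a hypothetical sketch only, not Amazon’s actual system), the short Python example below trains a toy scikit-learn classifier on made-up, historically skewed hiring outcomes; the model ends up assigning a negative weight to the token “women” even though gender is never an explicit input. All data, labels, and variable names here are invented for illustration.

# Hypothetical illustration only: a toy resume classifier trained on skewed
# historical outcomes learns to penalize a gendered token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up training data: resume snippets labeled with past hiring decisions.
resumes = [
    "chess club captain, software engineer",                # hired
    "robotics team lead, software engineer",                # hired
    "women's chess club captain, software engineer",        # not hired
    "women's coding society president, software engineer",  # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" comes out negative: the model has
# effectively taught itself to penalize resumes containing it.
idx = vectorizer.vocabulary_["women"]
print("weight on 'women':", model.coef_[0][idx])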

As our compiled list of guidance on artificial intelligence and data protection indicates, policymakers and legislators have taken notice of these harms and moved to mitigate them. New York City enacted a law regulating how employers and employment agencies use automated employment decision tools. Colorado’s draft rules require controllers to explain the training data and logic used to create certain automated systems. In California, rulemakers must issue regulations requiring businesses to provide “meaningful information about the logic” involved in automated decision-making processes.

In truth, the parties calling for AI regulation form a diverse alliance, including the Vatican, IBM, and the EU. Now, the White House joins these strange bedfellows by publishing the Blueprint for an AI Bill of Rights.

What is the Blueprint for an AI Bill of Rights?

The Blueprint for an AI Bill of Rights (“Blueprint”) is a non-binding white paper created by the White House Office of Science and Technology Policy. The Blueprint does not carry the force of law; rather, it is intended to spur development of policies and practices that protect civil rights and promote democratic values in AI systems. To that end, the Blueprint provides a list of five principles (discussed below) that – if incorporated into the design, use, and deployment of AI systems – will “protect the American public in the age of artificial intelligence.”

To be clear: failing to incorporate one of these principles will not give rise to a penalty under the Blueprint. Nor will adopting the principles guarantee compliance with requirements imposed by other laws.

However, the lack of compliance obligations is no reason to ignore the Blueprint: its authors expressly state that the document provides a framework for areas where existing law or policy does not already offer guidance. And because many state privacy laws do not currently provide such guidance, the Blueprint offers an early glimpse of what state regulators may require of future AI systems.

The Blueprint’s Five Principles for AI Systems

Continue Reading THE WHITE HOUSE’S BLUEPRINT FOR AI BILL OF RIGHTS

Metaverse Law to Speak at World IP Day event

On April 26th, World IP Day, Lily Li of Metaverse Law will be a panel speaker for the “Principles of a Fair and Trustworthy Economy” event. This event is organized by Evenness in partnership with Women in AI and is part of a series of events about frontier technologies, the metaverse, NFTs, virtual fashion and more.

Join us on April 26th at 11am EST/ 8am PST/ 5pm CET for this event. The virtual doors open 30 minutes before the event. Click here for more information.


Guidance on Artificial Intelligence and Data Protection

Image by geralt from Pixabay.

For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.

For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.

Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks to guide the development of AI technologies that meet leading data protection standards.

Below, we have compiled a list* of official guidance recently published by authorities around the globe.

Canada:

  • 1/17/2022 – Government of Ontario, “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”
    https://www.ontario.ca/page/beta-principles-ethical-use-ai-and-data-enhanced-technologies-ontario
    The Government of Ontario released six beta principles for the ethical use of AI and data enhanced technologies in Ontario. In particular, the principles set out objectives for aligning the use of data enhanced technologies in government processes, programs, and services while prioritizing ethical considerations.

China:

  • 12/12/2022 – Cyberspace Administration of China, Regulations on the Administration of Deep Synthesis of Internet Information Services
    http://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm (in Chinese) and
    http://www.cac.gov.cn/2022-12/11/c_1672221949570926.htm (in Chinese)
    The Regulations target deep synthesis technology: algorithms that synthesize text, audio, video, virtual scenes, and other network information. The accompanying Regulations FAQs state that providers of deep synthesis technology must put in place safe and controllable safeguards and comply with data protection obligations.
  • 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
    http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
    The Code aims to integrate ethics and morals into the full life cycle of AI systems; promote fairness, justice, harmony, and safety; and avoid problems such as prejudice, discrimination, and privacy and information leakage. The Code also provides specific ethical requirements for AI technology design and maintenance.
  • 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
    https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
    The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.

E.U.:

  • European Telecommunication Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”)
    https://www.etsi.org/committee/1640-sai
    The ISG SAI has published standards to preserve and improve the security of AI. Its work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself from attack.
  • 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
    https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
    The EU Commission proposed a new AI Regulation – a set of flexible and proportionate rules that would address the specific risks posed by AI systems, with the aim of setting the highest global standard. As an EU regulation, the rules would apply directly across all EU Member States. The proposal follows a risk-based approach and calls for the creation of a European Artificial Intelligence Board.

France:

Germany:

Continue Reading Guidance on Artificial Intelligence and Data Protection