In 2021, the global artificial intelligence (AI) market was estimated to be valued between USD 59.7 billion and USD 93.5 billion. Going forward, it is expected to expand at a compound annual growth rate of 39.4%, reaching USD 422.37 billion by 2028.
However, as financial and efficiency incentives drive AI innovation, AI adoption has given rise to potential harms. For example, Amazon’s machine-learning specialists discovered that their recruiting algorithm had “taught itself that male candidates were preferable,” penalizing resumes that “included the word ‘women’s,’ as in ‘women’s chess club captain.’”
As our compiled list of guidance on artificial intelligence and data protection indicates, policymakers and legislators have taken notice of these harms and moved to mitigate them. New York City enacted a bill regulating how employers and employment agencies use automated employment decision tools in making employment decisions. Colorado’s draft rules require controllers to explain the training data and logic used to create certain automated systems. In California, rulemakers must issue regulations requiring businesses to provide “meaningful information about the logic” involved in automated decision-making processes.
In truth, the parties calling for AI regulation form a diverse alliance, including the Vatican, IBM, and the EU. Now, the White House joins these strange bedfellows by publishing the Blueprint for an AI Bill of Rights.
What is the Blueprint for an AI Bill of Rights?
The Blueprint for an AI Bill of Rights (“Blueprint”) is a non-binding white paper created by the White House Office of Science and Technology Policy. The Blueprint does not carry the force of law; rather, it is intended to spur the development of policies and practices that protect civil rights and promote democratic values in AI systems. To that end, the Blueprint provides a list of five principles (discussed below) that – if incorporated in the design, use, and deployment of AI systems – will “protect the American public in the age of artificial intelligence.”
To be clear: failing to incorporate one of these principles will not give rise to a penalty under the Blueprint. Neither will adoption of the principles ensure satisfaction of requirements imposed by other laws.
However, the lack of compliance obligations should not tempt organizations to ignore the Blueprint, for the authors expressly state that the document provides a framework for areas where existing law or policy does not already provide guidance. And given that many state privacy laws do not currently provide such guidance, the Blueprint offers a speculative glimpse at what state regulators may require of future AI systems.
The Blueprint’s Five Principles for AI Systems
- Safe & Effective Systems. The Blueprint demands that individuals be protected from unsafe or ineffective systems. To do this, an AI system should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring. The system should be designed to protect individuals from harms stemming from “unintended, yet foreseeable,” uses or impacts, and it should not utilize inappropriate or irrelevant data in the design, development, or deployment stages.
- Algorithmic Discrimination Protections. The Blueprint warns that algorithmic discrimination based on a classification protected by law may violate legal protections. Designers and developers should, in part, include equity assessments as part of the AI system’s design, ensure accessibility for people with disabilities, and use representative data for demographic features.
- Data Privacy. Taking a page from the EU’s GDPR, the Blueprint states that AI systems should, by default, seek a person’s permission to use, access, transfer, and delete their data. However, the Blueprint recognizes that consent cannot always form the basis for processing, and it states that where consent is not possible, alternative privacy-by-design safeguards should be used. The Blueprint calls for greater data privacy protections for surveillance technologies and sensitive domains (e.g., health, work, criminal justice).
- Notice & Explanation. As with most data privacy laws and regulations, the Blueprint emphasizes providing individuals with meaningful and useful information, so that a person knows how and why an outcome was determined by the AI system.
- Human Alternatives, Consideration, & Fallback. The Blueprint states that individuals should, where appropriate, be given the choice to opt out of automated systems in favor of a human alternative. The Blueprint stresses this option as crucial for sensitive domains (e.g., criminal justice, employment, education, and health).
Current and upcoming state laws, such as the California Privacy Rights Act and the Colorado Privacy Act, seek to regulate AI technologies yet currently lack guidance on how that regulation should occur. For this reason, although the Blueprint lacks the force of law, innovators and adopters of AI technology should take notice of its overall themes, as these themes may acquire the force of law through adoption by state regulators and agencies.
Until then, Metaverse Law will continue to monitor the legal landscape for new developments and update our reference material on AI and data protection accordingly.