
Guidance on Artificial Intelligence and Data Protection

Image by geralt from Pixabay.

For many of us, Artificial Intelligence (“AI”) represents innovation, opportunities, and potential value to society.

For data protection professionals, however, AI also represents a range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque processes and algorithms.

Data protection and information security authorities, as well as governmental agencies around the world, have been issuing guidelines and practical frameworks for developing AI technologies that meet leading data protection standards.

Below, we have compiled a list* of official guidance recently published by authorities around the globe.

Canada:

  • 1/17/2022 – Government of Ontario, “Beta principles for the ethical use of AI and data enhanced technologies in Ontario”
    https://www.ontario.ca/page/beta-principles-ethical-use-ai-and-data-enhanced-technologies-ontario
    The Government of Ontario released six beta principles for the ethical use of AI and data enhanced technologies in Ontario. In particular, the principles set out objectives for aligning the use of data enhanced technologies in government processes, programs, and services with ethical considerations as a priority.

China:

  • 12/12/2022 – Cyberspace Administration of China, Regulations on the Administration of Deep Synthesis of Internet Information Services
    http://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm (in Chinese) and
    http://www.cac.gov.cn/2022-12/11/c_1672221949570926.htm (in Chinese)
    The Regulations target deep synthesis technology: synthetic algorithms that produce text, audio, video, virtual scenes, and other network information. The accompanying Regulations FAQs state that providers of deep synthesis technology must provide safe and controllable safeguards and comply with data protection obligations.
  • 9/26/2021 – Ministry of Science and Technology (“MOST”), New Generation of Artificial Intelligence Ethics Code
    http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html (in Chinese)
    The Code aims to integrate ethics and morals into the full life cycle of AI systems, to promote fairness, justice, harmony, and safety, and to avoid problems such as prejudice, discrimination, privacy violations, and information leakage. The Code also sets out specific ethical requirements for AI technology design and maintenance.
  • 1/5/2021 – National Information Security Standardisation Technical Committee of China (“TC260”), Cybersecurity practice guide on AI ethical security risk prevention
    https://www.tc260.org.cn/upload/2021-01-05/1609818449720076535.pdf (in Chinese)
    The guide highlights ethical risks associated with AI, and provides basic requirements for AI ethical security risk prevention.

E.U.:

  • European Telecommunication Standards Institute (“ETSI”) Industry Specification Group Securing Artificial Intelligence (“ISG SAI”)
    https://www.etsi.org/committee/1640-sai
    The ISG SAI has published standards to preserve and improve the security of AI. Its work focuses on using AI to enhance security, mitigating attacks that leverage AI, and securing AI itself against attack.
  • 4/21/2021 – European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”
    https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
    The European Commission proposed a new AI Regulation: a set of flexible and proportionate rules that would address the specific risks posed by AI systems, with the aim of setting the highest global standard. As an EU regulation, the rules would apply directly across all EU Member States. The proposal follows a risk-based approach and calls for the creation of a European enforcement agency.

Continue Reading Guidance on Artificial Intelligence and Data Protection

Will the Courts Treat Foreign Data Privacy Laws as Fact or Farce in U.S. Contracts? Whose Law Will Prevail in Privacy Disputes?

Image Credit: MarkThomas from Pixabay.

[Originally published as a Feature Article: Will the Courts Treat Foreign Data Privacy Laws as Fact or Farce in U.S. Contracts?, by Amira Bucklin and Lily Li, in Orange County Lawyer Magazine, May 2021, Vol. 63, No. 5, page 40.]

by Amira Bucklin and Lily Li

In 2020, when lockdown and stay-at-home orders were implemented, the world moved online. Team meetings, conference calls, even court hearings entered the cloud. More than ever, consumers used online shopping instead of strolling through malls, and online learning platforms instead of classrooms. “Zoom” became a way to meet up with friends over a glass of wine, or to conduct job interviews in a blouse, suit jacket, and yoga pants.

This has had vast consequences for personal privacy and cybersecurity. While most consumers might recognize the brand of their online learning platform, ecommerce store, or video conference tool of choice, few notice the network of service providers that work in the background: a whole ecosystem of connected businesses and platforms that collect, store, and transfer data and software, all governed by a new set of international privacy rules and contractual commitments. Yet many of these rules have not been tested in the courts, and their implications for privacy remain unsettled.

The Privacy Conundrum

This month marks the three-year anniversary of the EU’s General Data Protection Regulation (GDPR). As expected, its consequences have been far-reaching, and fines for violations have been staggeringly high.

The GDPR requires companies in charge of personal data (“data controllers”) to enter into data processing agreements with their service providers (or “data processors”), including, at times, standard data protection clauses drafted by the EU Commission. These data processing mega-contracts (ranging from one to well over 100 pages) impose a series of foreign data protection and security obligations on the parties.

A unique challenge presented by these contracts is that such data processing agreements and model data protection clauses often include their own choice-of-law provisions, calling for the application of EU member state law and requiring the parties to grant third-party beneficiary rights to individuals in a wholly different country.

Nor is this challenge limited to parties contracting with EU companies. Due to the GDPR’s extraterritorial scope, two U.S.-based companies can enter into a contract governed by the laws of the State of California that nonetheless includes a data processing addendum or security schedule subject to the laws of the United Kingdom, France, or Germany.

What happens if there is a dispute between these parties regarding their rights and responsibilities, which are subject to foreign data protection laws? How will U.S. courts treat these disputes? How much deference will—and should—a U.S. court provide to foreign interpretations of law?

Continue Reading Will the Courts Treat Foreign Data Privacy Laws as Fact or Farce in U.S. Contracts? Whose Law Will Prevail in Privacy Disputes?

Opt-Out Icons and Apple Privacy Labels: The Visual Privacy Policy

Image Credit: FDA Nutrition Label, modified by Metaverse Law

The growing frequency and severity of privacy incidents within the past decade (the Facebook-Cambridge Analytica data scandal and the Equifax data breach, to name just two) have made consumer privacy a topic of public attention and concern.

In response to consumers’ increased wariness regarding their private data, some companies are trying to use privacy labels and icons to signal a commitment to privacy protection. The ultimate goal is to make privacy more accessible, transparent, and understandable.

This article reviews the history and current trends around privacy icons and labels.

Privacy Visuals Part I: Icons

In 2010, the Digital Advertising Alliance (DAA) rolled out its “YourAdChoices” icon, a clickable blue triangular icon found on ads and one of the first privacy icons available. The DAA developed the icon in response to anticipated federal regulation of the advertising industry.

YourAdChoices icon: a blue outlined triangle with an inset letter ‘i’. Image taken from https://digitaladvertisingalliance.org/.

To address Congressional inquiries into consumer privacy (and any possible resulting legislative efforts), the DAA formed a self-regulatory program with a set of privacy principles for participating companies and developed the YourAdChoices icon. Participating companies can voluntarily elect to place this symbol on their advertisements. By its nature, the DAA self-regulatory program, including use of the YourAdChoices icon, is not enforced by law. The DAA itself enforces the program through a consumer complaint process, a public investigation procedure, and, if necessary, escalation to a government agency, as happened in the case of SunTrust Bank in 2014.

Typically, the YourAdChoices icon is placed on cross-context behavioral ads, that is, ads targeted to consumers based on a profile of each consumer’s characteristics, preferences, and internet activity. A consumer who views an ad targeted to them can click the YourAdChoices icon next to the ad to control whether ads are personalized to them while browsing and to learn why that particular ad was displayed.

When the California Consumer Privacy Act (CCPA) came into effect in 2020, it created new privacy requirements for over 500,000 businesses nationwide. One of these requirements is that a business subject to the CCPA that “sells” or discloses a consumer’s personal information for valuable consideration must prominently display a “Do Not Sell My Personal Information” link on its homepage. If a consumer submits a request through the link, the business must allow that consumer to opt out of the sale of their personal information.
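In practice, the legal requirement translates into a homepage link that routes to an opt-out workflow the business must then honor. As a rough sketch only (not a compliance recipe), the hypothetical Python/Flask endpoint below shows the basic mechanics; the route name, form field, and in-memory suppression list are illustrative assumptions.

```python
# Hypothetical sketch of a CCPA "Do Not Sell" opt-out endpoint.
# Assumes Flask (pip install flask); all names are illustrative.
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

# Stand-in for a durable suppression list that downstream ad and
# analytics vendors would need to honor.
OPT_OUTS: dict[str, str] = {}

@app.route("/do-not-sell", methods=["GET", "POST"])
def do_not_sell():
    if request.method == "POST":
        # Accept a bare identifier such as an email address.
        email = request.form.get("email", "").strip().lower()
        if not email:
            return "An identifier is required to process the request.", 400
        OPT_OUTS[email] = datetime.now(timezone.utc).isoformat()
        return "Your opt-out of the sale of personal information was recorded."
    # GET: the form reached from the homepage's
    # "Do Not Sell My Personal Information" link.
    return (
        '<form method="post">'
        '<label>Email: <input name="email" type="email"></label>'
        '<button type="submit">Do Not Sell My Personal Information</button>'
        "</form>"
    )
```

A real implementation would persist opt-outs durably and propagate them to every vendor to whom personal information is “sold” or disclosed.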

In response to this new CCPA requirement, the DAA designed a green version of the YourAdChoices icon. This is called the Privacy Rights Icon.

Privacy Rights icon: a green outlined triangle with an inset letter ‘i’. Image taken from https://digitaladvertisingalliance.org/.

When implemented correctly by participating companies, the green Privacy Rights icon brings consumers to www.privacyrights.info, a website set up by the DAA to help centralize and facilitate “Do Not Sell” requests across all participating companies.

While the two DAA icons above are forms of industry self-regulation, the California Office of the Attorney General (OAG) has also designed a “Do Not Sell” button to accompany the Do Not Sell link.

Continue Reading Opt-Out Icons and Apple Privacy Labels: The Visual Privacy Policy

Reasonable Security: Implementing Appropriate Safeguards in the Remote Workplace

Photo by Franck on Unsplash

In 2020, with large portions of the global workforce abruptly sent home indefinitely, IT departments nationwide scrambled to equip the workers of unprepared companies for remote work.

This presented an issue. Many businesses, particularly small businesses, barely have the minimum network defenses in place to prevent hacks and attacks in a centralized office. When everyone must suddenly become their own IT manager at home, the variance in secure practices, enforcement, and accountability grows even wider.

“Reasonable Security” Requirements under CCPA/CPRA and Other Laws

Under the California Consumer Privacy Act (CCPA), the implementation of “reasonable security” is a defense against a consumer’s private right of action to sue for data breach. A consumer who suffers an unauthorized exfiltration, theft, or disclosure of personal information can only seek redress if (1) the personal information was neither encrypted nor redacted, and (2) the breach resulted from the business’s failure to implement and maintain reasonable security. See Cal. Civ. Code § 1798.150.

Theoretically, this means that a business that has implemented security measures, but nevertheless suffers a breach, may be insulated from liability if those measures could be considered reasonable protections for the data. Therefore, while reasonable security is not technically an affirmative obligation under the CCPA, the reduced risk of consumer liability makes reasonable security a de facto requirement.

However, under the recently passed California Privacy Rights Act (CPRA), the implementation of reasonable security is now an affirmative obligation. Under revised Cal. Civ. Code § 1798.100, any business that collects a consumer’s personal information shall implement reasonable security procedures and practices to protect personal information. See our CPRA unofficial redlines.
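To make the encryption prong discussed above concrete, here is a minimal sketch of field-level encryption of personal information at rest, assuming the widely used Python `cryptography` package. Key handling is deliberately simplified; a production system would fetch keys from a key management service.

```python
# Minimal sketch: encrypting a personal-information field at rest
# with Fernet (AES-128-CBC plus HMAC-SHA256 authentication) from
# the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; use a KMS in production
fernet = Fernet(key)

# Encrypt before the value is written to storage...
ciphertext = fernet.encrypt(b"jane.doe@example.com")

# ...and decrypt only when an authorized process needs it back.
assert fernet.decrypt(ciphertext) == b"jane.doe@example.com"
```

Encryption alone is not the whole of “reasonable security,” but it directly addresses the statutory trigger of nonencrypted, nonredacted personal information.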

Continue Reading Reasonable Security: Implementing Appropriate Safeguards in the Remote Workplace

Facebook, Patents, and Privacy: Social Media Innovations to Mine Personal Data


[©2016. Published in GPSOLO, Vol. 37, No. 5, September/October 2020, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder]

* Updated November 25 to include references to the CPRA/Prop 24.

The episode “Nosedive” of the television series Black Mirror envisions a society built on social credit scores. In this dystopia, all social media networks have converged into one platform—think Facebook, TikTok, Yelp, and Equifax combined.

This umbrella social platform allows users to rate each other on a five-point scale after each social interaction. Those with a high score gain access to job opportunities, favorable zip codes, and even high-status relationships. Those with a low score have the social ladder kicked out from under them, leading to a downward cycle of estrangement—and in the case of Black Mirror’s protagonist, jail time.

While the society in “Nosedive” seems far-fetched, is the technology behind it plausible?

Facebook Patents That Impact Privacy

According to Facebook’s patents, the answer is a resounding “yes.”

In a series of filings spanning almost a decade, Facebook has obtained several patents that allow social media platforms to track, identify, and classify individuals in new and innovative ways. Below are just a few.

Tracking individuals via dust. U.S. Patent No. 9485423B2, “associating cameras with users and objects in a social networking system” (filed September 16, 2010, patented June 25, 2013), allows social media networks to identify an individual’s friends and relationships by correlating users across the same camera. To do so, an algorithm analyzes the metadata of a photo to find a camera’s “signature.”
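To illustrate the general technique (a generic sketch, not the patented algorithm), photo metadata alone can tie different users’ uploads to a single device. The sketch below assumes the Pillow imaging library and derives a crude camera “signature” from a few EXIF fields, then groups photos by it.

```python
# Generic illustration of grouping photos by a camera "signature"
# derived from EXIF metadata; not Facebook's patented method.
# Assumes the Pillow imaging library (pip install Pillow).
import hashlib
from collections import defaultdict
from pathlib import Path

from PIL import Image

# Standard EXIF/TIFF tag IDs: 271 = Make, 272 = Model, 305 = Software
SIGNATURE_TAGS = (271, 272, 305)

def camera_signature(photo_path: Path) -> str:
    """Hash a few stable EXIF fields into a per-camera fingerprint."""
    exif = Image.open(photo_path).getexif()
    fields = "|".join(str(exif.get(tag, "")) for tag in SIGNATURE_TAGS)
    return hashlib.sha256(fields.encode()).hexdigest()[:12]

def group_by_camera(photo_dir: Path) -> dict[str, list[Path]]:
    """Photos sharing a signature were plausibly taken on one device."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in sorted(photo_dir.glob("*.jpg")):
        groups[camera_signature(path)].append(path)
    return dict(groups)
```

If two accounts’ uploads repeatedly share one signature, a platform could infer that the users handled the same camera, which is the kind of relationship inference the patent describes.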

Continue Reading Facebook, Patents, and Privacy: Social Media Innovations to Mine Personal Data