
hiQ v. LinkedIn: User Agreements in the Age of Data Scraping


On November 4, 2022, LinkedIn announced a “significant win” for the platform and its members against “personal data scraping.” The win capped a six-year legal battle that asked, in part, whether LinkedIn must allow hiQ Labs to scrape data from the public profiles of LinkedIn members.

Last Friday, the U.S. District Court for the Northern District of California answered that question by ruling that LinkedIn’s User Agreement “unambiguously prohibits hiQ’s scraping and unauthorized use of the scraped data.” And as such, hiQ breached LinkedIn’s User Agreement “through its own scraping of LinkedIn’s site and using scraped data.”[1]

An Overview of Data Scraping

Data scraping is a technique by which a computer program extracts data from another program or source. The technique typically relies on scraper bots: a bot sends a request to a specific website and, when the site responds, parses the response and extracts the specific data its creator wants.

Scraper bots can be built for a multitude of purposes, including:

  • Content scraping – pulling content from a site to replicate it elsewhere.
  • Price scraping – extracting prices from a competitor.
  • Contact scraping – compiling email, phone number, and other contact information.
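To make the mechanics concrete, here is a minimal sketch of price scraping using only Python's standard library. The `PriceScraper` class and the sample HTML are purely illustrative (real scrapers would fetch the page over HTTP and target a specific site's actual markup):

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Illustrative scraper: collects the text inside any element
    marked with class="price"."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # Flag that the next text node belongs to a price element.
        if ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

# In practice the HTML would come from an HTTP request to the target
# site; a hard-coded sample keeps this example self-contained.
sample_html = """
<ul>
  <li><span class="price">$19.99</span></li>
  <li><span class="price">$24.50</span></li>
</ul>
"""

scraper = PriceScraper()
scraper.feed(sample_html)
print(scraper.prices)  # ['$19.99', '$24.50']
```

Scaled up across thousands of pages and run on a schedule, this same pattern yields the "huge amounts of specific data" discussed below, which is precisely the activity LinkedIn's User Agreement was held to prohibit.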

In today’s economy, data is key, and data scraping is an efficient means of acquiring huge amounts of specific data. Yet, this court ruling signals that companies may need to be more cautious about how and where they use data scraping bots.

hiQ’s Data Scraping Violates LinkedIn’s User Agreement

Founded in 2012 as a “people analytics” company, hiQ Labs provides information to businesses about their workforces. To do this, hiQ relied extensively on automated software to scrape data from LinkedIn’s public profiles. hiQ then aggregated, analyzed, and summarized that data to create two products, “Keeper” and “Skill Mapper,” which allowed businesses to improve employee engagement and reduce the costs of external talent acquisition.

However, in 2017, LinkedIn sent a cease-and-desist letter threatening legal action against hiQ, arguing that LinkedIn’s User Agreement prohibits data scraping. Specifically, the User Agreement states:

Continue Reading hiQ v. LinkedIn: User Agreements in the Age of Data Scraping

TWO-PARTY CONSENT REQUIREMENTS FOR RECORDING CALLS


For a call recording to be lawful, federal law[1] and most states require at least one party to the conversation to consent to the recording. However, many states go further, requiring two-party (or all-party) consent for a call to be lawfully recorded.

As the following list demonstrates, navigating the state law nuances of two-party consent for recording calls can require some finesse.

CALIFORNIA

Requires prior consent from all parties to record a confidential in-person, telephone, or video communication.[2]

However, case law indicates that where a person communicating is made aware that the conversation is being monitored or recorded, there may be no violation because there is no objectively reasonable expectation of privacy.[3] Moreover, by continuing with the conversation after being so warned, consent is given by implication.[4]

CONNECTICUT

Allows call recording if:

  • all parties have consented to the recording,
  • recording is preceded by a verbal notification which is recorded as well, or
  • recording is accompanied by an automatic tonal warning.[5]

DELAWARE

Requires two-party consent for recording telephone or other private conversations.[6]

However, a district court held that the state law was meant to emulate its federal equivalent,[7] so one-party consent may, in some circumstances, satisfy the consent requirement.

FLORIDA

Requires prior consent from all parties to record an oral communication.[8]

However, the law does not cover when the person communicating had no reasonable expectation of privacy,[9] which may occur when the parties are notified at the outset that the call will be monitored or recorded.

ILLINOIS

Requires all parties to consent to recording either an in-person or transmitted communication when at least one party intends the communication to be of a private nature under circumstances reasonably justifying that expectation.[10]

MARYLAND

Requires all parties to a communication to consent to the recording.[11]

However, Maryland courts have interpreted this to be limited to situations where parties have a reasonable expectation of privacy.[12]

Continue Reading TWO-PARTY CONSENT REQUIREMENTS FOR RECORDING CALLS

THE WHITE HOUSE’S BLUEPRINT FOR AI BILL OF RIGHTS


In 2021, the global artificial intelligence (AI) market was estimated to be worth between USD 59.7 billion and USD 93.5 billion. It is expected to grow at a compound annual growth rate of 39.4%, reaching USD 422.37 billion by 2028.

However, as financial and efficiency incentives drive AI innovation, AI adoption has given rise to potential harms. For example, Amazon’s machine-learning specialists discovered that their algorithm learned to penalize resumes that “included the word ‘women’s,’ as in ‘women’s chess club captain.’” As a result, Amazon’s AI system “taught itself that male candidates were preferable.”

As our compiled list of guidance on artificial intelligence and data protection indicates, policymakers and legislators have taken notice of these harms and moved to mitigate them. New York City enacted a bill regulating how employers and employment agencies use automated employment decision tools in making employment decisions. Colorado’s draft rules require controllers to explain the training data and logic used to create certain automated systems. In California, rulemakers must issue regulations requiring businesses to provide “meaningful information about the logic” involved in automated decision-making processes.

In truth, the parties calling for AI regulation form a diverse alliance, including the Vatican, IBM, and the EU. Now, the White House joins these strange bedfellows by publishing the Blueprint for an AI Bill of Rights.

What is the Blueprint for AI Bill of Rights?

The Blueprint for AI Bill of Rights (“Blueprint”) is a non-binding white paper created by the White House Office of Science and Technology Policy. The Blueprint does not carry the force of law; rather, it is intended to spur development of policies and practices that protect civil rights and promote democratic values in AI systems. To that end, the Blueprint provides a list of five principles (discussed below) that – if incorporated in the design, use, and deployment of AI systems – will “protect the American public in the age of artificial intelligence.”

To be clear: failing to incorporate one of these principles will not give rise to a penalty under the Blueprint. Nor will adoption of the principles ensure satisfaction of requirements imposed by other laws.

However, the lack of compliance obligations is no reason to ignore the Blueprint: its authors expressly state that the document provides a framework for areas where existing law or policy does not already provide guidance. And given that many state privacy laws currently provide no such guidance, the Blueprint offers a speculative glimpse at what state regulators may require of future AI systems.

The Blueprint’s Five Principles for AI Systems

Continue Reading THE WHITE HOUSE’S BLUEPRINT FOR AI BILL OF RIGHTS

CALIFORNIA’S SOCIAL MEDIA TRANSPARENCY LAW


Disclosure Obligations, Hate Speech & AG Reports

Legislators across the United States have been grappling with how to regulate social media companies. In Texas, the 5th Circuit upheld a law limiting how social media platforms can moderate content.[1] In Florida, a brief was filed asking the U.S. Supreme Court to reverse the 11th Circuit’s decision striking down a law restricting how social media platforms can moderate users.[2] Now, with Governor Newsom signing AB 587 into law, California joins the legislative efforts.

Effective January 1, 2024, AB 587 imposes new disclosure and reporting obligations on companies operating social media platforms. A social media platform falls under the law if:

  • The company operating the platform generated at least one hundred million dollars in gross revenue during the preceding calendar year;[3]
  • The platform is a “public or semipublic internet-based service or application”[4] with users “in California;”[5]
  • A substantial function of the platform is to connect users to allow them to “interact socially” with each other in the platform;[6] and
  • Users can:
    • construct “public or semipublic” profiles for the purpose of signing in and using the platform;[7]
    • populate a list of other users with whom they share a social connection within the platform;[8] and
    • post content viewable by other users.[9]

In addition, the law does not apply to services or applications for which user interactions are limited to direct messages, commercial transactions, or consumer reviews of products, sellers, services, events, or places, or any combination thereof.[10]

Disclosure Obligations

A covered social media platform must disclose to users the existence and contents of the platform’s terms of service.[11] In addition, the terms of service must disclose:

Continue Reading CALIFORNIA’S SOCIAL MEDIA TRANSPARENCY LAW

THE CALIFORNIA AGE-APPROPRIATE DESIGN CODE


***Update: On September 15, 2022, Governor Newsom signed AB 2273, establishing the California Age-Appropriate Design Code Act.

Who It Covers, What It Requires & How It Compares to the UK

Effective July 1, 2024, the California Age-Appropriate Design Code imposes obligations on businesses[1] that provide an “online service, product, or feature” that is “likely to be accessed by children.”[2] Children are defined as California residents[3] “who are under 18 years of age.”[4] The law provides factors for whether an online service, product, or feature (S/P/F) is “likely to be accessed” by California residents under the age of 18:[5]

  • It is directed to children as defined by COPPA.[6]
  • It is determined, based on competent and reliable evidence regarding audience composition, to be routinely accessed by a significant number of children, or it is substantially similar to an online S/P/F that meets this factor.
  • It displays advertisements marketed to children.
  • It has design elements known to be of interest to children, including games, cartoons, music, and celebrities who appeal to children.
  • Based on internal research, a significant amount of the audience is children.

An online S/P/F is defined by what it is not, and the definition notably exempts the “delivery or use of a physical product.”[7] This exemption departs from the UK version of the law, which covers “connected toys and devices.”[8]

Compared to the UK’s Common-Sense Approach

The US version of the law provides no guidance on what it means for a “significant number of children” to “routinely access[]” the online S/P/F. However, the law makes clear in its legislative findings that businesses developing online S/P/Fs covered by the US law may look to guidance and innovation developed in response to the UK version.[9]

The UK Information Commissioner’s Office (ICO) states that the term “likely to be accessed by” is purposefully broad, covering “services that children [are] using in reality,” not just those services specifically targeting children.[10] However, the ICO recognizes that the term is not so broad as to “cover all services that children could possibly access.”[11] The key question is whether it is “more probable than not” that an online S/P/F will be accessed by children, and businesses should take a “common sense approach to this question.”[12]

To illustrate this point:

Continue Reading THE CALIFORNIA AGE-APPROPRIATE DESIGN CODE