On November 4, 2022, LinkedIn announced a “significant win” for the platform and its members against “personal data scraping.” The win resulted from a 6-year legal battle that asked, in part, whether LinkedIn must allow hiQ Labs to scrape data from the public profiles of LinkedIn members.
Last Friday, the U.S. District Court for the Northern District of California answered that question by ruling that LinkedIn’s User Agreement “unambiguously prohibits hiQ’s scraping and unauthorized use of the scraped data.” And as such, hiQ breached LinkedIn’s User Agreement “through its own scraping of LinkedIn’s site and using scraped data.”
An Overview of Data Scraping
Data scraping is a technique by which a computer program extracts data from another program or source. The technique typically relies on scraper bots: a bot sends a request to a specific website and, when the site responds, parses the response and extracts the specific data its creator wants.
Scraper bots can be built for a multitude of purposes, including:
- Content scraping – pulling content from a site to replicate it elsewhere.
- Price scraping – extracting prices from a competitor.
- Contact scraping – compiling email, phone number, and other contact information.
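The request-and-parse loop described above can be sketched in a few lines of Python. This is a minimal illustration of "contact scraping" against a static, made-up HTML sample (the addresses and class name are hypothetical); deploying such a bot against a real site is exactly the conduct that terms-of-service provisions like LinkedIn's restrict.

```python
import re
from html.parser import HTMLParser

# A minimal contact-scraping sketch: walk an HTML document and collect
# anything that looks like an email address.
class ContactScraper(HTMLParser):
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def __init__(self):
        super().__init__()
        self.emails = []

    def handle_data(self, data):
        # Extract email-like strings from the text between tags.
        self.emails.extend(self.EMAIL_RE.findall(data))

# A real bot would first fetch the page, e.g. with
# urllib.request.urlopen(url).read().decode() -- omitted here so the
# sketch runs against a static sample instead of a live site.
sample_html = """
<html><body>
  <p>Sales: sales@example.com</p>
  <p>Support: support@example.com</p>
</body></html>
"""

scraper = ContactScraper()
scraper.feed(sample_html)
print(scraper.emails)  # ['sales@example.com', 'support@example.com']
```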
In today’s economy, data is key, and data scraping is an efficient means of acquiring huge amounts of specific data. Yet, this court ruling signals that companies may need to be more cautious about how and where they use data scraping bots.
hiQ’s Data Scraping Violates LinkedIn’s User Agreement
Founded in 2012 as a “people analytics” company, hiQ Labs provides information to businesses about their workforces. To do this, hiQ extensively relied on using automated software to scrape data from LinkedIn’s public profiles. hiQ then aggregated, analyzed, and summarized that data to create two products, “Keeper” and “Skill Mapper,” which allowed businesses to improve their employee engagement and reduce costs associated with external talent acquisition.
However, in 2017, LinkedIn sent hiQ a cease-and-desist letter threatening legal action, arguing that LinkedIn’s User Agreement prohibits data scraping. Specifically, the User Agreement states:
For a call recording to be lawful, federal law and most states require at least one party to the conversation to consent to the recording. However, many states go further, requiring two-party (or all-party) consent for a call to be lawfully recorded.
As the following list demonstrates, navigating the state law nuances of two-party consent for recording calls can require some finesse.
Requires prior consent from all parties to record a confidential in-person, telephone, or video communication.
However, case law indicates that where a person communicating is made aware that the conversation is being monitored or recorded, there may be no violation because there is no objectively reasonable expectation of privacy. Moreover, a party who continues with the conversation after being so warned impliedly consents to the recording.
Allows call recording if:
- all parties have consented to the recording,
- recording is preceded by a verbal notification which is recorded as well, or
- recording is accompanied by an automatic tonal warning.
Requires two-party consent for recording telephone or other private conversations.
However, a district court held the state law was meant to emulate its federal equivalent, so one-party consent may, in some circumstances, satisfy the consent requirement.
Requires prior consent from all parties to record an oral communication.
However, the law does not cover when the person communicating had no reasonable expectation of privacy, which may occur when the parties are notified at the outset that the call will be monitored or recorded.
Requires all parties to consent to recording either an in-person or transmitted communication when at least one party intends the communication to be of a private nature under circumstances reasonably justifying that expectation.
Requires all parties to a communication to consent to the recording.
However, Maryland courts have interpreted this to be limited to situations where parties have a reasonable expectation of privacy.
In 2021, the global artificial intelligence (AI) market was estimated to be worth between USD 59.7 billion and USD 93.5 billion. Going forward, it is expected to expand at a compound annual growth rate of 39.4% to reach USD 422.37 billion by 2028.
However, as financial and efficiency incentives drive AI innovation, AI adoption has given rise to potential harms. For example, Amazon’s machine-learning specialists discovered that their algorithm learned to penalize resumes that “included the word ‘women’s,’ as in ‘women’s chess club captain.’” As a result, Amazon’s AI system “taught itself that male candidates were preferable.”
As our compiled list of guidance on artificial intelligence and data protection indicates, policymakers and legislators have taken notice of these harms and moved to mitigate them. New York City enacted a bill regulating how employers and employment agencies use automated employment decision tools in making employment decisions. Colorado’s draft rules require controllers to explain the training data and logic used to create certain automated systems. In California, rulemakers must issue regulations requiring businesses to provide “meaningful information about the logic” involved in automated decision-making processes.
In truth, the parties calling for AI regulation form a diverse alliance, including the Vatican, IBM, and the EU. Now, the White House joins these strange bedfellows by publishing the Blueprint for an AI Bill of Rights.
What is the Blueprint for an AI Bill of Rights?
The Blueprint for an AI Bill of Rights (“Blueprint”) is a non-binding white paper created by the White House Office of Science and Technology Policy. The Blueprint does not carry the force of law; rather, it is intended to spur development of policies and practices that protect civil rights and promote democratic values in AI systems. To that end, the Blueprint provides a list of five principles (discussed below) that – if incorporated in the design, use, and deployment of AI systems – will “protect the American public in the age of artificial intelligence.”
To be clear: failing to incorporate one of these principles will not give rise to a penalty under the Blueprint. Nor will adoption of the principles ensure satisfaction of requirements imposed by other laws.
However, the lack of compliance obligations is no reason to ignore the Blueprint, for the authors expressly state that the document provides a framework for areas where existing law or policy does not already provide guidance. And given that many state privacy laws do not currently provide such guidance, the Blueprint offers a speculative glimpse at what state regulators may require of future AI systems.
The Blueprint’s Five Principles for AI Systems
Disclosure Obligations, Hate Speech & AG Reports
Legislators across the United States have been grappling with how to regulate social media companies. In Texas, the 5th Circuit upheld a law limiting how social media platforms can moderate content. In Florida, a brief was filed asking the U.S. Supreme Court to reverse the 11th Circuit’s decision striking down a law restricting how social media platforms can moderate users. Now, with Governor Newsom signing AB 587 into law, California joins the legislative efforts.
Effective January 1, 2024, AB 587 imposes new disclosure and reporting obligations on companies operating social media platforms. A social media platform falls under the law if:
- The company operating the platform generated at least $100 million in gross revenue during the preceding calendar year;
- The platform is a “public or semipublic internet-based service or application” with users “in California;”
- A substantial function of the platform is to connect users to allow them to “interact socially” with each other in the platform; and
- Users can:
However, the law does not apply to services or applications for which user interactions are limited to direct messages, commercial transactions, or consumer reviews of products, sellers, services, events, or places, or any combination thereof.
A covered social media platform must disclose to users the existence and contents of the platform’s terms of service. In addition, the terms of service must disclose: