CALIFORNIA’S SOCIAL MEDIA TRANSPARENCY LAW

Disclosure Obligations, Hate Speech & AG Reports

Legislators across the United States have been grappling with how to regulate social media companies. In Texas, the 5th Circuit upheld a law limiting how social media platforms can moderate content.[1] In Florida, a brief was filed asking the U.S. Supreme Court to reverse the 11th Circuit’s decision striking down a law restricting how social media platforms can moderate users.[2] Now, with Governor Newsom signing AB 587 into law, California joins these legislative efforts.

Effective January 1, 2024, AB 587 imposes new disclosure and reporting obligations on companies operating social media platforms. A social media platform falls under the law if:

  • The company operating the platform generated at least one hundred million dollars in gross revenue during the preceding calendar year;[3]
  • The platform is a “public or semipublic internet-based service or application”[4] with users “in California”;[5]
  • A substantial function of the platform is to connect users in order to allow them to “interact socially” with each other within the platform;[6] and
  • Users can:
    • construct “public or semipublic” profiles for the purpose of signing in and using the platform;[7]
    • populate a list of other users with whom they share a social connection within the platform;[8] and
    • post content viewable by other users.[9]

However, the law does not apply to services or applications for which user interactions are limited to direct messages, commercial transactions, or consumer reviews of products, sellers, services, events, or places, or any combination thereof.[10]

Disclosure Obligations

A covered social media platform must disclose to users the existence and contents of the platform’s terms of service.[11] In addition, the terms of service must disclose:

  • Permitted user behavior and activities on the platform, and activities that may subject the user or their content to negative actions;[12]
  • Negative actions the platform may take against users or their content, such as removal, demonetization, deprioritization, or banning;[13]
  • Contact information for asking questions about the terms of service;[14] and
  • A process by which users can flag content, groups, or other users believed to be violating the terms of service.[15]

These disclosure obligations should feel familiar to businesses already operating in the social media industry. The more onerous requirements stem from the law’s reporting obligations to the California AG.

Reporting Obligations to the California AG

A covered social media company, on a “semiannual basis,” must provide the California AG with a “terms of service report.”[16] As part of this report, the company must detail whether it defines the following categories of content in its terms of service:

  • Hate speech or racism.
  • Extremism or radicalization.
  • Disinformation or misinformation.
  • Harassment.
  • Foreign political interference.

Interestingly, the law is written so as not to require a covered company to define these categories of content; rather, it merely requires disclosure of whether the company does so.

That said, much of what the law requires as part of the report to the AG pertains to the company’s actions taken in response to content falling within one of the above categories. For example, the company must disclose any existing policies intended to address the above categories of content,[17] and the total number of content items flagged for belonging to one of those categories.[18]

Failure to submit a report as required can result in a civil penalty of up to $15,000 per violation per day. So, while the law appears not to require defining the above categories, it seems unlikely that a company could provide a conforming report – and therefore avoid the penalty – without defining what constitutes hate speech, harassment, and so forth.

But this raises an important compliance question: how should a company define these categories? And could a company violate the law if, say, it defines misinformation or foreign political interference in a way that does not comport with the California AG’s expectations?

Given the legal challenges facing other social media laws across the country, AB 587 will likely be challenged on First Amendment grounds; time will tell whether the law survives long enough to answer these questions.

In the meantime, companies should consider how to navigate the growing patchwork of state laws that either require or forbid moderation of user activities and content.


[1] https://www.politico.com/news/2022/09/16/5th-circuit-upholds-texas-law-forbidding-social-media-censorship-again-00057316.

[2] https://www.axios.com/2022/09/21/florida-supreme-court-social-media-law.

[3] AB 587, 22680.

[4] 22675(e). This excludes services or applications meant to facilitate communication between employees or affiliates within a business or enterprise, so long as the service or platform restricts access to those categories of users. 22675(c).

[5] 22675(e). The law provides no guidance on what it means for a user to be “in California,” but the bill’s legislative introduction uses the language “consumers residing in California.”

[6] 22675(e)(1)(A). And while the law does not define “interact[ing] socially,” services or platforms that provide “email or direct messaging” services do not satisfy this requirement on that basis alone. 22675(e)(1)(B).

[7] 22675(e)(2)(A). Again, this exempts services or platforms in which employees or affiliates can create profiles, when that service or platform restricts access only to those categories of users. 22675(c).

[8] 22675(e)(2)(B).

[9] 22675(e)(2)(C).

[10] 22681.

[11] 22676(a).

[12] 22675(f).

[13] 22676(b)(3).

[14] 22676(b)(1).

[15] 22676(b)(2).

[16] 22677(a).

[17] 22677(a)(4)(A).

[18] 22677(a)(5)(A)(i).