
Will the CCPA and Other State Privacy Laws Face Constitutional Attack?


This article is Part 2 of 3 in a series exploring proposed federal privacy laws and constitutional concerns of privacy laws in the United States. Part 3 will discuss the constitutional challenges facing a proposed federal privacy law. 

In the first part of this series, we examined several federal privacy bills proposed this year, as Congress eagerly tries to pass a single harmonizing federal law. The issue of preemption continues to divide Republican and Democratic lawmakers, however, with the former favoring an express provision allowing preemption of stricter state privacy laws such as the CCPA and the latter largely opposing such a provision.

Regardless of whether a federal law with an express preemption provision passes, state privacy laws remain at risk of constitutional attack. There are two primary ways a state privacy law may be challenged: (1) invalidation under the Dormant Commerce Clause, and (2) invalidation on First Amendment grounds. State legislators contemplating privacy laws of their own will need to consider these constitutional issues in the drafting phase or risk facing opposition on constitutional grounds.

Dormant Commerce Clause

Extraterritoriality

Under the Dormant Commerce Clause, the doctrine of extraterritoriality invalidates state laws attempting to regulate commerce that occurs outside state borders. See Edgar v. MITE Corp., 457 U.S. 624, 642–643 (1982). Even if legislators did not intend a law to reach extraterritorially, that intention is not dispositive of whether the law regulates commerce outside state borders. “The critical inquiry is whether the practical effect of the regulation is to control conduct beyond the boundaries of the State.” Healy v. Beer Inst., Inc., 491 U.S. 324, 336 (1989) (citing Brown-Forman Distillers Corp. v. N. Y. State Liquor Auth., 476 U.S. 573, 579 (1986)).

In evaluating a law’s extraterritorial reach, judges must also “[consider] how the challenged statute may interact with the legitimate regulatory regimes of other States and what effect would arise if not one, but many or every, State adopted similar legislation.” Healy, 491 U.S. at 336. The broad purpose of the Commerce Clause is to prevent one state from crossing jurisdictional lines and imposing its own regulatory scheme on another state, and to reduce onerous inconsistencies in legislation.

Therefore, the critical question becomes: does a state’s privacy law attempt to regulate commerce outside of state borders? 

Considering the nature of the internet, most likely yes. “The Internet is a decentralized, global communications medium linking people, institutions, corporations, and governments all across the world.” Am. Libraries Ass’n v. Pataki, 969 F. Supp. 160, 164 (S.D.N.Y. 1997). Given that the purpose of the Internet is to facilitate far-reaching communications between people and organizations across both state and national lines, a state’s privacy law will almost certainly regulate commerce outside state borders in practice, if not by intent. Many companies process data from internet visitors without gathering geolocation data. In such cases, rather than risk noncompliance, some businesses will simply choose to comply with all state privacy laws regardless of where their customers are located. For instance, a small business located outside of California with minimal contacts with California consumers may have little choice but to comply with the CCPA if it has no idea where its users are located. Practically speaking, a state privacy law will invariably affect commerce outside state borders.

Furthermore, the effects of privacy enforcement will become more apparent as more state and local governments pass their own privacy laws. The term “patchwork” is often used to describe state and local privacy laws today, but the picture could grow even more disjointed. Theoretically, if every state passed its own version of California’s CCPA, compliance with all 50 state privacy laws might not be feasible if their requirements conflict. Businesses already see this happening with state breach notification laws. Each law dictates its own rules and thresholds for notifying the state Attorney General, state Department of Health Services, or other authority, as well as the time to notification and the contents of the notification.

Pike v. Bruce Church, Inc.

Even if a law incidentally regulates extraterritorial commerce, it may still be upheld under a balancing test weighing the burden of the law against a legitimate state interest. See Pike v. Bruce Church, Inc., 397 U.S. 137 (1970). In Pike, the Supreme Court held that a state law serving a legitimate interest is invalid only when “the burden imposed on [interstate] commerce is clearly excessive in relation to the putative local benefits.” Pike, 397 U.S. at 142.

The question then becomes: do the benefits to consumer privacy protection resulting from a state’s privacy law outweigh any burden on interstate commerce?

This is likely where the true battle will be fought. In 2017, the cost of privacy compliance for multinational companies ranged from $1.4 million to $21.6 million, with a median cost of $4 million per company. Predictably, a fair portion of these total costs reflects the price of compliance with a multitude of state and local privacy laws. Privacy compliance is a costly endeavor that widely impacts organizational, operational, and technical business processes, and it will likely continue to grow over the next ten years. Despite these skyrocketing costs, many opine that state privacy laws, even the most stringent ones like the CCPA, do not actually provide consumers with significant protection. There is much room for argument here, and the balancing of interests will continue to shift as costs change and the benefits to consumers become more concrete.

American Libraries Association v. Pataki

The Dormant Commerce Clause has already been invoked to analyze the constitutionality of a state law regulating internet activity. In 1997, the U.S. District Court for the Southern District of New York struck down a state law prohibiting the online dissemination to minors of content depicting “nudity, sexual conduct or sado-masochistic abuse.” Pataki, 969 F. Supp. at 163. The plaintiffs in the action included content-providing library organizations and the ACLU, among others, who sought to enjoin enforcement of the law for fear of prosecution.

The limitation of this case is obvious: as a federal trial court opinion, it is not binding precedent on other courts. However, Pataki provides persuasive authority and a line of reasoning that other courts may adopt in the context of Internet regulation.

First Amendment

A court may also invalidate a state’s privacy law if it finds a violation of a speaker’s First Amendment right to free speech.

Judges examine the constitutionality of laws through several levels of scrutiny depending on the interests involved. For example, in the context of First Amendment rights, political and ideological speech is generally protected under the strict scrutiny standard: if the government regulates political or ideological speech, it must show that the law is narrowly tailored to achieve a compelling government interest. By contrast, judges examine laws regulating commercial speech, regarded as less important to protect, under intermediate scrutiny. Restrictions based on the content of speech or the identity of the speaker receive a “heightened” scrutiny, somewhere between intermediate and strict scrutiny. Therefore, the type of speech a privacy law purports to regulate will be very significant to the determination of which standard applies and, by extension, whether the law is likely to be found constitutional.

Finally, First Amendment jurisprudence not only protects the rights of speakers, but also the rights of listeners to access papers, information, and ideas. While individuals more frequently wield First Amendment law as a shield rather than a sword, some have argued for access to public court records using First Amendment law as a sword when those records are in danger of deletion due to privacy concerns. (For more in-depth discussion on the right to access public court records subject to a “right to deletion” or “right to be forgotten” request, please see Personal Privacy Should Not Outweigh Access to Public Court Records.)

Sorrell v. IMS Health Inc.

In Sorrell v. IMS Health Inc., 564 U.S. 552 (2011), a Vermont law prohibited the sale of pharmacy records (“prescriber-identifying information”) tracking doctors’ prescribing practices to pharmaceutical marketers. The intended purpose of the law was to protect medical privacy. The Supreme Court struck down the law, finding it to be a content-based restriction of commercial speech because the law prohibited the disclosure of records for marketing purposes but not for others, such as research or education. The Vermont law was therefore subject to a heightened scrutiny standard, under which the Court did not find the law necessary to protect medical privacy.

Sorrell stands for the proposition that a state law’s limitations on who may receive data may lead a court to find that the law restricts speech based on content or speaker, triggering a heightened scrutiny standard. Notably, dicta in the opinion also point to possible treatment of the processing and sale of data as speech worthy of First Amendment protection, not as conduct or a commodity.

While federal lawmakers continue to debate over the provisions to be included in a federal privacy law, state legislators may themselves be deliberating over whether to pass a state privacy law as a gap filler. However, any state legislator should consider the above issues and work proactively to eliminate constitutional concerns through careful drafting.


Searching for the One Ring to Rule Them All: A Look at 8 U.S. Federal Privacy Bills


This article is Part 1 of 2 in a series exploring proposed federal privacy laws in the United States. Part 2 will discuss the constitutional challenges facing not only a proposed federal privacy law but those facing existing state privacy laws as well.

As predicted in our Privacy Law Forecast for 2019, legislators have raced to introduce national privacy regulation in both the House and Senate this year.

In contrast to the European Union’s comprehensive GDPR, U.S. privacy law is a hodgepodge of sectoral laws governing specific industries: the medical, financial, educational, and marketing sectors, among others. States have enacted their own laws to protect their residents. And on top of that, Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45) grants the FTC authority to enforce against unfair and deceptive acts and practices.

This all results in a confusing and burdensome “patchwork” of national, state and sectoral rules. (For more in-depth discussion on the current U.S. privacy regulatory landscape, please see American Privacy Laws in a Global Context.)

Given this regulatory environment, legislators are keen to put forth a single federal privacy law to standardize this “patchwork” and forestall the passage of dozens more state privacy bills. Some have set a deadline, hoping to pass a federal privacy law before the CCPA comes into effect on January 1, 2020. Since the start of 2019, lawmakers have introduced about 230 bills that regulate privacy in some way in either the House or Senate.

The following is a sample of comprehensive bills from both sides of the aisle. Though these bills are unlikely to pass committee, they indicate what policies lawmakers are considering in the current negotiations:

  • American Data Dissemination Act of 2019 (“ADD Act”), introduced January 16, 2019 by Senator Marco Rubio (R-FL). This bill would require the FTC to submit recommended privacy regulations on “covered providers” (defined as any person that provides services over the internet) to Congress. If Congress fails to enact a law based on the FTC’s recommendations, the FTC would promulgate a final rule incorporating its proposed regulations. Only the FTC would have powers of enforcement. This bill further allows for the preemption of state law.
  • Social Media Privacy Protection and Consumer Rights Act of 2019, introduced January 17, 2019 by Senator Amy Klobuchar (D-MN). This bill would require online platforms to inform the user of any data collection and use, offer the user a copy of their personal data, and allow the user to opt out of data tracking. The bill also requires breach notification within 72 hours of detection. Only the FTC and state attorneys general would have the power to enforce violations.
  • Digital Accountability and Transparency to Advance Privacy Act (“DATA Privacy Act”), introduced February 27, 2019 by Senator Catherine Cortez Masto (D-NV). This bill would require companies to provide users with a fair processing notice and to allow users to access, port, or delete their own records. It would mandate users’ opt-in consent in situations involving sensitive data or data outside the parameters of the business-consumer relationship. Companies that collect data on more than 3,000 people a year and have revenues greater than $25 million per year must appoint a Data Protection Officer (DPO). The FTC, state attorneys general, and any other officer authorized by the State to bring civil actions would have the power to enforce this law.
  • Own Your Own Data Act, introduced March 14, 2019 by Senator John Kennedy (R-LA). This bill would require social media companies to have a “prominently and conspicuously displayed icon” that a user can click to easily access and port their information. It would characterize user account registration as a “licensing agreement” wherein the user licenses the user’s data to the social media company.
  • Information Transparency & Personal Data Control Act, introduced April 1, 2019 by Representative Suzan DelBene (D-WA). This bill would require any company to first procure users’ opt-in consent before processing sensitive data. Companies must also provide users with fair processing information. The bill requires companies to obtain third-party privacy audits and to submit the audits to the FTC biannually. Only the FTC would enforce this law. This bill further allows for the preemption of state law.
  • Balancing the Rights of Web Surfers Equally and Responsibly Act of 2019 (“BROWSER Act”), introduced April 10, 2019 by Senator Marsha Blackburn (R-TN). This bill would require providers of broadband internet access service and edge services to notify users of the providers’ privacy policies, obtain users’ opt-in consent to process sensitive information and opt-out consent for non-sensitive information, and would prohibit providers from conditioning services on waivers of privacy rights. The bill further allows for the preemption of state law.
  • Privacy Bill of Rights, introduced April 11, 2019 by Senator Edward Markey (D-MA). This bill would require companies to provide users with fair processing information and the right to access, port, or delete their own records. Companies would be prohibited from offering “take-it-or-leave-it” arrangements or financial incentives in exchange for users’ personal information. Companies would also have to procure users’ opt-in consent before processing personal information. Under this bill, companies must designate an employee in charge of privacy/security compliance, no matter the size or annual revenue of the company. The FTC, state attorneys general, and individuals would be able to sue to enforce the law.
  • Do Not Track Act, introduced May 21, 2019 by Senator Josh Hawley (R-MO). This bill would establish a national Do Not Track (DNT) system and require any website or application operator to check for a DNT signal upon connection. The bill would make it illegal to collect data from devices displaying a DNT signal. Only the FTC and state attorneys general would have the power to enforce violations.

As we can see, the fault lines are clear and not surprising. Democratic lawmakers generally favor a private right of action for consumers to sue a company that has mishandled consumer data. Republican lawmakers are generally against including such a provision. Republican lawmakers typically favor an express right of preemption, so that a laxer federal privacy law may preempt stringent state laws such as the CCPA. Democratic lawmakers are largely against the inclusion of such provisions, unless the bill provides consumer rights equivalent in scope and depth to the CCPA.

Regardless of whether a federal privacy law passes, businesses and the courts have their work cut out for them. Constitutional and interpretive challenges will plague the reach of any state or federal comprehensive privacy law, making it difficult to assess coverage under overlapping sectoral, state, and federal rules.

Consequently, as we will discuss further in our next article, legislators should confront these constitutional challenges head on prior to passing the “one” best bill to rule them all. Without clearly articulating the scope of any privacy law (e.g., does it extend across state borders and internationally?), its preemption of or exclusions for other laws (e.g., GLBA, HIPAA, COPPA), and its relationship to third parties that only touch data incidentally, any comprehensive legislation will just add to the quagmire of current laws.


The 2019 Capital One Breach Compared to the 2017 Equifax Breach: Evolving and Improving Attitudes toward Data Security, Breach Detection, and Breach Notification


On September 7, 2017, Equifax announced that it had suffered a data breach that exposed the personal data of nearly 147 million people. Two years following the Equifax breach, Capital One also suffered a data breach nearly as massive in scope, affecting approximately 100 million users in the United States and 6 million users in Canada.

A casual observer might think that the two breaches are similar. After all, both affected a large financial institution and each exposed over 100 million records. The similarities end there, however. Capital One implemented security measures to protect its customer data and responded quickly to an insider threat. Equifax failed to implement even basic data protection measures and was slow to report the inevitable breach.

Only time will tell what the full repercussions of these two breaches will be. But based on the facts in front of us, Capital One’s quick response to its breach will ultimately protect more customers in the long run. Comparing the circumstances surrounding the two breaches shows a positive trend: companies are taking their customers’ data more seriously and are mindful of ever-increasing consumer vigilance about their own data.

The Timeline of Each Breach – Head in the Sand v. Speedy Responder

In the case of Equifax, the company detected a breach on July 29, 2017, but failed to notify the public until September—40 days later.

To make matters worse, the breach was not detected until several months after it occurred, even though the security vulnerability was reportedly known to Equifax. Months prior to the breach, a security researcher attempted to inform Equifax of the researcher’s inadvertent and unauthorized access to millions of Equifax customers’ sensitive personal data records, including Social Security numbers and birthdates. Although deploying a fix would have taken a matter of hours or even minutes, Equifax did not address the reported vulnerability until after the breach had occurred.

In comparison, the Capital One breach occurred when former Amazon Web Services (AWS) employee Paige Thompson stole customer data and posted it to her account on GitHub, a code-hosting platform for software development.

On July 17, 2019, a security researcher alerted Capital One to this potential breach by emailing Capital One through an address exclusively reserved for “ethical” hacker disclosures. Based on the information in this email (i.e., Thompson’s GitHub account), Capital One launched an internal investigation, which led to detection of the breach on July 19. On July 29, 2019, Capital One announced the details of its investigation to the public.

All told, only 10 days passed from the moment of detection to notification of the public in the Capital One breach. Capital One’s quick response may have been influenced by public resentment of how long it took for Equifax to notify its customers of a breach—long enough for senior executives to collectively sell millions of dollars’ worth of stocks within days of detecting the breach in 2017.

Recently, the FTC announced a settlement with Equifax for at least $575 million for damages relating to its data breach in 2017. While a substantial amount to be sure, many have also criticized perceived inaction by both legislators and the Consumer Financial Protection Bureau (CFPB) in response to the Equifax breach. There is substantial public opinion that Equifax got off easy with an FTC settlement that essentially equates to a “cost of doing business.” 

Better Security Control—Protecting What’s In Your Wallet

Following the announcement of Equifax’s data breach, Equifax was lambasted in media reports for its egregious security practices, in particular its storage of administrative credentials and passwords in unencrypted plain text files. By storing this sensitive data in plain text rather than protecting it cryptographically, Equifax left it exposed to any attacker who gained access to those files.
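By way of illustration, the standard alternative to plain-text credential storage is to keep only a salted hash of each password, from which the original cannot be recovered. The following is a minimal, generic Python sketch of that practice, not a description of any company’s actual systems:

```python
import hashlib
import hmac
import os

def hash_credential(password, salt=None):
    """Derive a salted hash; only the salt and hash, never the password, are stored."""
    salt = salt or os.urandom(16)  # fresh random salt per credential
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_credential(password, salt, stored_digest):
    """Recompute the hash from the stored salt and compare in constant time."""
    _, digest = hash_credential(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_credential("s3cret-admin-pw")
assert verify_credential("s3cret-admin-pw", salt, stored)
assert not verify_credential("wrong-guess", salt, stored)
```

An attacker who steals the stored salt and digest cannot read the password directly, unlike the plain-text files at issue in the Equifax breach.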

In contrast, Capital One encrypted all customer data as standard practice. Due to the circumstances of the breach, Thompson was also able to decrypt the data. However, Capital One also noted in its press release that it tokenizes select fields that are particularly sensitive, including Social Security numbers and account numbers. Tokenization provides an additional layer of protection by replacing the sensitive field with a unique “token” or “cryptographically generated” placeholder. The original sensitive information is stored in a different location and remains protected. Capital One’s practice of tokenization likely protected over 99% of its held Social Security numbers and bank account numbers. Capital One’s adoption of stronger security measures, beyond basic encryption, shows its awareness of and protection against increasingly sophisticated hacks.
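Capital One’s press release does not detail its implementation, but the basic tokenization pattern it describes can be sketched in a few lines of Python. The `TokenVault` name and structure below are illustrative assumptions; a production token vault would be a separately secured and hardened service:

```python
import secrets

class TokenVault:
    """Maps random tokens back to the original sensitive values.

    The vault itself would live in a separate, hardened datastore;
    ordinary records carry only the token.
    """
    def __init__(self):
        self._vault = {}

    def tokenize(self, value):
        # The token is random, so it reveals nothing about the value.
        # Unlike encryption, there is no key that decrypts it.
        token = "tok_" + secrets.token_hex(16)
        self._vault[token] = value
        return token

    def detokenize(self, token):
        # Only systems with vault access can recover the original value.
        return self._vault[token]

vault = TokenVault()
record = {"name": "Jane Doe", "ssn": vault.tokenize("078-05-1120")}
assert record["ssn"].startswith("tok_")  # the record itself holds no SSN
assert vault.detokenize(record["ssn"]) == "078-05-1120"
```

This is why tokenized fields can survive a breach of the main datastore: without the vault, the stolen tokens are meaningless placeholders.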

While breach incidents are unfortunately becoming more common, Capital One’s response to its recent breach shows that incident response plans are becoming more robust. Corporate attitudes are trending toward privacy and security teams being an integral part of an organization, as well as investments in technical and operational security controls having great value.

Breaches in the Future?

Looking forward, we can all use the Equifax and Capital One breaches to inform us with respect to all businesses’ privacy and security obligations. As just a few high-level takeaways:

  1. Properly encrypt all personal data held on customers and employees, based on the data’s level of sensitivity.
  2. Assess whether your current privacy and information security team needs additional support and/or training to handle your organization’s size and sensitivity of data.
  3. Implement proper security controls, including access permissions and physical facility controls.
  4. Don’t forget that “insider threats” arising from employee and ex-employee handling of data are just as problematic as outside hacks.
  5. Promptly investigate “ethical hacker” or security researcher notifications about your company’s security.
  6. Have an incident-response plan in place to guide decision-making following a detected breach.

Above all, be prepared! Organizations of all sizes now handle massive amounts of data collected both on physical servers and on cloud databases. It is critical that they understand not just the current minimum data protection obligations imposed upon them, but also learn from past security incidents and realize that the bar for compliance is continually in motion with every breach.


Privacy Rights in Class Action Lawsuits – Should Putative Class Members Opt-In Before Their Personal Information Is Disclosed in California Consumer Privacy Act Litigation?

[Originally published in Orange County Lawyer Magazine, May 2019, Vol. 61, No. 5, by Lily Li and Matthew Wegner; Image Credit: kmicican from pixabay.com]

In 2020, the nation’s toughest data privacy law will take effect in California. The California Consumer Privacy Act of 2018 (CCPA) imposes harsh restrictions on companies seeking to sell consumers’ data, including statutory penalties for any breaches of data. This legislation was spurred by public outrage against the Facebook-Cambridge Analytica scandal and Equifax, Target, and Yahoo data hacks, and reflects a growing trend to protect consumer data privacy.

As with so many legislative and judicial movements in California—for example, the Save-On decision, which ushered in a wave of wage-and-hour class actions in the early 2000s, or Business & Professions Code section 17200, which before Proposition 64 was tacked-on to countless consumer class actions—the CCPA is likely to usher in a host of new class action litigation as plaintiffs (and their attorneys) seek to recover statutory damages for data privacy violations.


The FTC Ramps Up Privacy Enforcement

Following increased congressional scrutiny over its data privacy enforcement practices in 2018, the FTC has ramped up its enforcement actions in recent months, giving some real bite to current federal privacy laws:

  • On February 27, 2019, the FTC filed a complaint against the operators of lip-syncing app Musical.ly (now known as TikTok) for failing to seek parental consent before collecting the personal information of users under the age of 13. In response to the FTC’s complaint, TikTok agreed to pay a $5.7 million settlement to the agency, the largest COPPA fine in U.S. history.
  • Throughout March, the FTC obtained settlements against four separate robocall operations: NetDotSolutions, Higher Goals Marketing, Veterans of America, and Pointbreak Media. These cases charged the entities with violations of the FTC Act (unfair and deceptive trade practices) and the agency’s Telemarketing Sales Rule (TSR), including its Do Not Call (DNC) provisions.
  • On March 26, 2019, the FTC announced a broad inquiry into the data collection practices of broadband companies under Section 6(b) of the FTC Act. The agency issued orders to AT&T Inc., AT&T Mobility LLC, Comcast Cable Communications doing business as Xfinity, Google Fiber Inc., T-Mobile US Inc., Verizon Communications Inc., and Cellco Partnership doing business as Verizon Wireless, seeking information about the collection, retention, and sharing of personal information. The FTC investigation highlights recent consumer concerns about data privacy and tracking by ISPs, following high-profile acquisitions of content providers like AOL, Yahoo, and DirecTV. We are watching closely, as this may be the start of one of the first joint privacy-antitrust enforcement actions by the FTC.

These enforcement actions highlight the FTC’s role as the de facto data protection authority for the United States. Yet the FTC’s mandate extends far beyond data privacy, including regulatory authority over false advertising claims, anticompetitive behavior, and merger review. While Congress continues to debate the passage of a federal bipartisan privacy bill, it behooves lawmakers to keep in mind the FTC’s current staff and funding limitations in any proposed drafts.
